
Inspiring Innovation; New Short Talks Features Karl Schubert and Data Science Program – University of Arkansas Newswire

University of Arkansas

Karl Schubert

The November episode of Short Talks from the Hill features Karl Schubert, professor of practice and associate director of the Data Science Program. Schubert came back to the University of Arkansas after a 35-year career in private industry.

Schubert discusses his unusual path back to the university and his general desire to inspire innovation in students. He also discusses the creation of the multidisciplinary Data Science Program and a recent National Science Foundation grant of nearly $1 million to support low-income students interested in studying innovation in science, technology, engineering and math.

On the benefits of having a non-traditional background, Schubert says in the podcast: "I was viewed by the faculty as what I called non-denominational. You know, that is, that I wasn't in a department specifically. I was working for three deans, and so I didn't have any particular favoritism to any particular department or any particular college."

To listen to Schubert discuss his role at the university, go to ResearchFrontiers.uark.edu, the home of research news at the University of Arkansas, or visit the "On Air" and "Programs" link at KUAF.com.

Short Talks From the Hill highlights research, scholarly work, and creative activity at the University of Arkansas. Each segment features a university faculty member discussing his or her work. Previous podcasts can be found under the 'Short Talks From the Hill' link at ResearchFrontiers.uark.edu.

Thank you for listening!


How to succeed around data science projects – Information Age

Denise Gosnell, chief data officer at DataStax, discussed how preparation, process and open source can help to ensure success from data science projects

It's important to set out how your projects will support overall business goals.

For businesses, investment in machine learning, artificial intelligence (AI) and data science is growing. There is huge potential around data science to create new insights and services for internal and external customers. However, this investment can be wasted if data science projects don't fulfil their promises. How can we make sure that these projects succeed?

According to McKinsey, around half of all the companies it surveyed have adopted AI in at least one function, and there is already a small cohort of companies that can ascribe at least 20% of their earnings before interest and taxes to AI. Around $341.8 billion will be spent on AI solutions during 2021, a rise of 15.2 percent year over year, according to IDC.

IDC also found that around 28% of AI and ML initiatives have failed so far. Based on the figure above, that would equate to $88.1 billion of spend on tooling associated with failed projects. The analyst firm identified reasons for this, including a lack of staff with the necessary expertise and a lack of production-ready data. Alongside this, teams feeling unconnected and lacking an integrated development environment was another reason projects were not successful.

To improve your chances of success with your projects, it is worth spending time looking at how data science works in practice and how your organisation operates. While it includes the word "science" in its title, data science in fact requires a blend of both art and science to produce the best results. Using this, it's then possible to examine scaling up the results. This will help you successfully turn data science results into production operations for the business.

At the most simple level, data science involves coming up with ideas and then using data to test those theories. Using a mix of different algorithms, designs and approaches, data scientists can seek out new insights from the data that companies create. Based on trial, error and improvement, the teams involved can create a range of new insights and discoveries, which can then be used to inform decisions or create new products. This can then be used to develop machine learning (ML) algorithms and AI deployments.
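To make that trial-and-error loop concrete, here is a minimal sketch of the idea-then-test cycle in Python. The conversion data, group sizes and significance threshold are illustrative assumptions, not figures from any project described in this article.

```python
# Minimal sketch of the "idea -> test against data" loop described above.
# The data, effect sizes and significance threshold are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothesis: customers who saw the new recommendation flow convert more often.
control = rng.binomial(1, 0.11, size=5_000)   # baseline conversion ~11%
treated = rng.binomial(1, 0.13, size=5_000)   # candidate change ~13%

# Simple two-sample test on the 0/1 outcomes.
stat, p_value = stats.ttest_ind(treated, control)

print(f"observed lift: {treated.mean() - control.mean():.3f}")
print(f"p-value: {p_value:.4f}")
if p_value < 0.05:
    print("Evidence supports the hypothesis; worth promoting to a larger experiment.")
else:
    print("No reliable effect; iterate on the idea before investing further.")
```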


The biggest risk around these projects is the gap between business expectations and reality. AI has received a huge amount of hype and attention over the past few years. This means that many projects have unrealistic expectations.

Unrealistic expectations can be in scope, speed, and/or technologies. Great project managers understand how to navigate challenges in scope and speed; it is the misinterpretation of the promises of AI technologies that has been causing the biggest problems for new projects. Rather than being focused on improving a process or delivering one insight, AI gets envisioned as changing how a company runs from top to bottom, or a single project is expected to deliver a change in profitability within months.

To prevent this problem, it's important to set out how your projects will support overall business goals. You can then start small with projects that are easy to understand and that can show improvements. Once you have set out some ground rules around what AI can deliver, and punctured the hype balloon around AI to make this all business as usual, you can keep the focus on the results that you deliver.

Another big problem is that teams don't have the necessary skills to translate their vision into effective processes. While the ideas might be sound, a lack of understanding around the nuances of applying machine learning and statistics in practice can lead to poor outcomes. This issue is also due to the hype around AI and ML: the demand for data science skills means that there is a lot of competition for those with experience, while even those starting out can command big salaries. This lack of real-world experience is what can lead to problems over time.

Even with a realistic vision and experienced staff in place, AI projects can still fail to deliver results. In this case, the reason is normally poor processes, inconsistent communication, and gaps between teams.

To prevent these kinds of problems, it's important to establish a smoothly operating engineering culture that weaves data science work into the overall production pipeline. Rather than treating data science as a distinct team, work on how to integrate your data scientists into the production deployment process. This will help minimise the gap from data research and development to production.

While it is important to support creativity around data science, any work should have the business goals in mind. This should put the emphasis on what result you are looking to achieve or discover by using data to prove (or disprove) a hypothesis based on how well that business goal was met.

The team at Netflix has written about this, and how their approach to shared hypothesis testing helps keep the team focused. By concentrating on specific objectives, you can avoid getting lost or spending time on projects that won't pay off.

Alongside this, it's important to evaluate new technologies for any improvements in how they might help meet goals. Keeping at the cutting edge is important for data scientists, but it is essential to focus on how any new technology can help meet that specific and measurable business outcome.

Based on these ideas, you can help your data science team take their creativity and apply it to discover interesting results. Once this research starts to find insights, you can then look at how to push this into production. This involves creating bridges from the data science development and research team to those responsible for running production systems, so that new models can be passed across.


One critical element here is that you should encourage everyone to use the same tools on each side. One of the biggest hurdles can be when the data science team delivers a new model and workflow around data, and then those responsible for running the model in production have to re-develop that model to work with the existing infrastructure that is in place. The emphasis here is to avoid the old trope of "This worked on my laptop!", as laptops can't be pushed to production and rework is expensive.

Using open source can help to achieve this consistency. From databases like Apache Cassandra, through to event streaming with Apache Pulsar, data enrichment with Apache Flink and analytics with Apache Spark, common tools used for working with data are mainly open source and easy to link together. Alongside this open data infrastructure, TensorFlow is important for how algorithms and machine learning models can be created and tested. You can use something like Apache Airflow to manage the workflow process that your team has in place. This makes it easier to build a stack that is common to everyone.
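As a rough illustration of that shared, open-source workflow layer, the sketch below uses Apache Airflow, one of the tools the article names, to chain a hypothetical extract-train-publish pipeline. The task names and empty callables are placeholders for illustration, not a real production DAG.

```python
# Minimal Airflow sketch of a shared data-science-to-production workflow.
# Task names and the Python callables are placeholders for illustration only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_features():
    """Pull training data from the shared store (e.g. Cassandra or Spark output)."""
    ...


def train_model():
    """Fit or refresh the model (e.g. a TensorFlow model) on the extracted features."""
    ...


def validate_and_publish():
    """Run acceptance tests, then hand the model to the serving environment."""
    ...


with DAG(
    dag_id="shared_ml_pipeline",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    publish = PythonOperator(task_id="validate_and_publish", python_callable=validate_and_publish)

    # One definition of the pipeline, shared by research and production.
    extract >> train >> publish
```

Because the same DAG definition runs in every environment, the hand-off the article describes becomes a code review rather than a re-implementation.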

Alongside getting consistency on tools and infrastructure, both sides need to agree on common definitions and context. This involves setting the right goals and metrics so that everyone is aware of how the team will be evaluated over time. At the same time, it should also be an opportunity to keep re-assessing those metrics, so that the emphasis is always on delivering the right business outcomes. Anthropologist Marilyn Strathern described this as: "When a measure becomes a target, it ceases to be a good measure." This sees teams concentrating too specifically on metrics and measurement to the detriment of the overall goal.

Lastly, the role of testing should not be overlooked. Once new models are developed that should have the desired impact, those models should be tested to ensure that they work as expected and are not falling foul of issues within the test data or any biases that were not accounted for. Testing using the same tools and processes as will be used in production not only helps solidify the value that data science creates, but makes it easier to scale that work out. Skipping this step or not giving it the right degree of rigour leads to problems over time.
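A hedged sketch of the kind of pre-deployment check described above follows. The synthetic data, accuracy bar and group-gap tolerance are assumptions chosen for illustration, not prescribed values.

```python
# Sketch of a pre-production model check: accuracy on held-out data plus a simple
# group-wise performance comparison. Thresholds and the 'group' column are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=10, random_state=0)
group = np.random.default_rng(0).integers(0, 2, size=len(y))  # stand-in for a sensitive attribute

X_tr, X_te, y_tr, y_te, _, g_te = train_test_split(X, y, group, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1_000).fit(X_tr, y_tr)
acc = model.score(X_te, y_te)

# Compare accuracy across groups to flag unaccounted-for bias before deployment.
acc_by_group = {g: model.score(X_te[g_te == g], y_te[g_te == g]) for g in (0, 1)}
gap = abs(acc_by_group[0] - acc_by_group[1])

assert acc >= 0.70, f"holdout accuracy {acc:.2f} below the agreed bar"
assert gap <= 0.10, f"accuracy gap between groups {gap:.2f} exceeds tolerance"
print(f"accuracy={acc:.2f}, per-group={acc_by_group}, gap={gap:.2f}")
```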

Data science has huge potential to help businesses improve their operations. It can be used to develop new products, show where to invest, and help people make better decisions in their roles.

To avoid the risk of failure, look at how you can build on an open source data stack to make the process around moving from initial discovery through to full production easier. This consistency should make it easier for your data scientists to work around data, and for your operational staff to implement those insights into production.


California Tries to Close the Gap in Math, but Sets Off a Backlash – The New York Times

If everything had gone according to plan, California would have approved new guidelines this month for math education in public schools.

But ever since a draft was opened for public comment in February, the recommendations have set off a fierce debate over not only how to teach math, but also how to solve a problem more intractable than Fermat's Last Theorem: closing the racial and socioeconomic disparities in achievement that persist at every level of math education.

The California guidelines, which are not binding, could overhaul the way many school districts approach math instruction. The draft rejected the idea of naturally gifted children, recommended against shifting certain students into accelerated courses in middle school and tried to promote high-level math courses that could serve as alternatives to calculus, like data science or statistics.

The draft also suggested that math should not be colorblind and that teachers could use lessons to explore social justice, for example by looking out for gender stereotypes in word problems, or applying math concepts to topics like immigration or inequality.

The battle over math comes at a time when education policy, on issues including masks, testing and teaching about racism, has become entangled in bitter partisan debates. The Republican candidate for governor in Virginia, Glenn Youngkin, seized on those issues to help propel him to victory on Tuesday. Now, Republicans are discussing how these education issues can help them in the midterm elections next year.

Even in heavily Democratic California, a state with six million public school students and an outsize influence on textbook publishing nationwide, the draft guidelines encountered scathing criticism, with charges that the framework would inject "woke politics" into a subject that is supposed to be practical and precise.

"People will really go to battle for maths to stay the same," said Jo Boaler, a professor of education at Stanford University who is working on the revision. "Even parents who hated maths in school will argue to keep it the same for their kids."

The battle over math pedagogy is a tale as old as multiplication tables. An idea called "new math," pitched as a more conceptual approach to the subject, had its heyday in the 1960s. About a decade ago, amid debates over the national Common Core standards, many parents bemoaned math exercises that they said seemed to dump line-by-line computation in favor of veritable hieroglyphs.

Today, the battles over the California guidelines are circling around a fundamental question: What, or whom, is math for?

Testing results regularly show that math students in the United States are lagging behind those in other industrialized nations. And within the country, there is a persistent racial gap in achievement. According to data from the civil rights office of the Education Department, Black students represented about 16 percent of high school students but 8 percent of those enrolled in calculus during the 2015-16 school year. White and Asian students were overrepresented in high-level courses.

"We have a state and nation that hates math and is not doing well with it," Dr. Boaler said.

Critics of the draft said the authors would punish high achievers by limiting options for gifted programs. An open letter signed by hundreds of Californians working in science and technology described the draft as "an endless river of new pedagogical fads" that effectively distort and displace actual math.

Williamson M. Evers, a senior fellow at the Independent Institute and a former official with the Education Department during the administration of George W. Bush, was one of the authors of the letter and objected to the idea that math could be a tool for social activism.

"I think that's really not right," he said in an interview. "Math is math. Two plus two equals four."

Distress over the draft made it to Fox News. In May, Dr. Boaler's name and photograph were featured on an episode of "Tucker Carlson Tonight," an appearance she did not know about until she began receiving nasty letters from strangers.

Like some of the attempted reforms of decades past, the draft of the California guidelines favored a more conceptual approach to learning: more collaborating and problem solving, less memorizing formulas.

It also promoted something called "de-tracking," which keeps students together longer instead of separating high achievers into advanced classes before high school.

The San Francisco Unified School District already does something similar. There, middle school math students are not split up but rather take integrated courses meant to build their understanding year by year, though older high school students can still opt into high-level classes like calculus.

Sophia Alemayehu, 16, a high school junior in San Francisco, advanced along that integrated track even though she did not always consider herself a gifted math student. She is now taking advanced calculus.

"In eighth and ninth grade, I had teachers tell me, 'Oh, you're actually really good at the material,'" she said. "So it made me think, maybe I'm good at math."

The model has been in place since 2014, yielding a few years of data on retention and diversity that has been picked over by experts on both sides of the de-tracking debate. And while the data is complicated by numerous variables, a pandemic now among them, those who support San Francisco's model say it has led to more students, and a more diverse set of students, taking advanced courses, without bringing down high achievers.

"You'll hear people say that it's the least common denominator that discourages gifted kids from advancing," Elizabeth Hull Barnes, the math supervisor for the district, said. "And then it's like, nope, our data refutes that."

But Dr. Evers, the former Education Department official, pointed to research suggesting that the data on math achievement in places like San Francisco was more cherry-picked than conclusive. He added that Californias proposed framework could take a more nuanced approach to de-tracking, which he saw as a blunt tool that did not take the needs of individual districts into account.

Other critics of de-tracking say it amounts to a drag on children who would benefit from challenging material and that it can hurt struggling students who might need more targeted instruction.

Divya Chhabra, a middle school math teacher in Dublin, Calif., said the state should focus more on the quality of instruction by finding or training more certified, experienced teachers.

Without that, she said, students with potential would quickly fall behind, and it would only hurt them further to take away options for advanced learning. "I feel so bad for these students," she said. "We are cutting the legs of the students to make them equal to those who are not doing well in math."

Tracking is part of a larger debate about access to college. Under the current system, students who are not placed in accelerated courses by middle school may never get the opportunity to take calculus, which has long been an informal gatekeeper for acceptance to selective schools.

According to data from the Education Department, calculus is not even offered in most schools that serve a large number of Black and Latino students.

The role of calculus has been a talking point among math educators for years, said Trena Wilkerson, the president of the National Council of Teachers of Mathematics. "If calculus is not the be-all, end-all thing, then we need everyone to understand what the different pathways can be, and how to prepare students for the future," she said.

Californias recommendations aim to expand the options for high-level math, so that students could take courses in, say, data science or statistics without losing their edge on college applications. (The move requires buy-in from colleges; in recent years, the University of California system has de-emphasized the importance of calculus credits.)

For now, the revision process has reached a sort of interlude: The draft is being revised ahead of another round of public comment, and it will not be until late spring, or maybe summer, that the states education board will decide whether to give its stamp of approval.

But even after that, districts will be free to opt out of the states recommendations. And in places that opt in, academic outcomes in the form of test scores, retention rates and college readiness will add to the stormy sea of data about what kinds of math instruction work best.

In other words, the conversation is far from over.

"We've had a really hard time overhauling math instruction in this country," said Linda Darling-Hammond, the president of California's board of education. "We cannot ration well-taught, thoughtful mathematics to only a few people. We have to make it widely available. In that sense, I don't disagree that it's a social justice issue."


Training students at the intersection of power engineering and computer science WSU Insider – WSU News

A WSU research team has received a $1.2 million U.S. Department of Education grant to train graduate students at the intersection of artificial intelligence (AI), data science, and engineering to address challenges of the future electric power grid.

Led by Assefaw Gebremedhin, associate professor in the School of Electrical Engineering and Computer Science, the Graduate Assistance in Areas of National Need (GAANN) grant aims to enhance teaching and research in areas of national need.

"AI and the closely related area of data science affect nearly everything that we do," said Gebremedhin. "We need to have power engineers who speak both languages, who are trained to be good power engineers and are also able to do good data science."

In recent years, the US power grid has been rapidly evolving from a network of centralized fossil fuel-powered generation plants to a system that includes more distributed generation and renewable resources. As power becomes more decentralized, traditional ideas about power grid operations have been changing.

Distributed assets need to be controlled and managed differently than in the past.

"Climate change is also leading to an increase in extreme weather events, which means that the power system has to be more resilient and operate under fast-changing conditions," says Gebremedhin. Changes in technology are also allowing customers to be more actively and directly involved in controlling their energy use.

These rapid transformations threaten power grid reliability, he said.

The US power industry is increasingly adopting machine learning and data analytics technologies to improve its reliability, resiliency, and efficiency.

Meanwhile, software that gets developed in the power industry as well as in many other engineering applications is increasingly getting more complex. Software engineers of the future would not only need to know how to build and maintain complex software, but they would also need to know how to extract knowledge from massive amounts of data and adapt that knowledge to consider different human factors.

As part of the grant, a total of eight U.S. PhD students will receive training, focusing on the application of AI and data science to power engineering and software engineering.

"The new workforce needs to be trained in traditional topics on electric and power engineering along with having an understanding of data science and machine learning, information and communication technology, and control and automation," he said.

With programs in power engineering, machine learning and AI, and software engineering, the School of EECS presents a unique opportunity to bridge the fields of computer science and power engineering.

"There are just a few schools in the country where you have these disciplines housed in the same school, which is a great asset," he said.

The three-year program will focus on recruitment of students from underrepresented groups in engineering and computer science, including women, Black and Hispanic students. In addition to Gebremedhin, the program is led by three women faculty members in electrical engineering and computer science: Anamika Dubey, Venera Arnaoudova, and Noel Schulz. The students will receive training in teaching and mentoring and will also have opportunities to participate in internships through the Pacific Northwest National Laboratory.


A look at some of the AI and ML expert speakers at the iMerit ML DataOps Summit – TechCrunch

Calling all data devotees, machine-learning mavens and arbiters of AI. Clear your calendar to make room for the iMerit ML DataOps Summit on December 2, 2021. Join and engage with AI and ML leaders from multiple tech industries, including autonomous mobility, healthcare AI, technology and geospatial to name just a few.

Attend for free: There's nothing wrong with your vision; the iMerit ML DataOps Summit is 100% free, but you must register here to attend.

The summit is in partnership with iMerit, a leading AI data solutions company providing high-quality data across computer vision, natural language processing and content that powers machine learning and artificial intelligence applications. So, what can you expect at this free event?

Great topics require great speakers, and we'll have those in abundance. Let's highlight just three of the many AI and ML experts who will take the virtual stage.

Radha Basu: The founder and CEO of iMerit leads an inclusive, global workforce of more than 5,300 people, 80% of whom come from underserved communities and 54% of whom are women. Basu has raised $23.5 million from investors, led the company to impressive revenue heights and has earned a long list of business achievements, awards and accolades.

Hussein Mehanna: Currently the head of Artificial Intelligence and Machine Learning at Cruise, Mehanna has spent more than 15 years successfully building and leading AI teams at Fortune 500 companies. He led the Cloud AI Platform organization at Google and co-founded the Applied Machine Learning group at Facebook, where his team added billions of revenue dollars.

DJ Patil: Formerly the U.S. Chief Data Scientist in the White House Office of Science and Technology Policy, Patil has experience in data science and technology that runs deep. He has held high-level leadership positions at RelateIQ, Greylock Partners, Color Labs, LinkedIn and eBay.

The iMerit ML DataOps Summit takes place on December 2, 2021. If your business involves data-, AI- and ML-driven technologies, this event is made for you. Learn, network and stay current with this fast-paced sector and do it for free. All you need to do is register. Start clicking.


Exploring, Monitoring and Modeling the Deep Ocean Are Goals of New Research – UT News – UT News | The University of Texas at Austin

AUSTIN, Texas - A team led by scientists from The University of Texas at Austin is attempting to boldly go where no man has gone before: the Earth's deepest oceans.

In the 1989 science fiction film "The Abyss," a search and recovery team is tasked with finding a lost U.S. submarine that has vanished somewhere deep in uncharted waters of the Atlantic Ocean. Although the team's discovery of an extraterrestrial species living on the ocean floor is imaginative, it did highlight how little we know about what may be present in the deepest parts of the Earth's oceans.

Water covers more than 70% of the planet's surface, but only 10% of the undersea world has been explored. Oceans provide about 90% of living space on the planet by volume. They also absorb more than 90% of the Earth's radiative heat imbalance, leading to ocean warming, and about a third of anthropogenic carbon dioxide emissions, leading to ocean acidification.

Now, more than 30 years since the release of "The Abyss," scientists have gained some new insights. For example, the deep ocean (below 200 meters, from the mesopelagic zone downward) could provide a vast repository for biodiversity, providing critical climate regulation and housing a wealth of hydrocarbon, mineral and genetic resources. Nevertheless, the deep ocean remains a mostly unknown realm of our planet. Deep-ocean habitats are under increasing pressure from climate change and human activities such as seafloor mining, fishing and contamination.

Through its Accelerating Research through International Network-to-Network Collaborations (AccelNet) program, the National Science Foundation is funding a team led by the Oden Institute for Computational Engineering and Sciences at UT Austin to implement a Deep-Ocean Observing Strategy (iDOOS). The initiative brings together U.S. and international networks engaged in deep-ocean observing, mapping, exploration, modeling, research and sustainable management to leverage each other's efforts, knowledge and resources.

"By connecting deep-ocean observers across disciplines, expanding the observing community to include nontraditional partners, and linking data providers to users, iDOOS will enhance the deep-ocean capabilities of the Global Ocean Observing System (GOOS) and target societal needs," said project lead Patrick Heimbach, director of the Computational Research in Ice and Ocean Systems group at the Oden Institute and faculty member at the Jackson School of Geosciences.

iDOOS will address several of the stated Challenges of the IOC United Nations Decade of Ocean Science for Sustainable Development (2021-2030), in particular the goal to "ensure a sustainable [deep] ocean observing system across all ocean basins that delivers accessible, timely, and actionable data and information to all users."

One of the first programs to be endorsed by the U.N. Ocean Decade initiative, the initiative also tackles another key challenge set: engaging with a range of stakeholders to develop or contribute to "a comprehensive ocean cyberinfrastructure that supports a digital-twin [deep] ocean, enabling applications from big data analytics to simulation-based science."

Through engagement with policymakers, regulators and science coordinators, iDOOS will raise awareness and support for deep-ocean science and bring science into critical decisions regarding climate, biodiversity and sustainability. It will foster a community of future leaders informed in deep-ocean observing, modeling, data science, sustainable development, and international law at a global level who are adept at communicating to regulators and policymakers, as well as to fellow scientists.

Heimbach and his research team at UT Austin will lead the project in partnership with experts from the Scripps Institution of Oceanography at the University of California San Diego, the Woods Hole Oceanographic Institution, the Monterey Bay Aquarium Research Institute, the University of Hawaii at Manoa, and The University of Rhode Island.


UVA Science and Engineering Faculty Win 12 NSF Career Awards – University of Virginia

From stopping deadly diseases to developing futuristic materials, from making self-driving vehicles smarter to studying global inequalities in pollution exposure, the University of Virginia's early career faculty are more deeply involved than ever in making people's lives safer, healthier and more efficient.

In 2021 so far, 12 UVA assistant professors have earned National Science Foundation Early Career Development Awards, among the most competitive and prestigious grants for science and engineering faculty in the first stages of their careers. That's up from eight CAREER Awards in 2020, and four to five awards per year before then.

"The CAREER Award is given to early career researchers who have the potential to make a significant impact through their careers as academic researchers and educators," Melur Ram Ramasubramanian, UVA's vice president for research, said. "Getting 12 of these prestigious awards for our faculty so far this year is impressive, and really shows the great talent we have across the University."

Meet the most recent UVA CAREER Award winners, whom NSF expects to become the next great leaders and role models in research and education.

In May 2020, Partners for Automated Vehicle Education shared results from a poll of 1,200 Americans about attitudes around autonomous vehicle technology. Three in four believed the technology was not ready for primetime; almost half indicated they would never ride in a self-driving car; and a fifth do not believe that autonomous vehicles will ever be safe.

The poll outlines the deep skepticism surrounding self-driving vehicles. Methods to improve and prove safety will be needed for broad-based acceptance. Behl's pioneering research at UVA is accelerating safety for autonomous vehicles.

Using auto racing as a platform, Behl has invented artificial intelligence methods to agilely maneuver an autonomous vehicle while pushing the limits of its steering, throttle and braking capabilities. His novel racing research is creating advanced algorithms that hold the key to safer autonomous vehicles, enabling them to avoid collisions even when they encounter unexpected challenges at high speeds while close to obstacles or other vehicles.

Demonstrating their skills in programming a full-sized, fully autonomous race car, Behl and his student Cavalier Autonomous Racing team clocked the fastest laps from a U.S. university team in the historic Indy Autonomous Challenge, held Oct. 23 at the Indianapolis Motor Speedway.

Fibrosis, the stiffening of normally soft or pliant living tissue, contributes significantly to about 40% of the overall deaths in the developed world.

"Yeah, it's a hell of a stat," Caliari said. "But that's because fibrosis, or chronic scarring, itself isn't a disease; it's an outcome of many different diseases."

The list includes some cancers, viral infections such as hepatitis, and idiopathic pulmonary fibrosis, a cruel condition in which scar tissue grows in the lungs, restricting the flow of oxygen. Idiopathic means the disease has no known cause. It has no cure, either.

Researchers like Caliari believe stopping or even reversing the progression of fibrosis is possible, but they need to know a lot more about what is happening in the body to make cells go from normal to a diseased state. Caliari is using biomaterials developed in his lab to open a window on that process.

Caliari's plans include partnering with the UVA chapter of the Society of Hispanic Professional Engineers to develop teaching modules on biomaterials concepts for elementary school students and initiating a high school summer research program involving labs in UVA's Fibrosis Initiative.

Holographic displays, color-changing frames and pliable screens are just a few of the innovations the next generation of smartphones may offer. And while engineers and coders will be responsible for making much of that technology possible, there's a good chance we'll also need to thank Gilliard.

Gilliard explores strategies for incorporating boron into chemical compounds to help him understand how to harness the element's unique capacity to carry and transfer electric charge and to produce the colors displayed by our cellphones and electronic devices.

"In collaboration with the chemical engineering department here at UVA, we've already started to explore the applications of some of these boron-based materials, and we're seeing that the utility is probably going to be pretty important going forward," Gilliard said.

His research may also make components in those devices more stable over time, less expensive to produce and less harmful to the environment.

"This is the technology that has resulted in your lights in your home lasting much longer than they did even five years ago," Gilliard said.

"We have a unique potential to be one of the leaders in this area of boron chemistry," he added. "There are not many people in the United States exploring these areas of chemistry, and increasing our ability to compete globally in this area of science is extremely important."

As we struggle to come to terms with the fact that more than half a million lives have been lost to COVID-19 in the United States, it can be easy to forget that nearly as many people die from malaria worldwide every single year.

Malaria is caused by a single-celled, mosquito-borne parasite, but like a virus, it can adapt to survive a variety of challenges that could wipe it out completely. Güler studies how the malaria parasite responds to changes in its environment that are hostile to its survival.

Over the course of its life cycle, the malaria parasite must be able to adapt to the conditions that allow it to survive in the body of a mosquito, in the liver of an infected host or in a host's bloodstream before it infects another mosquito. Evolution has also equipped it with the capacity to develop a resistance to the drugs that researchers develop to defeat it. Güler uses a powerful combination of laboratory studies and computational modeling to understand the complexities of the cellular behaviors that make the malaria parasite so resilient.

"The CAREER Award will help us look, specifically, at how the parasites respond to stress, so if they're in one of these new environments and it's stressful for them, maybe there's a limiting nutrient or a drug present and it's causing stress, what sort of programs are going on inside that cell that allow it to survive?" Güler said. "Ultimately, what we learn could help us find a better way to treat this disease."

The National Science Foundation places a priority on inventing new computing and networking technologies. The need is urgent, because such technologies will help researchers use big data sets to find solutions for complex global challenges.

The problem is that the amount of data available globally has outpaced the processing power needed to analyze it. International Data Corporation predicts that the collective sum of the world's data will grow to 175 zettabytes (175 trillion gigabytes) by 2025, a massive data explosion compared to the 4.4 zettabytes available in 2015.

Khan is developing revolutionary computer architectures that will make problem-solving with big data possible.

"Datasets are so large they must be broken up into bundles across multiple computers in a data center," Khan said. "Computations get bottlenecked as larger and larger data packets get moved from computer to computer in progression to a single processor."

Khan's research aims to redesign programmable switches and smart network interface cards to allow data to be processed in transit instead, a fundamental redesign of outdated computer infrastructure. Her research team has built the first prototype network that uses the revolutionary architecture, making data requests four times faster.

In the real world, this would mean people could update their social media or make online transactions, like purchasing tickets, lightning-fast compared to today.

"Reducing the amount of data that needs to be moved to that single point of processing dramatically speeds things up and fuels the entire system's capacity," Khan said. "We are expecting that processing in the reconfigured network will achieve more than 10-fold increases in processing speeds for scientific and machine-learning workloads."

Despite the fact that the STEM workforce has shown considerable growth in recent years, Black workers, and especially Black women, remain underrepresented in the fields of science, technology, engineering and math. And according to a study of trends in STEM degrees by the Pew Research Center, the gap is unlikely to narrow any time soon. For Seanna Leath, an assistant professor of psychology, universities have a critical part to play in addressing Black women's retention in STEM fields.

Leath's CAREER award will allow her to explore how improving the academic, social and psychological wellbeing of Black college women will help attract them to the study of STEM disciplines and allow them to thrive as students. Funding from the award will allow Leath to develop longitudinal surveys and interview tools to assess Black undergraduate women's experiences over a four-year period, and using the data she collects, she hopes to identify the most important factors affecting the motivation and retention of Black women in STEM degrees.

You might think that air pollution is an equal-opportunity threat, but there is a solid body of evidence suggesting that not everyone who lives in urban areas experiences the same level of exposure, which means that some communities are faced with a lower quality of life and a lower life expectancy.

Using a variety of airborne and ground-based data-collection methods, Pusede, an atmospheric chemist, is interested in advancing science's understanding of how variations in exposure to airborne pollutants occur in urban areas and why.

With the help of the CAREER award, Pusede will conduct field work in Dakar, Senegal, which will lead to the training of U.S. and Senegalese students in an international collaboration of physical and social scientists that will involve collecting and integrating scientific data and demographic information from a wide range of sources to shed light on inequalities in pollutant exposure and their consequences.

Pusede's project will also include the development of educational and public-outreach activities based on her research, including the development of a middle-school curriculum aimed at encouraging students' interest in the STEM fields and demonstrating how those fields can help advance the cause of environmental justice.

Did you know that a tuna is a super swimmer?

"They're really fast, they're really strong, they're big, they're at the top of their food chain without any natural predators. They're a model organism for roboticists because they're phenomenal swimmers," Quinn said. "Besides being fast, tuna dart back and forth very quickly, complex, high-speed maneuvers, and we're not sure how they do it."

Quinn is using his CAREER Award to find out. His Smart Fluids Systems Lab is using a tuna model rigged up to swim inside a tank to try to discover how liquids flow past the fish, a process called fluid dynamics, and govern high-speed, irregular or asymmetric swimming.

By mapping out these flows, bio-inspired roboticists, who currently have to rely on models of low-speed, regular or symmetric movements when designing and testing robots, will have the information they need to start modeling and designing fast, highly maneuverable water and aerial drones. Even though Quinn is studying swimming, the principles of fluid dynamics apply to water and air propulsion, so his research will inform both.

"We'll be creating the first-ever flow visualizations of bio-inspired robots darting side-to-side," Quinn said. "Our measurements could lay the groundwork for a new generation of intelligent swimming and flying machines."

According to the U.S. Energy Information Administration, approximately 5% of the energy that is generated by power plants in the country is lost to resistance in the power lines used to transmit energy to homes and businesses, and the complexity of the problem at the atomic level makes it difficult for researchers to understand exactly what properties of the electrons involved might lead to the ability to conduct current without resistance.

Direct imaging of electrons is almost impossible, but quantum simulation can shed light on the microscopic properties of these complex quantum systems. CAREER winner Peter Schauss, an assistant professor of physics who specializes in experimental atomic, molecular and optical physics, will use funding from the award to develop quantum simulations using atoms cooled to a few billionths of a degree and trapped in an artificial crystal of light. Using a quantum gas microscope with high-resolution imaging capabilities that will allow him to capture images of individual atoms, he'll search for answers that could lead to the next generation of superconductors.

In this modern age of materials, complex alloys lighten the weight of cars and planes to save fuel and help the environment, biomaterials replace human joints so we can remain active into our elder years, and graphene-coated smart screens put us in touch, literally, with individual creativity and global commerce.

These breakthroughs demonstrate the power of nanotechnology, a term introduced in 1974 to describe the precision machining of materials to the atomic scale. While experimentation, development and commercialization of nanomaterials has evolved, the textbook model describing how and when a material changes its form remains stuck in the 1970s.

Zhou has a plan to bring this model into the modern age and democratize materials design. He will use his CAREER Award to innovate a valuable tool in alloy development called the CALPHAD method, which stands for CALculation of PHAse Diagrams.

"My grand vision is to make computational tools easy to use and valuable for all materials scientists," Zhou said.

Editor's note: Two additional faculty members from UVA earned CAREER Awards, but are no longer with the University.


Microsoft Excel is still the data analytics gold standard. The pre-Black Friday sale can teach you fast. – The Next Web

TLDR: The Ultimate 2022 Pivot Tables and Dashboard in Excel Bundle brings all the pro tips of hardcore data analysis to any user for just $16.99.

Anybody can plunk some numbers into a rudimentary spreadsheet. That doesn't mean you're somehow now a Microsoft Excel expert. Not quite. That heritage business software has been around for decades because it's incredibly versatile, but if you don't understand some of the basics, then the true subtle power of Excel is lost.

Which brings us to pivot tables. If you don't fully grasp pivot tables, the way that Excel users extract important data and aggregate it for display from much larger data sets, then you don't really get Excel. With the coursework in The Ultimate 2022 Pivot Tables and Dashboard in Excel Bundle ($16.99 after code SAVE15NOV from TNW Deals), anyone with an eye for data, analytics, and data science can become a pivot table pro and understand all the ways that powerful function works for Excel users.

The collection includes three courses, packed with over 22 hours of learning that can take even first time Pivot Table users from the basics through to tips and tricks only Excel elite know how to achieve.

It all begins with Pivot Table for Beginners, your introduction to this interactive way of quickly summarizing large amounts of data. Users will get a feel for how pivot tables work, how to input data sets into those tables, and even how to clean your data so you can get the true, proper analysis that you want.
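For readers who think in code rather than spreadsheets, the same summarize-by-grouping idea that Excel pivot tables implement can be sketched with pandas. The sales figures below are made up, and the snippet is only an analogue of what the course covers, not course material.

```python
# A pandas analogue of what an Excel pivot table does: summarize a larger
# data set by grouping on fields and aggregating a value column. Data is made up.
import pandas as pd

sales = pd.DataFrame({
    "region":  ["East", "East", "West", "West", "West", "South"],
    "quarter": ["Q1", "Q2", "Q1", "Q2", "Q2", "Q1"],
    "revenue": [1200, 1350, 980, 1100, 1040, 870],
})

# Rows = region, columns = quarter, values = summed revenue (a pivot-table layout).
pivot = sales.pivot_table(index="region", columns="quarter",
                          values="revenue", aggfunc="sum", fill_value=0)
print(pivot)
```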

The training escalates with Advanced Pivot Tables, as learners drill deeper into this powerful data analysis function. From those original basics, this course elevates the training to give users a working understanding of features like advanced sorting, slicers, timelines, calculated fields, pivot charts, conditional formatting, and more.

Finally, Dashboards in Excel lets users take that data visualization to new levels. This in-depth course gets students exposure to some essential formulas needed to create dashboards in Excel, Pivot Tables, Pivot Charts, Form Controls and more.

The training walks users through creating their own sales and HR dashboards through the use of insightful step-by-step guides.

A $249 collection of training, users can pick up The Ultimate 2022 Pivot Tables and Dashboard in Excel Bundle now at one of its lowest prices of the year thanks to the current pre-Black Friday sale. When shoppers enter the code SAVE15NOV during checkout, buyers can get the complete package for just $16.99.

Prices are subject to change


Filings buzz: tracking artificial intelligence mentions in the automotive industry – just-auto.com

Credit: Michael Traitov/ Shutterstock

Mentions of artificial intelligence within the filings of companies in the automotive industry were 141% higher between July 2020 and June 2021 than in 2016, according to the latest analysis of data from GlobalData.

When companies in the automotive industry publish annual and quarterly reports, ESG reports and other filings, GlobalData analyses the text and identifies individual sentences that relate to disruptive forces facing companies in the coming years. Artificial intelligence is one of these topics - companies that excel and invest in these areas are thought to be better prepared for the future business landscape and better equipped to survive unforeseen challenges.

To assess whether artificial intelligence is featuring more in the summaries and strategies of companies in the automotive industry, two measures were calculated. Firstly, we looked at the percentage of companies which have mentioned artificial intelligence at least once in filings during the past twelve months - this was 86% compared to 57% in 2016. Secondly, we calculated the percentage of total analysed sentences that referred to artificial intelligence.
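A rough sketch of how the second measure might be reproduced on a single filing is shown below. The sentence splitting and keyword list are simplifications chosen for illustration, not GlobalData's actual methodology.

```python
# Rough sketch of counting keyword mentions in a filing, in the spirit of the
# two measures above. Sentence splitting and keywords are simplified assumptions.
import re

KEYWORDS = ("artificial intelligence", "machine learning")

def ai_mention_share(filing_text: str) -> tuple[int, float]:
    """Return (# sentences mentioning a keyword, share of all sentences)."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", filing_text) if s.strip()]
    hits = sum(any(k in s.lower() for k in KEYWORDS) for s in sentences)
    return hits, hits / len(sentences) if sentences else 0.0

sample = ("We invest in artificial intelligence for driver assistance. "
          "Revenue grew in the quarter. Machine learning supports quality control.")
print(ai_mention_share(sample))  # -> (2, 0.666...)
```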

Of the 50 biggest employers in the automotive industry, Yamaha Motor Co Ltd was the company which referred to artificial intelligence the most between July 2020 and June 2021. GlobalData identified 151 artificial intelligence-related sentences in the Japan-based company's filings - 2.2% of all sentences. Aisin Seiki Co Ltd mentioned artificial intelligence the second most - the issue was referred to in 1.9% of sentences in the company's filings. Other top employers with high artificial intelligence mentions included Denso Corp, Ford Motor Co and Toyota Boshoku Corp.

Across all companies in the automotive industry the filing published in the second quarter of 2021 which exhibited the greatest focus on artificial intelligence came from Ford Motor Co. Of the document's 1,720 sentences, 22 (1.3%) referred to artificial intelligence.

This analysis provides an approximate indication of which companies are focusing on artificial intelligence and how important the issue is considered within the automotive industry, but it also has limitations and should be interpreted carefully. For example, a company mentioning artificial intelligence more regularly is not necessarily proof that they are utilising new techniques or prioritising the issue, nor does it indicate whether the company's ventures into artificial intelligence have been successes or failures.

In the last quarter, companies in the automotive industry based in Asia were most likely to mention artificial intelligence with 0.32% of sentences in company filings referring to the issue. In contrast, companies with their headquarters in the United States mentioned artificial intelligence in just 0.17% of sentences.



The artificial intelligence in healthcare market is projected to grow from USD 6.9 billion in 2021 to USD 67.4 billion by 2027; it is expected to grow…

Many companies are developing software solutions for various healthcare applications; this is the key factor complementing the growth of the software segment. Strong demand among software developers (especially in medical centers and universities) and widening applications of AI in the healthcare sector are among the prime factors complementing the growth of the AI platform within the software segment.

New York, Nov. 05, 2021 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Artificial Intelligence in Healthcare Market by Offering, Technology, Application, End User and Geography - Global Forecast to 2027" - https://www.reportlinker.com/p04897122/?utm_source=GNW

Google AI Platform, TensorFlow, Microsoft Azure, Premonition, Watson Studio, Lumiata, and Infrrd are some of the top AI platforms.

The machine learning segment is expected to grow at the highest CAGR during the forecast period. The increasing adoption of machine learning technology (especially deep learning) in various healthcare applications such as inpatient monitoring & hospital management, drug discovery, medical imaging & diagnostics, and cybersecurity is driving the adoption of machine learning technology in the AI in healthcare market.

The medical imaging & diagnostics segment is expected to grow at the highest CAGR of the artificial intelligence in healthcare market during the forecast period. The high growth of the medical imaging and diagnostics segment can be attributed to factors such as the presence of a large volume of imaging data, advantages offered by AI systems to radiologists in diagnosis and treatment management, and the influx of a large number of startups in this segment.

The North America region is expected to hold the largest share of the artificial intelligence in healthcare market during the forecast period. Increasing adoption of AI technology across the continuum of care, especially in the US, and high healthcare spending, combined with the onset of the COVID-19 pandemic accelerating the adoption of AI in hospitals and clinics across the region, are the major factors driving the growth of the North American market.
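As a quick sanity check on the headline figures, the implied compound annual growth rate can be recovered from the two endpoints. The snippet below assumes 2021 to 2027 is treated as six compounding years, which may differ slightly from the report's own CAGR definition.

```python
# Recovering the implied compound annual growth rate (CAGR) from the two figures
# in the headline, assuming 2021 -> 2027 is treated as six compounding years.
start_value, end_value, years = 6.9, 67.4, 2027 - 2021

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # roughly 46% per year
```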

Break-up of the profiles of primary participants: by company type, Tier 1 40%, Tier 2 25%, and Tier 3 35%; by designation, C-level 40%, Director-level 35%, and other 25%; by region, North America 30%, Europe 20%, APAC 45%, and RoW 5%.

The key players operating in the artificial intelligence in healthcare market include Intel (US), Koninklijke Philips (Netherlands), Microsoft (US), IBM (US), and Siemens Healthineers (US).

The artificial intelligence in healthcare market has been segmented into offering, technology, application, end user, and region.

Based on offering, the market has been segmented into hardware, software, and services. Based on technology, the market has been segmented into machine learning, natural language processing, context-aware computing, and computer vision.

Based on application, the market has been segmented into patient data & risk analysis, inpatient care & hospital management, medical imaging & diagnostics, lifestyle management & monitoring, virtual assistants, drug discovery, research, healthcare assistance robots, precision medicine, emergency room & surgery, wearables, mental health, and cybersecurity. Based on end user, the market has been segmented into hospitals & healthcare providers, patients, pharmaceutical & biotechnology companies, healthcare payers, and others.

The artificial intelligence in healthcare market has been studied for North America, Europe, Asia Pacific (APAC), and the Rest of the World (RoW).

Reasons to buy the report: Illustrative segmentation, analysis, and forecast of the market based on offering, technology, application, end user, and region have been conducted to give an overall view of the artificial intelligence in healthcare market. A value chain analysis has been performed to provide in-depth insights into the artificial intelligence in healthcare market. The key drivers, restraints, opportunities, and challenges pertaining to the artificial intelligence in healthcare market have been detailed in this report. Detailed information regarding the COVID-19 impact on the artificial intelligence in healthcare market has been provided in the report. The report includes a detailed competitive landscape of the market, along with key players, as well as in-depth analysis of their revenues.

Read the full report: https://www.reportlinker.com/p04897122/?utm_source=GNW

About Reportlinker

ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.

