
Siri, Tell Fido To Stop Barking: What’s Machine Learning, And What’s The Future Of It? – 90.5 WESA

Machine learning is an integral part of Pittsburgh's tech economy, thanks to Carnegie Mellon University's position as one of the nation's foremost research centers on the topic. That's enticed tech giants such as Google and Uber to set up shop in the Steel City.

Pittsburghers have varying degrees of familiarity with what machine learning is.

On a crisp afternoon, Adeline Mercier of Squirrel Hill was walking with her young daughter on Carnegie Mellon University's campus. She said her husband works in machine learning.

"It's when you train a computer, for example, or a program to learn something," Mercier said. "To optimize something or to automize something."

Alex Xu of Oakland was a little more specific.

"It's like applied statistics used for understanding how patterns can get recognized," Xu said.

A handful of people, including CMU student Karen Abruzzo, were unfamiliar.

"I've heard of it, but I don't really know what it is," said Abruzzo.

Tom Mitchell, a professor of machine learning at Carnegie Mellon University, said machine learning aims to answer the question of how to make computers improve automatically with experience.

"Humans are the best learners, much better than machines are these days, but the idea is similar," Mitchell said. "For example, if you learn to play chess you start out knowing the rules but not the strategy, and you make mistakes and learn from those and become better."

Mitchell said computers learn similarly. For example, he said people know how to recognize their mother in a photograph but can't write down an algorithm for how. An algorithm is like a recipe for a computer, telling it step by step what to do.

"Today it's very easy to train a computer program to recognize your mother by showing it photographs, you say in this photograph, here's my mother, this photograph, my mother is not in this one," Mitchell said. "If you give it enough of those training examples, machine learning algorithms look at the details and they find what's common to the positive examples that distinguish it from the negative."

Mitchell said one of the earliest commercial uses of machine learning was credit card fraud detection, trained on hundreds of millions of examples of legitimate and illegitimate transactions. Such systems are still used today.

Mitchell said machine learning is also used to diagnose skin cancer in a blemish.

"Computers are now at least as accurate as carefully trained doctors," Mitchell said. "It's simply because it can examine more training data than people will see in a career."

In the future, Mitchell predicts machine learning will apply to more and more parts of life. He said it will likely become more similar to how people learn, too, especially when it comes to systems such as Siri and Alexa.

"I think in the future, you'll be able to use [smart devices] by saying, that sound you just heard was my dog barking, and whenever my dog barks and you don't hear me respond, I want you to say in my voice, 'It's okay Fido, calm down,'" Mitchell said. "I think in the coming decade we'll be able to teach them the same way you would teach me to do something if I was your assistant."


The Real Reason Your School Avoids Machine Learning – The Tech Edvocate

Education has embraced machine learning. Some educators, however, still find themselves reluctant to join their colleagues in its adoption. They understand that artificial intelligence can improve job performance, but they shun machine learning altogether. They would rather dig in and do everything themselves even though machine learning could increase their job satisfaction.

Machine learning provides text analysis, automated grading, and tutoring systems. These benefit students and teachers. As the scope and quality of artificial intelligence continue to improve, we'll continue to see it included more routinely in every aspect of instruction.

Many teachers rely on machine learning for testing, customizing instruction, and even predicting academic achievement outcomes. Their jobs have become more streamlined, and technology helps them reach more students.

Not all teachers, however, have been quick to accept the inevitable inclusion of machine learning in their classrooms. They analyze data on their own, rely on traditional classroom management strategies, and customize learning themselves. This adherence to past practices can be cumbersome and time-consuming. Teachers' unwillingness to embrace change has left them and possibly their students behind.

Even intelligent software applications like Google Assistant and Apple's Siri take a back seat in the classroom when the teacher won't experiment with new technology.

Some educators aren't interested in what machine learning can do for them.

How machine learning helps the classroom

Teachers find themselves responsible for meeting more demanding expectations with each passing year. They must accommodate a variety of learning styles. They have to create scaffolded instruction for every student. All of this is in addition to their other responsibilities for the safety, health, and well-being of their students.

Teachers do not have enough time in the day to do it all.

As a result, today's classrooms require that educators demonstrate immense adaptability and flexibility. Academic expectations are changing continuously. To meet these new standards, education leaders revise learning expectations and increase rigor. These revisions require new instructional methods and resources to support them.

Machine learning provides the extra boost teachers need to educate the whole child and meet new standards. Intelligent automation assists educators in the pursuit of what's best for every child in their classrooms. It makes teaching easier.

Why would teachers run from something that helps them do their jobs?

Job replacement concern

The real reason your school avoids machine learning is fear.

Some teachers worry that artificial intelligence will make their skills obsolete, that machine learning will take over traditional teacher tasks, and that AI will ultimately replace teachers. No educator wants to lose their job, especially not to artificial intelligence.

It's up to us to help our teachers understand that machine learning will not replace them. We need our teachers more than ever, especially as artificial intelligence becomes more prevalent. Teachers bring immense value to the classroom. They bring human empathy. Machine learning performs the most routine instructional tasks so that teachers can fulfill their roles as inspirational role models for their students.

No amount of artificial intelligence will ever replace that.


Podcast: How artificial intelligence, machine learning can help us realize the value of all that genetic data we’re collecting – Genetic Literacy…

Everywhere we look these days, someone is talking about the potential for artificial intelligence and machines to change the face of healthcare and biotechnology. Certainly, there are the pie-in-the-sky ideas about replacing human doctors with robots. But on the more realistic level, we are using the technology to rapidly sift through the vast amounts of genetic information we've collected to find associations and links between genetic mutations and various diseases, disorders and health risks.

"Certainly AI is one of the biggest buzzwords that's around today," said Gabe Musso, chief scientific officer at BioSymetrics. "But oftentimes, when people are talking about artificial intelligence, what they are really talking about is machine learning. Machine learning is a process that's been around for a very long time. It's basically pattern identification. When we get into AI, it's about how we can make the process autonomous."

In this episode of Talking Biotech, Musso joins plant geneticist and host Kevin Folta to talk about artificial intelligence and machine learning and how emerging technologies can be used to examine complex data sets in the quest to find patterns that can give us new perspectives in biology. Musso takes these complex concepts and makes them understandable, while describing ways they may be applied in contemporary contexts.

Gabe Musso is the chief scientific officer at BioSymetrics. Follow him on Twitter @gabe_musso

Kevin M. Folta is a professor in the Horticultural Sciences Department at the University of Florida. Follow professor Folta on Twitter @kevinfolta and email your questions to [emailprotected]

The Talking Biotech podcast, produced by Kevin Folta, is available for listening or subscription:

Apple Podcasts | Android | Email | Google Podcasts | Stitcher | RSS | Player FM | Pod Directory | TuneIn


Verification In The Era Of Autonomous Driving, Artificial Intelligence And Machine Learning – SemiEngineering

The importance of data is changing traditional value creation in electronics and forcing recalculations of return on investment.

The last couple of weeks have been busy with me participating on three panels that dealt with AI and machine learning in the contexts of automotive and aero/defense, in San Jose, Berlin and Detroit. The common theme? Data is indeed the new oil, and it messes with traditional value creation in electronics. Also, requirements for system design and verification are changing and there are completely new, blank-sheet opportunities that can help with verification and confirmation of what AI and ML actually do.

In the context of AI/ML, I have often used the movie quote, "I think I am 90% excited and 10% scared... oh wait, perhaps I am 10% excited and 90% scared." The recent panel discussions that I was part of did not help that much.

First, I was part of a panel called Collaborating to Realize an Autonomous Future at Arm TechCon in San Jose. The panel was organized by Arm's Andrew Moore and my co-panelists were Robert Day (Arm), Phil Magney (VSI Labs) and Hao Liu (AutoX Inc.). Given the technical audience, questions were centered around how to break down hardware development for autonomous systems, how the autonomous software stack could be divided between developers, whether compute will end up being centralized or decentralized, and what the security and safety implications of a large amount of collaboration would be. Basically, the discussion boiled down to a changing industry structure with new dynamics of control in the design chain.

For the second panel I was in Berlin, which was gearing up for celebrations of the 30th anniversary of the fall of the Berlin Wall. The panel title could roughly be translated as If training data is the oil for digitalization enabled by artificial intelligence, how can the available oil be used best? The panel was organized by Wolfgang Ecker (Infineon) and my co-panelists were Erich Biermann (Bosch), Raik Brinkmann (OneSpin), Matthias Kstner (Microchip), Stefan Mengel (BMBF) and Herbert Taucher (Siemens). Discussion points here were centered around ownership of data, whether users would be willing to share data with tool vendors and whether this data could be trusted to be complete enough in the first place.

The third panel took place in Detroit, from which I just returned. It took place at the Association of the United States Army (AUSA) Autonomy and AI Symposium. Moderated by Major Amber Walker, my co-panelists were Margaret Amori (NVIDIA), BG Ross Coffman (United States Army Futures Command) and Ryan Close (United States Army Futures Command, C5ISR Center). Questions here were centered on lessons learned from civilian autonomous vehicles and what the differences between civilian and Army customization needs are. We discussed advances in hardware and how ready developers are for new sensors and compute; resilience, trust and the new vulnerabilities to cyber-attacks that AI would introduce; as well as design for customization and how the best of both worlds, custom and adaptable, can be achieved.

Discussions and opinions were diverse, to say the least. Two big take-aways stick with me.

First, data really is the new oil! It needs protection: security and resilience are crucial in an Army context, in which data in the enemy's hands could have catastrophic consequences, and privacy is crucial in civilian applications as well. Data also changes the value chain in electronics. As I had written before in the context of IoT, the value really has to come from the overall system perspective and cannot be assigned to individual components alone. In a system value chain of sensors, network, storage and data, one may decide to give away the tracker if the data it creates allows value creation through advertisement. Calculating return on investment is becoming much more complicated.

Second, verification of what these neural networks actually do (and do not do) is becoming critical. I had mused in the past about a potential "Revenge of the Digital Twins," but these recent panel discussions have emphasized to me that the confirmability of what a CNN/DNN in an AI actually does is seen as critical by many. In both automotive and Army contexts, the safety of the car and the human lives involved is at risk if we cannot really confirm that the AI cannot be tricked. Examples that demonstrate how easy it is to trick self-driving cars by defacing street signs make me worry quite a bit here.

That said, from challenges come opportunities. Verification of CNN/DNNs and their associated data sets will likely become an interesting new market in itself; I am definitely watching this space.


Machine Learning with R, Third Edition – Free Sample Chapters – Neowin

Claim your complimentary sample before the offer expires.

Expert techniques for predictive modeling.

Machine learning, at its core, is concerned with transforming data into actionable knowledge. R offers a powerful set of machine learning methods to quickly and easily gain insight from your data.

Machine Learning with R, Third Edition provides a hands-on, readable guide to applying machine learning to real-world problems. Whether you are an experienced R user or new to the language, Brett Lantz teaches you everything you need to uncover key insights, make new predictions, and visualize your findings.

This new 3rd edition updates the classic R data science book with newer and better libraries, advice on ethical and bias issues in machine learning, and an introduction to deep learning. Find powerful new insights in your data; discover machine learning with R.

Please ensure you read the terms and conditions to download this free resource. Complete and verifiable information is required in order to receive this free offer. If you have previously made use of these free offers, you will not need to re-register.



Workday talks machine learning and the future of human capital management – ZDNet

Because people are the most important resources in any organization, human capital management (HCM) is essential in every large enterprise. Changes in technology -- from mobile devices to AI -- are having a profound impact on how people in business work and interact.


To glimpse the future of HCM technology and the role of machine learning, I spoke with Cristina Goldt, vice president for HCM product management and strategy at Workday. Cristina is a prominent HCM technology leader who is helping to shape human capital management. Our conversation took place at Workday Rising 2019, the company's annual customer event, held this year in Orlando.

Watch the video embedded above to see the future of HCM and read the edited transcript below. I recorded this video as part of the CxOTalk series of conversations with the world's top innovators in business, technology, government, and higher education.

We see technology -- artificial intelligence, machine learning -- changing work, changing jobs, and the relationship between people and machines. We see the world of work and alternative arrangements and agile teams. We see all of that playing into how work gets done.

And, very importantly, we see skills becoming a key factor or driver in how people are thinking about their workforce, their talent, and executing on their talent strategy.

We need to support them. They're looking for how they support their people in this ever-changing world of HR and world of work. And so for us, it's how do we help them become those enterprises in the future to identify, develop and optimize talent. Scale and speed are what we endeavor to help them with.

I think executing on our account strategy, where we're trying to match talent to talent demand, which is the work. For us, it's how do we take that data foundation, that rich foundation that we have, and build on it. I talked about skills earlier. It really is about building that skills and capability foundation, which we did with using machine learning.

It's a common language of skills across all of our customers. And most importantly, if you think of software as a service, it's skills as a service because it's crowdsourced. It dynamically lives and breathes and grows based on data. We've solved the data challenge of understanding, categorizing and continually keeping your skills updated.

The next thing was to solve the challenge of [knowing] what skills my people have. Taking machine learning to do that, to infer skills, matching people to work.

In the past, it has been very manual and challenging, if not almost impossible, because there wasn't a common language or way to do the matching. The technology wasn't there. Today that technology is there to sort through the huge volumes of data, to understand skills.

Data is the foundation to make this happen. At Workday, we started with that core system of record, which became that core foundation of data. Which now moves to a core system of capabilities. You now have data about your people that you can take action on, make recommendations. Using machine learning to make suggestions and make all of this happen. It doesn't happen without the data. That data foundation gets us to the next step.

The data comes from the billions of transactions and thousands of dimensions of the over 40 million workers in Workday.

Data is an important part of the future of work and a foundation for all the things we're going to do next.

Disclosure: Workday covered most of my travel to Workday Rising.


The transformation of healthcare with AI and machine learning – ITProPortal

AI and ML solutions are already being used by thousands of companies with the goal of improving the healthcare experience. For example, Babylon Health is changing the way we manage and understand health. Founder Ali Parsa developed the app in 2013 with a mission of providing accessible and affordable healthcare to every individual on earth. Babylon's AI system has been designed to understand and recognise the way humans express their medical symptoms; it can interpret symptoms and medical questions through a chatbot interface and match them to the most appropriate service. It can recognise most healthcare issues seen in primary care and provide information on next steps to take.

The conversation around artificial intelligence (AI) and machine learning (ML) in healthcare continues to grow. Research in cutting-edge areas like machine learning continues to demonstrate that computers have the potential to predict outcomes and optimise clinical operations in a wide variety of settings.

Healthcare stands poised for a transformation driven by AI and ML, and fuelled by an abundance of data sources: electronic health records, claims data, genomic sequences, mobile devices, medical imaging, and even embedded sensor data.

Data is the fundamental raw material required to power AI and ML systems, and is an essential ingredient that enables healthcare organisations to increase efficiency, improve outcomes, and enhance quality of life for both patients and providers.

While the demands of treating patients and developing new therapies often relegate data collection and analysis to a back burner in healthcare, new tools enable developers to integrate ML and other capabilities easily into the routine process of developing and delivering treatments. Far from being an exclusive province of researchers and technology companies, AI and ML are now accessible to all.

As these use cases expand, success is dependent on several ingredients. First, such initiatives require large quantities of carefully curated, high-quality data, which may be hard to come by in healthcare where data is often complex and unstructured. High-quality data sets are required not only to operate AI and ML-driven systems, but even more importantly, to feed the training models upon which they are built.

Second, these systems need to be optimised for the compute-intensive jobs typically required by AI applications. And finally, IT resources supporting AI applications must comply with industry standards and regulations and adhere to the highest security and privacy standards to protect patient and other sensitive data.

One company that has successfully rooted itself in developing and curating its data is Touch Surgery. The company is transforming professional healthcare training through a unique platform that links mobile apps with a powerful data back-end. Touch Surgery uses cognitive mapping techniques coupled with cutting-edge AI and 3D rendering technology to codify surgical procedures. It has partnered with leaders in virtual reality and augmented reality to work toward a vision of advancing surgical care in the operating room. With over 1 million users, the firm is recording vast amounts of usage data to power its data analytics product, which in turn allows users to learn and practice over 50 surgical procedures, evaluate and measure progress, and connect with physicians across the world.

A crucial technology that provides the storage capacity, compute elasticity, security, and analytic capabilities needed to implement AI and ML and drive innovation is cloud computing. Cloud computing platforms make it easy to ingest and process data, whether structured, unstructured, or streaming, and they simplify the process of building, training, and deploying machine learning-based models. Healthcare organisations that can use cloud computing to make themselves more efficient and effective will be the most successful in the coming years, particularly as the industry shifts to value-based care.

For the National Health Service (NHS), AI and ML are having a huge impact on its ability to cut costs, while improving patient services. The NHS is the UK's largest employer and health provider. NHS Business Services Authority (NHS BSA), a Special Health Authority and an Arm's Length Body of the Department of Health and Social Care, provides a range of critical central services to NHS organisations, NHS contractors, patients and the public. As such, the NHS BSA's call centre staff handle around five million calls per year. The organisation decided to implement a cloud-based contact centre and deep learning chatbot service using Amazon Connect and Amazon Lex to help improve the user experience, reduce call centre load, increase efficiency and cut costs. By moving to the cloud, the NHS BSA has identified around $650,000 in cost savings per annum from a reduction in average call times alone.

Healthcare companies, whether established or new start-ups, are increasingly looking to AI and ML to drive innovation and transformation at their company and across the healthcare industry. These organisations share a common goal of reducing time to discovery and insight, improving care quality and enhancing the patient and provider experience. As the availability and volume of data sources continue to grow, the essential ingredients for AI and ML success will remain the same: high-quality data, cloud computing to remove undifferentiated heavy lifting, and ML services accessible to everyday developers. Once these foundational elements are established, AI and ML have the potential to power more efficient and effective care, enhanced decision making and the ability to drive greater value for patients and providers.

Shez Partovi, M.D., Director of Global Business Development, Healthcare, Life Sciences and Agricultural Technology, AWS


Synthetic Data: The Diamonds of Machine Learning – TDWI


Refined and labeled data is imperative for advances in AI. When your supply of good data does not match your demand, look to synthetic data to fill the gap.

We have all heard the saying, "Diamonds are a girl's best friend." This saying was made famous by Marilyn Monroe in the 1953 film Gentlemen Prefer Blondes. The unparalleled brilliance and permanence of the diamond contribute to its desirability. Its unique molecular structure results in its incredible strength, making it highly desirable not only as jewelry that looks beautiful but also for industrial tools that cut, grind, and drill.

However, the worldwide supply of diamonds is limited as they take millions of years to form naturally. In the middle of the last century, corporations set out to determine a process to produce lab-grown diamonds. Over the past 70 years, scientists have not only been able to replicate the strength and durability of natural diamonds but, more recently, have been able to match the color and clarity of natural diamonds as well.

Just as in the case of diamonds in the mid-twentieth century, today there is a mismatch between the supply of and demand for the high-quality data needed to power today's artificial intelligence revolution. Just as the supply of coal did not equal the supply of diamonds, today's supply of raw data does not equal the supply of refined, labeled data, which is needed to power the training of machine learning models.

What is the answer to this mismatch of supply and demand? Many companies are pursuing lab-generated synthetic data that can be used to support the explosion of artificial intelligence.

The goal of synthetic data generation is to produce sufficiently groomed data for training an effective machine learning model -- including classification, regression, and clustering. These models must perform equally well when real-world data is processed through them as if they had been built with natural data.

Synthetic data can be extremely valuable in industries where the data is sparse, scarce, or expensive to acquire. Common use cases include outlier detection or problems that deal with highly sensitive data, such as private health-related problems. Whether challenges arise from data sensitivity or data scarcity, synthetic data can fill in the gaps.

There are three common methods of generating synthetic data: enhanced sampling, generative adversarial networks, and agent-based simulations.

Enhanced Sampling

In problems such as rare disease detection or fraud detection, one of the most common challenges is the rarity of instances representing the target for which you are searching. Class imbalance in your data limits the ability of the machine learning model to be accurately trained. Without sufficient exposure to instances of the minority class during training, it is difficult for the model to recognize instances when evaluating production data. In fraud cases, if the model is not trained with sufficient instances of fraud, it will classify everything as non-fraudulent when deployed in production.

To balance your data, one option is to either over-sample the minority class or under-sample the majority class to create a synthetic distribution of the data. This ensures that the model sees an equal balance of each class during training. Statistical professionals have long used this method to address class imbalance.
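As a minimal sketch of the over-sampling option, the snippet below uses scikit-learn's resample utility to duplicate minority-class records until the two classes balance. The synthetic "fraud" data and the roughly 1% minority rate are assumptions for illustration only, not taken from any real dataset mentioned in the article.

```python
# Illustrative over-sampling of a rare "fraud" class (synthetic data).
import numpy as np
from sklearn.utils import resample

rng = np.random.default_rng(42)

# Imbalanced toy dataset: ~1% fraud, ~99% legitimate transactions.
X_legit = rng.normal(0.0, 1.0, size=(9900, 8))
X_fraud = rng.normal(2.0, 1.0, size=(100, 8))

# Over-sample the minority class (sampling with replacement)
# until it matches the size of the majority class.
X_fraud_up = resample(X_fraud, replace=True, n_samples=len(X_legit), random_state=42)

X_balanced = np.vstack([X_legit, X_fraud_up])
y_balanced = np.array([0] * len(X_legit) + [1] * len(X_fraud_up))

print("class counts after over-sampling:", np.bincount(y_balanced))
```

Under-sampling works the same way in reverse: draw fewer majority-class records instead of duplicating minority-class ones, trading some information for balance.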


Machine Learning Improves Performance of the Advanced Light Source – Machine Design

Synchrotrons, such as the Advanced Light Source (ALS) at the Department of Energy's Lawrence Berkeley National Laboratory, can generate light in a wide variety of frequencies by accelerating electrons until they emit a controlled beam of light. Scientists use the controlled and uniform light beams to peer into materials to learn more about biology, chemistry, physics, environmental science, and, of course, materials science.

The more intense and uniform a synchrotrons light beam is, the more information scientists can get from their experiments. And over the years, researchers have devised ways to upgrade their synchrotrons to produce brighter, more-consistent light beams that let them make more complex and detailed studies across a broad range of sample types.

This image shows the profile of an electron beam at Berkeley Lab's Advanced Light Source synchrotron represented as pixels measured by a charge-coupled-device sensor. Some experiments require that the light-beam size remain stable on time scales ranging from less than seconds to hours to ensure reliable data. (Image: Lawrence Berkeley National Laboratory)

But some light-beam properties still fluctuate, posing a challenge for certain experiments.

That changed recently when a large team of researchers at Berkeley Lab and UC Berkeley developed a method of using machine learning to improve the stability of the synchrotron light beam's size by using an algorithm to make adjustments that largely cancel out these fluctuations, reducing them from a level of a few percent down to 0.4%, with submicron (below 1 millionth of a meter) precision.

Machine learning, a form of artificial intelligence, uses a computer to analyze a set of data to build predictive programs that solve complex problems. The machine learning used at the ALS is referred to as a neural network because it recognizes patterns in data in a way loosely resembling the way a human brain does.

This chart shows how vertical beam-size stability greatly improves when a neural network is implemented during Advanced Light Source operations. When the so-called feed-forward correction is used, fluctuations in the vertical beam size are stabilized down to the sub-percent level (see yellow-highlighted section) from levels that otherwise range to several percent. (Credit: Lawrence Berkeley National Laboratory)

During development, researchers fed electron beam data from the ALS, which included the positions of the magnetic devices used to produce light from the electron beam, into the neural network. The neural network recognized patterns in this data and identified how different device parameters affected the width of the electron beam. The machine-learning algorithm also recommended adjustments to the magnets to improve the electron beam.

The machine learning technique suggested changes to the way the magnets are constantly adjusted in the ALS that compensate in real-time for fluctuations in the various beams the ALS can create simultaneously. In fact, the improvements are refinements of alterations made back in 1993.
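The article does not publish the ALS control code, but the feed-forward idea can be sketched: train a small neural-network regressor to predict the beam-size disturbance from the magnet parameters, then subtract the prediction before the disturbance shows up in the beam. The synthetic data, the simple linear "physics," and the network size below are assumptions for illustration; only the rough count of 35 parameters comes from the article.

```python
# Sketch of a feed-forward correction loop: predict the beam-size
# disturbance from magnet parameters, then counteract it.
# (Synthetic data; the parameter count and physics are stand-ins.)
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

n_samples, n_params = 5000, 35          # ~35 device parameters, per the article
magnet_settings = rng.uniform(-1, 1, size=(n_samples, n_params))

# Pretend the beam-size fluctuation is an unknown function of the settings.
true_weights = rng.normal(size=n_params)
beam_size_disturbance = magnet_settings @ true_weights + 0.05 * rng.normal(size=n_samples)

# Train a small neural network to predict the disturbance from the settings.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=1)
net.fit(magnet_settings, beam_size_disturbance)

# Feed-forward correction: subtract the predicted disturbance before it appears.
current_settings = rng.uniform(-1, 1, size=(1, n_params))
predicted = net.predict(current_settings)[0]
residual = (current_settings @ true_weights)[0] - predicted
print(f"predicted disturbance {predicted:+.3f}, residual after correction {residual:+.3f}")
```

In the real machine the corrections are applied several times per second, so the prediction step must be fast enough to run inside that control loop.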

The algorithm-directed ALS can now make corrections at a rate of up to 10 times per second, though three times a second appears to be adequate for improving performance at this stage.

An exterior view of the Advanced Light Source dome that houses dozens of beamlines. (Credit: Roy Kaltschmidt/Berkeley Lab)

The changes narrowed the focus of light beams from around 100 microns down to below 10 microns. Scientists already know that the newest upgrade reduced artifacts in images from X-ray microscopes that use the light beam. This makes the ALS suitable for advanced X-ray techniques such as ptychography, which can resolve the structure of samples down to the level of nanometers, and X-ray photon correlation spectroscopy, or XPCS, which lets scientists study rapid changes in highly concentrated materials that don't have a uniform structure.

"Machine learning fundamentally requires two things: the problem needs to be reproducible, and you need huge amounts of data," says Simon Leemann, leader of the machine learning project at the ALS. "We realized we could put all of our data to use and have an algorithm recognize patterns. The problem consisted of roughly 35 parameters, way too complex for us to figure out ourselves."


Rad AI Raises $4M to Automate Repetitive Tasks for Radiologists Through Machine Learning – HIT Consultant

Rad AI raises $4M in seed funding led by Gradient Ventures, Google's AI-focused venture fund, to transform radiology with the latest advances in AI and save radiologists 60+ minutes a day.

By streamlining existing workflow and automating repetitive manual tasks, Rad AI increases daily productivity while reducing radiologist burnout.

Rad AI provides more consistent radiology reports for ordering clinicians, and higher accuracy for the patients it serves.

Berkeley-based Rad AI, a digital health startup using machine learning to transform the practice of radiology, today announced its company launch and a $4 million seed round led by Gradient Ventures, Google's AI-focused venture fund. Investors UP2398, Precursor Ventures, GMO Venture Partners, Array Ventures, Hike Ventures, Fifty Years VC and various angels also participated in this round.

Today, radiology groups face increased competition and unrelenting market consolidation. While keeping up with the growing demand and complexity of their workflows, radiologists continue to struggle with meeting RVU goals. In addition, there is a drastic and growing shortage of radiologists. According to WHO, two-thirds of the world does not have access to radiology services. In areas that do, radiologist burnout, error rates, and turnaround times continue rising. The result: overloaded radiologists, crumbling medical workflows, and inadequate patient care.

Designed by Radiologists, for Radiologists

Rad AI was founded by radiologists who understand these pressures firsthand. Founder Dr. Jeff Chang, the youngest radiologist and second youngest doctor on record in the US, was troubled by high error rates, radiologist burnout, and rising imaging demand amid a worsening shortage of US radiologists, so he decided to pursue graduate work in machine learning to identify ways that AI could help. After he met serial entrepreneur Doktor Gurson, they created Rad AI in 2018 at the intersection of radiology and AI. Built by radiologists, for radiologists, Rad AI is transforming the field of radiology with the inside perspective as its driving force.

Radiology is facing severe pressures that range from falling reimbursements to market consolidation. There is also a radiologist shortage that is exacerbated by rising imaging volumes nationwide. We help radiology groups significantly increase productivity, while reducing radiologist burnout and improving report accuracy. By working closely with radiologists, we can make a positive impact on patient care, said Dr. Chang.

AI-Driven Solution Saves Radiologists 60+ Minutes a Day

With both radiology and AI expertise on its team, Rad AI builds products that maximize radiologist productivity, ultimately making healthcare more widely accessible and improving patient outcomes. By streamlining existing workflows and automating repetitive manual tasks, the product reduces radiologist burnout and saves radiologists an average of more than 60 minutes per day.

Using state-of-the-art artificial intelligence, the solution automatically generates report impressions customized to each radiologist's exact language. This means 35% fewer words dictated, more consistent reports and recommendations, and decreased radiologist burnout.
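Rad AI's models are proprietary and not described in this announcement. Purely as an illustration of automated impression generation, the sketch below condenses a findings section into a short impression using an off-the-shelf abstractive summarizer from the Hugging Face transformers library; the sample findings text and the default summarization model are assumptions, not Rad AI's approach.

```python
# Rough illustration only: summarize report findings into a short impression
# with a generic pretrained summarizer (not Rad AI's proprietary models).
from transformers import pipeline

findings = (
    "Lungs are clear without focal consolidation, effusion, or pneumothorax. "
    "Heart size is normal. No acute osseous abnormality is identified."
)

summarizer = pipeline("summarization")  # downloads a default summarization model
impression = summarizer(findings, max_length=40, min_length=5, do_sample=False)
print(impression[0]["summary_text"])
```

A production system would also need to mirror each radiologist's preferred phrasing and recommendation style, which is the customization the company highlights.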

Traction/Milestones

Rad AI's current partners include Greensboro Radiology, Medford Radiology, Einstein Healthcare Network, and BICRAD, the 8th largest private radiology group in the United States, as well as other radiology groups that have yet to be announced. Product rollouts have demonstrated an average of 20% time savings on the interpretation of CTs and 15% time savings on radiographs, translating into an hour a day saved for each radiologist. Rad AI plans to use the latest capital to build out its engineering team and expand the rollout of its first product to more radiology groups and customers.
