
AI Health Coaches Are Coming Soon to a Device Near You – TIME

Ten years ago, the idea of tracking your footsteps or your heartbeat was weird. Those dedicated to the pursuit of quantified-self knowledge proselytized in TED Talks, while journalists attended conferences and reported on the strange new trend. Today, over 40% of households in the U.S. own a wearable device, according to statistics service Statista. It is not uncommon to hear retirees comparing or boasting about their step count for the day. The quantified self is ascendant.

Now, as artificial intelligence's relentless advance continues, researchers and technologists are looking for ways to take the next step: to build AI health coaches that sift through health data and tell users how to stay fighting fit.

There's a lot of evidence to suggest that wearables do offer at least some benefits. A review of scientific studies from 2022 found that, across over 160,000 participants in all the studies included, people who were assigned to wear activity trackers took roughly 1,800 more steps each day, which translated to a weight loss of around two pounds.

"Wearables change behavior in a number of ways: by prompting users to set goals, allowing them to monitor things they care about, and by reminding them when they're not on track to meet their goals," says Carol Maher, a professor of population and digital health at the University of South Australia and a co-author of the review.

These effects often fade with time, however, says Andrew Beam, an assistant professor in the Department of Epidemiology at the Harvard T.H. Chan School of Public Health, who researches medical artificial intelligence.

Accurately detecting the measures that we care about from signal inputs (determining step count from a wrist-worn accelerometer, for example) requires AI, but a banal, unsexy type, says Shwetak Patel, professor in computer science and engineering at the University of Washington and director of health technologies at Google. But, he adds, there is much more it can already do: "AI can stretch the capability of that sensor to do things that we may not have thought were possible." This includes features currently available on popular wearable devices, such as fall detection and blood oxygen detection. Some researchers are trying to use the relatively basic health data provided by wearables to detect disease, including COVID-19, although typically not to the same level of accuracy as devices used in clinical settings.

So far, AI has played a supporting role in the rise of the quantified self. Researchers are hoping to make use of recent advances to put AI on center stage.

Patel recently co-authored a paper in which researchers fed data from wearables into large language models, such as OpenAI's GPT series, and had the models output reasoning about the data that could be useful for clinicians seeking to make mental health diagnoses. For example, if a study participant's sleep-duration data were erratic, the AI system would point this out and then note that erratic sleep patterns can be an indicator of various issues, including stress, anxiety, or other disorders.
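To make the idea concrete, here is a minimal, hypothetical sketch of that kind of pipeline: wearable readings are summarized into a text prompt that a language model could then reason over. The `query_llm` helper and the sleep values are invented for illustration; the study's actual prompts, models, and data are not shown here.

```python
# Hypothetical sketch: summarize wearable sleep data into a prompt an LLM
# could reason over, loosely in the spirit of the study described above.
# `query_llm` stands in for whatever chat/completions API is available;
# the data values are invented for illustration.

from statistics import mean, stdev

def build_sleep_prompt(sleep_hours: list[float]) -> str:
    """Turn raw nightly sleep durations into a prompt for a language model."""
    summary = (
        f"Nightly sleep durations over {len(sleep_hours)} days (hours): "
        f"{sleep_hours}. Mean: {mean(sleep_hours):.1f}, "
        f"standard deviation: {stdev(sleep_hours):.1f}."
    )
    return (
        summary
        + " Describe any patterns (e.g., erratic sleep) and what they might "
        "indicate about stress, anxiety, or other issues, as notes for a "
        "clinician. Do not give a diagnosis."
    )

# Invented example: an erratic week of sleep.
week = [7.5, 4.0, 9.0, 3.5, 8.0, 5.0, 10.0]
prompt = build_sleep_prompt(week)
# response = query_llm(prompt)  # hypothetical call to whichever LLM API is used
print(prompt)
```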

The next generation of AI models can reason, says Patel, and this means they could be used for personalized health coaching. (Other researchers argue it's not yet clear whether large language models can reason.) "It's one thing to say, 'Your average heart rate is 70 beats per minute,'" he says. "But the thing that we're focusing on is how to interpret that. The kind of modeling work we're doing is... the model now knows what 70 beats per minute means in your context."

The data provided by wearables could also allow AI coaches to understand users' health at a much greater level of depth than a human coach could, says Patel. For example, a human coach could ask you how you slept, but wearables could provide detailed, objective sleep data.

Maher has also helped author a review of the research on the effectiveness of AI chatbots on lifestyle behaviors, which found that chatbot health coaches can help people increase the amount of physical activity and sleep they get and improve their diets, although the effect was smaller than is typically found for wearables. These studies were done using fairly rudimentary chatbots (developed years ago, well before, for example, OpenAI's ChatGPT), and Maher expects that more sophisticated AI health coaches would be more effective. She notes, however, that there are still challenges that need solving with large language models like ChatGPT, such as the models' tendency to make up information.

There are reasons to be skeptical about chatbot health coaches, says Beam. First, they suffer from the same drop-off in effectiveness over time as wearables. Second, in the realm of health, even human scientists given reams of data about an individual do not yet understand enough to give personalized advice.

Even if the evidence doesn't yet exist to offer precise recommendations to different people based on their health data, an AI health coach could monitor whether a given action seems to be helping and adjust its recommendations accordingly. For example, heart rate data during a suggested workout could be used to inform future exercise recommendations, says Sandeep Waraich, product management lead for wearable devices at Google.

Google has not announced plans to launch an AI health coach, although it does plan to provide AI-powered insights to Fitbit users from early 2024, and in August the New York Times reported that Google DeepMind has been working on an AI life adviser. Apple is also reportedly working on an AI health coach, codenamed Quartz, that it plans to release next year.

It's not just the big tech companies that are trying to take data from wearables and provide continuous, personalized health coaching. Health app Humanity claims to be able to determine a user's biological age to within three years based on movement and heart-rate data. Humanity's algorithm was developed using data from the U.K. Biobank, which had 100,000 participants wear a wrist-worn accelerometer for a week. But Michael Geer, co-founder and chief strategy officer at Humanity, is more excited about the possibility of tracking how biological age changes. "We're not trying to say you're definitely in the body of a 36-year-old. What we're trying to see is basically over time, did [biological age] generally go up or down, and then that's feeding back to figure out what actions are making you healthier or not," he says.

The problem with tracking measures like Humanity's biological age is that there is still no evidence linking those measures to actual health outcomes, like a reduction in all-cause mortality, says Beam. This is a problem with AI's use in health care more broadly, he says. "In general, caution is the right approach here. Even within clinical medicine, there's a huge emerging body of literature on how much these AI algorithms know about medicine; we still don't know how that translates to outcomes. We care about outcomes, we care about improving patient health. And there's just a paucity of evidence for that as of now."

Excerpt from:

AI Health Coaches Are Coming Soon to a Device Near You - TIME


2023: A year of groundbreaking advances in AI and computing – Google Research

Posted by Jeff Dean, Chief Scientist, Google DeepMind & Google Research, Demis Hassabis, CEO, Google DeepMind, and James Manyika, SVP, Google Research, Technology & Society

This has been a year of incredible progress in the field of Artificial Intelligence (AI) research and its practical applications.

As ongoing research pushes AI even farther, we look back to our perspective published in January of this year, titled "Why we focus on AI (and to what end)," where we noted:

We are committed to leading and setting the standard in developing and shipping useful and beneficial applications, applying ethical principles grounded in human values, and evolving our approaches as we learn from research, experience, users, and the wider community.

We also believe that getting AI right, which to us involves innovating and delivering widely accessible benefits to people and society while mitigating its risks, must be a collective effort involving us and others, including researchers, developers, users (individuals, businesses, and other organizations), governments, regulators, and citizens.

We are convinced that the AI-enabled innovations we are focused on developing and delivering boldly and responsibly are useful, compelling, and have the potential to assist and improve lives of people everywhere; this is what compels us.

In this Year-in-Review post we'll go over some of Google Research's and Google DeepMind's efforts putting these paragraphs into practice safely throughout 2023.

This was the year generative AI captured the world's attention, creating imagery, music, stories, and engaging conversation about everything imaginable, at a level of creativity and a speed almost implausible a few years ago.

In February, we first launched Bard, a tool that you can use to explore creative ideas and explain things simply. It can generate text, translate languages, write different kinds of creative content and more.

In May, we watched the results of months and years of our foundational and applied work announced on stage at Google I/O. Principally, this included PaLM 2, a large language model (LLM) that brought together compute-optimal scaling, an improved dataset mixture, and model architecture to excel at advanced reasoning tasks.

By fine-tuning and instruction-tuning PaLM 2 for different purposes, we were able to integrate it into numerous Google products and features.

In June, following last year's release of our text-to-image generation model Imagen, we released Imagen Editor, which provides the ability to use region masks and natural language prompts to interactively edit generative images, providing much more precise control over the model output.

Later in the year, we released Imagen 2, which improved outputs via a specialized image aesthetics model based on human preferences for qualities such as good lighting, framing, exposure, and sharpness.

In October, we launched a feature that helps people practice speaking and improve their language skills. The key technology that enabled this functionality was a novel deep learning model developed in collaboration with the Google Translate team, called Deep Aligner. This single new model has led to dramatic improvements in alignment quality across all tested language pairs, reducing average alignment error rate from 25% to 5% compared to alignment approaches based on Hidden Markov models (HMMs).

In November, in partnership with YouTube, we announced Lyria, our most advanced AI music generation model to date. We released two experiments designed to open a new playground for creativity, DreamTrack and music AI tools, in concert with YouTube's Principles for partnering with the music industry on AI technology.

Then in December, we launched Gemini, our most capable and general AI model. Gemini was built to be multimodal from the ground up across text, audio, images, and video. Our initial family of Gemini models comes in three different sizes: Nano, Pro, and Ultra. Nano models are our smallest and most efficient models for powering on-device experiences in products like Pixel. The Pro model is highly capable and best for scaling across a wide range of tasks. The Ultra model is our largest and most capable model for highly complex tasks.

In a technical report about Gemini models, we showed that Gemini Ultra's performance exceeds current state-of-the-art results on 30 of the 32 widely used academic benchmarks used in LLM research and development. With a score of 90.04%, Gemini Ultra was the first model to outperform human experts on MMLU, and achieved a state-of-the-art score of 59.4% on the new MMMU benchmark.

Building on AlphaCode, the first AI system to perform at the level of the median competitor in competitive programming, we introduced AlphaCode 2, powered by a specialized version of Gemini. When evaluated on the same platform as the original AlphaCode, we found that AlphaCode 2 solved 1.7x more problems and performed better than 85% of competition participants.

At the same time, Bard got its biggest upgrade with its use of the Gemini Pro model, making it far more capable at things like understanding, summarizing, reasoning, coding, and planning. In six out of eight benchmarks, Gemini Pro outperformed GPT-3.5, including in MMLU, one of the key standards for measuring large AI models, and GSM8K, which measures grade school math reasoning. Gemini Ultra will come to Bard early next year through Bard Advanced, a new cutting-edge AI experience.

Gemini Pro is also available on Vertex AI, Google Cloud's end-to-end AI platform that empowers developers to build applications that can process information across text, code, images, and video. Gemini Pro was also made available in AI Studio in December.

To best illustrate some of Gemini's capabilities, we produced a series of short videos with explanations of how Gemini could be used.

In addition to our advances in products and technologies, we've also made a number of important advancements in the broader fields of machine learning and AI research.

At the heart of the most advanced ML models is the Transformer model architecture, developed by Google researchers in 2017. Originally developed for language, it has proven useful in domains as varied as computer vision, audio, genomics, protein folding, and more. This year, our work on scaling vision transformers demonstrated state-of-the-art results across a wide variety of vision tasks, and has also been useful in building more capable robots.

Expanding the versatility of models requires the ability to perform higher-level and multi-step reasoning. This year, we approached this target following several research tracks. For example, algorithmic prompting is a new method that teaches language models reasoning by demonstrating a sequence of algorithmic steps, which the model can then apply in new contexts. This approach improves accuracy on one middle-school mathematics benchmark from 25.9% to 61.1%.
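As a rough illustration of the idea (not the paper's actual prompts or benchmark), an algorithmic prompt spells out every intermediate step of a procedure so the model can imitate it on new inputs. The sketch below, with an invented carry-by-carry addition demonstration, shows the shape such a prompt might take.

```python
# Toy illustration of algorithmic prompting: the prompt spells out every
# step of an algorithm (here, column-wise addition with carries) so the
# model can imitate the procedure on a new input. The wording is invented;
# the actual paper's prompts and tasks differ.

DEMONSTRATION = """\
Problem: 182 + 376
Step 1: ones digits 2 + 6 = 8, carry 0.
Step 2: tens digits 8 + 7 + 0 = 15, write 5, carry 1.
Step 3: hundreds digits 1 + 3 + 1 = 5, write 5, carry 0.
Answer: 558
"""

def algorithmic_prompt(new_problem: str) -> str:
    """Prepend a fully worked, step-by-step demonstration to a new problem."""
    return DEMONSTRATION + f"\nProblem: {new_problem}\nStep 1:"

print(algorithmic_prompt("457 + 268"))
# The completed prompt would then be sent to a language model, which is
# expected to continue the same explicit carry-by-carry procedure.
```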

In the domain of visual question answering, in a collaboration with UC Berkeley researchers, we showed how we could better answer complex visual questions (Is the carriage to the right of the horse?) by combining a visual model with a language model trained to answer visual questions by synthesizing a program to perform multi-step reasoning.
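A hedged sketch of what such a synthesized program might look like: the language model would emit a short piece of code over vision primitives, which is then executed against the image. The `detect` function and `Box` type here are hypothetical placeholders, not the actual system's API.

```python
# Hedged sketch of program synthesis for visual question answering:
# a language model emits a short program over vision primitives, which is
# then executed against the image. `detect` is a hypothetical object
# detector returning bounding boxes; the real system's interface differs.

from dataclasses import dataclass

@dataclass
class Box:
    x_min: float
    y_min: float
    x_max: float
    y_max: float

def detect(image, label: str) -> list[Box]:
    """Placeholder for an object detector; returns boxes for `label`."""
    raise NotImplementedError  # supplied by a vision model in practice

# A program like this is what the language model might synthesize for the
# question "Is the carriage to the right of the horse?"
def is_carriage_right_of_horse(image) -> bool:
    horses = detect(image, "horse")
    carriages = detect(image, "carriage")
    if not horses or not carriages:
        return False
    # Compare horizontal centers of the first detection of each object.
    horse_cx = (horses[0].x_min + horses[0].x_max) / 2
    carriage_cx = (carriages[0].x_min + carriages[0].x_max) / 2
    return carriage_cx > horse_cx
```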

We are now using a general model that understands many aspects of the software development life cycle to automatically generate code review comments, respond to code review comments, make performance-improving suggestions for pieces of code (by learning from past such changes in other contexts), fix code in response to compilation errors, and more.

In a multi-year research collaboration with the Google Maps team, we were able to scale inverse reinforcement learning and apply it to the world-scale problem of improving route suggestions for over 1 billion users. Our work culminated in a 16-24% relative improvement in global route match rate, helping to ensure that routes are better aligned with user preferences.

We also continue to work on techniques to improve the inference performance of machine learning models. In work on computationally-friendly approaches to pruning connections in neural networks, we were able to devise an approximation algorithm to the computationally intractable best-subset selection problem that is able to prune 70% of the edges from an image classification model and still retain almost all of the accuracy of the original.
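The post does not spell out the approximation algorithm, but as a simpler stand-in, plain magnitude pruning illustrates what removing 70% of a model's edges means in practice: zero out the weights with the smallest absolute values. This is deliberately not the best-subset method described above, only a sketch of the general operation.

```python
# Simple magnitude pruning as a stand-in illustration: zero out the 70%
# of weights with the smallest absolute value. The work described above
# approximates best-subset selection, which chooses edges more carefully;
# this sketch only shows what "pruning 70% of edges" means.

import numpy as np

def magnitude_prune(weights: np.ndarray, fraction: float = 0.7) -> np.ndarray:
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

W = np.array([[0.05, -0.8, 0.02], [1.2, -0.01, 0.3]])
print(magnitude_prune(W))  # roughly 70% of entries set to zero
```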

In work on accelerating on-device diffusion models, we were also able to apply a variety of optimizations to attention mechanisms, convolutional kernels, and fusion of operations to make it practical to run high quality image generation models on-device; for example, enabling a photorealistic and high-resolution image of a cute puppy with surrounding flowers to be generated in just 12 seconds on a smartphone.

Advances in capable language and multimodal models have also benefited our robotics research efforts. We combined separately trained language, vision, and robotic control models into PaLM-E, an embodied multi-modal model for robotics, and Robotic Transformer 2 (RT-2), a novel vision-language-action (VLA) model that learns from both web and robotics data, and translates this knowledge into generalized instructions for robotic control.

Furthermore, we showed how language can also be used to control the gait of quadrupedal robots and explored the use of language to help formulate more explicit reward functions to bridge the gap between human language and robotic actions. Then, in Barkour we benchmarked the agility limits of quadrupedal robots.

Designing efficient, robust, and scalable algorithms remains a high priority. This year, our work included: applied and scalable algorithms, market algorithms, system efficiency and optimization, and privacy.

We introduced AlphaDev, an AI system that uses reinforcement learning to discover enhanced computer science algorithms. AlphaDev uncovered a faster algorithm for sorting, a method for ordering data, which led to improvements in the LLVM libc++ sorting library that were up to 70% faster for shorter sequences and about 1.7% faster for sequences exceeding 250,000 elements.

We developed a novel model to predict the properties of large graphs, enabling estimation of performance for large programs. We released a new dataset, TPUGraphs, to accelerate open research in this area, and showed how we can use modern ML to improve ML efficiency.

We developed a new load balancing algorithm for distributing queries to a server, called Prequal, which minimizes a combination of requests-in-flight and estimated latency. Deployments across several systems have significantly reduced CPU usage, latency, and RAM consumption. We also designed a new analysis framework for the classical caching problem with capacity reservations.
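The blog post gives only the high-level objective, but a toy sketch of that objective might look like the following: score each replica by its requests in flight plus a weighted latency estimate, and route to the lowest score. The equal weighting and moving-average latency estimate are assumptions; Prequal's actual probing and selection policy is more sophisticated.

```python
# Illustrative sketch of the stated objective: pick the replica minimizing
# a combination of requests-in-flight and estimated latency. The weighting
# and the simple average latency estimate are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Replica:
    name: str
    in_flight: int = 0
    recent_latencies_ms: list[float] = field(default_factory=list)

    def estimated_latency_ms(self) -> float:
        if not self.recent_latencies_ms:
            return 0.0
        return sum(self.recent_latencies_ms) / len(self.recent_latencies_ms)

def pick_replica(replicas: list[Replica], latency_weight: float = 1.0) -> Replica:
    """Choose the replica minimizing in-flight load plus weighted latency."""
    return min(
        replicas,
        key=lambda r: r.in_flight + latency_weight * r.estimated_latency_ms(),
    )

servers = [
    Replica("a", in_flight=3, recent_latencies_ms=[12.0, 15.0]),
    Replica("b", in_flight=1, recent_latencies_ms=[40.0]),
    Replica("c", in_flight=2, recent_latencies_ms=[10.0, 11.0]),
]
print(pick_replica(servers).name)  # "c" with these invented numbers
```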

We improved state-of-the-art in clustering and graph algorithms by developing new techniques for computing minimum-cut, approximating correlation clustering, and massively parallel graph clustering. Additionally, we introduced TeraHAC, a novel hierarchical clustering algorithm for trillion-edge graphs, designed a text clustering algorithm for better scalability while maintaining quality, and designed the most efficient algorithm for approximating the Chamfer Distance, the standard similarity function for multi-embedding models, offering >50x speedups over highly optimized exact algorithms and scaling to billions of points.
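For reference, the asymmetric form of the Chamfer distance being approximated can be written as a naive nested minimum, which is quadratic in the number of points; the contribution above is a fast approximation of this quantity, which is not shown here.

```python
# Naive reference implementation of the (asymmetric) Chamfer distance
# between two sets of embeddings: for each vector in A, find its nearest
# neighbor in B and sum those distances. This quadratic-time version is
# what the fast approximation algorithm mentioned above speeds up.

import math

def euclidean(u: list[float], v: list[float]) -> float:
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def chamfer(A: list[list[float]], B: list[list[float]]) -> float:
    """Sum over a in A of the distance from a to its nearest point in B."""
    return sum(min(euclidean(a, b) for b in B) for a in A)

A = [[0.0, 0.0], [1.0, 1.0]]
B = [[0.0, 1.0], [2.0, 2.0]]
print(chamfer(A, B))  # 2.0 with these example points
```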

We continued optimizing Googles large embedding models (LEMs), which power many of our core products and recommender systems. Some new techniques include Unified Embedding for battle-tested feature representations in web-scale ML systems and Sequential Attention, which uses attention mechanisms to discover high-quality sparse model architectures during training.

Beyond auto-bidding systems, we also studied auction design in other complex settings, such as buy-many mechanisms, auctions for heterogeneous bidders, contract designs, and innovated robust online bidding algorithms. Motivated by the application of generative AI in collaborative creation (e.g., a joint ad for advertisers), we proposed a novel token auction model where LLMs bid for influence in the collaborative AI creation. Finally, we showed how to mitigate personalization effects in experimental design, which, for example, may cause recommendations to drift over time.

The Chrome Privacy Sandbox, a multi-year collaboration between Google Research and Chrome, has publicly launched several APIs, including for Protected Audience, Topics, and Attribution Reporting. This is a major step in protecting user privacy while supporting the open and free web ecosystem. These efforts have been facilitated by fundamental research on re-identification risk, private streaming computation, optimization of privacy caps and budgets, hierarchical aggregation, and training models with label privacy.

In the not too distant future, there is a very real possibility that AI applied to scientific problems can accelerate the rate of discovery in certain domains by 10x or 100x, or more, and lead to major advances in diverse areas including bioengineering, materials science, weather prediction, climate forecasting, neuroscience, genetic medicine, and healthcare.

In Project Green Light, we partnered with 13 cities around the world to help improve traffic flow at intersections and reduce stop-and-go emissions. Early numbers from these partnerships indicate a potential for up to 30% reduction in stops and up to 10% reduction in emissions.

In our contrails work, we analyzed large-scale weather data, historical satellite images, and past flights. We trained an AI model to predict where contrails form and reroute airplanes accordingly. In partnership with American Airlines and Breakthrough Energy, we used this system to demonstrate contrail reduction by 54%.

We are also developing novel technology-driven approaches to help communities with the effects of climate change. For example, we have expanded our flood forecasting coverage to 80 countries, which directly impacts more than 460 million people. We have initiated a number of research efforts to help mitigate the increasing danger of wildfires, including real-time tracking of wildfire boundaries using satellite imagery, and work that improves emergency evacuation plans for communities at risk to rapidly-spreading wildfires. Our partnership with American Forests puts data from our Tree Canopy project to work in their Tree Equity Score platform, helping communities identify and address unequal access to trees.

Finally, we continued to develop better models for weather prediction at longer time horizons. Improving on MetNet and MetNet-2, in this years work on MetNet-3, we now outperform traditional numerical weather simulations up to twenty-four hours. In the area of medium-term, global weather forecasting, our work on GraphCast showed significantly better prediction accuracy for up to 10 days compared to HRES, the most accurate operational deterministic forecast, produced by the European Centre for Medium-Range Weather Forecasts (ECMWF). In collaboration with ECMWF, we released WeatherBench-2, a benchmark for evaluating the accuracy of weather forecasts in a common framework.

The potential of AI to dramatically improve processes in healthcare is significant. Our initial Med-PaLM model was the first model capable of achieving a passing score on the U.S. medical licensing exam. Our more recent Med-PaLM 2 model improved by a further 19%, achieving an expert-level accuracy of 86.5%. These Med-PaLM models are language-based, enable clinicians to ask questions and have a dialogue about complex medical conditions, and are available to healthcare organizations as part of MedLM through Google Cloud.

In the same way our general language models are evolving to handle multiple modalities, we have recently shown research on a multimodal version of Med-PaLM capable of interpreting medical images, textual data, and other modalities, describing a path for how we can realize the exciting potential of AI models to help advance real-world clinical care.

We have also been working on how best to harness AI models in clinical workflows. We have shown that coupling deep learning with interpretability methods can yield new insights for clinicians. We have also shown that self-supervised learning, with careful consideration of privacy, safety, fairness and ethics, can reduce the amount of de-identified data needed to train clinically relevant medical imaging models by 3x to 100x, reducing the barriers to adoption of models in real clinical settings. We also released an open source mobile data collection platform for people with chronic disease to provide tools to the community to build their own studies.

AI systems can also discover completely new signals and biomarkers in existing forms of medical data. In work on novel biomarkers discovered in retinal images, we demonstrated that a number of systemic biomarkers spanning several organ systems (e.g., kidney, blood, liver) can be predicted from external eye photos. In other work, we showed that combining retinal images and genomic information helps identify some underlying factors of aging.

In the genomics space, we worked with 119 scientists across 60 institutions to create a new map of the human genome, or pangenome. This more equitable pangenome better represents the genomic diversity of global populations. Building on our ground-breaking AlphaFold work, our work on AlphaMissense this year provides a catalog of predictions for 89% of all 71 million possible missense variants as either likely pathogenic or likely benign.

We also shared an update on progress towards the next generation of AlphaFold. Our latest model can now generate predictions for nearly all molecules in the Protein Data Bank (PDB), frequently reaching atomic accuracy. This unlocks new understanding and significantly improves accuracy in multiple key biomolecule classes, including ligands (small molecules), proteins, nucleic acids (DNA and RNA), and those containing post-translational modifications (PTMs).

On the neuroscience front, we announced a new collaboration with Harvard, Princeton, the NIH, and others to map an entire mouse brain at synaptic resolution, beginning with a first phase that will focus on the hippocampal formation, the area of the brain responsible for memory formation, spatial navigation, and other important functions.

Quantum computers have the potential to solve big, real-world problems across science and industry. But to realize that potential, they must be significantly larger than they are today, and they must reliably perform tasks that cannot be performed on classical computers.

This year, we took an important step towards the development of a large-scale, useful quantum computer. Our breakthrough is the first demonstration of quantum error correction, showing that it's possible to reduce errors while also increasing the number of qubits. To enable real-world applications, these qubit building blocks must perform more reliably, lowering the error rate from the ~1 in 10³ typically seen today to ~1 in 10⁸.

Generative AI is having a transformative impact in a wide range of fields including healthcare, education, security, energy, transportation, manufacturing, and entertainment. Given these advances, the importance of designing technologies consistent with our AI Principles remains a top priority. We also recently published case studies of emerging practices in society-centered AI. And in our annual AI Principles Progress Update, we offer details on how our Responsible AI research is integrated into products and risk management processes.

Proactive design for Responsible AI begins with identifying and documenting potential harms. For example, we recently introduced a three-layered context-based framework for comprehensively evaluating the social and ethical risks of AI systems. During model design, harms can be mitigated with the use of responsible datasets.

We are partnering with Howard University to build high quality African-American English (AAE) datasets to improve our products and make them work well for more people. Our research on globally inclusive cultural representation and our publication of the Monk Skin Tone scale furthers our commitments to equitable representation of all people. The insights we gain and techniques we develop not only help us improve our own models, they also power large-scale studies of representation in popular media to inform and inspire more inclusive content creation around the world.

With advances in generative image models, fair and inclusive representation of people remains a top priority. In the development pipeline, we are working to amplify underrepresented voices and to better integrate social context knowledge. We proactively address potential harms and bias using classifiers and filters, careful dataset analysis, and in-model mitigations such as fine-tuning, reasoning, few-shot prompting, data augmentation, and controlled decoding, and our research showed that generative AI enables higher quality safety classifiers to be developed with far less data. We also released a powerful way to better tune models with less data, giving developers more control of responsibility challenges in generative AI.

We have developed new state-of-the-art explainability methods to identify the role of training data on model behaviors. By combining training data attribution methods with agile classifiers, we found that we can identify mislabelled training examples. This makes it possible to reduce the noise in training data, leading to significant improvements in model accuracy.

We initiated several efforts to improve safety and transparency about online content. For example, we introduced SynthID, a tool for watermarking and identifying AI-generated images. SynthID is imperceptible to the human eye, doesn't compromise image quality, and allows the watermark to remain detectable, even after modifications like adding filters, changing colors, and saving with various lossy compression schemes.

We also launched About This Image to help people assess the credibility of images, showing information like an image's history, how it's used on other pages, and available metadata about an image. And we explored safety methods that have been developed in other fields, learning from established situations where there is low-risk tolerance.

Privacy remains an essential aspect of our commitment to Responsible AI. We continued improving our state-of-the-art privacy preserving learning algorithm DP-FTRL, developed the DP-Alternating Minimization algorithm (DP-AM) to enable personalized recommendations with rigorous privacy protection, and defined a new general paradigm to reduce the privacy costs for many aggregation and learning tasks. We also proposed a scheme for auditing differentially private machine learning systems.

On the applications front we demonstrated that DP-SGD offers a practical solution in the large model fine-tuning regime and showed that images generated by DP diffusion models are useful for a range of downstream tasks. We proposed a new algorithm for DP training of large embedding models that provides efficient training on TPUs without compromising accuracy.
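For readers unfamiliar with DP-SGD, the core of the algorithm is per-example gradient clipping followed by Gaussian noise before the parameter update. The minimal sketch below illustrates a single step with made-up gradients; it omits per-example gradient computation in an ML framework and privacy accounting, both of which a real implementation needs.

```python
# Minimal sketch of one DP-SGD step: clip each example's gradient to a
# maximum L2 norm, average, and add Gaussian noise before updating the
# parameters. Gradients are passed in directly; a real implementation
# computes per-example gradients with an ML framework and tracks the
# privacy budget, which this sketch omits.

import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0, noise_mult=1.0):
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))  # clip to L2 norm
    mean_grad = np.mean(clipped, axis=0)
    noise = np.random.normal(
        0.0, noise_mult * clip_norm / len(per_example_grads), size=mean_grad.shape
    )
    return params - lr * (mean_grad + noise)

params = np.zeros(3)
grads = [np.array([0.5, -2.0, 1.0]), np.array([0.1, 0.2, -0.3])]
print(dp_sgd_step(params, grads))
```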

We also teamed up with a broad group of academic and industrial researchers to organize the first Machine Unlearning Challenge to address the scenario in which training images are forgotten to protect the privacy or rights of individuals. We shared a mechanism for extractable memorization, and participatory systems that give users more control over their sensitive data.

We continued to expand the world's largest corpus of atypical speech recordings to >1M utterances in Project Euphonia, which enabled us to train a Universal Speech Model to better recognize atypical speech, improving recognition by 37% on real-world benchmarks.

We also built an audiobook recommendation system for students with reading disabilities such as dyslexia.

Our work in adversarial testing engaged community voices from historically marginalized communities. We partnered with groups such as the Equitable AI Research Round Table (EARR) to ensure we represent the diverse communities who use our models and engage with external users to identify potential harms in generative model outputs.

We established a dedicated Google AI Red Team focused on testing AI models and products for security, privacy, and abuse risks. We showed that attacks such as poisoning or adversarial examples can be applied to production models and surface additional risks such as memorization in both image and text generative models. We also demonstrated that defending against such attacks can be challenging, as merely applying defenses can cause other security and privacy leakages. We also introduced model evaluation for extreme risks, such as offensive cyber capabilities or strong manipulation skills.

As we advance the state-of-the-art in ML and AI, we also want to ensure people can understand and apply AI to specific problems. We released MakerSuite (now Google AI Studio), a web-based tool that enables AI developers to quickly iterate and build lightweight AI-powered apps. To help AI engineers better understand and debug AI, we released LIT 1.0, a state-of-the-art, open-source debugger for machine learning models.

Colab, our tool that helps developers and students access powerful computing resources right in their web browser, reached over 10 million users. We've just added AI-powered code assistance to all users at no cost, making Colab an even more helpful and integrated experience in data and ML workflows.

To ensure AI produces accurate knowledge when put to use, we also recently introduced FunSearch, a new approach that generates verifiably true knowledge in mathematical sciences using evolutionary methods and large language models.
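The details of FunSearch are in the paper, but the general loop shape it relies on can be sketched as follows: a language model proposes candidate programs, a deterministic evaluator scores (and, in FunSearch, verifies) them, and the best candidates seed the next round. The `llm_propose` stub and placeholder objective below are invented; only the loop structure is meant to be illustrative.

```python
# Schematic of the general loop FunSearch-style systems use: a language
# model proposes candidate programs, a deterministic evaluator scores
# them, and the best candidates seed the next round. `llm_propose` is a
# stub; the real system's prompting, population management, and
# verification are far more involved.

import random

def llm_propose(parent_program: str) -> str:
    """Stand-in for an LLM that mutates or extends a candidate program."""
    return parent_program + f"\n# variation {random.randint(0, 999)}"

def evaluate(program: str) -> float:
    """Deterministic scorer; in FunSearch this verifies a mathematical
    property of the program's output, so kept programs are provably good."""
    return float(len(program) % 7)  # placeholder objective

def evolve(seed: str, rounds: int = 5, children: int = 4) -> str:
    best, best_score = seed, evaluate(seed)
    for _ in range(rounds):
        for _ in range(children):
            candidate = llm_propose(best)
            score = evaluate(candidate)
            if score > best_score:
                best, best_score = candidate, score
    return best

print(evolve("def priority(x):\n    return x"))
```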

For AI engineers and product designers, we're updating the People + AI Guidebook with generative AI best practices, and we continue to design AI Explorables, which include how and why models sometimes make incorrect predictions confidently.

We continue to advance the fields of AI and computer science by publishing much of our work and participating in and organizing conferences. We have published more than 500 papers so far this year, and have strong presences at conferences like ICML (see the Google Research and Google DeepMind posts), ICLR (Google Research, Google DeepMind), NeurIPS (Google Research, Google DeepMind), ICCV, CVPR, ACL, CHI, and Interspeech. We are also working to support researchers around the world, participating in events like the Deep Learning Indaba, Khipu, supporting PhD Fellowships in Latin America, and more. We also worked with partners from 33 academic labs to pool data from 22 different robot types and create the Open X-Embodiment dataset and RT-X model to better advance responsible AI development.

Google has spearheaded an industry-wide effort to develop AI safety benchmarks under the MLCommons standards organization with participation from several major players in the generative AI space including OpenAI, Anthropic, Microsoft, Meta, Hugging Face, and more. Along with others in the industry we also co-founded the Frontier Model Forum (FMF), which is focused on ensuring safe and responsible development of frontier AI models. With our FMF partners and other philanthropic organizations, we launched a $10 million AI Safety Fund to advance research into the ongoing development of the tools for society to effectively test and evaluate the most capable AI models.

In close partnership with Google.org, we worked with the United Nations to build the UN Data Commons for the Sustainable Development Goals, a tool that tracks metrics across the 17 Sustainable Development Goals, and supported projects from NGOs, academic institutions, and social enterprises on using AI to accelerate progress on the SDGs.

The items highlighted in this post are a small fraction of the research work we have done throughout the last year. Find out more at the Google Research and Google DeepMind blogs, and our list of publications.

As multimodal models become even more capable, they will empower people to make incredible progress in areas from science to education to entirely new areas of knowledge.

Progress continues apace, and as the year advances, and our products and research advance as well, people will find more and more interesting creative uses for AI.

Ending this Year-in-Review where we began, as we say in "Why We Focus on AI (and to what end)":

If pursued boldly and responsibly, we believe that AI can be a foundational technology that transforms the lives of people everywhere; this is what excites us!

This Year-in-Review is cross-posted on both the Google Research Blog and the Google DeepMind Blog.

Originally posted here:

2023: A year of groundbreaking advances in AI and computing - Google Research


Research at Microsoft 2023: A year of groundbreaking AI advances and discoveries – Microsoft


It isn't often that researchers at the cutting edge of technology see something that blows their minds. But that's exactly what happened in 2023, when AI experts began interacting with GPT-4, a large language model (LLM) created by researchers at OpenAI that was trained at unprecedented scale.

"I saw some mind-blowing capabilities that I thought I wouldn't see for many years," said Ece Kamar, partner research manager at Microsoft, during a podcast recorded in April.

Throughout the year, rapid advances in AI came to dominate the public conversation, as technology leaders and eventually the general public voiced a mix of wonder and skepticism after experimenting with GPT-4 and related applications. Could we be seeing sparks of artificial general intelligence, informally defined as AI systems that demonstrate broad capabilities of intelligence, including reasoning, planning, and the ability to learn from experience?

While the answer to that question isn't yet clear, we have certainly entered the era of AI, and it's bringing profound changes to the way we work and live. In 2023, AI emerged from the lab and delivered everyday innovations that anyone can use. Millions of people now engage with AI-based services like ChatGPT. Copilots, AI that helps with complex tasks ranging from search to security, are being woven into business software and services.

Underpinning all of this innovation is years of research, including the work of hundreds of world-class researchers at Microsoft, aided by scientists, engineers, and experts across many related fields. In 2023, AI's transition from research to reality began to accelerate, creating more tangible results than ever before. This post looks back at the progress of the past year, highlighting a sampling of the research and strategies that will support even greater progress in 2024.

AI with positive societal impact is the sum of several integral moving parts, including the AI models, the application of these models, and the infrastructure and standards supporting their development and the development of the larger systems they underpin. Microsoft is redefining the state of the art across these areas with improvements to model efficiency, performance, and capability; the introduction of new frameworks and prompting strategies that increase the usability of models; and best practices that contribute to sustainable and responsible AI.

Microsoft uses AI and other advanced technologies to accelerate and transform scientific discovery, empowering researchers worldwide with leading-edge tools. Across global Microsoft research labs, experts in machine learning, quantum physics, molecular biology, and many other disciplines are tackling pressing challenges in the natural and life sciences.

As AI models grow in capability so, too, do opportunities to empower people to achieve more, as demonstrated by Microsoft's work in such domains as health and education this year. The company's commitment to positive human impact requires that AI technology be equitable and accessible.

While AI rightly garners much attention in the current research landscape, researchers at Microsoft are still making plenty of progress across a spectrum of technical focus areas.

Cross-company and cross-disciplinary collaboration has always played an important role in research, even more so as AI continues to rapidly advance. Large models driving the progress are components of larger systems that will deliver the value of AI to people. Developing these systems, and the frameworks for determining their roles in people's lives and society, requires the knowledge and experience of those who understand the context in which they'll operate: domain experts, academics, the individuals using these systems, and others.

Throughout the year, Microsoft continued to engage with the broader research community on AI and beyond. The company's sponsorship of and participation in key conferences not only showcased its dedication to the application of AI in diverse technological domains but also underscored its unwavering support for cutting-edge advancements and collaborative community involvement.

Microsoft achieved extraordinary milestones in 2023 and will continue pushing the boundaries of innovation to help shape a future where technology serves humanity in remarkable ways. To stay abreast of the latest updates, subscribe to the Microsoft Research Newsletter and the Microsoft Research Podcast. You can also follow us on Facebook, Instagram, LinkedIn, X, and YouTube.

Writers, Editors, and Producers: Kristina Dodge, Kate Forster, Jessica Gartner, Alyssa Hughes, Gretchen Huizinga, Brenda Potts, Chris Stetkiewicz, Larry West

Managing Editor: Amber Tingle

Project Manager: Amanda Melfi

Microsoft Research Global Design Lead: Neeltje Berger

Graphic Designers: Adam Blythe, Harley Weber

Microsoft Research Creative Studio Lead: Matt Corwine

Originally posted here:

Research at Microsoft 2023: A year of groundbreaking AI advances and discoveries - Microsoft


Robotics company using AI that does engineering work by climbing walls – WJAC Johnstown


by Brock Owens


Gecko Robotics, based out of Pittsburgh, said it is using AI to find problems before they happen at nuclear reactors, boilers, pipelines, tanks, and ships.

With robots that can climb walls, Gecko Robotics said its mission is to protect some of the world's most important assets.

Company founder and CEO Jake Loosararian said this dream started when he was a student at Grove City College, and now he's beginning to partner with power plants that are using the software he is calling Cantilever.

"What those robots are doing is they're gathering information and data as it relates to the health of the structures," Loosararian said. "The software was targeted at specifically using our unique data sets that we've been collecting for the last 11 years since I started Gecko out of my college dorm room."

He said making the AI able to climb walls was inspired by a trip to a power plant in Oil City when he was in college.

"The guy who was gathering information to see if the power plant was going to have a forced outage," Loosararian said, "fell and died doing that inspection."

The robot is remote-control operated and, according to Loosararian, takes about half the time to do the engineering jobs. He said it does require some human help.

Loosararian said he understands the fear some people show toward artificial intelligence progress.

"I think it's right to be skeptical of technology, but it's more important to prioritize health, safety, and actually doing the job right," Loosararian said.

At least for now, Cantilever should not fully push out human jobs, according to Loosararian.

Loosararian said, "Ones that don't adopt these useful pieces of tech are very much at risk of not just providing solutions to the community that they need to be relied on, but also folks' jobs are going to be at risk if you can't figure out better ways to actually operate these facilities."


More:

Robotics company using AI that does engineering work by climbing walls - WJAC Johnstown


New Class of Antibiotics Discovered Using AI – Scientific American

December 20, 2023

4 min read

A deep-learning algorithm helped identify new compounds that are effective against antibiotic-resistant infections in mice, opening the door to AI-guided drug discovery

By Tanya Lewis

Antibiotic resistance is among the biggest global threats to human health. It was directly responsible for an estimated 1.27 million deaths in 2019 and contributed to nearly five million more. The problem only got worse during the COVID pandemic. And no new classes of antibiotics have been developed for decades.

Now researchers report that they have used artificial intelligence to discover a new class of antibiotic candidates. A team at the laboratory of James Collins of the Broad Institute of the Massachusetts Institute of Technology and Harvard University used a type of AI known as deep learning to screen millions of compounds for antibiotic activity. They then tested 283 promising compounds in mice and found several that were effective against methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant enterococci, some of the most stubbornly hard-to-kill pathogens. Unlike a typical AI model, which operates as an inscrutable black box, it was possible to follow this model's reasoning and understand the biochemistry behind it.
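At a very high level, this kind of screen ranks a large compound library by a trained model's predicted activity and sends only the top candidates to the lab. The sketch below is a hypothetical illustration of that ranking step; the study's actual model, molecular featurization, and thresholds are not shown.

```python
# Hedged sketch of the screening idea: a trained model scores a library of
# candidate compounds for predicted antibiotic activity, and only the
# top-ranked few go on to lab testing. `predict_activity` and the compound
# fingerprints are placeholders, not the study's actual model or data.

def predict_activity(fingerprint: list[int]) -> float:
    """Stand-in for a trained deep-learning model returning a score in [0, 1]."""
    return sum(fingerprint) / len(fingerprint)  # placeholder scoring only

library = {
    "compound_A": [1, 0, 1, 1],
    "compound_B": [0, 0, 1, 0],
    "compound_C": [1, 1, 1, 0],
}

ranked = sorted(library, key=lambda name: predict_activity(library[name]), reverse=True)
shortlist = ranked[:2]  # top candidates would then be tested in the lab
print(shortlist)
```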

The development builds on previous research by this group and others, including work by César de la Fuente, an assistant professor in the department of psychiatry at the University of Pennsylvania's Perelman School of Medicine, and his colleagues. Scientific American spoke with de la Fuente about the significance of the new study for using AI to help guide the development of new antibiotics.

[An edited transcript of the interview follows.]

How significant is this finding of a new class of antibiotics using AI?

I'm very excited about this new work at the Collins Lab; I think this is a great next breakthrough. It's an area of research that was not even a field until five years ago. It's an extremely exciting and very emerging area of work, where the main goal is to use AI for antibiotic discovery and antibiotic design. My own laboratory has been working toward this for the past half-decade. In this study, the researchers used deep learning to try to discover a new type of antibiotic. They also implemented notions of explainable AI, which is interesting, because when we think about machine learning and deep learning, we think of them as black boxes. So I think it's interesting to start incorporating explainability into some of the models we're building that apply AI to biology and chemistry. The authors were able to find a couple of compounds that seemed to reduce infection in mouse models, so that's always exciting.

What advantage does AI have over humans in being able to screen and identify new antibiotic compounds?

AI and machines in general can systematically and very rapidly mine structures or any sort of dataset that you give them. If you think about the traditional antibiotic discovery pipeline, it takes around 12 years to discover a new antibiotic, and it takes between three and six years to discover any clinical candidates. Then you have to transition them to phase I, phase II and phase III clinical trials. Now, with machines, we've been able to accelerate that. In my and my colleagues' own work, for example, we can discover in a matter of hours thousands or hundreds of thousands of preclinical candidates instead of having to wait three to six years. I think AI in general has enabled that. And I think another example of that is this work by the Collins Lab, where, by using deep learning in this case, the team has been able to sort through millions of chemical compounds to identify a couple that seemed promising. That would be very hard to do manually.

What are the next steps needed in order to translate this new class of antibiotics into a clinical drug?

There's still a gap there. You will need systematic toxicity studies and then pre-IND [investigational new drug] studies. The U.S. Food and Drug Administration requires you to do these studies to assess whether your potentially exciting drug could transition into phase I clinical trials, which is the first stage in any clinical trial. So those different steps still need to take place. But again, I think this is another very exciting advance in this really emerging area of using AI in the field of microbiology and antibiotics. The dream we have is that hopefully someday AI will create antibiotics that can save lives.

The compounds identified in this new study were effective at killing microbes such as MRSA in mice, right?

Yes, they showed that in two mouse models, which is interesting. Whenever you have mouse infection data, that's always a lot more exciting; it shows those compounds were actually able to reduce infection in realistic mouse models.

As another example of using AI, we recently mined the genomes and proteomes of extinct organisms in my own lab, and we were able to identify a number of clinical antibiotic candidates.

Why is it important that the AI model is explainable?

I think it's important if we are to think about AI as an engineering discipline someday. In engineering, you're always able to take apart the different pieces that constitute some sort of structure, and you understand what each piece is doing. But in the case of AI, and particularly deep learning, because it's a black box, we don't know what happens in the middle. It's very difficult to re-create what happened in order to give us compound X or Y or solution X or Y. So beginning to dig into the black box to see what's actually happening in each of those steps is a critical step for us to be able to turn AI into an engineering discipline. A first step in the right direction is to use explainable AI in order to try to comprehend what the machine is actually doing. It becomes less of a black box, perhaps a gray box.

Follow this link:

New Class of Antibiotics Discovered Using AI - Scientific American


3 Up-and-Coming Artificial Intelligence (AI) Stocks to Buy in 2024 – The Motley Fool

Artificial intelligence was a hot field in 2023, leading to soaring stock prices for big-name tech companies like Nvidia (thanks to its advanced chips) and Microsoft (thanks to its partnership with ChatGPT creator OpenAI). Investors who didn't buy these stocks before the AI frenzy drove up share prices may feel they've missed out.

Fortunately, plenty of up-and-coming tech firms provide new opportunities to benefit from the advent of AI, and 2024 is a good time to scoop up shares of some of these rising stars. Here is a trio of young tech companies well-positioned to deliver robust returns in the new year.

The transformative power of AI is particularly evident in Symbotic (SYM 1.42%). The company specializes in providing warehouses with robotic workers managed by AI. These robots can process freight quickly, accurately, and safely alongside humans. And Symbotic's AI can continuously analyze and refine the work performed by the robots, routinely improving their efficiency.

The company's customers include Walmart, which owns a stake in Symbotic, and Southern Glazer's Wine and Spirits, the largest distributor of alcoholic beverages in the U.S.

But Symbotic is just getting started. In its 2023 fiscal year, ended September 30, Symbotic had installed 12 systems for customers, a substantial jump from 2022's seven. This growth translated into fiscal 2023 revenue of $1.2 billion, nearly double the sales generated in the prior year.

More revenue growth lies ahead for the company. Symbotic was in the process of installing 35 robotic systems at the end of fiscal 2023, more than double the 17 systems that were in process the previous year. As a result, the company anticipates fiscal Q1 revenue of at least $350 million, up from the prior year's $206.3 million.

UiPath (PATH 0.63%) provides clients with an AI platform that can analyze their business workflows, identify areas for improvement, and then automate those tasks. Organizations are flocking to UiPath's AI solution, particularly in industries such as finance, healthcare, and government, since these sectors involve a ton of administrative tasks that AI can handle.

UiPath's success is seen in its strong sales growth. The company's revenue of $325.9 million in its fiscal third quarter, ended October 31, represented a 24% year-over-year increase. The company expects more revenue growth in Q4, forecasting at least $381 million versus the prior year's $308.5 million.

Despite the strong sales, UiPath is not profitable, like the other high-growth tech companies on this list. But UiPath made a concerted effort over the past year to rein in costs. So its fiscal Q3 net loss of $31.5 million was a substantial drop from the prior year's loss of $57.7 million. This is a positive sign of the company's improving financial health.

Another positive is its improvement in free cash flow (FCF). UiPath's Q3 adjusted FCF was $44 million, up from negative FCF of $24.1 million in the prior year.

IonQ (IONQ -1.09%) operates in the emerging field of quantum computing. Quantum computers offer the potential for AI to evolve exponentially, because once quantum technology progresses far enough, these machines will be able to perform complex calculations beyond the abilities of the world's most powerful supercomputers.

Quantum machines are potent since they use quantum physics to perform multiple computing tasks simultaneously, rather than processing them sequentially like today's computers. IonQ developed quantum computers in 2023 that achieved 29 algorithmic qubits.

This milestone signals IonQ could reach 35 algorithmic qubits in 2024. Algorithmic qubits are a benchmark measuring a system's ability to run quantum workloads. The higher the number, the more computing work the quantum machine can successfully complete.

At 35 algorithmic qubits, IonQ's system will be on the verge of exceeding the abilities of today's conventional computers, and the emergence of quantum-powered AI can begin.

IonQ generates revenue by charging for access to its quantum technology, and that revenue is rising quickly. The company's Q3 sales zoomed up 122% year over year to $6.1 million. Through three quarters, IonQ's 2023 revenue stood at $15.9 million, more than double 2022's $7.3 million.

As its sales success shows, IonQ's technology is attracting customers. In September the company signed a deal with the U.S. Air Force worth $25.5 million to provide it with a quantum system.

Because IonQ, UiPath, and Symbotic are all nascent businesses successfully capturing customers in their respective fields, they possess the potential for years of sales growth ahead, making them worthwhile buys for 2024 -- or at least worthy of going on your watchlist. And given how fast their revenue is rising, they're great stocks for growth investors.

See the rest here:

3 Up-and-Coming Artificial Intelligence (AI) Stocks to Buy in 2024 - The Motley Fool


This Blue Chip Artificial Intelligence (AI) Stock Is a Buy for 2024 – The Motley Fool

The rise of artificial intelligence (AI) in 2023 sent many tech stocks soaring. As a result, a plethora of businesses touted AI capabilities. Sifting through them to figure out which are worthwhile long-term investments can prove challenging.

But one blue-chip stock possesses so many compelling qualities, it makes sense to pick up shares and hold on to them through 2024 and beyond. That stock is tech stalwart International Business Machines (IBM 0.85%).

It may be a good time to buy IBM stock, and not because a new year is upon us. At the time of this writing, Big Blue's share price has retreated a bit from its 52-week high of $166.34, reached on December 12. And now, consider these other factors that make IBM a good long-term investment.

Before Arvind Krishna, who used to oversee IBM's cloud computing and AI division, rose to the CEO spot in 2020, Big Blue was struggling under the weight of a vast organization with too many irons in the fire. Mr. Krishna focused the company on AI and cloud computing, while divesting businesses that no longer made sense for the company.

Today's IBM is leaner, and now on a growth trajectory thanks to these moves. The company's third-quarter revenue jumped 5% year over year to $14.8 billion as a number of areas across its businesses experienced growth.

IBM's data and AI division saw revenue rise 6% year over year, while its Red Hat cloud computing solution increased by 9% as organizations continue to migrate IT operations to the cloud.

IBM also possesses a substantial consulting business, which grew revenue 6% year over year to $5 billion. IBM's clients are looking for help integrating AI capabilities into their businesses, which led to growth in Big Blue's consulting division. As more businesses seek to capitalize on the advent of AI, IBM's consulting capabilities are likely to prosper.

IBM's work with AI technology stretches back to the 1950s. Its latest AI platform, watsonx, debuted in July. This platform is helping IBM clients achieve business improvements such as automating mundane operational tasks, improving customer service, and modernizing the software code used in their organizations. AI clients include Samsung Electronics and NASA.

Big Blue is continuously enhancing its AI platform. For example, on December 18, IBM announced its acquisition of two companies from Software AG, which will help watsonx integrate with a customer's systems and ingest the mountains of data needed for accurate AI decision-making.

The company is also working in the emerging field of quantum computing, which could supply key technology for AI's evolution. These machines use quantum physics to perform calculations multi-dimensionally rather than with the sequential approach used by today's computers.

This allows quantum machines to perform calculations too complex for even the most powerful supercomputers on the planet, and that kind of potency can substantially advance AI's capabilities. In fact, customers today can use watsonx to perform quantum code programming. Customers using IBM's quantum computing technology include the U.S. government and Harvard University.

Although IBM competes against other well-known tech firms, such as Microsoft, in the AI and cloud computing industries, these markets are large enough to support multiple players. Moreover, IBM's revenue growth shows it is successfully capturing its share of customers.

And in contemplating an investment in IBM, consider Big Blue's stock valuation versus rival Microsoft's. IBM's price-to-earnings ratio (P/E ratio) over the trailing 12 months is just under 22, whereas Microsoft's P/E multiple of 36 is significantly higher, suggesting IBM is the better value.
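For readers who want the mechanics behind that comparison: a P/E ratio is simply share price divided by earnings per share, and its inverse, the earnings yield, shows how much trailing earnings each invested dollar buys. The short sketch below applies that standard inversion to the multiples quoted above; the earnings-yield framing is a common way to read the numbers, not something the article itself uses.

# Earnings yield is the inverse of the P/E ratio: the trailing earnings
# you get per dollar invested at the current share price.
for name, pe in [("IBM", 22), ("Microsoft", 36)]:
    earnings_yield = 1 / pe
    print(f"{name}: trailing P/E of {pe} -> earnings yield of about {earnings_yield:.1%}")
# IBM's ~4.5% earnings yield versus Microsoft's ~2.8% is what "better value" means here.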

And its value to investors doesn't stop there. IBM offers a robust dividend, currently yielding over 4%, which can provide you with years of passive income. The company has paid dividends since 1916 and boasts an impressive streak of dividend increases spanning 28 consecutive years.

IBM's growing business, driven by its ever-evolving AI and cloud computing technologies, its attractive dividend, and its reasonable valuation combine to make this blue-chip stock a solid investment for 2024 and beyond.

Robert Izquierdo has positions in International Business Machines and Microsoft. The Motley Fool has positions in and recommends Microsoft. The Motley Fool recommends International Business Machines. The Motley Fool has a disclosure policy.

Visit link:

This Blue Chip Artificial Intelligence (AI) Stock Is a Buy for 2024 - The Motley Fool


Donald Trump said an ad used AI to make him look bad. The clips are real. – Tampa Bay Times

Published Dec. 22

Former President Donald Trump has a few gripes with the Lincoln Project, a political advocacy group composed of Republicans who oppose Trump's leadership. A recent complaint: that the group is showing altered footage of him committing gaffes.

"The perverts and losers at the failed and once disbanded Lincoln Project, and others, are using A.I. (Artificial Intelligence) in their Fake television commercials in order to make me look as bad and pathetic as Crooked Joe Biden," Trump posted Dec. 4 on Truth Social.

In the Lincoln Project's Dec. 4 video, titled "Feeble," a narrator addresses Trump directly with a taunt. "Hey, Donald," the female voice says. "We notice something. More and more people are saying it. You're weak. You seem unsteady. You need help getting around." The video flashes through scenes showing Trump tripping over his words, gesturing, misspeaking and climbing steps to a plane with something white stuck to his shoe.

Are these clips the work of AI? We reviewed them and found the Trump clips are legitimate and not generated using AI. We reached out to the Trump campaign but did not hear back.

The Lincoln Project posted on X, formerly Twitter, that its "Feeble" ad was not AI-generated. We also looked at two other ads the group published in the days preceding Trump's post and found no evidence they included AI-generated content, either.

We identified the origin of all but one of the 31 photos and videos used in the "Feeble" ad, 21 of them featuring Trump. We've corroborated them with footage from C-SPAN, news outlets, and/or government archives. In some of the clips, Trump is trying to publicly mock President Joe Biden, which the Lincoln Project ad does not make clear.


For good measure, we also checked the clips in the video that didnt feature Trump. These included clips and photos of Biden and stock videos. None of them were AI-generated, either.

We were unable to find the source for a 1-second video of Biden smiling at the 0:45 timestamp in the ad.

But of the 21 Trump-related images and clips in the ad, we found no evidence they were created or altered using AI.

The Lincoln Project also uploaded two other ads near the time of Trump's post that appeared to attack Trump. One, called "Christian Trump," was also published on YouTube on Dec. 4. Another, titled "Welcome to the clown show," was uploaded Dec. 3.

We checked those, too, and found no evidence that AI was used to alter Trump's appearance or make him seem to say something he didn't.

At the 1:09 timestamp of "Christian Trump," the Lincoln Project included a photo of Bibles stacked in a bathroom, which appears to have been altered. The original photo shows a bathroom in Trump's Mar-a-Lago estate in Palm Beach, which an indictment said was used to store boxes of records; it did not include a stack of Bibles.

In "Welcome to the clown show," we were unable to identify the source for a clip of a person talking about his preferred leader at the 0:58 timestamp. We were also unable to identify the source of the audio at the end of "Christian Trump," which sounds like Trump saying "Jesus Christ."

But there were no AI-generated clips of Trump's likeness.

We rate Trump's claim that the Lincoln Project is using AI in its television commercials about Trump False.

PolitiFact Researcher Caryn Baird contributed to this report.

Read more from the original source:

Donald Trump said an ad used AI to make him look bad. The clips are real. - Tampa Bay Times


AI image-generators are being trained on explicit photos of children, a study shows – The Associated Press

Hidden inside the foundation of popular artificial intelligence image-generators are thousands of images of child sexual abuse, according to a new report that urges companies to take action to address a harmful flaw in the technology they built.

Those same images have made it easier for AI systems to produce realistic and explicit imagery of fake children as well as transform social media photos of fully clothed real teens into nudes, much to the alarm of schools and law enforcement around the world.

Until recently, anti-abuse researchers thought the only way that some unchecked AI tools produced abusive imagery of children was by essentially combining what they've learned from two separate buckets of online images: adult pornography and benign photos of kids.

But the Stanford Internet Observatory found more than 3,200 images of suspected child sexual abuse in the giant AI database LAION, an index of online images and captions that's been used to train leading AI image-makers such as Stable Diffusion. The watchdog group based at Stanford University worked with the Canadian Centre for Child Protection and other anti-abuse charities to identify the illegal material and report the original photo links to law enforcement. It said roughly 1,000 of the images it found were externally validated.

The response was immediate. On the eve of the Wednesday release of the Stanford Internet Observatory's report, LAION told The Associated Press it was temporarily removing its datasets.

LAION, which stands for the nonprofit Large-scale Artificial Intelligence Open Network, said in a statement that it "has a zero tolerance policy for illegal content and in an abundance of caution, we have taken down the LAION datasets to ensure they are safe before republishing them."

While the images account for just a fraction of LAION's index of some 5.8 billion images, the Stanford group says it is likely influencing the ability of AI tools to generate harmful outputs and reinforcing the prior abuse of real victims who appear multiple times.

It's not an easy problem to fix, and traces back to many generative AI projects being effectively rushed to market and made widely accessible because the field is so competitive, said Stanford Internet Observatory's chief technologist David Thiel, who authored the report.

"Taking an entire internet-wide scrape and making that dataset to train models is something that should have been confined to a research operation, if anything, and is not something that should have been open-sourced without a lot more rigorous attention," Thiel said in an interview.

A prominent LAION user that helped shape the dataset's development is London-based startup Stability AI, maker of the Stable Diffusion text-to-image models. New versions of Stable Diffusion have made it much harder to create harmful content, but an older version introduced last year, which Stability AI says it didn't release, is still baked into other applications and tools and remains the most popular model for generating explicit imagery, according to the Stanford report.

"We can't take that back. That model is in the hands of many people on their local machines," said Lloyd Richardson, director of information technology at the Canadian Centre for Child Protection, which runs Canada's hotline for reporting online sexual exploitation.

Stability AI on Wednesday said it only hosts filtered versions of Stable Diffusion and that "since taking over the exclusive development of Stable Diffusion, Stability AI has taken proactive steps to mitigate the risk of misuse."

"Those filters remove unsafe content from reaching the models," the company said in a prepared statement. "By removing that content before it ever reaches the model, we can help to prevent the model from generating unsafe content."

LAION was the brainchild of a German researcher and teacher, Christoph Schuhmann, who told the AP earlier this year that part of the reason to make such a huge visual database publicly accessible was to ensure that the future of AI development isn't controlled by a handful of powerful companies.

"It will be much safer and much more fair if we can democratize it so that the whole research community and the whole general public can benefit from it," he said.

About the use of AI image-generators to produce illicit images

The problem: Schools and law enforcement have been alarmed at the use of AI tools -- some more accessible than others -- to produce realistic and explicit deepfake images of children. In a growing number of cases, teens have been using the tools to transform real photos of their fully clothed peers into nudes.

How it happens: Without proper safeguards, some AI systems have been able to generate child sexual abuse imagery when prompted to do so because they're able to produce novel images based on what they've learned from the patterns of a huge trove of real images pulled from across the internet, including adult pornography and benign photos of kids. Some systems have also been trained on actual child sexual abuse imagery, including more than 3,200 images found in the giant AI database LAION, according to a report from the Stanford Internet Observatory.

Solutions: The Stanford Internet Observatory and other organizations combating child abuse are urging AI researchers and tech companies to do a better job excluding harmful material from the training datasets that are the foundations for building AI tools. It's hard to put open-source AI models back in the box when they're already widely accessible, so they're also urging companies to do what they can to take down tools that lack strong filters and are known to be favored by abusers.

Much of LAION's data comes from another source, Common Crawl, a repository of data constantly trawled from the open internet, but Common Crawl's executive director, Rich Skrenta, said it was incumbent on LAION to scan and filter what it took before making use of it.

LAION said this week it developed rigorous filters to detect and remove illegal content before releasing its datasets and is still working to improve those filters. The Stanford report acknowledged LAION's developers made some attempts to filter out underage explicit content but might have done a better job had they consulted earlier with child safety experts.

Many text-to-image generators are derived in some way from the LAION database, though it's not always clear which ones. OpenAI, maker of DALL-E and ChatGPT, said it doesn't use LAION and has fine-tuned its models to refuse requests for sexual content involving minors.

Google built its text-to-image Imagen model based on a LAION dataset but decided against making it public in 2022 after an audit of the database uncovered a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes.

Trying to clean up the data retroactively is difficult, so the Stanford Internet Observatory is calling for more drastic measures. One is for anyone who's built training sets off of LAION-5B, named for the more than 5 billion image-text pairs it contains, to delete them or work with intermediaries to clean the material. Another is to effectively make an older version of Stable Diffusion disappear from all but the darkest corners of the internet.

"Legitimate platforms can stop offering versions of it for download, particularly if they are frequently used to generate abusive images and have no safeguards to block them," Thiel said.

As an example, Thiel called out CivitAI, a platform that's favored by people making AI-generated pornography but which he said lacks safety measures to weigh it against making images of children. The report also calls on AI company Hugging Face, which distributes the training data for models, to implement better methods to report and remove links to abusive material.

Hugging Face said it is regularly working with regulators and child safety groups to identify and remove abusive material. Meanwhile, CivitAI said it has strict policies on the generation of images depicting children and has rolled out updates to provide more safeguards. The company also said it is working to ensure its policies are adapting and growing as the technology evolves.

The Stanford report also questions whether any photos of children, even the most benign, should be fed into AI systems without their family's consent due to protections in the federal Children's Online Privacy Protection Act.

Rebecca Portnoff, the director of data science at the anti-child sexual abuse organization Thorn, said her organization has conducted research that shows the prevalence of AI-generated images among abusers is small, but growing consistently.

Developers can mitigate these harms by making sure the datasets they use to develop AI models are clean of abuse materials. Portnoff said there are also opportunities to mitigate harmful uses down the line after models are already in circulation.

Tech companies and child safety groups currently assign videos and images a "hash," a unique digital signature, to track and take down child abuse materials. According to Portnoff, the same concept can be applied to AI models that are being misused.
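To make the hashing idea concrete, here is a minimal sketch of how a known-content blocklist can work: compute a digital fingerprint of a file and compare it against fingerprints of previously identified material. Real systems such as PhotoDNA use perceptual hashes that survive resizing and re-encoding; the cryptographic SHA-256 below is only a simplified stand-in, and the file name and blocklist entry are hypothetical.

import hashlib
from pathlib import Path

def file_fingerprint(path: str) -> str:
    """Return a SHA-256 hex digest of a file's bytes (a simple exact-match hash)."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Hypothetical blocklist of fingerprints for previously identified abusive files.
KNOWN_BAD_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def should_block(path: str) -> bool:
    """Flag a file whose fingerprint matches the blocklist."""
    return file_fingerprint(path) in KNOWN_BAD_HASHES

# Usage (hypothetical file name):
# print(should_block("uploaded_image.png"))

Applying the same concept to models, as Portnoff suggests, would mean fingerprinting model files known to be misused so that platforms can detect and take them down.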

"It's not currently happening," she said. "But it's something that in my opinion can and should be done."

See the original post:

AI image-generators are being trained on explicit photos of children, a study shows - The Associated Press


Forget Nvidia: Buy This Magnificent Artificial Intelligence (AI) Stock Instead – The Motley Fool

Excitement over artificial intelligence (AI) has created many millionaires this year, as chip stocks like Nvidia (NVDA -0.33%) have skyrocketed 230% since Jan. 1. The company has significantly profited from increased demand for graphics processing units (GPUs), which are crucial for training AI models.

Nvidia's business exploded this year. However, it is worth looking at companies at slightly earlier stages in their AI journeys, as they could have more room to run in the coming years.

Intel (INTC 1.95%) is an exciting option, with years of experience in the chip market. The company also plans to launch a new AI GPU in 2024.

So, forget Nvidia. Here is why Intel is a magnificent AI stock to buy instead.

It hasn't been easy to be an investor in Intel over the last few years. The company was responsible for more than 80% of the central processing unit (CPU) market for at least a decade, and was the primary chip supplier for Apple's MacBook lineup for years. However, Intel's dominance saw it grow complacent, leaving it vulnerable to more innovative competitors.

As a result, Advanced Micro Devices started gradually eating away at Intel's CPU market share in 2017, with Intel's share now down to 69%. Then, in 2020, Apple cut ties with Intel in favor of far more powerful in-house hardware. Intel's stock subsequently dipped 4% over the last three years. Meanwhile, annual revenue tumbled 19%, with operating income down 90%.

However, the fall from grace has seemingly lit a fire under Intel again. According to Mercury Research, from the second quarter of 2022 to Q2 2023, Intel regained 3% of its CPU market share from AMD.

Moreover, Intel has pivoted its business to the $137 billion AI market, with plans to challenge Nvidia's dominance in 2024. The sector is projected to expand at a compound annual growth rate of 37% through 2030, which would see it grow to more than $1 trillion before the end of the decade.
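As a rough sanity check on that projection, the sketch below compounds the article's own figures, a $137 billion market growing 37% per year from 2023 through 2030; the calculation method is an assumption on my part, not the forecaster's model.

# Back-of-the-envelope check using only the figures quoted above:
# a $137B market compounding at a 37% annual growth rate.
market_size_billion = 137.0
cagr = 0.37

for year in range(2024, 2031):
    market_size_billion *= 1 + cagr

print(f"Implied 2030 market size: ~${market_size_billion / 1000:.2f} trillion")
# Roughly $1.2 trillion, consistent with "more than $1 trillion."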

As a result, even if Intel can't dethrone Nvidia, projections show there will be plenty of opportunities for Intel to snap up market share and profit significantly from the industry's development.

Earlier this month, Intel unveiled Gaudi3, a generative AI chip meant to compete directly with Nvidia's H100. The GPU will begin shipping in 2024 alongside Core Ultra and Xeon chips that include neural processing units, making them capable of running AI programs faster.

Shares in Intel have soared more than 70% in 2023, almost entirely thanks to its prospects in AI. While that is nowhere near Nvidia's stock growth in the period, it could mean Intel has more to offer new investors in the coming years.

[Chart: analyst earnings-per-share estimates for Intel and Nvidia. Data by YCharts.]

The charts show Intel's earnings could hit nearly $3 per share over the next two fiscal years, while Nvidia's are expected to reach $24 per share. Therefore, on the surface, Nvidia might look like a no-brainer. However, multiplying these figures by the companies' forward price-to-earnings ratios yields a stock price of $130 for Intel and $939 for Nvidia.

Measured against their current share prices, those figures imply Intel's stock could rise 184% and Nvidia's 95% within the next two fiscal years. While both would be impressive gains, Intel's projected upside is far larger.
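Here is a minimal sketch of the arithmetic behind those percentages. The per-share earnings estimates and implied future prices come from the article; the forward P/E ratios and the roughly $46 and $482 late-2023 share prices are back-of-the-envelope assumptions used only to reproduce the stated upside, not figures from the article.

# Worked example of the article's comparison. EPS estimates and implied
# prices are from the article; the current prices are approximations.
companies = {
    # name: (projected EPS, implied future price, approx. current price)
    "Intel":  (3.0, 130.0, 46.0),
    "Nvidia": (24.0, 939.0, 482.0),
}

for name, (eps, implied_price, current_price) in companies.items():
    forward_pe = implied_price / eps           # multiple implied by the article's math
    upside = implied_price / current_price - 1  # gain needed to reach the implied price
    print(f"{name}: forward P/E ~{forward_pe:.0f}, implied upside ~{upside:.0%}")

# Intel: forward P/E ~43, implied upside ~183%
# Nvidia: forward P/E ~39, implied upside ~95%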

The figures align with Nvidia's meteoric rise this year compared to Intel's more gradual expansion. Intel is just getting started in AI and could be in for a lucrative 2024. So if you're looking for an AI stock to add before the new year, Intel is a screaming buy right now instead of Nvidia.

Dani Cook has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Advanced Micro Devices, Apple, and Nvidia. The Motley Fool recommends Intel and recommends the following options: long January 2023 $57.50 calls on Intel, long January 2025 $45 calls on Intel, and short February 2024 $47 calls on Intel. The Motley Fool has a disclosure policy.

View post:

Forget Nvidia: Buy This Magnificent Artificial Intelligence (AI) Stock Instead - The Motley Fool
