
2D material reshapes 3D electronics for AI hardware – The Source – Washington University in St. Louis

Multifunctional computer chips have evolved to do more with integrated sensors, processors, memory and other specialized components. However, as chips have expanded, the time required to move information between functional components has also grown.

"Think of it like building a house," said Sang-Hoon Bae, an assistant professor of mechanical engineering and materials science at the McKelvey School of Engineering at Washington University in St. Louis. "You build out laterally and up vertically to get more function, more room to do more specialized activities, but then you have to spend more time moving or communicating between rooms."

To address this challenge, Bae and a team of international collaborators, including researchers from the Massachusetts Institute of Technology, Yonsei University, Inha University, Georgia Institute of Technology and the University of Notre Dame, demonstrated monolithic 3D integration of layered 2D material into novel processing hardware for artificial intelligence (AI) computing. They envision that their new approach will not only provide a material-level solution for fully integrating many functions into a single, small electronic chip, but also pave the way for advanced AI computing. Their work was published Nov. 27 in Nature Materials, where it was selected as a front cover article.

The team's monolithic 3D-integrated chip offers advantages over existing laterally integrated computer chips. The device contains six atomically thin 2D layers, each with its own function, and achieves significantly reduced processing time, power consumption, latency and footprint. This is accomplished through tightly packing the processing layers to ensure dense interlayer connectivity. As a result, the hardware offers unprecedented efficiency and performance in AI computing tasks.

"This discovery offers a novel solution to integrate electronics and also opens the door to a new era of multifunctional computing hardware. With ultimate parallelism at its core, this technology could dramatically expand the capabilities of AI systems, enabling them to handle complex tasks with lightning speed and exceptional accuracy," Bae said.

"Monolithic 3D integration has the potential to reshape the entire electronics and computing industry by enabling the development of more compact, powerful and energy-efficient devices," Bae said. "Atomically thin 2D materials are ideal for this, and my collaborators and I will continue improving this material until we can ultimately integrate all functional layers on a single chip."

Bae said these devices also are more flexible and functional, making them suitable for more applications.

"From autonomous vehicles to medical diagnostics and data centers, the applications of this monolithic 3D integration technology are potentially boundless," he said. "For example, in-sensor computing combines sensor and computer functions in one device, instead of a sensor obtaining information and then transferring the data to a computer. That lets us obtain a signal and directly compute data, resulting in faster processing, less energy consumption and enhanced security because data isn't being transferred."

Kang J-H, Shin H, Kim KS, Song M-K, Lee D, Meng Y, Choi C, Suh JM, Kim BJ, Kim H, Hoang AT, Park B-I, Zhou G, Sundaram S, Vuong P, Shin J, Choe J, Xu Z, Younas R, Kim JS, Han S, Lee S, Kim SO, Kang B, Seo S, Ahn H, Seo S, Reidy K, Park E, Mun S, Park M-C, Lee S, Kim H-J, Kum HS, Lin P, Hinkle C, Ougazzaden A, Ahn J-H, Kim J, and Bae S-H. Monolithic 3D integration of 2D materials-based electronics towards ultimate edge computing solutions. Nature Materials. Nov. 27, 2023. DOI: https://doi.org/10.1038/s41563-023-01704-z

This work was supported by Washington University in St. Louis and its Institute of Materials Science & Engineering, the Korea Institute of Science and Technology, the National Research Foundation of Korea, the National Science Foundation, and SUPREME, one of seven centers in JUMP 2.0, a Semiconductor Research Corp. program sponsored by DARPA.

Originally published on the McKelvey School of Engineering website.


OSDG Initiative Recognized in Top 100 AI Projects for Advancing … – United Nations Development Programme

New York - The OSDG initiative, a collaborative effort between the United Nations Development Programme (UNDP) and the European research and policy analysis centre PPMI, has been honored as one of the IRCAI Top 100 Artificial Intelligence (AI) initiatives driving progress toward the Sustainable Development Goals (SDGs). This prestigious recognition is bestowed annually by the International Research Centre on Artificial Intelligence (IRCAI) under the auspices of UNESCO.

OSDG represents a unique partnership, supported by a global volunteer community. It encompasses an innovative online tool for SDG analysis and a curated dataset for Machine Learning applications. This recognition underscores OSDG's remarkable contribution to the SDGs by uniting innovative AI solutions with a dedicated volunteer community.

Initially conceived as a basic online text analysis tool for SDG monitoring, OSDG has rapidly evolved into a sophisticated resource for users around the world. As a free, open-source tool developed by PPMI and SDG AI Lab, OSDG can swiftly review uploaded texts and identify their relevance to the 17 SDGs. The tool enables users to conduct comprehensive SDG analyses and visually explore data from thousands of documents in 17 languages.
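OSDG's full classification pipeline is not described in this article, so purely as a hedged illustration of the general idea of tagging a text against the 17 goals, the hypothetical Python sketch below scores a document against a tiny, made-up keyword list for a few SDGs; the labels, keywords and threshold are all assumptions, not the tool's actual method.

```python
# Hypothetical sketch of SDG relevance tagging. The goals listed, the
# keywords and the threshold are invented for illustration; the real OSDG
# tool uses a far richer, community-curated methodology.
SDG_KEYWORDS = {
    "SDG 3: Good Health and Well-being": {"health", "disease", "vaccine"},
    "SDG 7: Affordable and Clean Energy": {"renewable", "solar", "energy"},
    "SDG 13: Climate Action": {"climate", "emissions", "warming"},
    # ...the remaining goals would be listed similarly
}

def tag_sdgs(text: str, min_hits: int = 2) -> list[str]:
    """Return SDG labels whose keywords appear at least `min_hits` times."""
    words = [w.strip(".,;:") for w in text.lower().split()]
    labels = []
    for label, keywords in SDG_KEYWORDS.items():
        hits = sum(1 for w in words if w in keywords)
        if hits >= min_hits:
            labels.append(label)
    return labels

print(tag_sdgs("Cutting emissions and expanding renewable energy limits climate warming."))
# -> ['SDG 7: Affordable and Clean Energy', 'SDG 13: Climate Action']
```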

Moreover, the OSDG team has curated a one-of-a-kind OSDG Community Dataset containing 42,065 text excerpts. These have been rigorously vetted by 2,600 dedicated volunteers to ensure alignment with the SDGs. The dataset, downloaded over 4,000 times, has significantly contributed to several prominent academic publications. The community of online volunteers, which includes almost 1,000 Online UN Volunteers from over 50 countries, has contributed expertise in fields such as science, technology, healthcare and public policy in the realm of sustainable development.

The versatility of OSDG extends to academic and practical applications. Prominent universities, including University College London, the University of Hong Kong, Hohenheim University, Nazarbayev University and York University, have used OSDG to align their curricula and research with the SDGs. Both the dataset and the tool have also featured in publications covering topics ranging from the European Green Deal to urban resilience, computational thinking, carbon emissions, and technical papers on knowledge graphs and the RoBERTa approach.

In government policy-making and national planning, OSDG has proven to be an invaluable asset. For example, the Government of Türkiye, in collaboration with the SDG AI Lab, used the OSDG tool to align the 11th Development Plan with the SDGs. This process not only identified frequently addressed goals but also highlighted less targeted ones, offering a template for nations worldwide to integrate technology for more informed policymaking.

The success of OSDG has sparked the development of new tools and research initiatives. The UNDP global team, for instance, has launched the SDGs Push platform, powered by the OSDG Community Dataset. Similarly, the dataset has been instrumental in developing the GIZ Policy Action Tracker and has been utilized by the policy analysis startup Overton. The OSDG project continues to drive SDG-related innovation and global cooperation.

The recognition of OSDG among the Top 100 initiatives advancing the SDGs underscores the critical importance of digital technologies and volunteerism to achieve sustainable development. The project's global impact supports various stakeholders, including academia, researchers and government, in their pursuit of the SDGs.


‘Unmasking AI’ author Joy Buolamwini says prejudice is baked into … – NPR

Computer scientist Joy Buolamwini was a graduate student at MIT when she made a startling discovery: The facial recognition software program she was working on couldn't detect her dark skin; it only registered her presence when she put on a white mask.

It was Buolamwini's first encounter with what she came to call the "coded gaze."

"You've likely heard of the 'male gaze' or the 'white gaze,'" she explains. "This is a cousin concept really, about who has the power to shape technology and whose preferences and priorities are baked in as well as also, sometimes, whose prejudices are baked in."

Buolamwini notes that in a recent test of Stable Diffusion's text-to-image generative AI system, prompts for high-paying jobs overwhelmingly yielded images of men with lighter skin. Meanwhile, prompts for criminal stereotypes, such as drug dealers, terrorists or inmates, typically resulted in images of men with darker skin.

In her new book, Unmasking AI: My Mission to Protect What Is Human in a World of Machines, Buolamwini looks at the social implications of the technology and warns that biases in facial analysis systems could harm millions of people, especially if they reinforce existing stereotypes.

"With the adoption of AI systems, at first I thought we were looking at a mirror, but now I believe we're looking into a kaleidoscope of distortion," Buolamwini says. "Because the technologies we believe to be bringing us into the future are actually taking us back from the progress already made."

Buolamwini says she got into computer science because she wanted to "build cool future tech," not to be an activist. But as the potential misuses of the technology became clearer, she realized she needed to speak out.

"I truly believe if you have a face, you have a place in the conversation about AI," she says. "As you encounter AI systems, whether it's in your workplace, maybe it's in the hospital, maybe it's at school, [ask] questions: 'Why have we adopted this system? Does it actually do what we think it's going to do?' "

On why facial recognition software makes mistakes

How is it that someone can be misidentified by a machine? So we have to look at the ways in which we teach machines to recognize the pattern of a face. And so the approach to this type of pattern recognition is often machine learning. And when we talk about machine learning, we're talking about training AI systems that learn from a set of data. So you have a dataset that would contain many examples of a human face, and from that dataset, using various techniques, the model would be trained to detect the pattern of a face, and then you can go further and say, "OK, let's train the model to find a specific face."

What my research showed and what others have shown as well is that many of these datasets were not representative of the world at all. I started calling them "pale male" datasets, because I would look into the datasets and I would go through and count: How many light-skinned people? How many dark-skinned people? How many women, how many men, and so forth. And some of the really important datasets in our field could be 70% men, over 80% lighter-skinned individuals. And these sorts of datasets could be considered gold standards. ...
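As a concrete, hedged illustration of the kind of audit described in this answer, the short Python sketch below tallies the gender and skin-type composition of a face dataset; the five records are invented placeholders rather than data from any real benchmark.

```python
# Illustrative audit of dataset composition, in the spirit of the "pale male"
# counting described above. The records are invented, not a real benchmark.
from collections import Counter

records = [
    {"gender": "male", "skin": "lighter"},
    {"gender": "male", "skin": "lighter"},
    {"gender": "male", "skin": "darker"},
    {"gender": "female", "skin": "lighter"},
    {"gender": "female", "skin": "darker"},
]

def composition(rows, key):
    """Return the percentage breakdown of `key` across the dataset."""
    counts = Counter(row[key] for row in rows)
    total = sum(counts.values())
    return {value: round(100 * n / total, 1) for value, n in counts.items()}

print(composition(records, "gender"))  # {'male': 60.0, 'female': 40.0}
print(composition(records, "skin"))    # {'lighter': 60.0, 'darker': 40.0}
```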

And so it's not then so surprising that you would have higher misidentification rates for people who are less represented when these types of systems were being developed in the first place. And so when you look at people like Porcha Woodruff, who was falsely arrested due to facial recognition misidentification, when you look at Robert Williams, who was falsely arrested due to facial misidentification in front of his two young daughters, when you look at Nijeer Parks, when you look at Randall Reed, Randall was arrested for a crime that occurred in a state he had never even set foot in. And all of these people I've mentioned, they're all dark-skinned individuals.

On why AI misgenders female faces

Joy Buolamwini is the founder of the Algorithmic Justice League, an organization that raises awareness about the implications of AI. Her research was also featured in the Netflix documentary Coded Bias. (Photo: Naima Green/Penguin Random House)

I looked at the research on gender classification, I saw with some prior studies, actually older women tended to be misgendered more often than younger women. And I also started looking at the composition of the various gender classification testing datasets, the benchmarks and so forth. And it's a similar kind of story to the dark skin here. It's not just the proportion of representation, but what type of woman is represented. So, for example, many of these face datasets are face datasets of celebrities. And if you look at women who tend to be celebrated, [they are] lighter skin women, but also [women who] fit very specific gender norms or gender presentation norms and stereotypes as well. And so if you have systems that are trained on some type of ideal form of woman that doesn't actually fit many ways of being a woman, this learned gender presentation does not reflect the world.

On being a "poet of code," and the success of her piece, "AI, Ain't I a Woman?"

I spent so much time wanting to have my research be taken seriously. ... I was concerned people might also think it's a gimmick. ... And so after I published the Gender Shades paper and it was really well received in the academic world and also industry, in some ways I felt that gave me a little bit of a shield to experiment with more of the poetic side. And so shortly after that research came out, I did a poem called "AI, Ain't I a Woman?," which is both a poem and an AI audit from testing different AI systems out. And so the AI audit results are what drive the lyrics of the poems. And as I was working on that, it allowed me to connect with the work in a different way.

This is where the humanizing piece comes in. So it's one thing to say, "OK, this system is more accurate than that system," or "this system performs better on darker skin or performs better on lighter skin." And you can see the numbers. But I wanted to go from the performance metrics to the performance arts so you could feel what it's like if somebody is misclassified not just read the various metrics around it.

And so that's what the whole experimentation around "AI, Ain't I a Woman?" was. And that work traveled in places I didn't expect. Probably the most unexpected place was with the EU Global Tech panel. It was shown to defense ministers of every EU country ahead of a conversation on lethal autonomous weapons to humanize the stakes and think about what we're putting out.

On her urgent message for President Biden about AI

We have an opportunity to lead on preventing AI harms, and the subtitle of the book is Protecting What Is Human in a World of Machines. And when I think of what is human, I think about our right to express ourselves, the essence of who we are and our expectations of dignity. I challenge President Biden for the U.S. to lead on what I call biometric rights. ...

I'm talking about our essence, our actual likeness. ... Someone can take the voice of your loved one, clone it and use it in a hoax. So you might hear someone screaming for your name, saying someone has taken something, and you have fraudsters who are using these voice clones to extort people. Celebrity won't save you. You had Tom Hanks, his likeness was being used with synthetic media with a deepfake to promote a product he had never even heard of.

So we see these algorithms of exploitation that are taking our actual essence. And then we also see the need for civil rights and human rights continue. It was very encouraging to see in the executive order that the principles from the Blueprint for an AI Bill of Rights, such as protections from algorithmic discrimination, that the AI systems being used are effective, that there are human fallbacks, were actually included, because that's going to be necessary to safeguard our civil rights and our human rights.

On how catastrophizing about AI killing us in the future neglects the harm it can do now

I'm concerned with the way in which AI systems can kill us slowly already. I'm also concerned with things like lethal autonomous weapons as well. So for me, you don't need to have super intelligent AI systems or advanced robotics to have a real harm. A self-driving car that doesn't see you on the road can be fatal and harmful. I think of this notion of structural violence where we think of acute violence: There's the gun, the bullet, the bomb. We see that type of violence. But what's the violence of not having access to adequate health care? What's the violence of not having housing and an environment free of pollution?

And so when I think about the ways in which AI systems are used to determine who has access to health care and insurance, who gets a particular organ, in my mind ... there are already many ways in which the integration of AI systems lead to real and immediate harms. We don't have to have super-intelligent beings for that.

Sam Briger and Thea Chaloner produced and edited this interview for broadcast. Bridget Bentz, Molly Seavy-Nesper and Beth Novey adapted it for the web.


Social Security Administration names Peltier acting chief AI officer – FedScoop

The Social Security Administration has named Brian Peltier its acting chief AI officer, FedScoop has learned. Peltier, who is currently the agency's chief architect and responsible AI official, is one of several people who have been appointed to the CAIO role in recent weeks.

Though some federal agencies previously had CAIOs, the Biden administration's recent executive order on AI requires many federal agencies to name an official to the position. Agencies are expected to share the names of their CAIOs with the Office of Management and Budget 60 days after it finalizes guidance for government use of the technology. A draft version of that guidance was released earlier this month.

In response to FedScoop reporting, several agencies, including the National Science Foundation, the Department of Housing and Urban Development and the Education Department, have announced who they've appointed to the role.

While some agencies, like the Social Security Administration and the Department of Health and Human Services, have brought their responsible AI officials into the new CAIO role, others have selected their top data and technology leaders. FedScoop is tracking those appointed to the role of CAIO, as well as those previously appointed to the role of responsible AI official, at Chief Financial Officer Act agencies.

Notably, the SSA is already using several forms of artificial intelligence, according to an agency inventory.

Madison Alder contributed to this article.


UNFCCC partners with Microsoft to use AI and advanced data … – Microsoft

DUBAI, United Arab Emirates, Nov. 29, 2023 - Leaders from the United Nations and Microsoft Corp. on Thursday announced a partnership that will enable the UNFCCC to create a new AI-powered platform and global climate data hub to measure and analyze global progress in reducing emissions. This will dramatically simplify the process to validate and analyze climate data submitted by the 196 Parties to the Paris Agreement.

The partnership comes at a critical time, as the world's governments come together at COP28, organized by the UNFCCC and the COP28 UAE Presidency, to take stock of the slow progress in meeting the climate goals set by the Paris Agreement.

"The world must move faster to reduce carbon emissions. Simply put, you can't fix what you can't measure, and these new AI and data tools will allow nations to measure emissions far better than they can today," said Brad Smith, vice chair and president of Microsoft.

"The Paris Agreement provides the framework for all the world's nations to reduce greenhouse gas emissions in line with limiting global warming to 1.5 degrees," said Simon Stiell, UNFCCC executive secretary. "Climate change is a global emergency that goes beyond borders. It will require technology for adaptation and mitigation. Progress also requires collaboration from trusted partners to develop the tools that the framework requires to be delivered. We are happy to work with Microsoft in this effort."

Aggregating and analyzing carbon data today is time-consuming and often done through manual methods. Under the agreement, Microsoft will build a new platform to provide digital support to the UNFCCC's Enhanced Transparency Framework. This platform will enable advanced analysis of global climate data through the creation of a new global climate data hub and an AI-powered data analytics platform. This will equip the UNFCCC and member states with the tools they need to efficiently report and validate progress toward carbon reduction targets. This includes tracking transportation, agriculture, industrial processes, and other sources of carbon emissions. It will also provide the UNFCCC and member states with tools to plan carbon reduction strategies using simulations, benchmarks, and data visualizations to help inform targeted actions, saving time and money.
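The platform itself is not public, so the following Python fragment is only a hypothetical illustration of the sector-level roll-up and target check the announcement describes; every number, sector name and the target percentage are invented.

```python
# Hypothetical illustration only: invented sector figures rolled up and
# compared against an invented national reduction target. The actual
# UNFCCC/Microsoft platform is not described at this level of detail.
reported_emissions_mt = {    # megatonnes of CO2-equivalent, invented values
    "transportation": 120.0,
    "agriculture": 80.5,
    "industrial_processes": 95.2,
}
baseline_total_mt = 340.0    # invented baseline-year total
target_reduction_pct = 30.0  # invented reduction target

total = sum(reported_emissions_mt.values())
reduction_pct = 100.0 * (baseline_total_mt - total) / baseline_total_mt

print(f"reported total: {total:.1f} Mt CO2e")
print(f"reduction vs. baseline: {reduction_pct:.1f}% (target: {target_reduction_pct:.0f}%)")
print("on track" if reduction_pct >= target_reduction_pct else "off track")
```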

This work will also include the creation of Global Climate Dashboards for publication on the UNFCCC website, increasing transparency and accountability and ultimately informing meaningful climate action.

Microsoft has committed $3 million over two years to help enable the implementation of the Enhanced Transparency Framework and the Global Stocktake mechanisms established by the Paris Agreement.

Adopted in 2015, the Paris Agreement commits countries to reducing emissions to slow the impact of climate change, and to strengthening these commitments over time. Implementation of the Paris Agreement is critical to achieving the Sustainable Development Goals.

Microsoft and the UNFCCC will also partner to host a series of events intended to accelerate climate action in the UNFCCC Pavilion (Blue Zone) at COP28.

About UNFCCC

The UNFCCC secretariat (UN Climate Change) is the United Nations entity tasked with supporting the global response to the threat of climate change. UNFCCC stands for United Nations Framework Convention on Climate Change.

About Microsoft

Microsoft (Nasdaq: MSFT; @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.



Coca-Cola set to continue generative AI efforts with holiday-season … – Digiday

This upcoming holiday season, Coca-Cola is hoping to win over users to its AI platform, which allows them to create customized holiday greeting cards with prompts. The Create Real Magic platform was built for Coca-Cola by OpenAI and Bain & Company using assets from the Coca-Cola archive. For the greeting card effort, iconic Coca-Cola holiday artwork is being used.

Coca-Cola, this past March, began asking its fans to use the Create Real Magic platform to create artwork for the brand with the potential for the final product to appear on billboard ads in New York and London. The callout had fans spending more than seven minutes and twelve seconds each and creating more than 120,000 images.

The beverage behemoth believes that an interactive, easy-to-use tool that appeals to consumers, particularly younger, tech-savvy consumers, can help the brand retain relevance.

"We're living in the age of AI," said Pratik Thakar, Coca-Cola's senior director of generative AI; he moved into the role this past summer. "We want to stay ahead of the curve. We want to stay innovative. [What we're doing is] taking our Create Real Magic platform and enhancing it with new technology, new features and making it more relevant for holidays."

Coca-Cola worked with creators from its Real Magic Creative Academy (the brand held a three-day symposium for digital artists using AI earlier this year) to improve the platform and make it more user-friendly for the holiday. The stunt, which is available now and will run through the holidays, allows users to input a prompt, select various images and make a customized holiday greeting card. Users can then download the card or send it directly from the platform to their friends and family. The initiative is just one example of the way the brand is working to create more interactive experiences for potential customers and fans.

Beyond AI, Coke is exploring experiences like the Sphere, gaming and music to find ways to connect its brand with culture beyond traditional advertising vehicles. It's no surprise that the brand would do so, as marketers recognize that historical advertising models are not delivering what they used to, given the fragmented media and social landscape. That being the case, marketers have to offer more to get consumers to pay attention.

"The company has been working to transform its marketing and move from the interruption model to experience and engagement," said Thakar.

"As media gets more fragmented and retaining the attention of audiences becomes increasingly challenging, brands are looking to innovation to insert themselves into the conversation," said Lydia Corin, director of creative partnerships at creative studio The Mayda Creative.

Corin continued: "Both legacy and challenger brands recognize that culturally poignant content will always trump traditional advertising. This is an incredibly smart move for Coca-Cola, who, in the past, leveraged the heritage of its previous holiday campaigns to engage consumers during the holiday season."

Justin Booth-Clibborn, creative strategist and former chief executive producer at creative shop Psyop, agreed: "Aside from keeping Coca-Cola in the immense cultural conversation around AI, this speaks to the core issue of how brands are looking for ways to strengthen their relationships with consumers beyond the transactional, and beyond traditional advertising." Booth-Clibborn added that by entertaining without immediately asking for a sale, the company is building brand sentiment, credibility and trust over time.

Aside from the free tool, the brand is incentivizing people to use it once again with the potential for their artwork to appear on billboards. While the previous effort offered two spots, the brand is planning to create 20 billboards in various countries to incentivize people to spend time on the platform and create holiday cards. "People love their artwork going on those famous, iconic billboards," said Thakar.

To gauge the success of the effort, Coca-Cola will track how many images are created and shared, how much time people spend on the platform and how many greeting cards are generated.

"When marketers are able to make an experience where users will want to share the brand's content, that's a win for the brand," said Eunice Shin, partner at Prophet, a growth strategy consulting firm. Shin cited Spotify's Wrapped feature, which serves as an annual marketing moment for the platform, as an example of brand content that's gone viral because it's customized and semi-user generated.

"If it's a good product, people will do it," said Shin. "It all comes back to activation and execution. If there's social excitement and momentum, if it's easily shareable, then there's potential there."


Is AI coming to a weather forecast near you? – MPR News

Many people check the weather forecast on their phones several times a day. One day soon, you may be seeing a weather forecast generated by artificial intelligence, and it could improve the accuracy of modern weather forecasting.

A recent study in the journal Science describes how an AI weather forecast model from Google's DeepMind significantly outperformed conventional weather forecasting methods. The project, called GraphCast, bested the European Centre for Medium-Range Weather Forecasts in predicting global weather conditions up to 10 days in advance.

GraphCast's forecast skill and efficiency are already competitive with traditional weather forecasting methods.

Here's Google's description of GraphCast and the study's performance:


GraphCast is a weather forecasting system based on machine learning and Graph Neural Networks (GNNs), which are a particularly useful architecture for processing spatially structured data.

GraphCast makes forecasts at the high resolution of 0.25 degrees longitude/latitude (28 km x 28 km at the equator). That's more than a million grid points covering the entire Earth's surface. At each grid point, the model predicts five Earth-surface variables, including temperature, wind speed and direction, and mean sea-level pressure, and six atmospheric variables at each of 37 levels of altitude, including specific humidity, wind speed and direction, and temperature.

While GraphCast's training was computationally intensive, the resulting forecasting model is highly efficient. Making 10-day forecasts with GraphCast takes less than a minute on a single Google TPU v4 machine. For comparison, a 10-day forecast using a conventional approach, such as HRES, can take hours of computation in a supercomputer with hundreds of machines.

In a comprehensive performance evaluation against the gold-standard deterministic system, HRES, GraphCast provided more accurate predictions on more than 90 percent of 1,380 test variables and forecast lead times (see our Science paper for details). When we limited the evaluation to the troposphere, the 6-20 kilometer high region of the atmosphere nearest to Earth's surface where accurate forecasting is most important, our model outperformed HRES on 99.7 percent of the test variables for future weather.
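As a quick sanity check on the figures quoted above, the sketch below recomputes the grid-point count and the per-point variable count; the exact 721-by-1440 latitude/longitude discretization is an assumption here, since the description only states the 0.25-degree resolution.

```python
# Back-of-the-envelope check of the grid described above. A 0.25-degree
# global grid is assumed to have 1440 longitudes and 721 latitudes
# (including both poles); that discretization is an assumption.
lon_points = int(360 / 0.25)      # 1440
lat_points = int(180 / 0.25) + 1  # 721

grid_points = lon_points * lat_points
print(f"grid points: {grid_points:,}")  # 1,038,240 -> "more than a million"

surface_vars = 5        # temperature, wind speed/direction, mean sea-level pressure, ...
atmospheric_vars = 6    # specific humidity, wind, temperature, ... per level
pressure_levels = 37

vars_per_point = surface_vars + atmospheric_vars * pressure_levels
print(f"variables predicted per grid point: {vars_per_point}")  # 227
```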

I've been making daily weather forecasts from computer models for four decades. There are two big developments that jump out here for me with the new AI machine learning approach.

The first is that GraphCast's machine learning application uses historical data analogs. This is a different approach from the commonly used process of feeding current weather conditions into a forecast model that predicts future weather patterns. The use of historical weather analogs could improve outcomes in many forecast situations.

The second notable development is the sheer speed of the AI forecast process. GraphCast is able to run these forecasts in about one minute. That's orders of magnitude faster than current weather forecast models, which take hours to run.

The ability to rerun these forecasts multiple times in just a few minutes could radically improve how models incorporate and adjust to incoming weather data. This could vastly improve forecasts over just a few hours as initial conditions change.

Photo: NOAA supercomputers. Courtesy of the National Oceanic and Atmospheric Administration.

It's still early in the AI forecast process, but the skill shown so far is astounding. It's quite possible that AI-generated weather forecasts could lead to a breakthrough in forecast speed and accuracy in the coming years.

We have seen tremendous advancements in weather forecast skill over the past 40 years during my weather forecast career. Today's five-day forecast is as accurate as the three-day forecast 30 years ago.

It's going to be amazing to watch and see how much AI can improve weather forecasts and warnings over the next decade.

The big question is, can AI currently pronounce unusual Minnesota place names like Lac qui Parle County?

Stay tuned.


NVIDIA Brings Business Intelligence to Chatbots, Copilots and … – NVIDIA Blog

Cadence, Dropbox, SAP, ServiceNow First to Access NVIDIA NeMo Retriever to Optimize Semantic Retrieval for Accurate AI Inference

AWS re:Invent - NVIDIA today announced a generative AI microservice that lets enterprises connect custom large language models to enterprise data to deliver highly accurate responses for their AI applications.

NVIDIA NeMo Retriever, a new offering in the NVIDIA NeMo family of frameworks and tools for building, customizing and deploying generative AI models, helps organizations enhance their generative AI applications with enterprise-grade retrieval-augmented generation (RAG) capabilities.

As a semantic-retrieval microservice, NeMo Retriever helps generative AI applications provide more accurate responses through NVIDIA-optimized algorithms. Developers using the microservice can connect their AI applications to business data wherever it resides across clouds and data centers. It adds NVIDIA-optimized RAG capabilities to AI foundries and is part of the NVIDIA AI Enterprise software platform, available in AWS Marketplace.

Cadence, Dropbox, SAP and ServiceNow are among the pioneers working with NVIDIA to build production-ready RAG capabilities into their custom generative AI applications and services.

"Generative AI applications with RAG capabilities are the next killer app of the enterprise," said Jensen Huang, founder and CEO of NVIDIA. "With NVIDIA NeMo Retriever, developers can create customized generative AI chatbots, copilots and summarization tools that can access their business data to transform productivity with accurate and valuable generative AI intelligence."

Global Leaders Enhance LLM Accuracy With NeMo Retriever

Electronic systems design leader Cadence serves companies across hyperscale computing, 5G communications, automotive, mobile, aerospace, consumer and healthcare markets. It is working with NVIDIA to develop RAG features for generative AI applications in industrial electronics design.

"Generative AI introduces innovative approaches to address customer needs, such as tools to uncover potential flaws early in the design process," said Anirudh Devgan, president and CEO of Cadence. "Our researchers are working with NVIDIA to use NeMo Retriever to further boost the accuracy and relevance of generative AI applications to reveal issues and help customers get high-quality products to market faster."

Cracking the Code for Accurate Generative AI Applications

Unlike open-source RAG toolkits, NeMo Retriever supports production-ready generative AI with commercially viable models, API stability, security patches and enterprise support.

NVIDIA-optimized algorithms power the highest-accuracy results in Retriever's embedding models. The optimized embedding models capture relationships between words, enabling LLMs to process and analyze textual data.

Using NeMo Retriever, enterprises can connect their LLMs to multiple data sources and knowledge bases, so that users can easily interact with data and receive accurate, up-to-date answers using simple, conversational prompts. Businesses using Retriever-powered applications can allow users to securely gain access to information spanning numerous data modalities, such as text, PDFs, images and videos.
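NeMo Retriever's internals and API are not spelled out in this announcement, so the sketch below shows only the generic retrieval-augmented generation pattern it refers to: embed documents, retrieve the most semantically similar ones for a question, and fold them into the prompt sent to an LLM. The `embed` function is a toy stand-in for a real embedding model, and none of the names correspond to NVIDIA's actual interfaces.

```python
# Generic RAG retrieval pattern, sketched with a toy embedding. This is NOT
# the NeMo Retriever API; it only illustrates the concept described above.
import math

def embed(text: str) -> list[float]:
    # Placeholder embedding; real systems use a trained embedding model.
    vec = [0.0] * 64
    for i, ch in enumerate(text.lower()):
        vec[i % 64] += ord(ch) / 1000.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "Q3 revenue grew 12 percent, driven by the enterprise segment.",
    "The travel policy requires manager approval for international trips.",
]
doc_vectors = [(doc, embed(doc)) for doc in documents]

def build_prompt(question: str, top_k: int = 1) -> str:
    """Retrieve the most similar documents and assemble an LLM prompt."""
    q_vec = embed(question)
    ranked = sorted(doc_vectors, key=lambda dv: cosine(q_vec, dv[1]), reverse=True)
    context = "\n".join(doc for doc, _ in ranked[:top_k])
    # The assembled prompt would then be sent to the enterprise's LLM.
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How did revenue change last quarter?"))
```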

Enterprises can use NeMo Retriever to achieve more accurate results with less training, speeding time to market and supporting energy efficiency in the development of generative AI applications.

Reliable, Simple, Secure Deployment With NVIDIA AI Enterprise

Companies can deploy NeMo Retriever-powered applications to run during inference on NVIDIA-accelerated computing in virtually any data center or cloud. NVIDIA AI Enterprise supports accelerated, high-performance inference with NVIDIA NeMo, NVIDIA Triton Inference Server, NVIDIA TensorRT, NVIDIA TensorRT-LLM and other NVIDIA AI software.

To maximize inference performance, developers can run their models on NVIDIA GH200 Grace Hopper Superchips with TensorRT-LLM software.

Availability

Developers can sign up for early access to NVIDIA NeMo Retriever.


FTC Bureau Director Outlines FTC's Proactive Approach to AI Regulation – The National Law Review

On September 19, 2023, the Director of the Federal Trade Commission's Bureau of Consumer Protection, Samuel Levine, delivered remarks during the National Advertising Division's annual conference that provided insight into the FTC's ongoing strategy for regulating artificial intelligence (AI). Levine emphasized that the FTC is taking a more proactive approach to protect consumers from harmful uses of AI while ensuring the market remains fair, open and competitive. Levine expressed the belief that self-regulation is not sufficient to address the regulation of AI. He also asserted that the FTC would continue to use its enforcement authority to challenge unfair or deceptive practices related to emerging AI products and push to expand its existing toolkit through proposed rules, such as imposing fines against those who use voice-cloning to defraud consumers. In his speech, he stated, "I would say, at this stage, that we're monitoring the market closely. I think the bigger thing we're seeing now is claims around the use of AI. When we see more actual use of AI in direct interaction with consumers, we'll be monitoring that closely to ensure that they're not being deceived and that does not lead to harm otherwise."


Philippines’ SEC to block access to world’s largest crypto exchange Binance – Reuters

Smartphone with displayed Binance logo and representation of cryptocurrencies are placed on a keyboard in this illustration taken June 8, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

MANILA, Nov 29 (Reuters) - The Philippines' Securities and Exchange Commission has begun the process of blocking access to the world's largest crypto exchange Binance, whose chief last week stepped down and pleaded guilty to breaking U.S. anti-money laundering laws.

The SEC said the operator of Binance was not a registered corporation in the Philippines, and was operating without the necessary licence and authority to sell or offer any form of securities.

The removal of access in the Philippines, the SEC said in a statement, will take effect within three months of the issuance of its advisory on Nov. 28 to give Filipino users time to pull out investments from the crypto exchange.

It has asked Alphabet's Google (GOOGL.O) and Facebook parent Meta to ban online advertisements from Binance in the Philippines, and warned those selling via or convincing people to invest in the platform they may be held criminally liable.

Former Binance chief Changpeng Zhao stepped down as CEO last week after pleading guilty to wilfully causing the exchange to fail to maintain an effective anti-money laundering program.

Reuters sought comment from Binance through email, but received an automated response.

Reporting by Karen Lema and Mikhail Flores; Editing by Jan Harvey

