
Sam Altman: CEO of OpenAI calls for US to regulate artificial intelligence – BBC

Updated 17 May 2023

Sam Altman testified before a US Senate Committee about the potential of artificial intelligence - and its risks

The creator of advanced chatbot ChatGPT has called on US lawmakers to regulate artificial intelligence (AI).

Sam Altman, the CEO of OpenAI, the company behind ChatGPT, testified before a US Senate committee on Tuesday about the possibilities - and pitfalls - of the new technology.

In a matter of months, several AI models have entered the market.

Mr Altman said a new agency should be formed to license AI companies.

ChatGPT and other similar programmes can create incredibly human-like answers to questions - but can also be wildly inaccurate.

Mr Altman, 38, has become a spokesman of sorts for the burgeoning industry. He has not shied away from addressing the ethical questions that AI raises, and has pushed for more regulation.

He said that AI could be as big as "the printing press" but acknowledged its potential dangers.

"I think if this technology goes wrong, it can go quite wrong...we want to be vocal about that," Mr Altman said. "We want to work with the government to prevent that from happening."

He also admitted the impact that AI could have on the economy, including the likelihood that AI technology could replace some jobs, leading to layoffs in certain fields.

"There will be an impact on jobs. We try to be very clear about that," he said, adding that the government will "need to figure out how we want to mitigate that".

Mr Altman added, however, that he is "very optimistic about how great the jobs of the future will be".

Watch: Senator Richard Blumenthal uses ChatGPT to write his statement

However, some senators argued new laws were needed to make it easier for people to sue OpenAI.

Mr Altman told legislators he was worried about the potential impact on democracy, and how AI could be used to send targeted misinformation during elections - a prospect he said is among his "areas of greatest concerns".

"We're going to face an election next year," he said. "And these models are getting better."

He gave several suggestions for how a new agency in the US could regulate the industry - including "a combination of licensing and testing requirements" for AI companies, which he said could be used to regulate the "development and release of AI models above a threshold of capabilities".

He also said firms like OpenAI should be independently audited.

Republican Senator Josh Hawley said the technology could be revolutionary, but also compared the new tech to the invention of the "atomic bomb".

Democrat Senator Richard Blumenthal observed that an AI-dominated future "is not necessarily the future that we want".

"We need to maximize the good over the bad. Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment," he warned.

What was clear from the testimony is that there is bipartisan support for a new body to regulate the industry.

However, the technology is moving so fast that legislators also wondered whether such an agency would be capable of keeping up.

Ex-Google CEO: AI on social media bad for democracy

Read the original here:
Sam Altman: CEO of OpenAI calls for US to regulate artificial intelligence - BBC

Want to Cash In on Artificial Intelligence? These AI Stocks Will Pay Immediate Dividends – The Motley Fool

The launch of ChatGPT has generated a lot of buzz, making artificial intelligence (AI) one of the hottest topics in the business and investment world. Many companies are seeking to learn how to leverage AI's power to grow their businesses.

Investors are pouring into AI stocks, hoping to cash in on the frenzy. However, many AI stocks will likely never live up to the hype. Because of that, investors should consider companies with AI upside that haven't yet gotten caught up in the hype. Equinix (EQIX 0.33%) and Intuit (INTU -0.36%) are under-the-radar AI stocks. Adding to their appeal is that both pay dividends, enabling investors to immediately generate income from companies starting to capitalize on the AI megatrend.

Equinix is a data center real estate investment trust (REIT). Those facilities will be increasingly crucial to supporting AI because companies will need space to store all the data used to train and run their AI programs.

The REIT is already starting to see AI-driven demand materialize. CEO Charles Meyers stated on the first-quarter conference call: "We've closed several key AI wins over the past few quarters and are seeing a growing pipeline of new opportunities directly and with key partners for both training and inference use cases." Meyers believes we're in the "early days" of AI-driven data demand, which will be an "exciting incremental opportunity for the company."

AI could support strong occupancy, rising rental rates, and new development opportunities for the company. Equinix sees AI using two types of data centers. AI learning will likely occur in large-scale data centers like its xScale facilities. Meanwhile, AI interface programs, like ChatGPT, will likely run in retail locations close to end users because they need proximity to quickly crunch data and generate outputs.

Equinix's data centers generate a lot of cash, giving the REIT the money to pay a decent dividend. Equinix has a 1.9% dividend yield, slightly higher than the S&P 500's 1.7% yield. That enables investors to generate a nice passive income stream from an AI-powered stock. The company raised its payout by 10% earlier this year and has steadily increased the dividend over the years.

Meanwhile, investors are getting a reasonable price on a company with lots of AI-powered upside. Shares are down about 5% from their 52-week high as the stock has yet to get caught up on the AI hype train. They currently trade at about 23 times forward earnings, much cheaper than many AI stocks.

Intuit's strategy is to be an AI-driven expert platform. The fintech company wants to leverage the power of AI and human expertise to improve outcomes for its clients.

Intuit powers its unique AI-driven expert platform with several technologies. It uses knowledge engineering to arrange and work with rule sets like the tax code. It employs natural language processing to interact with customers and help meet their needs seamlessly. The company also relies on machine learning to tap into its massive data and create personalized customer experiences. Finally, Intuit is investing in generative AI capabilities to improve customer outcomes.

The company is leveraging the power of AI across its platform. For example, its Mailchimp marketing platform allows customers to tap into the power of AI to create marketing email campaigns. Marketers can automate, generate, and optimize content, saving time and improving outcomes. Meanwhile, TurboTax and QuickBooks customers can gain automated digital assistance from AI or get matched with a human expert through that technology. Finally, Credit Karma uses AI to provide users with personalized insights and recommendations.

Intuit enables investors to generate a little passive cash flow from its AI-powered expert platform. The company pays a modest dividend, currently yielding about 0.7%. It's a decent payout, considering many hype-driven AI stocks either aren't profitable or don't pay dividends. Meanwhile, Intuit regularly increases its dividend. It gave investors a 15% pay bump last year.

Speaking of the hype train, it certainly has yet to hit Intuit, given the stock currently sits about 35% below its 52-week high despite its AI-focused strategy. The fintech company trades at a reasonable 30 times earnings, which isn't anywhere near as expensive as some popular AI stocks.

Equinix and Intuit are early leaders in harnessing the power of AI. Their investors don't have to wait long for a payoff from their AI-driven growth because both companies pay quarterly cash dividends they've steadily increased. That enables investors to make a tangible return on AI-powered investments, even if the technology never lives up to the hype.

Matthew DiLallo has positions in Equinix and Intuit. The Motley Fool has positions in and recommends Equinix and Intuit. The Motley Fool has a disclosure policy.

Read the original:
Want to Cash In on Artificial Intelligence? These AI Stocks Will Pay Immediate Dividends - The Motley Fool

Cannes Diary: Will Artificial Intelligence Democratize Creativity or Lead to Certain Doom? – Hollywood Reporter

AI startup Respeecher re-created James Earl Jones' Darth Vader voice for the Disney+ series Obi-Wan Kenobi.

On May 17, as bodies lined up in the rain outside the Cannes Film Festival Palais for the chance to watch a short film directed by Pedro Almodóvar, an auteur known most of all for his humanism, a different kind of gathering was underway below the theater. Inside the Marché, a panel of technologists convened to tell an audience of film professionals how they might deploy artificial intelligence for creating scripts, characters, videos, voices and graphics.

The ideas discussed at the Cannes Next panel "AI Apocalypse or Revolution? Rethinking Creativity, Content and Cinema in the Age of Artificial Intelligence" make the scene of the Almodóvar crowd seem almost poignant, like seeing a species blissfully ignorant of its own coming extinction, dinosaurs contentedly chewing on their dinners 10 minutes before the asteroid hits.

"The only people who should be afraid are the ones who aren't going to use these tools," said panelist Ander Saar, a futurist and strategy consultant for Red Bull Media House, the media arm of the parent company of Red Bull energy drinks. "Fifty to 70 percent of a film budget goes to labor. If we can make that more efficient, we can do much bigger films at bigger budgets, or do more films."

The panel also included Hovhannes Avoyan, the CEO of Picsart, an image-editing developer powered by AI, and Anna Bulakh, head of ethics and partnerships at Respeecher, an AI startup that makes technology that allows one person to speak using the voice of another person. The audience of about 150 people was full of AI early adopters: through a show of hands, about 75 percent said they had an account for ChatGPT, the AI language processing tool.

The panelists had more technologies for them to try. Bulakh's company re-created James Earl Jones' Darth Vader voice as it sounded in 1977 for the 2022 Disney+ series Obi-Wan Kenobi, and Vince Lombardi's voice for a 2021 NFL ad that aired during the Super Bowl. Bulakh drew a distinction between Respeecher's work and AI that is created to manipulate, otherwise known as deepfakes. "We don't allow you to re-create someone's voice without permission, and we as a company are pushing for this as a best practice worldwide," Bulakh said. She also spoke about how productions already use Respeecher's tools as a form of insurance when actors can't use their voices, and about how actors could potentially grow their revenue streams using AI.

Avoyan said he created his company for his daughter, an artist, and his intention is, he said, "democratizing creativity." "It's a tool," he said. "Don't be afraid. It will help you in your job."

The optimistic conversation unfolding beside the French Riviera felt light years away from the WGA strike taking place in Hollywood, in which writers and studios are at odds over the use of AI, with studios considering such ideas as having human writers punch up drafts of AI-generated scripts, or using AI to create new scripts based on a writer's previous work. During contract negotiations, the AMPTP refused union requests for protection from AI use, offering instead annual meetings to discuss advancements in technology. The Marché talk also felt far from the warnings of a growing chorus of experts like Eric Horvitz, chief scientific officer at Microsoft, and AI pioneer Geoffrey Hinton, who resigned from his job at Google this month in order to speak freely about AI's risks, which he says include the potential for deliberate misuse, mass unemployment and human extinction.

"Are these kinds of worries just moral panic?" mused the moderator and head of Cannes Next Sten Kristian-Saluveer. That seemed to be the panelists' view. Saar dismissed the concerns, comparing the changes AI will bring to adaptations brought by the automobile or the calculator. "When calculators came, it didn't mean we don't know how to do math," he said.

One of the panel buzz phrases was "hyper-personalized IP," meaning that we'll all create our own individual entertainment using AI tools. Saar shared a video from a company he is advising, in which a child's drawings came to life and surrounded her on video screens. "The characters in the future will be created by the kids themselves," he said. Avoyan said the line between creator and audience will narrow in such a way that we will all just be making our own movies. "You don't even need a distribution house," he said.

A German producer and self-described AI enthusiast in the audience said, "If the cost of the means of production goes to zero, the amount of produced material is going up exponentially. We all still only have 24 hours." Who or what, the producer wanted to know, would be the gatekeepers for content in this new era? Well, the algorithm, of course. "A lot of creators are blaming the algorithm for not getting views, saying the algorithm is burying my video," Saar said. "The reality is most of the content is just not good and doesn't deserve an audience."

What wasn't discussed at the panel was what might be lost in a future that looks like this. Will a generation raised on watching videos created from their own drawings, or from an algorithm's determination of what kinds of images they will like, take a chance on discovering something new? Will they line up in the rain with people from all over the world to watch a movie made by someone else?

Link:
Cannes Diary: Will Artificial Intelligence Democratize Creativity or Lead to Certain Doom? - Hollywood Reporter

How artificial intelligence is helping make fisheries more sustainable – Fox Weather

A newly published AI algorithm has been used to estimate coastal fish stocks in the Western Indian Ocean with 85 percent accuracy. (Courtesy: Wildlife Conservation Society)

INDIAN OCEAN - A newly published AI algorithm has been used to estimate coastal fish stocks in the Western Indian Ocean with 85% accuracy.

By taking account of fish stocks, or the number of fish living in a given area, people can gauge the health of fisheries and see whether those fisheries need time to recover.

The recovery of fisheries allows them to be fished more sustainably, rather than depleting the area of the economically vital natural resource.

Fish swim around a coral reef. (Wildlife Conservation Society / FOX Weather)

To gather this information, scientists created an algorithm that utilized years of fish abundance data, along with satellite measurements and an AI tool. They targeted an area of tropical reefs in the Western Indian Ocean, where there is a high dependency on fisheries.

The algorithm allowed researchers to quickly and accurately estimate coastal fish stock, all without setting foot in the water, according to the Wildlife Conservation Society. The model successfully estimated fish stocks in the area with 85% accuracy.
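
The study's approach pairs years of in-water survey data with satellite measurements. As a loose, hypothetical illustration of that idea, the sketch below fits a toy linear model that predicts reef biomass from a single invented satellite-derived feature; the feature name, numbers, and model are all illustrative, not the study's actual method.

```python
# Loose, hypothetical illustration only: fit a line relating one
# satellite-derived feature to surveyed fish biomass, then estimate
# biomass at an unsurveyed site. The real study used many features
# and a far more capable AI model.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

# Invented training data: (satellite chlorophyll index, surveyed biomass kg/ha)
chlorophyll = [0.2, 0.4, 0.6, 0.8, 1.0]
biomass = [120, 210, 290, 410, 480]

a, b = fit_line(chlorophyll, biomass)

def predict(x):
    """Estimate biomass from the satellite feature alone."""
    return a * x + b

print(round(predict(0.5)))  # biomass estimate for an unsurveyed reef
```

Once fit, the model estimates stocks at sites no one has surveyed, which is the property that makes the real tool cheap to apply at scale.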

According to the WCS, the AI tool has the potential to quickly provide data about fisheries to local and national governments in a cost-effective way.

A fisherman uses a bucket to gather fish. (Wildlife Conservation Society / FOX Weather)

Many tropical countries in Africa and Asia, where the highest percentage of people who depend on fishing for food and income can be found, traditionally have not had much access to the usually high-cost methods of assessing fish stocks.

Without this data, small-scale fisheries in those countries are often operating blindly, without long-term plans to keep their coastal waters healthy and productive, WCS said.

They noted that tools, such as this new algorithm, can help change that.

A man holds up his catch. (Wildlife Conservation Society / FOX Weather)

"Our goal is to give people the information required to know the status of their fish resources and whether their fisheries need time to recover or not," said Tim McClanahan, director of Marine Science at WCS and co-author on the study.

"The long-term goal is that they, their children, and their neighbors can find a balance between people's needs and ocean health," he added.

WCS is hoping to continue this work and help fill data gaps about fisheries around the world.

Read this article:
How artificial intelligence is helping make fisheries more sustainable - Fox Weather

Will artificial intelligence replace doctors? – Harvard Health

Q. Everyone's talking about artificial intelligence, and how it may replace people in various jobs. Will artificial intelligence replace my doctor?

A. Not in my lifetime, fortunately! And the good news is that artificial intelligence (AI) has the potential to improve your doctor's decisions, and to thereby improve your health if we are careful about how it is developed and used.

AI is a mathematical process that tries to make sense out of massive amounts of information. So it requires two things: the ability to perform mathematical computations rapidly, and huge amounts of information stored in electronic form (words, numbers, and pictures).

When computers and AI were first developed in the 1950s, some visionaries described how they could theoretically help improve decisions about diagnosis and treatment. But computers then were not nearly fast enough to do the computations required. Even more important, almost none of the information the computers would have to analyze was stored in electronic form. It was all on paper. Doctors' notes about a patient's symptoms and physical examination were written (not always legibly) on paper. Test results were written on paper and pasted in a patient's paper medical record. As computers got better, they started to relieve doctors and other health professionals of some tedious tasks, like helping to analyze images: electrocardiograms (ECGs), blood samples, x-rays, and Pap smears.

Today, computers are literally millions of times more powerful than when they were first developed. More important, huge amounts of medical information now are in electronic form: medical records of millions of people, the results of medical research, and the growing knowledge about how the body works. That makes feasible the use of AI in medicine.

Already, computers and AI have made powerful medical research breakthroughs, like predicting the shape of most human proteins. In the future, I predict that computers and AI will listen to conversations between doctor and patient and then suggest tests or treatments the doctor should consider; highlight possible diagnoses based on a patient's symptoms, after comparing that patient's symptoms to those of millions of other people with various diseases; and draft a note for the medical record, so the doctor doesn't have to spend time typing at a computer keyboard and can spend more time with the patient.

All of this will not happen immediately or without missteps: doctors and computer scientists will need to carefully evaluate and guide the development of new AI tools in medicine. If the suggestions AI provides to doctors prove to be inaccurate or incomplete, that "help" will be rejected. And if AI then does not get better, and fast, it will lose credibility. Powerful technologies can be powerful forces for good, and for mischief.

More:
Will artificial intelligence replace doctors? - Harvard Health

How artificial intelligence is helping build hurricane-resistant homes – Fox Weather

Superstorm Sandy flooded the emergency room at the former Coney Island Hospital in South Brooklyn. 11 years later, FOX Weather's Amy Freeze takes you to the new Ruth Bader Ginsburg Hospital, a $1B hospital funded by a FEMA grant, built to be hurricane resistant.

Researchers have developed a method of digitally simulating hurricanes to help refine building codes for homes and businesses in hurricane-prone areas.

Current building code guidelines include maps that state the level of wind a structure must be able to withstand at a given location. These maps were developed using earlier simulations of the inner workings of hurricanes.

The newly published simulations use advances in artificial intelligence, along with years of additional hurricane records, to create more realistic hurricane wind maps for the future.

People clear debris in the aftermath of Hurricane Ian in Fort Myers Beach, Florida on September 30, 2022. (GIORGIO VIERA/AFP / Getty Images)

Researchers used information on more than 1,500 storms from the National Hurricane Center's Atlantic Hurricane Database, which contains information about hurricanes from the past 100 years.

With this information, researchers produced models using machine-learning and deep-learning techniques that simulated hurricane properties, such as landfall location and wind speed, that were consistent with historical records.
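
The core idea of fitting models to historical records and then generating consistent synthetic storms can be sketched very roughly. The toy below (not NIST's actual method; the wind-speed record and the simple normal-distribution model are invented for illustration) fits a distribution to historical landfall wind speeds and samples a 100-storm synthetic catalog from it:

```python
# Rough illustration (not NIST's actual method): fit a simple
# distribution to a toy record of historical landfall wind speeds,
# then draw a synthetic 100-storm catalog from it. The published
# models learn many storm properties jointly from ~1,500 real storms.
import random
import statistics

historical_winds_mph = [85, 100, 120, 90, 105, 130, 95, 110]  # invented record

mu = statistics.mean(historical_winds_mph)
sigma = statistics.stdev(historical_winds_mph)

random.seed(7)  # reproducible catalog
synthetic = [random.gauss(mu, sigma) for _ in range(100)]

# A sanity check mirroring the researchers' validation: the synthetic
# catalog should broadly overlap the behavior of the historical record.
print(round(statistics.mean(synthetic)), round(min(synthetic)), round(max(synthetic)))
```

The same overlap test, applied to the real models against the NHC database, is what the next paragraph describes.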

"It performs very well," said Adam Pintar, a mathematical statistician at the National Institute of Standards and Technology and co-author on the study. "Depending on where you're looking at along the coast, it would be quite difficult to identify a simulated hurricane from a real one, honestly."

Hurricane simulation using the new models. (Shutterstock, adapted by B. Hayes/NIST / FOX Weather)

The models were also used to generate sets of 100 years' worth of hypothetical storms, which the researchers noted largely overlapped with the general behavior of storms in the NHC's Atlantic Hurricane Database.

Researchers did note, however, that the simulations generated by the models were less realistic for coastal states in the Northeast due to a relative lack of information.

A man motions to a satellite image of Hurricane Fiona over Puerto Rico. (Office of the Governor of Puerto Rico / FOX Weather)

"Hurricanes are not as frequent in, say, Boston as in Miami, for example," said Emil Simiu, NIST fellow and co-author on the study. "The less data you have, the larger the uncertainty of your predictions."

According to the NIST, the team plans to use simulated hurricanes to develop coastal maps of extreme wind speeds as well as quantify uncertainty in those estimated speeds.

See original here:
How artificial intelligence is helping build hurricane-resistant homes - Fox Weather

AI: Good or bad? All your artificial intelligence fears, addressed – AMBCrypto News

Leading Artificial Intelligence [AI] researcher Geoffrey Hinton recently quit Google, citing concerns about the risks of artificial intelligence. He voiced his concerns that the tech might soon outperform the human brain's information capacity. He termed some threats posed by these chatbots as "quite scary."

Hinton argued that chatbots can learn on their own and share their expertise. This means that any new knowledge acquired by one copy is automatically distributed to the entire group. This enables chatbots to collect knowledge far beyond the capacity of any individual.

Let us dig deeper into these concerns and understand how much of these concerns are shared by the online world.

There is a general understanding that AI will most likely become super intelligent in the next few decades. But in its current state, AI is merely a tool. It has no ability to think. Any chatbot today just translates large amounts of data into numbers and returns the required figures. It can handle complex and ill-formed problems in disciplines such as image recognition, state space searches, model construction, and natural language processing in a reasonably consistent manner.
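
That "translates large amounts of data into numbers" step can be made concrete with a toy tokenizer. Real chatbots use learned subword vocabularies with tens of thousands of entries, but the principle is the same; the vocabulary below is invented purely for illustration.

```python
# Toy vocabulary for illustration: a language model never sees words,
# only integer IDs like these. Real systems use learned subword
# tokenizers with vocabularies of tens of thousands of entries.
vocab = {"<unk>": 0, "the": 1, "cat": 2, "sat": 3, "on": 4, "mat": 5}

def encode(text):
    """Map each whitespace-separated word to its ID (0 if unknown)."""
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

def decode(ids):
    """Map IDs back to words."""
    inverse = {i: w for w, i in vocab.items()}
    return " ".join(inverse[i] for i in ids)

ids = encode("The cat sat on the mat")
print(ids)          # [1, 2, 3, 4, 1, 5]
print(decode(ids))  # the cat sat on the mat
```

Everything the model does afterward is arithmetic on those numbers, which is why today's systems are best understood as sophisticated statistical tools rather than thinking agents.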

At the moment, an AI that interprets your language cannot predict your movements. These would be two distinct programs. Artificial intelligence, as of now, does not involve general cognitive processes. We are so far from a refined AI model as depicted in science fiction that we don't even know what developing a highly intelligent AI entails.

At present, our existing AI models handle specific issues in specific circumstances. They are essentially just sophisticated statistical models. Although this technology is extremely effective, there is no reason to believe that we are developing a powerful general-purpose AI model.

However, a lot of money is being poured these days into getting closer to a general-purpose AI, both in academia and in industry, but it doesn't exist yet.

The AI of today is incapable of resolving moral quandaries. Moral issues are not rational; they are subjective and unique to the person who discusses the issue. If AI is told to kill all persons of X demography in a certain place while causing no harm to members of Y population, it will do so without hesitation.

The problem with this approach is that it ignores the sole thing that limits our own intellect: the environment. The universe's intricacy is incomprehensible. Just because we have a very specialized AI system does not imply that the AI is a specialist in everything.

There is an implicit assumption that morality is something we lose track of as we get smarter. But that is far from the case. Indeed, ethics is a common wisdom among us, though ever-evolving. It is a method of dealing with the complexities of universal problems.

AI now has huge economic incentives for development, with billions of dollars in research being spent across a wide range of applications by both private and public organizations. This way, we can say that ruling out quick progress in the next few decades would be foolish.

However, most industries are currently focused on compartmentalized AI, which involves combining numerous separate AIs that each does a certain task very well.

The development of AI is frequently viewed as both a threat and an opportunity for humans, depending on a variety of circumstances.

On the one hand, there are concerns about the possible hazards of AI. These include employment displacement, privacy and security concerns, algorithm biases, and the concentration of power in the hands of a few individuals or organizations. These risks, if not appropriately handled, might have severe effects for people, society, and humanity's general well-being.

Even so, one cannot discount AI's benefits. It can increase productivity, generate innovation, advance healthcare and science, and address difficult social concerns. AI has the potential to boost human talents, automate monotonous chores, and allow us to make more informed judgements. It provides opportunities for progress in a variety of industries, including education, transportation, agriculture, and others.

The aim is to create and deploy AI in a responsible and ethical manner. We can maximize the good impact of AI while minimizing possible hazards by addressing concerns such as transparency, accountability, justice, and prejudice. It requires collaboration among researchers, policymakers, and industry leaders to ensure that AI is developed and used in ways that align with human values and benefit society.

More here:
AI: Good or bad? All your artificial intelligence fears, addressed - AMBCrypto News

Reviving the Past with Artificial Intelligence – Caltech

While studying John Singer Sargent's paintings of wealthy women in 19th-century society, Jessica Helfand, a former Caltech artist in residence, had an idea: to search census records to find the identities of those women's servants. "I thought, 'What happens if I paint these women in the style of John Singer Sargent?' It's a sort of cultural restitution," Helfand explained, "reverse engineering the narrative by reclaiming a kind of beauty, style, and majesty."

To recreate a style from history, she turned to technology that, increasingly, is driving the future. "Could AI help me figure out how to paint, say, lace or linen, to capture the folds of clothing in daylight?" Helfand discussed her process in a seminar and discussion moderated by Hillary Mushkin, research professor of art and design in engineering and applied science and the humanities and social sciences. The event, part of Caltech's Visual Culture program, also featured Joanne Jang, product lead at DALL-E, an AI system that generates images based on user-supplied prompts.

While DALL-E has a number of practical applications, from urban planning to clothing design to cooking, the technology also raises new questions. Helfand and Jang spoke about recent advancements in generative AI, ethical considerations when using such tools, and the distinction between artistic intelligence and artificial intelligence.

The rest is here:
Reviving the Past with Artificial Intelligence - Caltech

Artificial Intelligence: Key Practices to Help Ensure Accountability in … – Government Accountability Office

What GAO Found

Artificial intelligence (AI) is evolving at a rapid pace and the federal government cannot afford to be reactive to its complexities, risks, and societal consequences. Federal guidance has focused on ensuring AI is responsible, equitable, traceable, reliable, and governable. Third-party assessments and audits are important to achieving these goals. However, a critical mass of workforce expertise is needed to enable federal agencies to accelerate the delivery and adoption of AI.

Participants in an October 2021 roundtable convened by GAO discussed agencies' needs for digital services staff, the types of work that a more technical workforce could execute in areas such as artificial intelligence, and challenges associated with current hiring methods. They noted such staff would require a variety of digital and government-related skills. Participants also discussed challenges associated with existing policies, infrastructure, laws, and regulations that may hinder agency recruitment and retention of digital services staff.

During a September 2020 Comptroller General Forum on AI, experts discussed approaches to ensure federal workers have the skills and expertise needed for AI implementation. Experts also discussed how principles and frameworks on the use of AI can be operationalized into practices for managers and supervisors of these systems, as well as third-party assessors. Following the forum, GAO developed an AI Accountability Framework of key practices to help ensure responsible AI use by federal agencies and other entities involved in AI systems. The Framework is organized around four complementary principles: governance, data, performance, and monitoring.

Artificial Intelligence (AI) Accountability Framework

To help managers ensure accountability and the responsible use of AI in government programs and processes, GAO has developed an AI Accountability Framework. Separately, GAO has identified mission-critical gaps in federal workforce skills and expertise in science and technology as high-risk areas since 2001.

This testimony summarizes two related reports: GAO-22-105388 and GAO-21-519SP. The first report addresses the digital skills needed to modernize the federal government. The second report describes discussions by experts on the types of risks and challenges in applying AI systems in the public sector.

To develop the June 2021 AI Framework, GAO convened a Comptroller General Forum in September 2020 with AI experts from across the federal government, industry, and nonprofit sectors. The Framework was informed by an extensive literature review, and the key practices were independently validated by program officials and subject matter experts.

For the November 2021 report on digital workforce skills, GAO convened a roundtable discussion in October 2021 comprised of chief technology officers, chief data officers, and chief information officers, among others. Participants discussed ways to develop a dedicated talent pool to help meet the federal government's needs for digital expertise.

For more information, contact Taka Ariga at (202) 512-6888 or arigat@gao.gov.

Continue reading here:
Artificial Intelligence: Key Practices to Help Ensure Accountability in ... - Government Accountability Office

Senators use hearings to explore regulation on artificial intelligence – Roll Call

"We could be looking at one of the most significant technological innovations in human history," Hawley said. "And I think my question is, what kind of an innovation is it going to be?"

Blumenthal said there are real potential bright sides to artificial intelligence, such as curing cancer or developing new understanding of physics. But the technology comes with potential pitfalls, such as disinformation and deepfakes, he said.

"Perhaps the biggest nightmare is the looming new industrial revolution, the displacement of millions of workers, the loss of huge numbers of jobs," Blumenthal said.

Industry officials testified at the hearing that the U.S. government should play a role in addressing artificial intelligence.

Samuel Altman, the CEO of OpenAI, which released ChatGPT, told the committee that the artificial intelligence research and deployment company thinks "regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models."

Excerpt from:
Senators use hearings to explore regulation on artificial intelligence - Roll Call
