
Why the IAEA model may not be best for regulating artificial … – James Martin Center for Nonproliferation Studies

June 9, 2023Ian J. Stewart

The following is an excerpt from the Bulletin of the Atomic Scientists.

OpenAI, the company behind ChatGPT and an apparent advocate for strong regulation in the artificial intelligence space, recently suggested that an International Atomic Energy Agency-like model may be needed for AI regulation.

On the face of it, the IAEA might seem like a reasonable model for AI regulation. The IAEA system of safeguards has developed over time and provides confidence that safeguarded materials are not diverted to weapons-related end uses and that safeguarded nuclear facilities cannot be used for weapons development. But the nuclear governance model is actually not a good one for regulation of Artificial General Intelligence.

While one might argue that both nuclear weapons and artificial intelligence have the potential to destroy the world, the modality and pathways of such a catastrophe are not as clear for AI as they are for nuclear technology. While many focus on the idea that an AI could somehow develop or take over military capabilities, including nuclear weapons (e.g. Skynet), the existence of credible paths through which AI could destroy the world has not yet been established. Work to identify such paths is underway and must continue.

Nonetheless, given that the imperative of addressing urgent global threats has driven the evolution of nonproliferation and safeguard measures, a lesson from the nuclear domain is that consensus around the credibility and definition of a global challenge is necessary before states will take collective action to address it.

Continue reading at the Bulletin of the Atomic Scientists.

Here is the original post:
Why the IAEA model may not be best for regulating artificial ... - James Martin Center for Nonproliferation Studies

Read More..

Britain to host first global summit on artificial intelligence safety – Reuters

Artificial Intelligence words are seen in this illustration taken March 31, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

WASHINGTON, June 7 (Reuters) - Britain will host a global summit on artificial intelligence safety later this year and Prime Minister Rishi Sunak and U.S. President Joe Biden will discuss the technology at their Thursday meeting, the UK government said.

The summit will consider the risks of AI, including frontier systems, and discuss how they can be mitigated through internationally coordinated action, the British government said in a statement. No date was given for the event.

Biden and Sunak, who will meet on Thursday for a fourth time in as many months, will work to coordinate their approaches on critical and emerging technologies, with an eye to strengthening their economic security, British and U.S. officials said.

U.S. technology company Palantir Technologies, which already has more than 800 employees in Britain, will separately announce plans to make the UK its new European headquarters for AI development, the British government said.

Sunak planned wide-ranging discussions with Biden on the UK-U.S. relationship and how the two countries could work together to strengthen their economies and cement their "joint leadership in the technologies of the future," the government said.

Several governments are considering how to mitigate the dangers of the emerging technology, which has experienced a boom in investment and consumer popularity in recent months after the release of OpenAI's ChatGPT.

That includes China, where the government is seeking to initiate artificial intelligence regulations, according to billionaire Elon Musk who met officials during his recent trip to China.

Regulators globally have been scrambling to draw up rules governing the use of generative AI, which can create text and images, the impact of which proponents compare to the arrival of the internet.

Sunak is on a trip to the United States and will meet Biden at the White House on Thursday.

Reporting by Kanishka Singh and Andrea Shalal in Washington; editing by Deepa Babington and Stephen Coates


Kanishka Singh is a breaking news reporter for Reuters in Washington DC, who primarily covers US politics and national affairs in his current role. His past breaking news coverage has spanned across a range of topics like the Black Lives Matter movement; the US elections; the 2021 Capitol riots and their follow up probes; the Brexit deal; US-China trade tensions; the NATO withdrawal from Afghanistan; the COVID-19 pandemic; and a 2019 Supreme Court verdict on a religious dispute site in his native India.

Read the original here:
Britain to host first global summit on artificial intelligence safety - Reuters

Read More..

How SAIC looks at artificial intelligence as part of a whole – Washington Technology

Strictly from a conversation standpoint, artificial intelligence and practically all things automation have gained steam this year amid the rapid emergence of generative AI systems that respond to user prompts.

The entire public sector ecosystem of government agencies and their contractors is also looking to make sense of where AI is today and, more importantly, where it is going.

During an earnings call with investors Monday, Science Applications International Corp.'s chief executive Nazzic Keene gave them a sense of how agencies are thinking about AI.

"Certainly the discussion from coming out of the federal government appears to be that we're near a tipping point, and the customers seem to be more interested these days in looking at driving an advantage through using AI," Keene said in the call to discuss SAIC's fiscal first quarter financial results.

But from SAIC's vantage point, the AI conversation becomes a very different exercise when trying to put numbers to any of those opportunities.

"The way that we think about AI, in addition to AI for AI's sake, is it's clearly embedded in so many of our solutions that we bring to market," Keene told analysts. "So it's very hard to quantify what we do in AI because many of our solutions have AI embedded and many of our programs have AI embedded."

Reston, Virginia-headquartered SAIC has made a string of investments over recent years with the goal of incorporating more AI into its larger solution offering. The acquisition of Koverse in 2021 also brought in that company's data management platform.

The internal investment leg of SAIC's larger AI approach involves its network of innovation factories, including one focused solely on that technology area. Each of those factories is based in a cloud computing environment to enable the sharing of ideas and practices across the entire organization.

Keene touted "several hundred million dollars of wins" as having stemmed from those investments with more opportunities on the horizon in areas like counter-unmanned systems, secure cloud and the Defense Department's so-called JADC2 networking construct.

Like most conversations in the public sector, the one focused on AI inevitably works its way toward potential impacts on people, particularly in terms of changes to work.

AI is far from the only emerging technology domain with the potential to bring large amounts of change, with both positive and negative consequences. Keene called AI both a "threat" and an "opportunity," even in the current early days of adoption.

"On the threat side, you could see AI have the potential of reducing headcount on certain programs and so much of the way that our industry measures revenue and profit is in a headcount," Keene said.

"We're certainly sensitive to that, we're aware of that. But our preference is to understand what that is, and when it makes sense for us to be the disruptor and drive the performance and drive the competencies and solutions in our programs in partnership with our customers."

Fiscal first quarter revenue of $2.03 billion was 2% higher than the prior year period. That growth rate becomes 3.5% after adjusting for its agreement to make Amentum the controlling owner of their Forfeiture Support Associates joint venture.

Profit in the quarter of $189 million represented a 9% year-over-year increase in adjusted EBITDA (earnings before interest, taxes, depreciation and amortization).

SAIC lifted its fiscal 2024 revenue guidance to a range of $7.125 billion-to-$7.225 billion, while the bottom-line outlook is unchanged at a 9.2%-to-9.4% adjusted EBITDA margin.
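As a quick back-of-the-envelope check of the figures above (a rough sketch using the rounded numbers quoted in the article, not the company's own reconciliation), the reported adjusted EBITDA implies a margin consistent with the guided range:

```python
# Rough sanity check of the SAIC figures quoted above. Numbers are rounded
# from the article; the company's own adjusted-EBITDA definition may differ.
revenue_q1 = 2.03e9   # fiscal Q1 revenue
ebitda_q1 = 189e6     # adjusted EBITDA reported for the quarter

margin = ebitda_q1 / revenue_q1
print(f"Implied Q1 adjusted EBITDA margin: {margin:.1%}")  # ~9.3%, inside the 9.2%-9.4% guided range

# Prior-year revenue implied by "2% higher than the prior year period".
prior_year_revenue = revenue_q1 / 1.02
print(f"Implied prior-year Q1 revenue: ${prior_year_revenue / 1e9:.2f}B")  # ~$1.99B
```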

SAIC runs its fiscal calendar on a February-January basis.

Link:
How SAIC looks at artificial intelligence as part of a whole - Washington Technology

Read More..

University of Indonesia to Host International Artificial Intelligence … – Tempo.co English

TEMPO.CO, Jakarta - The Computer Science Faculty of the University of Indonesia (UI) is preparing to host the 2023 Pacific Rim International Conference on Artificial Intelligence (PRICAI) from November 17-19, 2023 in Jakarta.

Dean of the faculty, Dr. Ir. Petrus Mursanto, who also serves as the General Co-Chair of the 2023 PRICAI, stated here on Friday that his faculty is proud to become the main host of the event.

Mursanto said the event provides an opportunity to showcase the faculty's capabilities in technology development and research in the field of artificial intelligence.

Hosting the international conference is a concrete step for UI's Computer Science Faculty, one of the leaders in education and research in the Asia Pacific region.

"We hope that through this 2023 PRICAI international conference, new ideas and sound collaboration will emerge between the Computer Science Faculty of UI and researchers, practitioners, industries, and user communities, especially in the field of AI," he remarked.

The event will be held online and offline and will involve hundreds of participants from countries in the Asia Pacific region, as well as those from countries in Europe, America, and Africa.

The first edition of PRICAI was held in Japan in 1990, with the aim of encouraging the development of science and technology, especially in Pacific Rim countries.

The conference continued to develop, and since 2019, it has become an annual international conference.

For the 2023 edition, PRICAI is aimed at becoming an opportunity for researchers, practitioners, educators, artificial intelligence users, and relevant communities to discuss theories, technologies, applications, and current issues in the field of artificial intelligence.

The participants can partake in in-depth intellectual discourses, develop potential research collaborations, and improve professional skills in the field of artificial intelligence.

The 2023 PRICAI will be a major event for the Computer Science Faculty of the University of Indonesia in particular, given how intensively artificial intelligence is now being researched and developed.

ANTARA

Editor's Choice: Academic Practitioners Speak of Artificial Intelligence's Beneficial Roles

Click here to get the latest news updates from Tempo on Google News

More here:
University of Indonesia to Host International Artificial Intelligence ... - Tempo.co English

Read More..

58% of Employers Believe Artificial Intelligence and Virtual Reality … – HRO Today

New data from Experis finds an increasing number of companies surveyed are either adopting or planning to adopt the use of emerging technologies in their recruiting processes.

MILWAUKEE, June 8, 2023 /PRNewswire/ - A majority of employers around the world are optimistic that emerging technologies, including artificial intelligence (AI), machine learning, virtual/augmented reality (VR/AR), blockchain, and others, won't eliminate workers, but will actually create more jobs. That's according to The Future is Now: AI, the Metaverse, & the World of Work, from Experis, the global leader in IT professional resourcing and services and part of the ManpowerGroup (NYSE: MAN) family of brands.

"The integration of AI, machine learning, VR/AR, and other emerging technologies is rapidly transforming industries and driving the need for an adaptable workforce," said Experis Senior Vice President Ger Doyle. "We are seeing companies embrace these new technologies, with many seeking to hire or upskill existing talent to take advantage of potential productivity gains. Smart employers know that embracing digitization and nurturing human talent will enhance their readiness to succeed in this era of rapid technological advancement."

With 78% of IT organizations reporting difficulty hiring the talent they need with the skills they covet, an increasing number of companies are either adopting or planning to adopt the use of emerging technologies in their recruiting processes, including AI (35% adopted, 36% plan to), conversational AI (35% adopted, 36% plan to), machine learning (38% adopted, 34% plan to), and VR (30% adopted, 35% plan to).

"So far, the signs are no different from what we have seen with earlier versions of AI or tech innovation. Generative AI can be expected to mostly automate tasks and skills within jobs rather than entire jobs. We should relish the opportunity to outsource these mundane tasks, freeing up our time for more creative and intellectually sophisticated endeavors," ManpowerGroup Chief Innovation Officer Tomas Chamorro-Premuzic said. "This isn't about us versus AI or humans versus machines. Instead, it's about how we can leverage these tools to augment and upgrade our uniquely human skills and lead a more human-centric life."

Additionally, the latest Experis Tech Talent Outlook finds the rise of new tech has also shifted the hiring priorities and focus for employers. The top five staffing priorities reported are:

To address these technology-related challenges, employers shared they are implementing the following approaches:

To view more data from the Tech Talent Outlook, including regional and country data, visit: Tech Talent Outlook.

ABOUT THE TECH TALENT OUTLOOK

The Experis Tech Talent Outlook research is based on results from the ManpowerGroup Employment Outlook Survey, the longest-running, most comprehensive, forward-looking employment survey of its kind, used globally as a key economic indicator. ManpowerGroup interviewed 5,978 IT employers across 41 countries on hiring intentions for the third quarter of 2023.

ABOUT EXPERIS

Experis is the global leader in professional resourcing and project-based services. Experis accelerates organizations' growth by attracting, assessing, and placing specialized expertise in IT to deliver in-demand talent for mission-critical positions and projects, enhancing the competitiveness of the organizations and people we serve. Experis is part of the ManpowerGroup family of companies, which also includes Manpower and Talent Solutions.

For more information, visit www.experis.com.

Go here to see the original:
58% of Employers Believe Artificial Intelligence and Virtual Reality ... - HRO Today

Read More..

What Happens When A.I. Enters the Concert Hall – The New York Times

Dr. Schankler ultimately used R.A.V.E. in that performance of "The Duke Of York," though, because its ability to augment an individual performer's sound, they said, seemed thematically resonant with the piece. For it to work, the duo needed to train it on a personalized corpus of recordings. "I sang and spoke for three hours straight," Wang recalled. "I sang every song I could think of."

Antoine Caillon developed R.A.V.E. in 2021, during his graduate studies at IRCAM, the institute founded by the composer Pierre Boulez in Paris. "R.A.V.E.'s goal is to reconstruct its input," he said. "The model compresses the audio signal it receives and tries to extract the sound's salient features in order to resynthesize it properly."
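R.A.V.E. stands for Realtime Audio Variational autoEncoder, and the compress-then-resynthesize idea Caillon describes is the core of any audio autoencoder. The sketch below is only an illustrative toy, not R.A.V.E.'s actual architecture: a plain (non-variational) convolutional autoencoder in PyTorch that squeezes a waveform into a low-rate latent and decodes it back.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy illustration of the compress-and-resynthesize idea described above.
# This is NOT R.A.V.E.'s real architecture (a real-time variational autoencoder
# trained on a performer's recordings); it only shows the basic shape of the idea.
class TinyAudioAutoencoder(nn.Module):
    def __init__(self, latent_channels: int = 16):
        super().__init__()
        # Encoder: strided 1-D convolutions compress the waveform in time,
        # keeping a low-rate summary of its "salient features".
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=4, padding=4), nn.ReLU(),
            nn.Conv1d(16, latent_channels, kernel_size=9, stride=4, padding=4),
        )
        # Decoder: transposed convolutions resynthesize a waveform from the latent.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(latent_channels, 16, kernel_size=8, stride=4, padding=2), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=8, stride=4, padding=2), nn.Tanh(),
        )

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        latent = self.encoder(waveform)   # compressed representation
        return self.decoder(latent)       # reconstruction of the input

# One reconstruction-loss training step on a dummy one-second mono clip at 16 kHz.
model = TinyAudioAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
clip = torch.randn(1, 1, 16000)           # stand-in for a real recording
reconstruction = model(clip)
loss = F.mse_loss(reconstruction, clip[..., :reconstruction.shape[-1]])
loss.backward()
optimizer.step()
print(f"reconstruction loss: {loss.item():.4f}")
```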

Wang felt comfortable performing with the software because, no matter the sounds it produced in the moment, she could hear herself in R.A.V.E.'s synthesized voice. "The gestures were surprising, and the textures were surprising," she said, "but the timbre was incredibly familiar." And, because R.A.V.E. is compatible with common electronic music software, Dr. Schankler was able to adjust the program in real time, they said, "to create this halo of other versions of Jen's voice around her."

Tina Tallon, a composer and professor of A.I. and the arts at the University of Florida, said that musicians have used various A.I.-related technologies since the mid-20th century.

"There are rule-based systems, which is what artificial intelligence used to be in the '60s, '70s, and '80s," she said, "and then there is machine learning, which became more popular and more practical in the '90s, and involves ingesting large amounts of data to infer how a system functions."

More here:
What Happens When A.I. Enters the Concert Hall - The New York Times

Read More..

FBI says artificial intelligence being used for ‘sextortion’ and … – Reuters.com

WASHINGTON, June 7 (Reuters) - The Federal Bureau of Investigation has warned Americans that criminals are increasingly using artificial intelligence to create sexually explicit images to intimidate and extort victims.

In an alert circulated this week, the bureau said it had recently observed an uptick in extortion victims saying they had been targeted using doctored versions of innocent images taken from online posts, private messages or video chats.

"The photos are then sent directly to the victims by malicious actors for sextortion or harassment," the alert said. "Once circulated, victims can face significant challenges in preventing the continual sharing of the manipulated content or removal from the internet."

The bureau said the images appeared "true-to-life" and that, in some cases, children had been targeted.

The FBI did not go into detail about the program or programs being used to generate the sexual imagery but did note that technological advancements were "continuously improving the quality, customizability, and accessibility of artificial intelligence (AI)-enabled content creation."

The bureau did not respond to a follow-up message seeking details on the phenomenon Wednesday.

The manipulation of innocent pictures to make sexually explicit images is almost as old as photography itself, but the release of open-source AI tools has made the process easier than ever. The results are often indistinguishable from real-life photographs, and several websites and social media channels that specialize in the creation and exchange of AI-enabled sexual imagery have sprung up in recent years.

Reporting by Raphael Satter; Editing by David Gregorio


Reporter covering cybersecurity, surveillance, and disinformation for Reuters. Work has included investigations into state-sponsored espionage, deepfake-driven propaganda, and mercenary hacking.

The rest is here:
FBI says artificial intelligence being used for 'sextortion' and ... - Reuters.com

Read More..

This Artificial Intelligence Stock Is Headed Toward $1 Trillion. This … – The Motley Fool

Over the last several weeks, investors listened to corporate executives discuss the topic of artificial intelligence (AI) on earnings calls. While AI seems like the new buzzword, there are some legitimate reasons to be bullish.

Perhaps no other company has benefited more from the AI mania than chipmaker Nvidia (NVDA). Nvidia's earnings report was solid, but its guidance was so far ahead of Wall Street expectations that the stock jumped well over 30% and the company briefly joined the $1 trillion market cap club. What really stood out on the earnings call was how well the company's data center business performed.

Image source: Nvidia investor presentation.

The chart above is from Nvidia's Q1 fiscal 2024 earnings investor presentation. For the quarter ended April 30, Nvidia's data center revenue grew 14% year over year to a record $4.3 billion. While setting a record in the data center segment is exciting, investors really cheered the company's Q2 guidance of $11 billion in total revenue.

On the earnings call, Nvidia CFO Colette Kress attributed this growth to the data center business. She stated: "We expect this sequential growth to largely be driven by data center, reflecting a steep increase in demand related to generative AI and large language models. This demand has extended our data center visibility out a few quarters, and we have procured substantially higher supply for the second half of the year."

Kress' commentary on the data center business should not be taken lightly. While Q1 results were respectable, the more important point Kress makes is that demand for applications that leverage AI, such as quantum computing and machine learning, is so strong that Nvidia has a lot of visibility into future business.

This dynamic is important. For the last couple of years, investors have been told that semiconductor companies like Nvidia and Advanced Micro Devices are experiencing myriad challenges such as supply chain disruptions and waning consumer demand due to the COVID-19 pandemic.

Now, however, some of these issues seem to be drying up. Nvidia's technology is currently center-stage when it comes to the AI revolution. Demand for its chips and its growing suite of data center products should continue to grow as companies of all sizes and across many different industries are figuring out how generative AI can be a catalyst for their business.

While Nvidia briefly joined the $1 trillion cohort that includes Apple, Microsoft, Alphabet, and Amazon, the company's market capitalization has since fallen to roughly $955 billion. That's still not bad for a company that has generated $27 billion in revenue over the last 12 months.

Nvidia's current forward price-to-earnings (P/E) ratio is 52. Comparable companies like AMD and Taiwan Semiconductor trade with a forward P/E of 42 and 19, respectively. It's obvious that investors are placing a premium on Nvidia. In fact, there's an argument to be made that the future business from AI is, at least in part, priced into the stock.
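For readers unfamiliar with the metric, a forward P/E simply divides the current share price by estimated earnings per share over the coming year. Below is a minimal sketch of the calculation; the prices and EPS estimates are hypothetical placeholders chosen only to land near the ratios quoted above, not actual analyst figures.

```python
# Forward P/E = current share price / estimated forward earnings per share.
# All prices and EPS values here are hypothetical, for illustration only.
def forward_pe(price: float, estimated_eps: float) -> float:
    return price / estimated_eps

examples = {
    "NVDA": (390.00, 7.50),   # hypothetical price and forward EPS
    "AMD": (125.00, 3.00),
    "TSM": (100.00, 5.25),
}

for ticker, (price, eps) in examples.items():
    print(f"{ticker}: forward P/E ~ {forward_pe(price, eps):.0f}")
```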

Nonetheless, it's crucial for long-term investors to zoom out and consider the broader picture. Nvidia will provide ample exposure to AI for your portfolio, and the company's near-to-intermediate pipeline appears encouraging.

While the stock currently seems pricey compared to other chipmakers, there's something to be said for the fact that the company was the first of its kind to reach the trillion-dollar milestone. Moreover, as its data center business continues to blossom, Nvidia could very well cement itself as a member of the trillion-dollar market cap club in the long term.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Adam Spatacco has positions in Alphabet, Apple, Microsoft, and Nvidia. The Motley Fool has positions in and recommends Advanced Micro Devices, Alphabet, Apple, Microsoft, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool has a disclosure policy.

Originally posted here:
This Artificial Intelligence Stock Is Headed Toward $1 Trillion. This ... - The Motley Fool

Read More..

Grapevine-Colleyville ISD embraces artificial intelligence to help prepare students for future technology – NBC 5 Dallas-Fort Worth

L.L. Bean has just added a third shift at its factory in Brunswick, Maine, in an attempt to keep up with demand for its iconic boot.

Orders have quadrupled in the past few years as the boots have become more popular among a younger, more urban crowd.

The company says it saw the trend coming and tried to prepare, but orders outpaced projections. They expect to sell 450,000 pairs of boots in 2014.

People hoping to have the boots in time for Christmas are likely going to be disappointed. The boots are back-ordered through February and even March.

"I've been told it's a good problem to have but I"m disappointed that customers not gettingwhat they want as quickly as they want," said Senior Manufacturing Manager Royce Haines.

Customers like Mary Clifford tried to order boots online, but they were back-ordered until January.

"I was very surprised this is what they are known for and at Christmas time you can't get them when you need them," said Clifford.

People who do have boots are trying to capitalize on the shortage and are selling them on eBay at much higher prices.

L.L. Bean says it has hired dozens of new boot makers, but it takes up to six months to train someone to make a boot.

The company has also spent a million dollars on new equipment to try and keep pace with demand.

Some customers are having luck at the retail stores. They have a separate inventory, and while sizes are limited, those stores have boots on the shelves.

Read the original here:
Grapevine-Colleyville ISD embraces artificial intelligence to help prepare students for future technology - NBC 5 Dallas-Fort Worth

Read More..

The productivity effects of generative artificial intelligence – CEPR

Automation technologies, machines capable of performing productive tasks in place of human workers, have played an enormous role in the economic history of humanity since the Industrial Revolution. From the automation of textile production in the 19th century to the mechanisation of agriculture in the early 20th century, historical waves of automation drove huge sectoral reallocations of labour and helped spur urbanisation and massive social change. These waves of automation were far from perfectly benevolent in the short and medium run (Acemoglu and Johnson 2023), but ultimately contributed to immense growth in production and living standards in industrialised countries.

Between the 1970s and early 2020s, the story of automation in high-income countries remained fairly consistent (Autor 2015). Advances in machinery, the rise of computers, and the proliferation of digital technologies led to the gradual automation of middle-skilled tasks, ranging from factory-floor assembly-line tasks to clerical bookkeeping and accounting tasks (Autor et al. 2003). These tasks, consisting of discrete, formalisable sequences of steps, could increasingly be programmed into ever-cheaper computers and machines, displacing humans from many occupations.

These incremental waves of routine-biased automation contributed to a widely discussed polarisation of the labour market: middle-wage manufacturing and clerical jobs slowly melted away while new jobs appeared in low-wage cleaning, retail, and personal care occupations as well as high-wage managerial, technical, and professional occupations. As a consequence, wage and income inequality increased dramatically over this period, with demographic groups once concentrated in automation-stricken occupations falling behind (Acemoglu and Restrepo 2022) while higher-income professionals and capital owners pulled ahead (Moll et al. 2022).

Beginning in the 2010s, economists observed that the burgeoning field of machine learning could steer automation in a new direction. Previously, tasks could only be automated if they could be broken down into explicit sequences of steps that could be formally explained to a computer or machine. Many tasks that required creativity or tacit, hard-to-formalise knowledge, from writing to medical diagnosis to graphic design, hence avoided automation. But in the 2010s, economists noted that emerging deep learning techniques, which trained computers inductively on large existing datasets rather than providing explicit instructions, might eventually permit automation of even creative or tacit-knowledge-reliant tasks.

The first wave of machine-learning-based automation technologies targeted predictive tasks such as bail decisions, hiring decisions, or medical diagnoses (Kleinberg et al. 2018, Chalfin et al. 2016, Mullainathan and Obermeyer 2022). Machine-learning algorithms became increasingly good at making binary predictions from high-dimensional input data, prompting concerns about the future of occupations like radiology. But creative tasks still seemed securely insulated from the threat of automation.

This changed with the public release of impressive generative artificial intelligence systems in mid-to-late 2022. Trained using deep-learning techniques to generate large coherent bodies of text or well-produced images in response to written prompts, these systems were substantially more capable than any pre-existing chatbot or image generation tool. For the first time, it appeared that creative writing or design tasks might face imminent widespread automation.

In a recent paper (Noy and Zhang 2023), we report the results of an online experiment we conducted that provide a first look at the potential productivity and labour market impacts of text-based generative AI systems, specifically ChatGPT 3.5.

We conducted the experiment on Prolific, a survey platform that is a mainstay of academic social science research. We screened tens of thousands of respondents on the platform to identify a subset of college-educated respondents in our occupations of interest (managers, human resource professionals, grant writers, marketers, consultants, and data analysts), which were chosen based on our ability to come up with realistic, occupation-specific, 20-30 minute writing tasks that we could administer through an online survey. Managers and HR professionals were assigned to write a sensitive email, marketers to write a press release for a hypothetical product, grant writers to write a grant application, consultants to write a short report, and data analysts to write an analysis plan. About 85% of participants rated the tasks as realistic or very realistic imitations of real tasks performed in their occupations.

Prolific respondents who passed our screening stage were invited to complete an hour-long survey involving two occupation-specific writing tasks. Participants were paid a base rate of $10 and were heavily incentivised to perform well on the tasks: their task submissions were graded by other Prolific respondents working in the same occupations, and they received up to $14 in bonus payments based on their grades. The average total payment in our sample was $17/hour, significantly exceeding the typical $12/hour on Prolific. Our combination of above-market pay and high-powered incentives successfully elicited substantial effort from participants, who spent an average of 27 minutes on the first task.

Between the first and second tasks, participants were randomised into a treatment or control group. Treated participants were told to sign up for ChatGPT and enter several sample prompts, showing them how to use the technology. Control participants were told to sign up for Overleaf (to keep survey time as similar as possible between treatment and control and to minimise selective attrition; almost no control participants used Overleaf on the second task). Treated participants were told they were permitted to use ChatGPT on the second task if they found it helpful.

The treatment group overwhelmingly chose to use ChatGPT on the second task: 87% of those who successfully signed up for an account used it. Treated participants were very impressed with the technology, giving it an average usefulness score of 4.4 out of 5.0. Almost all users simply pasted the task prompt into ChatGPT and submitted an unedited or lightly edited version of its output. Contrary to expectations, few participants chose to use ChatGPT in other ways, such as using it to edit their own draft, to brainstorm ideas, or to write a rough draft before heavily editing its output.

Consequently, time spent on the second task dropped precipitously in the treatment group relative to the control group, decreasing by 40% (Figure 1, Panel A). Average grades rose by 18% (Figure 1, Panel B). The increase in grades largely reflected graders' high opinion of pure-ChatGPT output compared to pure-human output, and does not seem to have reflected any value-added from the treated participants themselves.

Figure 1 Productivity effects

Why did the participants do so little editing of ChatGPT's output? One possibility is that they recognised clear deficiencies in the output or areas of potential improvement, but wanted to rush through the task as quickly as possible. Under this interpretation, participants were simply using ChatGPT as a time-saving device and ignoring its output quality, reducing the external validity of our experiment for the higher-stakes real world.

Three pieces of evidence contradict this interpretation. First, 40% of our participants were cross-randomised into a convex incentive scheme that promised them a substantial additional bonus payment for receiving a high grade of 6 or 7 out of 7. This provided an extra incentive to fix or improve ChatGPT's raw output, yet respondents in this group spent no more time editing on average than respondents in our main linear incentive group, and did not receive higher grades. Second, respondents who did choose to edit (or spent longer editing) did not receive higher grades than those who submitted unedited output. Third, many respondents clearly judged that ChatGPT was an output-improving device in addition to a time-saving device. At the end of the survey, some treated respondents were given an opportunity to revise or replace their pre-treatment task submission using ChatGPT; 19% fully replaced their entry with ChatGPT's output and a further 17% used ChatGPT as an editor. Our overall interpretation is that participants saw ChatGPT's output as high-quality and lacking obvious areas of improvement.

As a consequence of broadly uniform usage of ChatGPT in the treatment group, inequality in productivity between participants shrank dramatically, as shown in Figure 2. ChatGPT access allowed nearly everyone in the treated group to perform as well as the top humans in the control group.

Figure 2 Grade inequality decreases
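As a rough illustration of the two comparisons behind Figures 1 and 2: the headline productivity effect is a difference in group means, and the inequality result is a drop in grade dispersion within the treated group. The sketch below uses made-up grades, not the study's actual data.

```python
import statistics

# Made-up grades on a 7-point scale, for illustration only (not the study's data).
control_grades = [3.2, 4.0, 4.8, 5.5, 6.1, 2.9, 4.4, 5.0]
treated_grades = [5.4, 5.8, 6.0, 5.6, 6.2, 5.5, 5.9, 5.7]

# Average treatment effect: difference in mean grades between groups.
ate = statistics.mean(treated_grades) - statistics.mean(control_grades)
print(f"Average treatment effect on grades: {ate:+.2f} points")

# Inequality: dispersion of grades shrinks sharply in the treated group.
print(f"Control std dev: {statistics.stdev(control_grades):.2f}")
print(f"Treated std dev: {statistics.stdev(treated_grades):.2f}")
```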

How did participants react to being introduced to this startlingly productive technology? We asked participants about their enjoyment of each task; as Figure 3, Panel A shows, enjoyment rose by 0.5 standard deviations in the treatment group compared to the control group. Participants' concerns about AI displacing workers in their occupation rose in the treatment group, as did excitement about AI augmenting workers in their occupation, while overall optimism about AI rose slightly. Respondents therefore greeted the technology enthusiastically overall, but not without trepidation. These gaps disappeared in subsequent resurveying.

Figure 3 Job satisfaction, self-efficacy, and beliefs about automation

We resurveyed participants two weeks and then two months after the experiment to track diffusion of ChatGPT into their real jobs. Two weeks out, 34% of treated and 18% of control respondents had used ChatGPT in their job in the past week; two months out, these figures were 42% and 27%. The slow increase in usage and persistent treatment-control gap suggest that diffusion of ChatGPT into real-world jobs remains somewhat slow and hampered by information frictions. Respondents not using ChatGPT in their main job reported a mix of reasons: lack of familiarity, lack of access at work, or lack of usefulness of ChatGPT due to the importance to their work of context-specific knowledge and style.

ChatGPT has a substantial impact on productivity in mid-level professional writing tasks, increasing speed and quality and narrowing the gap between higher- and lower-ability writers. Its aggregate impacts, however, will depend on complex general-equilibrium considerations that our experiment is unable to speak to. As we discuss in the paper, a number of factors, ranging from the elasticity of demand for ChatGPT-relevant services to the particular skills ChatGPT best complements and the nature of optimal production structures with ChatGPT, will determine the impacts of ChatGPT-like technologies on employment, occupation, and wage structures.

Acemoglu, D and P Restrepo (2022), "Tasks, Automation, and the Rise in US Wage Inequality", Econometrica 90(5).

Acemoglu, D and S Johnson (2023), Power and Progress: Our 1000-Year Struggle Over Technology and Prosperity, New York: Public Affairs.

Autor, D, F Levy and R Murnane (2003), "The Skill Content of Recent Technological Change: An Empirical Exploration", Quarterly Journal of Economics 118(4).

Autor, D (2015), "Why Are There Still So Many Jobs? The History and Future of Workplace Automation", Journal of Economic Perspectives 29(3).

Chalfin, A, O Danieli, A Hillis, Z Jelveh, M Luca, J Ludwig and S Mullainathan (2016), "Productivity and Selection of Human Capital with Machine Learning", American Economic Review 106(5).

Kleinberg, J, H Lakkaraju, J Leskovec, J Ludwig and S Mullainathan (2018), "Human Decisions and Machine Predictions", Quarterly Journal of Economics 133(1).

Moll, B, L Rachel and P Restrepo (2022), "Uneven Growth: Automation's Impact on Income and Wealth Inequality", Econometrica 90(6).

Mullainathan, S and Z Obermeyer (2022), "Diagnosing Physician Error: A Machine Learning Approach to Low-Value Healthcare", Quarterly Journal of Economics 137(2).

Noy, S and W Zhang (2023), "Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence", working paper.

See the article here:
The productivity effects of generative artificial intelligence - CEPR

Read More..