Artificial-intelligence (AI) tools are becoming increasingly common in science, and many scientists anticipate that they will soon be central to the practice of research, suggests a Nature survey of more than 1,600 researchers around the world.
When respondents were asked how useful they thought AI tools would become for their fields in the next decade, more than half expected the tools to be very important or essential. But scientists also expressed strong concerns about how AI is transforming the way that research is done.
The share of research papers that mention AI terms has risen in every field over the past decade, according to an analysis for this article by Nature.
Machine-learning statistical techniques are now well established, and the past few years have seen rapid advances in generative AI, including large language models (LLMs), which can produce fluent outputs such as text, images and code on the basis of the patterns in their training data. Scientists have been using these models to help summarize and write research papers, brainstorm ideas and write code, and some have been testing out generative AI to help produce new protein structures, improve weather forecasts and suggest medical diagnoses, among many other ideas.
With so much excitement about the expanding abilities of AI systems, Nature polled researchers about their views on the rise of AI in science, including both machine-learning and generative AI tools.
Focusing first on machine learning, researchers picked out many ways that AI tools help them in their work. From a list of possible advantages, two-thirds noted that AI provides faster ways to process data, 58% said that it speeds up computations that were not previously feasible, and 55% mentioned that it saves scientists time and money.
"AI has enabled me to make progress in answering biological questions where progress was previously infeasible," said Irene Kaplow, a computational biologist at Duke University in Durham, North Carolina.
The survey results also revealed widespread concerns about the impacts of AI on science. From a list of possible negative impacts, 69% of the researchers said that AI tools can lead to more reliance on pattern recognition without understanding, 58% said that results can entrench bias or discrimination in data, 55% thought that the tools could make fraud easier and 53% noted that ill-considered use can lead to irreproducible research.
"The main problem is that AI is challenging our existing standards for proof and truth," said Jeffrey Chuang, who studies image analysis of cancer at the Jackson Laboratory in Farmington, Connecticut.
To assess the views of active researchers, Nature e-mailed more than 40,000 scientists who had published papers in the last 4 months of 2022, as well as inviting readers of the Nature Briefing to take the survey. Because researchers interested in AI were much more likely to respond to the invitation, the results aren't representative of all scientists. However, the respondents fell into three groups: 48% who directly developed or studied AI themselves, 30% who had used AI for their research, and the remaining 22% who did not use AI in their science. (These categories were more useful for probing different responses than were respondents' research fields, genders or geographical regions; see Supplementary information for full methodology.)
Among those who used AI in their research, more than one-quarter felt that AI tools would become essential to their field in the next decade, compared with 4% who thought the tools were essential now; another 47% felt AI would be very useful. (Those whose research field was already AI were not asked this question.) Researchers who don't use AI were, unsurprisingly, less excited. Even so, 9% felt these techniques would become essential in the next decade, and another 34% said they would be very useful.
The chatbot ChatGPT and its LLM cousins were the tools that researchers mentioned most often when asked to type in the most impressive or useful example of AI tools in science (closely followed by protein-folding AI tools, such as AlphaFold, that create 3D models of proteins from amino-acid sequences). But ChatGPT also topped researchers' choice of the most concerning uses of AI in science. When asked to select from a list of possible negative impacts of generative AI, 68% of researchers worried about proliferating misinformation, another 68% thought that it would make plagiarism easier and detection harder, and 66% were worried about bringing mistakes or inaccuracies into research papers.
Respondents added that they were worried about faked studies, false information and perpetuating bias if AI tools for medical diagnostics were trained on historically biased data. Scientists have seen evidence of this: a team in the United States reported, for instance, that when they asked the LLM GPT-4 to suggest diagnoses and treatments for a series of clinical case studies, the answers varied depending on the patient's race or gender (T. Zack et al. Preprint at medRxiv https://doi.org/ktdz; 2023), probably reflecting the text that the chatbot was trained on.
"There is clearly misuse of large language models, inaccuracy and hollow but professional-sounding results that lack creativity," said Isabella Degen, a software engineer and former entrepreneur who is now studying for a PhD in using AI in medicine at the University of Bristol, UK. "In my opinion, we don't understand well where the border between good use and misuse is."
The clearest benefit, researchers thought, was that LLMs aided researchers whose first language is not English, by helping to improve the grammar and style of their research papers, or to summarize or translate other work. "A small number of malicious players notwithstanding, the academic community can demonstrate how to use these tools for good," said Kedar Hippalgaonkar, a materials scientist at the National University of Singapore.
Researchers who regularly use LLMs at work are still in a minority, even among the interested group who took Nature's survey. Some 28% of those who studied AI said they used generative AI products such as LLMs every day or more than once a week, as did 13% of those who only use AI for their research and just 1% of the rest, although many had at least tried the tools.
Moreover, the most popular use among all groups was for creative fun unrelated to research (one respondent used ChatGPT to suggest recipes); a smaller share used the tools to write code, brainstorm research ideas and help write research papers.
Some scientists were unimpressed by the output of LLMs. "It feels ChatGPT has copied all the bad writing habits of humans: using a lot of words to say very little," wrote one researcher who uses the LLM to help copy-edit papers. Although some were excited by the potential of LLMs for summarizing data into narratives, others had a negative reaction. "If we use AI to read and write articles, science will soon move from 'for humans by humans' to 'for machines by machines'," wrote Johannes Niskanen, a physicist at the University of Turku in Finland.
Around half of the scientists in the survey said that there were barriers preventing them from developing or using AI as much as they would like, but the obstacles seemed to be different for different groups. The researchers who directly studied AI were most concerned about a lack of computing resources, funding for their work and high-quality data to run AI on. Those who work in other fields but use AI in their research tended to be more worried by a lack of skilled scientists and training resources, and they also mentioned security and privacy considerations. Researchers who didn't use AI generally said that they didn't need it or find it useful, or that they lacked the experience or time to investigate it.
Another theme that emerged from the survey was that commercial firms dominate the computing resources for AI and the ownership of AI tools, and this was a concern for some respondents. Of the scientists in the survey who studied AI, 23% said they collaborated with or worked at firms developing these tools (with Google and Microsoft the most often named), whereas 7% of those who used AI did so. Overall, slightly more than half of those surveyed felt it was very or somewhat important that researchers using AI collaborate with scientists at such firms.
"The principles of LLMs can be usefully applied to build similar models in bioinformatics and cheminformatics," says Garrett Morris, a chemist at the University of Oxford, UK, who works on software for drug discovery, but it's clear that the models must be extremely large. "Only a very small number of entities on the planet have the capabilities to train the very large models, which require large numbers of GPUs [graphics processing units], the ability to run them for months, and to pay the electricity bill. That constraint is limiting science's ability to make these kinds of discoveries," he says.
Researchers have repeatedly warned that the naive use of AI tools in science can lead to mistakes, false positives and irreproducible findings, potentially wasting time and effort. And in the survey, some scientists said they were concerned about poor-quality research in papers that used AI. "Machine learning can sometimes be useful, but AI is causing more damage than it helps. It leads to false discoveries due to scientists using AI without knowing what they are doing," said Lior Shamir, a computer scientist at Kansas State University in Manhattan.
When asked if journal editors and peer reviewers could adequately review papers that used AI, respondents were split. Among the scientists who used AI for their work but didn't directly develop it, around half said they didn't know, one-quarter thought reviews were adequate, and one-quarter thought they were not. Those who developed AI directly tended to have a more positive opinion of the editorial and review processes.
"Reviewers seem to lack the required skills and I see many papers that make basic mistakes in methodology, or lack even basic information to be able to reproduce the results," says Duncan Watson-Parris, an atmospheric physicist who uses machine learning at the Scripps Institution of Oceanography in San Diego, California. The key, he says, is whether journal editors are able to find referees with enough expertise to review the studies.
That can be difficult to do, according to one Japanese respondent who worked in the earth sciences but didn't want to be named. "As an editor, it's very hard to find reviewers who are familiar both with machine-learning (ML) methods and with the science that ML is applied to," he wrote.
Nature also asked respondents how concerned they were by seven potential impacts of AI on society that have been widely discussed in the news. The potential for AI to be used to spread misinformation was the most worrying prospect for the researchers, with two-thirds saying they were extremely or very concerned by it. Automated AI weapons and AI-assisted surveillance were also high up on the list. The least concerning impact was the idea that AI might be an existential threat to humanity, although almost one-fifth of respondents still said they were extremely or very concerned by this prospect.
Many researchers, however, said AI and LLMs were here to stay. "AI is transformative," wrote Yury Popov, a specialist in liver disease at the Beth Israel Deaconess Medical Center in Boston, Massachusetts. "We have to focus now on how to make sure it brings more benefit than issues."