AI Outshines Humans in Creative Thinking – Neuroscience News

Summary: ChatGPT-4 was pitted against 151 human participants across three divergent thinking tests, revealing that the AI demonstrated a higher level of creativity. The tests, designed to assess the ability to generate unique solutions, showed GPT-4 providing more original and elaborate answers.

The study underscores the evolving capabilities of AI in creative domains, yet acknowledges the limitations of AI's agency and the challenges in measuring creativity. While AI shows potential as a tool for enhancing human creativity, questions remain about its role and the future integration of AI in creative processes.

Source: University of Arkansas

Score another one for artificial intelligence. In a recent study, 151 human participants were pitted against ChatGPT-4 in three tests designed to measure divergent thinking, which is considered to be an indicator of creative thought.

Divergent thinking is characterized by the ability to generate a unique solution to a question that does not have one expected solution, such as "What is the best way to avoid talking about politics with my parents?" In the study, GPT-4 provided more original and elaborate answers than the human participants.

The study, "The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks," was published in Scientific Reports and authored by U of A Ph.D. students in psychological science Kent F. Hubert and Kim N. Awa, as well as Darya L. Zabelina, an assistant professor of psychological science at the U of A and director of the Mechanisms of Creative Cognition and Attention Lab.

The three tests utilized were the Alternative Use Task, which asks participants to come up with creative uses for everyday objects like a rope or a fork; the Consequences Task, which invites participants to imagine possible outcomes of hypothetical situations, like "what if humans no longer needed sleep?"; and the Divergent Associations Task, which asks participants to generate 10 nouns that are as semantically distant as possible. For instance, there is not much semantic distance between "dog" and "cat," while there is a great deal between words like "cat" and "ontology."
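The scoring idea behind the Divergent Associations Task can be sketched as an average pairwise semantic distance over the submitted words. The minimal illustration below uses made-up 3-dimensional vectors for readability; the actual task is scored with high-dimensional word embeddings trained on large text corpora, so these numbers and vectors are purely hypothetical.

```python
from itertools import combinations
import math

# Toy 3-dimensional "embeddings" for illustration only. Real DAT scoring
# uses high-dimensional trained word vectors; these values are invented
# so that "dog" and "cat" point in similar directions and "ontology" does not.
embeddings = {
    "dog":      [0.90, 0.80, 0.10],
    "cat":      [0.85, 0.82, 0.12],
    "ontology": [0.10, 0.20, 0.95],
}

def cosine_distance(u, v):
    """1 minus cosine similarity: near 0 for words used in similar contexts,
    larger for semantically unrelated words."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(a * a for a in v))
    return 1 - dot / norm

def dat_score(words):
    """Average cosine distance over all word pairs in the list."""
    pairs = list(combinations(words, 2))
    return sum(cosine_distance(embeddings[a], embeddings[b])
               for a, b in pairs) / len(pairs)

# Related words yield a small distance; unrelated words a large one.
print(cosine_distance(embeddings["dog"], embeddings["cat"]))       # small
print(cosine_distance(embeddings["cat"], embeddings["ontology"]))  # large
print(dat_score(["dog", "cat", "ontology"]))
```

A participant who lists ten near-synonyms would average a low score, while a list of mutually unrelated nouns averages high, which is why the task rewards semantic spread rather than fluency alone.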

Answers were evaluated for the number of responses, length of response and semantic difference between words. Ultimately, the authors found that "Overall, GPT-4 was more original and elaborate than humans on each of the divergent thinking tasks, even when controlling for fluency of responses. In other words, GPT-4 demonstrated higher creative potential across an entire battery of divergent thinking tasks."

This finding does come with some caveats. The authors state, "It is important to note that the measures used in this study are all measures of creative potential, but the involvement in creative activities or achievements are another aspect of measuring a person's creativity."

The purpose of the study was to examine human-level creative potential, not necessarily people who may have established creative credentials.

Hubert and Awa further note that AI, unlike humans, does not have agency and is dependent on the assistance of a human user. Therefore, the creative potential of AI is in a constant state of stagnation unless prompted.

Also, the researchers did not evaluate the appropriateness of GPT-4 responses. So while the AI may have provided more responses and more original responses, human participants may have felt they were constrained by their responses needing to be grounded in the real world.

Awa also acknowledged that the human motivation to write elaborate answers may not have been high, and said there are additional questions about "how do you operationalize creativity? Can we really say that using these tests for humans is generalizable to different people? Is it assessing a broad array of creative thinking? So I think it has us critically examining what are the most popular measures of divergent thinking."

Whether the tests are perfect measures of human creative potential is not really the point. The point is that large language models are rapidly progressing and outperforming humans in ways they have not before. Whether they are a threat to replace human creativity remains to be seen.

For now, the authors see reason for optimism: "Moving forward, future possibilities of AI acting as a tool of inspiration, as an aid in a person's creative process or to overcome fixedness is promising."

Author: Hardin Young
Source: University of Arkansas
Contact: Hardin Young, University of Arkansas
Image: The image is credited to Neuroscience News

Original Research: Open access. "The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks" by Kent Hubert et al. Scientific Reports

Abstract

The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks

The emergence of publicly accessible artificial intelligence (AI) large language models such as ChatGPT has given rise to global conversations on the implications of AI capabilities.

Emergent research on AI has challenged the assumption that creative potential is a uniquely human trait; thus, there seems to be a disconnect between human perception and what AI is objectively capable of creating.

Here, we aimed to assess the creative potential of humans in comparison to AI. In the present study, human participants (N=151) and GPT-4 provided responses for the Alternative Uses Task, Consequences Task, and Divergent Associations Task.

We found that AI was robustly more creative along each divergent thinking measurement in comparison to the human counterparts. Specifically, when controlling for fluency of responses, AI was more original and elaborate.

The present findings suggest that the current state of AI language models demonstrates higher creative potential than human respondents.
