Looking at recent developments in the field of generative AI, it almost seems like algorithms are doing the creative work while billions of humans are stuck in non-creative jobs. Sure, you could argue that AI is not really creative and is simply emulating creativity. But a new study makes the debate much more complicated. According to the study, by the standards we currently use to judge creativity in students, AI is very creative.
Is AI creative?
The study was led by Dr. Erik Guzik, an assistant clinical professor at the University of Montana’s College of Business. Guzik says the study was inspired by his own experience with ChatGPT.
“When people are at their most creative, they’re responding to a need, goal or problem by generating something new — a product or solution that didn’t previously exist. In this sense, creativity is an act of combining existing resources — ideas, materials, knowledge — in a novel way that’s useful or gratifying. Quite often, the result of creative thinking is also surprising, leading to something that the creator did not — and perhaps could not — foresee,” the researcher wrote in an article for The Conversation.
“So, as a researcher of creative thinking, I immediately noticed something interesting about the content generated by the latest versions of AI, including GPT-4. When prompted with tasks requiring creative thinking, the novelty and usefulness of GPT-4’s output reminded me of the creative types of ideas submitted by students and colleagues I had worked with as a teacher and entrepreneur.”
Guzik had ChatGPT respond to eight prompts that called for creative answers. He used the Torrance Tests of Creative Thinking (TTCT), a battery of standardized tests designed to identify and assess creative potential in individuals, and one of the most widely used tools for evaluating creativity.
The researcher then compared the responses with those of 24 students taking his courses, as well as with those of 2,700 US college students who took the test in 2016.
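The paper doesn’t publish its exact prompts, so the snippet below is only a hypothetical sketch of how one might pose a classic TTCT-style “unusual uses” task to GPT-4. It assumes the official `openai` Python SDK and an `OPENAI_API_KEY` environment variable; the prompt wording and the line-splitting step are illustrative, not the study’s actual method.

```python
# Hypothetical sketch: posing a TTCT-style divergent-thinking prompt to GPT-4.
# The study's exact prompts are not public; the "unusual uses" task below is
# just a classic example of the genre. Requires the `openai` SDK and an
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "List as many unusual and interesting uses for a cardboard box as you can. "
    "Aim for ideas that are novel rather than obvious."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

# Treat each non-empty line of the reply as one candidate "idea" for scoring.
ideas = [
    line.strip("-• ").strip()
    for line in response.choices[0].message.content.splitlines()
    if line.strip()
]
print(f"{len(ideas)} ideas generated")
```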
The TTCT measures different aspects of creativity (a toy scoring sketch follows the list), such as:
- Fluency: The number of ideas you can produce.
- Flexibility: How different your ideas are from each other.
- Originality: How unique or novel your ideas are.
- Elaboration: The amount of detail in your responses.
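In real TTCT scoring, trained raters evaluate responses against published norms. Purely to make the four dimensions concrete, here is a toy Python sketch in which hypothetical category labels and frequency norms stand in for what a rater and a norm table would normally supply.

```python
# Toy illustration of the four TTCT dimensions. Real TTCT scoring is done by
# trained raters against published norms; these heuristics only mirror what
# each dimension measures. `categories` and `corpus_freq` are hypothetical
# stand-ins for rater judgments and normative frequency data.
from typing import Dict, List

def score_ideas(ideas: List[str],
                categories: Dict[str, str],
                corpus_freq: Dict[str, float]) -> Dict[str, float]:
    fluency = len(ideas)  # fluency: how many ideas were produced
    # Flexibility: how many distinct conceptual categories the ideas span.
    flexibility = len({categories.get(idea, "other") for idea in ideas})
    # Originality: rarer ideas (lower frequency in the reference sample) score higher.
    originality = sum(1.0 - corpus_freq.get(idea, 0.0) for idea in ideas) / max(fluency, 1)
    # Elaboration: a crude proxy, the average number of words of detail per idea.
    elaboration = sum(len(idea.split()) for idea in ideas) / max(fluency, 1)
    return {"fluency": fluency, "flexibility": flexibility,
            "originality": round(originality, 2), "elaboration": round(elaboration, 2)}

ideas = ["sled for snowy hills", "soundproofing panels", "cat castle"]
categories = {"sled for snowy hills": "transport",   # hypothetical rater labels
              "soundproofing panels": "acoustics",
              "cat castle": "pets"}
corpus_freq = {"cat castle": 0.40}                    # hypothetical frequency norms
print(score_ideas(ideas, categories, corpus_freq))
# -> {'fluency': 3, 'flexibility': 3, 'originality': 0.87, 'elaboration': 2.67}
```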
For fluency and originality, the AI ranked in the top 1%. “That was new,” says Guzik. It didn’t score quite as well on the other dimensions, but the result still forces us to rethink what we thought we knew about creativity.
We may not really understand creativity
The first striking finding is that AI models like GPT-4 are capable of producing ideas that seem unexpected, novel, and unique. If this is how we evaluate creativity, then AI can absolutely be creative.
This isn’t even the first time AI has displayed creativity. Before ChatGPT was a thing, another AI, DeepMind’s AlphaGo, mastered Go, one of the most complex board games known to mankind (immensely more complex than chess). Remarkably, the AI not only beat the world’s best human players (a feat previously thought impossible) but, in one game, played a move that introduced a completely new concept to the game.
However, there are a number of caveats, and Guzik stops short of interpreting the results; he only presents them.
“For one, many outside of the research community continue to believe that creativity cannot be defined, let alone scored. Yet products of human novelty and ingenuity have been prized — and bought and sold — for thousands of years. And creative work has been defined and scored in fields like psychology since at least the 1950s.”
“Still others are surprised that the term ‘creativity’ might be applied to nonhuman entities like computers. On this point, we tend to agree with cognitive scientist Margaret Boden, who has argued that the question of whether the term creativity should be applied to AI is a philosophical rather than scientific question.”
ChatGPT itself highlighted this point. Guzik asked it about its performance on the test, and the AI gave a great answer that he shared at a press conference.
“ChatGPT told us we may not fully understand human creativity, which I believe is correct,” he said. “It also suggested we may need more sophisticated assessment tools that can differentiate between human and AI-generated ideas.”
A Sputnik moment for creativity
Among the many philosophical and disconcerting implications for our understanding of creativity, there’s also a very short-term, actionable conclusion. Simply put, our schools try to evaluate creativity without having a good idea of how to do it.
Moreover, the study underscores the urgency of rethinking how we approach creativity in educational settings. If our current tools for assessing creativity are not nuanced enough to distinguish between human and AI-generated ideas, then we are likely doing a disservice to the next generation of thinkers, artists, and innovators.
In light of these findings, it’s clear that our entire understanding of creativity is at a crossroads. The study raises essential questions about the nature of creativity itself and how we evaluate it, both in humans and in increasingly sophisticated AI systems. If a machine can score in the top 1% for fluency and originality on a test designed to measure human creativity, what does that say about our traditional notions of what it means to be creative?
But for Guzik, this also represents a Sputnik moment for the field of creativity research.
Just as the launch of the Sputnik satellite in 1957 galvanized the United States to invest in science and technology education, this research could be the catalyst we need to invest in a more nuanced, effective, and equitable system for fostering and evaluating creativity.
“In this sense, the creative abilities now realized by AI may provide a ‘Sputnik moment’ for educators and others interested in furthering human creative abilities, including those who see creativity as an essential condition of individual, social and economic growth.”
Journal Reference: Erik E. Guzik et al., The Originality of Machines: AI Takes the Torrance Test, Journal of Creativity (2023). DOI: 10.1016/j.yjoc.2023.100065