ChatGPT, Creativity, and the Risks of Artificial Intelligence

Post by Rebecca Hill

The takeaway

ChatGPT, a promising tool for gathering and paraphrasing information, has recently been studied for its ability to mimic human creativity. However, ChatGPT also has a darker side: it takes information without consent, contributes to plagiarism, and spreads misinformation.

What is ChatGPT?

Artificial intelligence (AI), the practice of training computers to perform tasks that normally require human skills, is an exciting new technology with a wide range of applications. Large language models are a relatively new type of AI, trained on large amounts of text to generate new strings of similar text. One of the most popular new AIs is a chatbot called ChatGPT, powered by a large language model created by OpenAI and trained on a variety of text sources, from Wikipedia and journal articles to blogs across the internet. Users ask ChatGPT a question, and it responds with paraphrased information ranging from a few sentences to several paragraphs. However, these responses don’t automatically cite sources, and sometimes they even contain inaccurate information.

Can artificial intelligence be more creative than humans?

With this new technology, many are curious about its ability to mimic not only human language but also human skills such as decision-making and creativity. To measure creativity, researchers test both AI and humans with divergent thinking tasks – those that involve coming up with creative solutions to a problem. While one study claimed that ChatGPT produced more original and elaborate solutions on these tasks than humans did, another found that the best human ideas outperformed the ChatGPT ideas. Studies like these have sparked active discussion about what constitutes creativity. Some researchers believe that AI chatbots like ChatGPT can create new ideas by making connections that humans often miss due to bias or fixed mindsets. Others argue that human creativity is too unique and complex to copy with AI, and that since AI requires human input, it cannot come up with any truly new ideas. Emotions are also often seen as a crucial part of creativity, and AI doesn’t have the life experiences that humans channel into works of art.

The impact of AI on art

While ChatGPT is a text-based AI, other AIs are used to create visual art, which has triggered its own scientific discussion. Recent studies have found that people prefer art labeled as created by humans over art labeled as created by AI, suggesting a negative bias against AI-created artworks. While some of these studies note that AI-created artwork is often indistinguishable from human-created art, others emphasize the impact this has on the artists themselves. Artists spend years honing their craft and developing their artistic style, and many are insulted by the idea that art produced by simply providing a prompt to an AI could be equivalent to their own. Even more upsetting to many artists is that AI art generators are trained on the very art that humans spent hours creating, effectively stealing from the artists.

AI threatens the integrity of circulating information

Just as visual art being scraped for AI training data is a hotly debated topic, so is the art of writing. Since ChatGPT is trained on data from the internet, it uses writing from any freely accessible source it can find. However, free to access does not mean free to use. A recent article points out that exceptions for using copyright-protected material generally require that no money be made from that use, yet ChatGPT offers paid subscriptions for better access. Even more concerning is the use of ChatGPT in scientific writing, which can lead to bias, plagiarism, and the spread of misinformation – with potentially dire consequences in medical and health research. While ChatGPT is a mystery to many and a fun tool for some, it is important to understand that it is more than a tool for gathering information. Its foundations are built on information not freely given, and its effects may be longer lasting and wider reaching than many have anticipated.

References

Bellaiche, L., Shahi, R., Turpin, M. H., Ragnhildstveit, A., Sprockett, S., Barr, N., Christensen, A., & Seli, P. (2023). Humans versus AI: Whether and why we prefer human-created compared to AI-created artwork. Cognitive Research: Principles and Implications, 8(1), 42. https://doi.org/10.1186/s41235-023-00499-6

Chiarella, S. G., Torromino, G., Gagliardi, D. M., Rossi, D., Babiloni, F., & Cartocci, G. (2022). Investigating the negative bias towards artificial intelligence: Effects of prior assignment of AI-authorship on the aesthetic appreciation of abstract paintings. Computers in Human Behavior, 137, 107406. https://doi.org/10.1016/j.chb.2022.107406

Guleria, A., Krishan, K., Sharma, V., & Kanchan, T. (2023). ChatGPT: Ethical concerns and challenges in academics and research. The Journal of Infection in Developing Countries, 17(9), 1292–1299. https://doi.org/10.3855/jidc.18738

Hubert, K. F., Awa, K. N., & Zabelina, D. L. (2024). The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks. Scientific Reports, 14(1), 3440. https://doi.org/10.1038/s41598-024-53303-w

Kane, S., Awa, K., Upshaw, J., Hubert, K., Stevens, C., & Zabelina, D. (2023). Attention, affect, and creativity, from mindfulness to mind-wandering. The Cambridge Handbook of Creativity and Emotions, 130–148.

Koivisto, M., & Grassini, S. (2023). Best humans still outperform artificial intelligence in a creative divergent thinking task. Scientific Reports, 13(1), 13601. https://doi.org/10.1038/s41598-023-40858-3

Runco, M. A. (2023). AI can only produce artificial creativity. Journal of Creativity, 33(3), 100063.

Teubner, T., Flath, C. M., Weinhardt, C., Van Der Aalst, W., & Hinz, O. (2023). Welcome to the Era of ChatGPT et al.: The Prospects of Large Language Models. Business & Information Systems Engineering, 65(2), 95–101. https://doi.org/10.1007/s12599-023-00795-x