When You Use Generative AI: Assistant, but Far from Agent
A course paper for the Academic English Writing class
Writer’s Memo
I actually let out a sigh of relief when I finished my draft. This was my first time writing such a long essay in English. I often found myself stuck halfway, with many ideas in my mind but struggling to express them smoothly in the right words. Someone once said that writing an essay is like breathing: when you let go of distractions and get into the flow, the words and logic come out naturally. I tried that, and it worked pretty well.
I chose this topic partly because it relates to my field of study, but more importantly because I have started to realize that although I am always learning about the latest AI research and developments in my professional field, and my mind is filled with technical details every day, I lack an understanding of the bigger picture behind those details.
In my field, using GPT to help with programming has become common and even encouraged. People generally believe it helps us avoid wasting time on unimportant details. Of course, I also use it, but I always have concerns during the process: What if AI gets it wrong? What if I lose my ability to check for errors myself in the future? With these thoughts in mind, I finally completed the entire portfolio.
During the revision process, my partners gave me very useful advice. Instead of empty praise, they pointed out the missing and weak parts of my writing clearly and concisely. Interestingly, the two partners had opposite suggestions on how to handle certain details in my essay! I was pleasantly surprised to see such different viewpoints. I also want to thank Dr. Luo for providing key guidance and inspiration in class, reviewing my bibliography and outline carefully, and offering a platform where ideas could collide.
I believe one important criterion for judging the success of an essay is how long it stays remembered. I hope this document can be reopened someday in the future, evoking my thoughts on these issues and witnessing whether this essay helped me form more mature views later on.
When You Use Generative AI: Assistant, but Far from Agent
Weeks ago, Apple finalized a deal with OpenAI to bring ChatGPT to the iPhone. Yet back in 2023, when ChatGPT had just been released and the entire industry was flocking to generative AI (GenAI), Apple kept silent and even refused to mention the concept of "artificial intelligence," using the more academic expression "machine learning" whenever it had to refer to the idea (Reuters, 2023). One year later, Apple has yielded to the wave of GenAI. People are fascinated by this powerful tool and the seemingly unlimited possibilities it demonstrates, from producing human-like text to creating intricate art, all from simple prompts. In almost every area, GenAI can be seen serving as an assistant to human productivity (Brynjolfsson et al., 2023). However, GenAI is not a silver bullet; it is not always reassuring to adopt AI-generated content (AIGC) as your final product (Sætra, 2023). Current generative AI is, in fact, unlikely to become the autonomous agent many wish for, because it carries the potential risk of infringement, pollutes information sources with misleading content, and lacks distinctly human value.
The performance of generative AI relies heavily on the training phase, during which the model is fed enormous amounts of data. As a result, infringement issues arise almost inevitably. In fact, similar issues were raised long before the AI era, when web crawlers from search engines would scrape data across the Internet to find valuable content for their services. For a website, a common practice is to publish a declaration marking the areas where crawlers are not welcome, known as the "robots.txt" rule (Sun et al., 2007). A crawler may, of course, ignore the rule and carry off your whole site, but in that case you can spot the copying immediately in the search results. When it comes to GenAI, the definition of infringement changes dramatically. Creators can only claim that the style or certain parts of the visuals or text resemble their original work, and thus infer infringement in the training process by analyzing the generated content (Murray, 2023). Therefore, for those who include AIGC in their work or products without examining possible infringement, this uncertainty becomes a time bomb waiting for any later legal dispute. After all, it is currently impossible for AI to shoulder responsibility itself, so humans must remain accountable for the consequences of AI.
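To make the robots.txt convention concrete, here is a minimal Python sketch of how a well-behaved crawler is expected to consult a site's rules before fetching a page. The rules and URLs are invented for illustration; the parsing uses Python's standard-library urllib.robotparser.

```python
# Minimal sketch: a compliant crawler checks robots.txt before fetching.
# The rules and URLs below are invented for illustration only.
from urllib import robotparser

# A typical robots.txt telling all crawlers to stay out of /private/
rules = [
    "User-agent: *",
    "Disallow: /private/",
]

parser = robotparser.RobotFileParser()
parser.parse(rules)

# A well-behaved crawler asks before every fetch; nothing technically
# stops a non-compliant crawler from ignoring the answer.
print(parser.can_fetch("MyCrawler", "https://example.com/private/page.html"))  # False
print(parser.can_fetch("MyCrawler", "https://example.com/blog/post.html"))     # True
```

As the last comment notes, the convention is purely voluntary, which is exactly why the essay's point stands: compliance depends on the crawler's operator, not on any technical enforcement.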
Apart from copyright issues, generative AI threatens information accuracy by providing users with fabricated data that appears authentic. These inaccuracies are so common that they have earned their own moniker: "hallucinations." The phenomenon has a ground-level cause: current large language models (LLMs) produce content by predicting the probability distribution of the next word based on patterns observed in training data. Any accuracy in their outputs is therefore often coincidental, and they may produce content that sounds plausible but is wrong (O'Brien, 2023). AI hallucinations demonstrate that the model does not "understand" the content it produces and cannot independently verify facts. Users must therefore approach AI outputs with a critical eye and evaluate them with human judgment (Silberg & Manyika, 2019). These limitations make it impossible for generative AI to operate independently.
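The following toy Python sketch illustrates the mechanism described above. The vocabulary and probabilities are entirely invented and no real language model is involved; it only shows why sampling the next word from a distribution guarantees fluency but not truth.

```python
# Toy illustration of next-word prediction (not a real LLM).
# Given a prompt, the "model" samples a continuation from an
# invented probability distribution over candidate words.
import random

prompt = "The capital of Australia is"

# Invented distribution: every continuation is fluent, only one is true.
next_word_probs = {
    "Sydney": 0.55,     # plausible but false
    "Canberra": 0.35,   # the correct answer
    "Melbourne": 0.10,  # plausible but false
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Sampling rewards plausibility, not accuracy: the false continuation
# "Sydney" is produced most of the time here.
for _ in range(5):
    print(prompt, random.choices(words, weights=weights)[0])
```

In this caricature, a wrong answer dominates simply because the training patterns made it look likely, which is the same structural reason a real LLM can state falsehoods with confident fluency.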
What is more, some groups have already highlighted the "pollution" problem that AI hallucinations pose. For example, The New York Times (NYT) brought legal action against OpenAI and Microsoft in February, claiming that it risked reputational harm from AI-generated false information attributed to the NYT (Vaughan, 2024). Erroneous and falsely attributed AI output requires a second round of proofreading, an extra labor cost that irresponsible users will not pay. As a result, they disseminate the information without review, treat it as firsthand material, and thereby spread and amplify the flawed content. Because the NYT has a reputation for trustworthy news, its content is especially valuable and desirable as public data; huge amounts of fake news and unverified viewpoints posted online under the "NYT" label will in turn be recollected by AI trainers for further training, and a vicious cycle emerges.
Even when efforts are made to avoid the issues above, AI-generated content still faces bias and prejudice because it has "no human involved," and reliance on AI is feared to overshadow human creativity (Caporusso, 2023). No professor would be glad to see you submit AIGC as your homework or project; instead, they would question whether the work reveals your own independent thinking and innovation. Similarly, in the ACGN (Anime, Comic, Game and Novel) field, resistance to AI is fiercer than ever. Radical enthusiasts single out paintings generated by AI and criticize them for "lacking human beauty." Some even refer to the features extracted and reproduced by AI as "body parts," implying that the characteristic elements of the generated content are "stolen" from human artists (Baidu Developer Center, 2024). In their eyes, people who use AI to generate artwork are merely plagiarizing the original creators' hard work. Though most people do not take such an extreme view, works generated by AI are still perceived as inherently inferior in the public eye because of the instrumentality of AI, much like the distinct value placed on "handmade bubble tea" compared with mass-produced alternatives. As Karl Marx observed, humans are the principal agents of productivity and the most crucial factor in the forces of production. When you rely on AI for content generation, you are essentially giving up the unique value and irreplaceable role of human creativity and effort, which AI, despite its advanced capabilities, cannot fully replicate or surpass. You cannot rest easy and simply delegate tasks to AI without oversight.
From the discussion above, we have grounds to conclude that while generative AI is a powerful tool, it is not a panacea. When users generate content with AI, they are exposed to the risks of infringement and erroneous content, and they may also face a crisis of public trust because of the missing input of human wisdom. Therefore, before relying heavily on generative AI, we should be fully aware of its pros and cons, avoiding both abuse and blanket resistance. Given the nature of generative AI, ongoing oversight and evaluation are needed to limit the scope and environment of its use and to prepare for its unexpected consequences. More specifically, it is urgent to establish clear legal and ethical guidelines governing copyright violations and the spread of misinformation. We should also recognize the instrumentality of AI and pay more attention to human capabilities. With joint efforts, we may proudly embrace a future in which innovation in AI is harmonized with societal expectations.
References
Reuters. (2023, June 6). Apple spruces up products with AI but avoids the buzz word. Reuters. https://www.reuters.com/technology/apple-spruces-up-products-with-ai-avoids-buzz-word-2023-06-06/
Brynjolfsson, E., Li, D., & Raymond, L. R. (2023). Generative AI at work (No. w31161). National Bureau of Economic Research.
Sætra, H. S. (2023). Generative AI: Here to stay, but for good? Technology in Society, 75, 102372.
Sun, Y., Zhuang, Z., & Giles, C. L. (2007, May). A large-scale study of robots.txt. In Proceedings of the 16th International Conference on World Wide Web (pp. 1123-1124).
Murray, M. D. (2023). Generative AI Art: Copyright Infringement and Fair Use. SMU Sci. & Tech. L. Rev., 26, 259.
O’Brien, M. (2023, August 1). Chatbots sometimes make things up. Is AI’s hallucination problem fixable? AP News. https://apnews.com/article/artificial-intelligence-hallucination-chatbots-chatgpt-falsehoods-ac4672c5b06e6f91050aa46ee731bcf4
Silberg, J., & Manyika, J. (2019, June 6). Tackling bias in artificial intelligence (and in humans). McKinsey & Company. https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans
Vaughan, P. (2024, February 13). The New York Times’ AI copyright lawsuit shows that forgiveness might not be better than permission. The Conversation. https://theconversation.com/the-new-york-times-ai-copyright-lawsuit-shows-that-forgiveness-might-not-be-better-than-permission-222904
Caporusso, N. (2023). Generative Artificial Intelligence and the Emergence of Creative Displacement Anxiety. Research Directs in Psychology and Behavior, 3(1).
Baidu Developer Center. (2024). AI Painting: the Reason Behind Corpse Piecing. Baidu Developer Center. https://developer.baidu.com/article/detail.html?id=3104512