Byungchae Ryan Son

Our Changing Relationship with Algorithms

Created: 2024-05-09

“Now we have AI models that analyze human language to extract models of the world.”


At a charity event held in Ontario, Canada, on the 19th of last month, clinical psychologist and University of Toronto professor Jordan Peterson pointed to recent noteworthy uses of ChatGPT and discussed the new tension that algorithms will create between humans and AI. He warned that large language models, the generative AI behind such tools, will soon be intelligent enough to extract patterns on their own from images and actions, test those patterns in the real world, and complete in seconds work that has belonged to human scientists.


GPT-3, DALL-E, Stable Diffusion, and similar models now sit at the foundation of a growing share of AI systems and make the paradigm shift in AI visible. These powerful systems, which generate images and text on demand, are inevitably coming into conflict with existing creators across industries. In November, Microsoft's ‘GitHub Copilot’ faced a class-action lawsuit alleging that, by using publicly posted code as training data, it infringed the legal rights of the many authors who had published that code under open-source licenses. And in October, the Recording Industry Association of America (RIAA) stated that AI-based music generation and remixing could threaten not only musicians' rights but also their livelihoods.


These cases bring us back to the question of whether it is fair to everyone to train systems on datasets built from copyrighted material and to profit from the results. It is worth noting, however, that these controversies remain largely centered on the technology itself. Since it is ultimately humans who type prompts into AI models to obtain the results they want, the question we should prioritize is how humans ought to interact with algorithms from now on.


Algorithms are already part of the process that generates our world, just as humans are. Society has long considered how to monitor the opacity of algorithms. In particular, we have recognized that their lack of transparency makes it hard to determine who should be held accountable, and we have been uneasy that hidden biases persisting within these systems could produce unfair outcomes. ‘How we should treat algorithms’ may therefore be the more crucial question, and we can find clues to it in our already familiar relationship with content-recommendation algorithms.


First, we are aware that algorithms exist. The frequency of words like ‘recommendation’ and ‘selection’ in conversations about content and advertising shows how people are building a vocabulary around the algorithms of online shopping and social media. Second, we are curious about algorithms. When our YouTube home page fills up with content from a single category, or when we feel our posts are not getting enough exposure, we often voice that curiosity about the algorithm in unfavorable terms.


Finally, we want algorithms to exist for us as active, living entities. We rely on them to build new habits, to learn, and to remember, and we even try to control them outright for these purposes: using seemingly irrelevant hashtags, switching on Do Not Disturb mode, or sending feedback on ad options. And when all of these attempts fail, we try to disconnect from algorithms altogether through digital detoxes or by turning to newsletter content instead.


In short, when their relationship with algorithms does not go as they wish, people tend to withhold trust, give negative evaluations, and dwell on the past. This is strikingly similar to the ‘social relationships’ of our daily lives. Moreover, while our relationship with the content-recommendation algorithms we have known so far has largely been one-way, confined to the realm of ‘consumption,’ our relationship with today's large language model algorithms can be defined as two-way, in the realm of ‘generation.’ Given that the results a user requests are built on other people's creations in the world rather than being entirely original, we need to recognize that our approach and attitude toward generative AI algorithms must fundamentally change.


Even with the ChatGPT window open, the AI algorithm simply waits. Perhaps, hidden behind its astonishing capabilities, what it really does when you write something is help an unseen social relationship with another person in your world come into bloom.


*This article was originally published in the Electronic Times column on January 9, 2023.

