The Evolving Relationship Between Us and Algorithms
“We now have AI models that analyze human language and extract a model of the world.”
On the 19th of last month, at a charity event held in Ontario, Canada, clinical psychologist Jordan Peterson, a professor at the University of Toronto, cited recent use cases of ChatGPT and spoke about the new tensions that algorithms will create with humans in the future. Peterson warned that generative AI built on large language models will soon become intelligent enough to extract patterns on its own from images and movements and then test them against the world, effectively doing the work of human scientists in a matter of seconds.
GPT-3, DALL-E, Stable Diffusion, and similar models now underpin almost all AI systems and are making the paradigm shift in AI visible. These powerful systems, which generate images and text according to users' needs, inevitably come into conflict with creators already working in existing industries. Last November, Microsoft's GitHub Copilot was hit with a class-action lawsuit alleging that training AI on code posted under open-source licenses violated the legal rights of countless authors. Likewise, in a statement last October, the Recording Industry Association of America (RIAA) emphasized that AI-based music generation and remixing could threaten not only musicians' rights but also their livelihoods.
These cases raise the question of whether it is fair to everyone that systems can be trained on, and produce results from, datasets built on copyrighted material. It is worth noting, however, that much of this controversy centers on the new technology itself. Ultimately, it is a human being who enters text into the AI model to obtain the desired result, so we should first consider how humans ought to interact with algorithms going forward.
Algorithms, like humans, are already one axis along which our world is generated. Society has long kept watch over the opacity of algorithms: we knew it was difficult to determine who should be held accountable for their lack of transparency, and we feared that the biases hidden within them would persist and lead to unfair results. The question of how we should treat algorithms may therefore matter even more, and we can find clues in our already familiar relationship with the algorithms that serve us content.
First, we are aware that algorithms exist. Words like 'recommendation' and 'selection', which come up often in conversations about content and advertising, confirm that people are building a vocabulary centered on algorithms in online shopping and social media. We are also curious about algorithms. When the YouTube home page fills up with content from a single category, or when we feel the content we posted is not getting enough exposure, we often voice that curiosity in the form of complaints about the algorithm.
Finally, we want algorithms to exist for us like active, living beings. We rely on them to build new habits, to learn, and to remember, and we sometimes try to control them outright for these purposes: using unrelated hashtags, turning on do-not-disturb mode, or sending feedback on ad preferences. And when all these attempts fail, we even try to disconnect from algorithms altogether, through digital detoxes or by consuming content via newsletters instead.
In short, when the relationship with an algorithm does not go as they wish, people tend to show distrust, harsh judgment, and an attitude bound to the past, which closely resembles the 'social relationships' of our daily lives. Moreover, while our relationship with existing content algorithms was mostly a one-way relationship confined to the realm of 'consumption', our relationship with AI algorithms built on large language models can be defined as a two-way relationship in the realm of 'creation'. Given that the results a user requests are grounded in someone else's creations in the world rather than in complete originality, we must recognize that our attitude and approach toward generative AI algorithms also need to change fundamentally.
Even with a ChatGPT window open, the AI algorithm simply waits. Hidden behind its astonishing abilities, perhaps, when you write something it may merely be helping an undisclosed social relationship with someone else in your world come into bloom.
*This article was originally published as a signed column in the Electronic Newspaper on January 9, 2023.