The era of algorithmic branding is coming
Summarized by durumis AI
- Elon Musk announced the development of TruthGPT, an AI that seeks truth, warning of the dangers of artificial intelligence, and Google CEO Sundar Pichai called for social preparedness for the rapid development of AI.
- Sam Altman of OpenAI argues that achieving superintelligent AI as quickly as possible is beneficial to humanity, and AI experts are offering various opinions on coexistence with artificial intelligence.
- As the speed of artificial intelligence development accelerates, it is necessary to redefine the role and value of humans, and human-friendly AI development and social adaptation have become important.
"We know that humanity could decide to hunt down and kill all chimpanzees."
In an interview with Fox News on April 14, Elon Musk announced his plan to develop TruthGPT, a truth-seeking AI, and voiced his concern about how AI might one day treat humans. The chimpanzee analogy gives the public a more intuitive grasp of his fear than any of his previous explanations. The remark carries extra weight given that Neuralink, the neuroscience startup he founded in 2016, has repeatedly faced accusations of animal cruelty, including allegations that monkeys suffered extreme pain, and ultimately died, after computer chips were implanted in their brains.
*A monkey with a Neuralink chip implanted in its brain. Source: The Verge*
TruthGPT, as its name suggests, shows a technological kinship with ChatGPT, and it appears to be still in the early planning stages rather than a viable third option alongside Google's Bard and OpenAI's ChatGPT. Still, his justification for the new project is persuasive: an AI built to understand the nature of the universe would be less likely to wipe out humans, who are themselves an interesting part of that universe. After all, Musk's greatest fear is that AI could destroy civilization.
On the same day, Google CEO Sundar Pichai told CBS that the rapid development of AI will affect every product at every company, warning that society must prepare for the evolution of the technology. He also admitted that Google does not fully understand how its AI arrives at specific answers. When the host asked how Google could release such a system into society in that state, Pichai replied that we do not fully understand how the human mind works either.
Paul Crutzen, the Dutch atmospheric chemist who shared the Nobel Prize in Chemistry for his work on atmospheric ozone, declared at an international conference on the global environment held in Mexico in February 2000 that we are now living in the Anthropocene. The Anthropocene is a geological era shaped by human activity, defined, on the basis of changes in the atmosphere, as beginning with the Industrial Revolution, when humans started to have a significant impact on the Earth's environment. The ability to degrade the Earth's environment, or to decide the survival of other species, comes from human intelligence. Yet with the emergence of artificial intelligence, we face a situation in which we have released this unfamiliar and potentially most threatening entity into society without any experience of dealing with it.
Meanwhile, Sam Altman, the head of OpenAI, appeared on AI researcher Lex Fridman's podcast last month and said his goal is to reach artificial general intelligence (AGI), a level beyond today's systems, as quickly as possible. Because AI could suddenly and rapidly attain superhuman intelligence, he believes that building AGI soon, while it is still only one step ahead of current AI, is the best way to keep humans safe over the long run-up to superintelligence, and thus the best way to create a human-friendly AI.
These three people, the figures most often mentioned in the AI war being waged around the world, share an interest in how humans will live with AI. But it is worth remembering that the ultimate goal of their stories is to build AI: computing systems that exhibit specific behaviors. This revolutionary model is ruthlessly machine- and data-centric, and the people who mattered in the old world can be reduced to providers of behavioral input feeding the data needs of increasingly autonomous systems. That is why it is becoming ever more important to turn Musk's interest in truth, and the wider discussion, back toward humanity itself. By what criteria and methods can people in disappearing job categories reassert their agency and become more proactive? What institutional mechanisms and community conversations can keep this awareness from fading? What is the subject that continues to give meaning to our experiences?
Branding is a declaration to target customers that you are walking the same path as they are. Perhaps we are witnessing an attempt at AI algorithm branding along three lines, each grounded in human survival: Musk's 'shift of focus to space' for human preservation, Pichai's 'social adaptation to an age of misinformation and fake news', and Altman's 'timeline for human-friendly AI development'. Which algorithm do you prefer?
*This article originally appeared as a named column in the April 24, 2023 edition of The Electronic Times.*