This is an AI translated post.
OpenAI's Governance Drama: The Ethical Future of AI
- Writing language: Korean
- Base country: All countries
- Category: Information Technology
Summarized by durumis AI
- The firing and reinstatement of OpenAI CEO Sam Altman by the board is an event that highlights the unusual structure of OpenAI, a non-profit with a for-profit arm, and the board's emphasis on safety.
- This event has amplified societal concern over AI development and regulation, revealing diverse perspectives and interpretations of the future of AI.
- It particularly demonstrates how imagination about the future of AI can influence reality, suggesting the need to consider diverse social and cultural contexts when deciding the future of AI.
A few months after the launch of ChatGPT, a massive drama unfolded on the board of OpenAI, the non-profit organization whose annual revenue grew from zero to $1 billion. Sam Altman, the company's CEO, was fired by the board; days later, after announcing a move to Microsoft, he was reinstated as OpenAI's CEO. It is rare for a board of directors to fire a founding CEO, who is usually the most powerful figure in a company, and rarer still when that company is a giant valued at $80 billion.
However, this five-day drama could unfold only because of OpenAI's unique structure, bound by a mission statement "for humanity." The three independent board members known to have led the decision to fire Altman are all associated with effective altruism (EA), a movement tied to the company's mission and driven by the goal of "preventing the disappearance of humanity and everything observable in the universe."
The board structure of OpenAI
Throughout this year, Altman has toured the globe, warning the media and governments about the existential risks of the very technologies he is developing. He has described OpenAI's unusual structure, a for-profit company governed by a non-profit board, as a safeguard against the reckless development of powerful AI, and said in a June interview with Bloomberg that if he ever acted dangerously or against the interests of humanity, the board would have the power to dismiss him. In other words, the structure was intentionally designed so that a board prioritizing safety over an uncontrollable AGI could fire the CEO at any time.
So how should we view the current situation, in which the new CEO of OpenAI is the same as the previous one?
It is difficult to conclude that this was simply an episode in which nothing changed. What we have confirmed is that decisions about the development of ethical artificial intelligence, decisions that may have the greatest impact on our society today, were made on the opinions of a very small group. Sam Altman is now a symbol of an era in which the world's attention is focused on AI development and regulation. We watched the only external check that could have blocked his future judgments and decisions be effectively scrapped, and we have seen how important additional external checks will be going forward.
Furthermore, this incident has made clearer the positions and interpretations of several camps: those who worry that AI will destroy humanity, those who believe the technology will accelerate a utopian future, those who believe in free-wheeling market capitalism, and those who support strict regulation of big tech companies, convinced that such companies cannot balance the potential harm of a powerful, disruptive technology against the desire to make money. All of this stems from anxiety about humanity's future with AI, and it highlights the need for a more diverse community to scrutinize the actors who are predicting that future.
Yejin Choi, a professor at the University of Washington and one of the world's 100 most influential people in AI, explained in her TED talk why an artificial intelligence that can pass various national exams still adds foolishly unnecessary steps when using 12-liter and 6-liter kettles to measure out 6 liters of water: it lacks the common sense that humans acquire by living in society.
When predicting the future, we often identify what is new from an outsider's perspective, using the "boundary" that indicates where the mainstream is heading. What appears from the outside to be a stable vision of the future is always a "vivid experience" abstracted from the present. Arjun Appadurai, an American anthropologist, argued that imagination is not a private, personal capacity but a social practice, meaning that variously imagined futures can become realities. This incident can be read as one of the landscapes created by the imagination of an uncertain future surrounding the emergence of AGI.
Having confirmed that industry leaders' expectations of the future carry significant political implications, we will need a deeper understanding of the futures collectively imagined and constructed in diverse social and cultural contexts when deciding the future of AI. The question now is how to create opportunities for more diverse communities to actively put forward collective expectations grounded in vivid experience.