This is an AI-translated post.
The Paradox of Leading AI Models: Transparency
- Writing language: Korean
- Reference country: All countries
- Category: IT
Summary by durumis AI
- Stanford University researchers examined 10 state-of-the-art AI systems, including GPT-4, and found that AI models lack transparency regarding training datasets, training methods, and more.
- In particular, leading AI companies such as OpenAI and Google, following profit-oriented business models, are reluctant to disclose their data, which may hinder the advancement of AI technology and lead to monopolies in the future.
- Experts recommend increasing the transparency of AI models to ensure reproducibility and strengthen social accountability, and call for social consensus and regulatory debate to keep pace with the development of AI technology.
A Stanford University research team published a study on the 18th, which shows how deep and potentially dangerous the secrets of GPT-4 and other state-of-the-art AI systems are.
Bemutatjuk az alapmodell-átláthatósági indexet, Stanford Egyetem
They examined a total of 10 AI systems, most of them large language models like those behind ChatGPT and other chatbots, including widely used commercial models such as OpenAI's GPT-4, Google's PaLM 2, and Amazon's Titan Text. They evaluated openness against 13 criteria, including how transparently developers have disclosed the data used to train each model (how the data was collected and annotated, whether it included copyrighted material, and so on). They also investigated whether developers disclose the hardware used to train and run the models, the software frameworks involved, and the project's energy consumption.
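As a rough illustration of how such a criteria-based index can be aggregated into the percentages cited below, the sketch computes a transparency score from binary per-criterion judgments. The criterion names and values here are hypothetical, not the Stanford team's actual rubric or data.

```python
# Illustrative sketch only: aggregating binary transparency criteria
# into a percentage score, in the spirit of the index described above.
# Criterion names and values are hypothetical.

def transparency_score(criteria: dict[str, bool]) -> float:
    """Return the percentage of transparency criteria a model satisfies."""
    if not criteria:
        raise ValueError("no criteria given")
    met = sum(criteria.values())
    return 100.0 * met / len(criteria)

# Hypothetical model that discloses 7 of 13 items, e.g. training hardware
# and energy use, but not data collection or curation methods.
example = {f"criterion_{i}": i < 7 for i in range(13)}
print(round(transparency_score(example), 1))  # 53.8
```

A model meeting 7 of 13 such criteria would land just under the 54% ceiling the study reports, which conveys how sparse the disclosures are in practice.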
The result: none of the AI models scored above 54% on the transparency scale across all of these criteria. Overall, Amazon's Titan Text was rated the least transparent, while Meta's Llama 2 was judged the most open. Interestingly, although Llama 2 is a flagship of the open-source side in the recent open- versus closed-source model debate, it still did not disclose its training data or the methods used to collect and curate it. In other words, despite AI's growing influence on our society, opacity remains a widespread and persistent feature of the industry.
This means the AI industry risks becoming a profit-driven field rather than one of scientific progress, potentially leading to a monopolistic future dominated by a few companies.
Eric Lee/Bloomberg via Getty Images
OpenAI CEO Sam Altman has already met with policymakers around the world to explain this unfamiliar new form of intelligence and to offer help in refining related regulations. He generally supports the idea of an international body overseeing AI, but argues that certain specific rules, such as banning all copyrighted material from training datasets, would be unfair barriers. This is clear evidence that the 'openness' embedded in the company name OpenAI has drifted far from the radical transparency it promised at its founding.
That said, the Stanford report also suggests that companies have little competitive reason to keep their models so secret, since the results show nearly all of them underperforming on the same criteria. For example, no company provides statistics on how many users rely on its models, or on the regions and market segments where its models are used.
Organizations that follow open-source principles have a proverb: 'Given enough eyeballs, all bugs are shallow' (Linus's law). Sheer numbers of observers help surface problems that can then be diagnosed and fixed.
However, the open-source label is gradually losing its social standing and recognized value both inside and outside public companies, so insisting on it unconditionally accomplishes little. Rather than fixating on whether a model is open or closed, it may be more productive to focus the discussion on gradually expanding external access to the 'data' that underpins powerful AI models.
For scientific progress, it is important to ensure the reproducibility of specific research results. Unless concrete ways are specified to guarantee transparency about the key components of each model generation, the industry is likely to remain a closed and stagnant monopoly. This should be treated as a high priority now and going forward, as AI technology rapidly permeates every industry.
Understanding the data has become important for journalists and scientists, and transparency is a prerequisite for any deliberate policy effort by policymakers. Transparency also matters to the public, the end users of AI systems, who may end up as victims or perpetrators of problems involving intellectual property, energy use, and bias. Sam Altman argues that the risk of human extinction from AI should be a global priority on par with pandemics or nuclear war. But we should not forget that a society maintaining a healthy relationship with developing AI is a precondition for ever confronting the dangerous scenario he describes.
This article is the original text of a signed column published in The Electronic Times on October 23, 2023.