This is an AI-translated post.
Now Hiring: Humans
- Writing language: Korean
- Base country: All countries
- Category: Information Technology
Summarized by durumis AI
- Payman AI has launched a service in which AI pays humans, suggesting a new kind of job market where AI commissions tasks to people and compensates them.
- Incorporating a human-in-the-loop review stage can reduce AI errors and increase confidence in the work, helping ensure AI reliability.
- In an AI-driven employment environment, algorithmic transparency and fairness, data accuracy, and how individuals define their own Job Titles emerge as key factors in the new job market.
A new type of job market is emerging.
A few weeks ago, Payman AI released a service, still in private beta, described as "AI that Pays Humans." According to the company, a client deposits funds into the Payman AI agent's account and grants the AI access to those funds, so that the agent can commission tasks that can only be done by humans in the real world.
For example, in a project such as "Collect 10 Reviews for Customer Management," the AI specifies the minimum elements the client requires in each review and posts the request on the platform. People interested in the request go out into the real world, collect reviews, and submit them; the AI judges whether each submission meets the requirements and pays the allocated fee to each contributor.
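The workflow described above can be sketched in a few lines of Python. Everything here is illustrative and assumed, not the actual Payman AI API: the `Task` and `Submission` types, the requirement check, and the payout logic are a minimal guess at how such a post-review-pay loop might be structured.

```python
from dataclasses import dataclass

# Hypothetical sketch of the workflow: an AI agent posts a task with
# minimum requirements, humans submit work, and the agent pays out only
# for submissions that pass its check. Names are illustrative only.

@dataclass
class Task:
    description: str
    requirements: list[str]   # minimum elements each submission must contain
    fee_per_item: float       # payout per accepted submission
    budget: float             # funds the client deposited with the agent

@dataclass
class Submission:
    worker: str
    content: str

def review_and_pay(task: Task, submissions: list[Submission]) -> dict[str, float]:
    """Accept submissions that contain every required element; pay each worker."""
    payouts: dict[str, float] = {}
    for sub in submissions:
        meets_requirements = all(req.lower() in sub.content.lower()
                                 for req in task.requirements)
        if meets_requirements and task.budget >= task.fee_per_item:
            payouts[sub.worker] = payouts.get(sub.worker, 0.0) + task.fee_per_item
            task.budget -= task.fee_per_item
    return payouts

task = Task("Collect 10 customer reviews",
            requirements=["rating", "store name"],
            fee_per_item=5.0, budget=50.0)
subs = [Submission("alice", "Rating: 5/5 at store name Cafe Han"),
        Submission("bob", "It was fine")]   # missing required elements
print(review_and_pay(task, subs))  # only alice is paid: {'alice': 5.0}
```

The point of the sketch is the division of labor: the human does the real-world work, while the agent handles specification, validation, and payment, the steps where trust has to be established.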
This method may still seem like a simple application of AI. It is noteworthy, however, because it offers a clue in the area of "trust," the most common and most stubborn bottleneck in AI adoption.
One of the biggest fears around adopting AI is that it could learn incorrect data patterns and perform biased tasks. In February, Google's generative AI model Gemini was criticized for producing images of Asian women and Black men when asked to depict German soldiers from 1943, and Google paused the feature. The result stemmed from an excessive emphasis on the recently stressed value of diversity rather than from any verification of historical fact.
However, as the Payman case suggests, including a stage where humans participate in and review the work can help catch errors and increase accountability across a project, which in turn builds trust in the service. In short, the shared understanding behind this case is that AI hires humans to complete tasks that exceed AI's own capabilities.
As this approach spreads, though, it is worth noting that workers will need to get used to different standards for demonstrating their abilities and experience.
First, when AI acts as the employer, the source of trust shifts to the accuracy of the algorithm and the reliability of the data. Because the level of trust depends on how the AI evaluates and selects people, humans applying to projects as workers may demand externally verifiable criteria: whether the relevant algorithms are transparent and fair, and whether the underlying data is accurate and unbiased.
In the traditional job market, trust forms through direct relationships between people, so testimonials and letters of recommendation have shaped hiring decisions. On AI-based hiring platforms, internal reputation systems become the main source of trust, which suggests that feedback and reviews left by past collaborators will play a far more decisive role than before. It also means the hiring environment is likely to see situations like the "star terror" (malicious one-star review bombing) that has seriously damaged self-employed merchants on Baedal Minjok, a Korean food-delivery platform.
To understand this more clearly, we can consider the issue of Job Titles.
In the employment environment, a Job Title is not simply a label for a job; it is an important symbol through which applicants demonstrate their value and capabilities. In AI-based hiring models especially, Job Titles can serve as a key indicator for judging an applicant's role, much as we are already familiar with resume screening at large companies being decided by keyword-based matching.
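The keyword-based screening mentioned above can be illustrated with a short sketch. The keywords and the pass threshold are hypothetical examples, not any company's actual criteria; the point is only that a title or resume passes or fails on literal keyword matches.

```python
# Minimal sketch of keyword-based document screening. The keyword list
# and the min_hits threshold are invented for illustration.

def screen_resume(resume_text: str, keywords: list[str], min_hits: int = 2) -> bool:
    """Pass the document review if enough required keywords appear."""
    text = resume_text.lower()
    hits = sum(1 for kw in keywords if kw.lower() in text)
    return hits >= min_hits

keywords = ["machine learning", "python", "ux design"]
print(screen_resume("Built machine learning pipelines in Python.", keywords))  # True
print(screen_resume("Managed a retail store.", keywords))                      # False
```

The sketch makes the article's concern concrete: a worker whose self-described Job Title does not contain the exact terms the screening system expects is simply filtered out, regardless of actual capability.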
In the mid-1990s, some human-focused researchers are said to have scratched their heads at the title "Understander" printed on their business cards. Given the clear role distinctions each industry now requires, such as AI researcher or UX designer, a Job Title should be defined through a shared understanding between an applicant's self-awareness and the market's needs, so that it conveys what field someone worked in and what role they played.
This means that when an AI-based employment environment arrives, it will raise questions about how well the Job Title an applicant defines for themselves maps onto the market and industry categories the AI judges by, and how the various related evaluation criteria can be standardized. Perhaps now is the time to think more deeply about what "trust" means in a changing job market.