AI is turning workers into ‘cyborgs,’ ‘centaurs,’ and ‘meat puppets’

Advances in artificial intelligence are reshaping workplaces in a wide range of ways. From the types of work that employees do to the safety of their environment, AI is transforming the labor market.

When it comes to workplace safety, for instance, technologies like AI-powered machine vision can identify risks early, including access by unauthorized people or unsafe use of equipment. These tools can also improve hiring, task design and training. However, their deployment requires careful consideration of employee privacy and agency, especially in remote work settings where home surveillance becomes a concern.

Companies must maintain transparency and clear guidelines about data collection and usage to balance safety enhancements with individual rights. When thoughtfully implemented, these technologies can create a mutually beneficial environment of increased safety and productivity.
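To make the idea concrete, here is a minimal sketch of how a machine-vision safety check might be wired up. The detection labels, confidence thresholds and alert wording are purely illustrative assumptions, not the workings of any particular vendor's system.

```python
# A deliberately simplified sketch of a machine-vision safety check.
# The Detection type, labels and thresholds are illustrative assumptions,
# not the API of any real product.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str        # e.g. "person", "hard_hat", "restricted_zone_entry"
    confidence: float


def frame_alerts(detections: list[Detection]) -> list[str]:
    """Turn one camera frame's detections into human-readable alerts."""
    people = [d for d in detections if d.label == "person" and d.confidence > 0.7]
    hats = [d for d in detections if d.label == "hard_hat" and d.confidence > 0.7]
    intrusions = [d for d in detections if d.label == "restricted_zone_entry"]

    alerts = []
    if len(hats) < len(people):
        alerts.append("possible worker without a hard hat")
    if intrusions:
        alerts.append("possible entry into a restricted area")
    return alerts
```

In practice, alerts like these would be reviewed by a human supervisor and logged under the kind of transparent data-collection rules described above.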

Co-pilots and meat puppets

Technology historically transforms jobs rather than outright eliminating them. For instance, word processors changed secretaries into personal assistants, and radiology AI enhances rather than replaces radiologists. Roles requiring specialized skills, nuanced judgement or real-time decision-making are less susceptible to full automation. However, as AI takes on more tasks, some people could become ‘meat puppets’, executing manual tasks under AI supervision, which deviates from the idealistic promise of AI freeing us for creative work.

Big Tech’s early adoption of AI has given it a competitive edge, leading to industry consolidation and new business models. In various sectors, humans are increasingly acting as conduits for AI – call centre agents follow machine-generated scripts and salespeople receive real-time advice from AI.

In healthcare, while roles like nursing are considered irreplaceable due to their emotional and tactile aspects, AI ‘co-pilots’ could handle tasks like documentation and diagnostics, reducing the cognitive load of non-essential tasks.

Cyborgs and centaurs

The Cyborg and Centaur models describe two distinct frameworks for human-AI collaboration, each with its own advantages and limitations. In the Cyborg model, AI is seamlessly integrated into the human body or workflow, becoming an extension of the individual – akin to a prosthetic limb or cochlear implant. This deep integration blurs the boundary between human and machine, sometimes even challenging our notions of what it means to be human.

The Centaur model, on the other hand, emphasizes a collaborative partnership between humans and AI, often outperforming either humans or AI working alone. The model takes its name from freestyle chess, where human-computer teams once beat both unaided grandmasters and standalone chess engines. It preserves the value of human insight, using it to augment the machine’s capabilities and creating something more than the sum of its parts. In this setup, the human remains in the loop, making strategic decisions and providing emotional or creative input, while the AI focuses on computation, data analysis or routine tasks. Here, the entities remain distinct, and their collaboration is clearly delineated. However, the rapid advancement of chess AI, culminating in systems like AlphaZero, has shifted this dynamic. These days, the prowess of AI in chess is such that the addition of human strategy might even detract from the AI’s performance.

In a business setting, the Centaur model promotes a collaborative partnership between AI and humans, each contributing their strengths to achieve common objectives. For instance, in data analysis, AI could process large datasets to identify patterns, while human analysts apply contextual understanding to make strategic decisions. In customer service, chatbots could manage routine queries, leaving complex, emotionally nuanced issues to human agents. Such divisions of labour optimize efficiency, while augmenting human capabilities rather than replacing them. Maintaining a clear delineation between human and AI roles also aids in accountability and ethical governance.
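As a rough illustration of that division of labour, the sketch below routes customer queries between a chatbot and a human agent. The intent labels, thresholds and sentiment scale are assumptions made for the example, not a recommended configuration.

```python
# Illustrative sketch of Centaur-style triage: the bot answers routine
# queries and hands anything uncertain or emotionally charged to a person.
# Intent labels, thresholds and the sentiment scale are assumptions.
from dataclasses import dataclass


@dataclass
class Query:
    text: str
    predicted_intent: str   # e.g. "reset_password", "billing_dispute"
    confidence: float       # classifier confidence, 0.0 to 1.0
    sentiment: float        # -1.0 (distressed) to 1.0 (positive)


ROUTINE_INTENTS = {"reset_password", "order_status", "opening_hours"}


def route(query: Query) -> str:
    """Decide whether the chatbot answers or a human agent takes over."""
    emotionally_charged = query.sentiment < -0.5
    uncertain = query.confidence < 0.8
    if emotionally_charged or uncertain or query.predicted_intent not in ROUTINE_INTENTS:
        return "human_agent"   # keep the human in the loop
    return "chatbot"           # routine query, safe to automate
```

Keeping the hand-off rule explicit, as here, is also what makes the accountability and governance boundary between human and AI easy to audit.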

Worker-led co-design

Worker-led co-design is an approach that involves employees in the development and refinement of algorithmic systems that will be used in their workplace. This participatory model allows workers to have a say in how these technologies are implemented, thereby ensuring that the systems are attuned to real-world needs and concerns. Co-design workshops can be organized where employees collaborate with designers and engineers to outline desired features and discuss potential pitfalls. Employees can share their expertise about the nuances of their job, flag ethical or practical concerns and help shape the algorithm’s rules or decision-making criteria. This can make the system more fair, transparent and aligned with workers’ needs, reducing the risk of adverse effects like unjust penalties or excessive surveillance. Moreover, involving employees in co-design can foster a sense of agency and ownership, potentially easing the integration of new technologies into the workplace.

C-suite AI

AI holds the potential to substantially augment executive functions by rapidly analyzing complex data related to market trends, competitor behavior and personnel management. For instance, a CEO could receive succinct, data-driven recommendations on acquisitions and partnerships from an AI adviser. However, AI currently can’t replace the human qualities essential for leadership, such as trustworthiness and the ability to inspire.

Additionally, the rise of AI in management can have social implications. The erosion of middle management roles due to automation could lead to identity crises, as the traditional understanding of ‘management’ undergoes a transformation.

In management consultancy, AI has the potential to disrupt the industry by providing data-backed strategic advice. This could even lend a perceived objectivity to tough decisions, like downsizing. However, deploying AI in such critical roles demands careful oversight to validate its recommendations and mitigate associated risks. Striking the right balance is crucial: underutilizing AI might mean missing out on transformative benefits, while overreliance could risk ethical and public relations pitfalls.

Excerpted and adapted from Taming the Machine by Nell Watson © 2024. Reproduced with permission from Kogan Page Ltd.
