Don’t be fooled by AI companies’ ‘ethics washing’

During the Big Tech antitrust hearings in July 2020, Maryland Congressman Jamie Raskin asked the CEOs of Amazon, Apple, Facebook, and Google, “Are any of your companies benefit corporations, and is that something you have considered doing?” After being met with silence, he concluded, “Okay, I take it the answer is no,” and moved on.

Times have changed significantly since then, and the benefit corporation legal structure is a hot topic among leading AI companies. Unlike traditional corporations that legally prioritize delivering profits to shareholders, this new type of company is ostensibly committed to considering broader public benefits in addition to shareholder interests.

The legal filings of AI companies portray the benefit structure as a crucial mechanism to balance the societal and environmental risks of artificial intelligence with the likely economic returns that come with the technology’s astounding growth prospects.

For instance, in its filing as a benefit corporation in Nevada, Elon Musk’s xAI stated the company’s purpose as creating “a material positive impact on society and the environment, taken as a whole.” Anthropic, another leading AI company, organized as a public benefit corporation in Delaware, with a stated purpose of “the responsible development and maintenance of advanced AI for the long-term benefit of humanity.” It has been reported that OpenAI, which recently closed a record round of investment at a $157 billion valuation, is organizing as a benefit corporation.

The term benefit corporation does sound positive, as do the accompanying noble pronouncements from AI firms. But will such structures actually provide the social and environmental safeguards their lofty statements of purpose promise? Or are they an example of “ethics washing,” in which companies use benefit corporation governance to obfuscate bad behavior and potentially avoid more rigorous governmental oversight?

On the one hand, we clearly need new ways to create societal safeguards as AI develops. Beyond safety and ethics concerns, the power- and water-hungry technology also carries significant environmental implications. One recent calculation estimates that using AI to write a single 100-word email consumes a bit more than a bottle of water and the same electricity as powering 14 LED light bulbs for about an hour.

Multiply that by the tens of millions of daily queries on today’s major artificial intelligence platforms, and AI becomes a prime culprit in derailing tech companies’ environmental promises. Google, a major player in the AI space, has committed to being net zero by 2030, but recently released a report showing its carbon emissions rose by 48% over the past five years. It also reported it had replenished only 18% of the water it consumed in 2023, a significant shortfall from its 120% replenishment goal for 2030.

Microsoft, despite also having pledged to be net zero by 2030, reported its carbon emissions grew by 30% in the past three years. Both of these companies, and many other leading technology firms, attribute their environmental backsliding to AI.

The benefit corporation structure does offer a number of advances over today’s dominant corporate forms, which legally require directors and senior managers to prioritize short-term profits. This matters because aligning corporate governance with public benefit not only aids long-term decision-making but also provides a degree of legal cover for pursuing a broader societal focus.

Further, benefit corporations are required to be accountable and transparent about their stated purpose, which allows the public and investors to better understand the specific steps they’re taking. Many businesses known for their sustainability practices, such as Patagonia, Ben & Jerry’s, and Warby Parker, have become benefit corporations. (This legal structure should not be confused with B Corporations, which are companies certified for their social and environmental performance by the nonprofit B Lab.)

But OpenAI’s decision to take significant investment before settling on its benefit corporation structure points to some of the drawbacks of voluntarily adopted governance provisions as a means of reform. First, lead investor Thrive Capital and other investors will have a significant say in what the corporate purpose is and how it’s enacted. If OpenAI really cared about public benefit, it would have defined the structure before seeking billions in capital at a massive valuation. But adopting the structure earlier could have undermined that high valuation, which supports Musk’s contention that OpenAI is a “for-maximum-profit AI” company, not one with an interest in public benefit.

While it’s essential to reform corporate governance standards, such moves by the biggest AI companies are yet more examples of the oft-used corporate playbook of adopting voluntary half measures to avoid governmental oversight and scrutiny.

In 2019, to much acclaim, the Business Roundtable of leading American companies declared corporations have a responsibility to deliver value for “stakeholders” like employees and communities, not just shareholders. Yet recent journalistic and academic investigations have shown this statement to be mostly greenwashing. Following the murder of George Floyd in 2020, many companies received widespread praise for their new diversity, equity, and inclusion initiatives, yet most have since backpedaled from their commitments. In short, corporations are well schooled in lulling the public into complacency by saying they’ll voluntarily do the right thing.

While adopting the benefit corporation structure is a step in the right direction, it is far from sufficient to protect society from the risks posed by AI and its significant environmental impacts. To truly ensure that AI companies prioritize the public good, we need mandatory regulatory oversight and legally binding ethical and environmental standards. Transparency and accountability must also be central, as well as legal liability for harms caused by AI systems.

Public involvement in governance is also crucial. We can’t abdicate responsible management to the choices made by Elon Musk, Sam Altman, and their peers. We need to ensure that decisions reflect societal well-being rather than the interests of investors. These measures, together with the benefit corporation structure, are essential to ensure AI development aligns with long-term societal welfare rather than short-term profit.
