Three Pathways to Mitigating AI Power Concentration

Matt Prewitt calls for collective bargaining as a powerful means of empowering workers in the AI economy.

On Jan 15, 2025, at Stiftung Mercator in Berlin, RadicalxChange Foundation, together with its partners the Global Solutions Initiative and the Sciences Po Technology and Global Affairs Innovation Hub, co-hosted a side event to the Paris AI Action Summit. We focused on the future of collective bargaining in the context of the AI revolution. The discussions helped advance our thinking in several important ways. Here are some quick initial reflections.

History suggests that following significant technological breakthroughs, individuals and communities often endure temporary but harmful losses of economic bargaining power. For example, real living standards declined in industrializing countries between the mid-18th and the early-to-mid-19th centuries, partly because individuals’ contributions to vital productive processes became more interchangeable, eroding their bargaining power. On a longer arc of history, new technology’s benefits usually accrue to whole societies, but such short-term social disruptions partly offset those benefits and frequently destabilize societies. It is therefore important to strategize toward achieving social equilibrium quickly, robustly, and without undermining the processes of technological development.

Power rebalancing after technological breakthroughs occurs through at least three pathways: technological, political, and social. These pathways are not mutually exclusive, possess unique benefits and drawbacks, and are more or less suitable in different societal and technological situations.

What might these modes of rebalancing look like in the nascent AI revolution? Which are likeliest to mitigate losses of bargaining power and/or uphold the integrity of individuals and communities? Below, we define, critique, and evaluate each of the three pathways in turn.

Technological pathway: Open source and technology sharing

Technological rebalancing occurs when the dissemination or cheapening of the relevant technology undermines the advantage of the technology’s owners (as in the personal computer and software revolutions). Possibly, open-source AI models will develop in such a way that they remain competitive with the top proprietary models. If so, the power that accrues to the top models’ owners may not be extreme or unprecedented.

An important worry with this approach, amply documented elsewhere, is AI safety. But what about the power concentration logic – can “open” AI forestall power concentration? While far from certain, it is possible that the capabilities of frontier and open models will not remain as close as they are now – that is, that the frontier models will pull away, especially as the availability of new general training data diminishes and the techniques for improving models through compute power continue to advance.

Political pathway: Public AI and regulation

Political rebalancing occurs when direct state interventions check the rights of businesses to exploit the new technology (as in the 18th century, when speech controls and intellectual property statutes limited the power of printing press owners). Possibly, the state or states with the best AI technology will both (a) remain democratic and (b) retain meaningful and public-interested power over the top models. If so, they may be able to redistribute profits and ownership enough to offset disruption, and regulate harmful misuses.

States that do not control the leading models are in a similar position to any other powerful agent in society. They have an incentive to reduce the extent to which the leading models achieve strong economic or intelligence dominance over them and their societies. If accountable to their citizens, they will try to secure broad-based benefits to their societies from AI, but their leverage to act as a counterpower is unclear. On the other hand, states that do control leading models are, for that very reason, likely to have difficulty remaining truly accountable to citizens.

If privately-controlled AI achieves extreme capability, it will likely accrue enough capital to thoroughly control politics, undermining democratic accountability. If state-controlled AI achieves such capabilities, the state will simply become an unaccountable power.

Regulations directly limiting AI or distributing its power in the public interest – a modern version of those that limited the power of the printing presses through speech restrictions and IP – are unlikely to enable these nations to act as a counterpower because of the international nature of the technology. However, intelligent application of state power and regulation may significantly strengthen the other two approaches: maximizing the competitiveness of open source models, and removing obstacles to the collective creation of unique, deep datasets that can anchor countervailing economic power.

Social pathway: Collective bargaining, information protection, and production

Social rebalancing occurs when social or labor organizations form a collective counterpower, achieving an economic foothold vis-à-vis the technology’s owners (as in the later part of the industrial revolution). Both more computing power and more high-quality data have the potential to unlock new frontier AI capabilities. Computing power is a fairly direct function of money, so this factor is likely to tend toward a concentration of AI’s power.

However, new data cannot always be straightforwardly purchased. If important unique datasets are produced and are not easily or practically available to the top AI model owners, the owners of those datasets have a unique chance of building, on non-frontier or open-source models, AI whose performance can compete, at least in certain domains, with the leading proprietary models. Thus, maximizing both “open” model provision and “closed” data collection might provide the best formula for giving leverage to organizers of AI counterpower. In this picture, trusted managers of significant, unique, collectively-produced datasets could act as tomorrow’s version of labor unions.

Specialized data, controlled by trusted intermediaries, is unlikely to be a critical ingredient to creating premier general AI models. However, it may allow its beneficiaries (from small collectives to sovereign states) to build non-frontier models that are maximally competitive with frontier models in the domains pertinent to the data. The deeper and more unique their data, the likelier this is to be true.

Synthesizing These Three Pathways Into a Productive Agenda

None of these approaches suffices by itself to insure against extreme power concentration. In concert, however, they reinforce each other, pointing toward three elements of a promising strategy:

  1. Seek to minimize the distance between open models and leading proprietary ones (within the bounds of safety).
  2. Avoid overreach with direct regulation of non-frontier AI. Instead, develop a regulatory program that makes it simple and minimally legally risky to set up “trusted data intermediaries” (on the scale of communities, industries, and countries) which can act as collective bargaining agents in the marketplace, and which are obligated to advocate actively for their constituents’ complex interests in a fiduciary-style manner.
  3. Encourage the production of unique datasets, managed by trusted intermediaries, which enable non-frontier AI to compete with or outperform frontier models in particular domains or industries. To have a meaningful role in the marketplace, trusted data intermediaries must do much more than simply steward local data. They must, instead, become a socio-political vector that unlocks the collection of exponentially more unique data than presently exists.

For a more detailed look, read the extended version of Matt’s article published on the RadicalxChange Blog.

Featured image: Yutong Liu / Better Images of AI / Joining the Table / CC-BY 4.0
