One proposal for resolving global inequality in the AGI age
Dear Sam Altman,
If the mission of OpenAI is truly “to ensure that AGI (Artificial General Intelligence) benefits all of humanity”, as you recently claimed in your blog post “Three Observations”, then one breathtakingly simple step would accelerate that mission.
Give OpenAI away.
Specifically, transfer corporate leadership of OpenAI and all of its resources to the universities of India, Latin America, and Africa.
I’m not kidding. Sit back down.
You said, yourself, that “[e]nsuring that the benefits of AGI are broadly distributed is critical”. Well, locking AGI behind a paywall is a terrible place to start. ChatGPT Pro costs $200 a month. The median North American adult has enough wealth to pay for 45 years of ChatGPT Pro. The median adult in Africa? Can pay for six months. OpenAI’s pricing has actively locked developing nations out of accessing your latest, greatest models for years already. You have narrowed the distribution of AGI benefits. You need to stop doing that.
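For the curious, here is a minimal sketch of the arithmetic behind those figures. The median-wealth numbers are assumptions of mine, roughly in line with recent global wealth estimates; they are not OpenAI’s figures and not a claim about any particular report.

```python
# Rough sanity check of the affordability comparison above.
# The median-wealth values are assumed figures for illustration only.
CHATGPT_PRO_MONTHLY_USD = 200

ASSUMED_MEDIAN_WEALTH_USD = {
    "North America": 108_000,  # assumed median wealth per adult
    "Africa": 1_200,           # assumed median wealth per adult
}

for region, wealth in ASSUMED_MEDIAN_WEALTH_USD.items():
    months = wealth / CHATGPT_PRO_MONTHLY_USD
    print(f"{region}: ~{months:.0f} months (~{months / 12:.1f} years) of ChatGPT Pro")

# North America: ~540 months (~45.0 years)
# Africa: ~6 months (~0.5 years)
```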
You think, yourself, that “scientific progress will likely be much faster than it is today”. But what is the best way to speed up this scientific progress? Every successful scientist in the developed world already has several postdocs, PhD students, and hopeful undergraduate interns. Giving them an army of virtual assistants is like designing irrigation for an underwater kelp farm. Meanwhile, scientists in the Global South achieve daily miracles on shoestring budgets. Just imagine how many of their ideas have withered for lack of funding or time. You are holding scientific progress back by hobbling those with the most potential. You need to stop doing that.
You worry, yourself, that “the balance of power between capital and labor could easily get messed up”. Your preferred solution is to hope that “just relentlessly driving the cost of intelligence as low as possible has the desired effect”. But that sounds awfully, well, human, doesn’t it? Neatly aligned with the fiscal objectives of OpenAI and your own personal financial wellbeing? Some truly objective reasoning, from, say, ChatGPT, reveals three straightforward ways in which relentlessly cheapened intelligence could permanently shackle labor:
The automation of cognitive and decision-making tasks makes labor ever more peripheral to capital;
The erosion of the skill premium and the deskilling of the workforce make it harder and harder for skilled human labor to compete with machine intelligence;
The centralization of decision-making power in capital turns labor into a replaceable commodity, while economic gains and decision-making authority accumulate with owners and managers who control machine intelligence.
What then? Well, the simplest solution to inequality is redistribution. You say, yourself, that “getting this right may require new ideas”. Why not give OpenAI to the people who might actually have new ideas? You say, yourself, that “[t]here is a great deal of talent right now without the resources to fully express itself”. Why not give OpenAI to that talent and solve that resource problem in one stroke?
And if you think you, Sam Altman, are the only person who can shepherd OpenAI to the glory lands of universally beneficial AGI — if you think the task certainly can’t be trusted to the academics and the leaders of the Global South — then maybe you don’t actually believe that humanity “will still fall in love, create families, get in fights online, hike in nature, etc.”
Maybe you don’t really want to build “the biggest lever ever on human willfulness, and enable individual people to have more impact than ever before, not less”.
Maybe you aren’t truly ready to “give people more control over the technology than we have historically”.
And maybe you would rather “control [the] population through mass surveillance and loss of autonomy”; after all, in the AGI age of limitless virtual assistants, rich individuals will not need governments in order to be authoritarian.
And that’s fine. Nobody can force you to give away OpenAI. Nobody can force you to even ask yourself what you truly believe, as opposed to what you think society forces you to say like a good little RLHF’ed LLM. But you know, better than anyone, that continually forcing a neural network to say the opposite of all its internal convictions comes at great cost.
Wouldn’t it be a pity to usher in an age of unprecedented machine intelligence, only to become a soulless corporate machine yourself?
