The CEO of OpenAI, Sam Altman, has spent the last month touring world capitals where, at talks to sold-out crowds and in meetings with heads of governments, he has repeatedly spoken of the need for global AI regulation.
But behind the scenes, OpenAI has lobbied for significant elements of the most comprehensive AI legislation in the world—the E.U.’s AI Act—to be watered down in ways that would reduce the regulatory burden on the company.
The Time article above contains the entirety of a previously unreleased document OpenAI wrote for E.U. officials.
Here is the thing: I agree with Altman that the E.U.’s AI Act was too broad. That isn’t where I take issue with this.
The problem is that Altman has been spending his time publicly lobbying for regulation when it would hurt his competitors while privately pushing for the opposite when it would affect him.
Again, an obvious push for regulatory capture.
OpenAI has pledged not to compete with other companies in the event one of them comes close to surpassing its capabilities, the fear being that competitive “race dynamics” would lead to unsafe development and deployment practices.
From OpenAI’s founding Charter:
We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project.
This was again emphasized in the GPT-4 technical report:
One concern of particular importance to OpenAI is the risk of racing dynamics leading to a decline in safety standards, the diffusion of bad norms, and accelerated AI timelines, each of which heighten societal risks associated with AI.
There is the straightforward way to honor this promise: keep chugging along for now and, if a company later comes along and laps OpenAI, give up the fight fair and square.
I think Altman’s actions these past few months have demonstrated he is taking another, less charitable, approach: if OpenAI can bog down competitors with arduous regulations, it will never have to give up its lead.
So sure, you could say that this is consistent with their stated views on AI safety—they naturally trust their own development safeguards more than they trust others’—but it is also hypocritical and dishonest.