What Does OpenAI’s Power Struggle Mean for China?

Witness how the conflict between rapid AI advancement and the quest for safe, human-aligned AI is not just a corporate affair but a crucial pivot shaping the trajectory of global technology and power. As the world watches this “House of Cards,” we are prompted to ask: Who will steer the course of AI, and how will it redefine our collective future?
November 22, 2023
Chinese journalist, Former Editor-in-Chief of Global Times

What’s happening at OpenAI? Simply put, it is friction between practical interests and idealism. Sam Altman, who was ousted by the board, supports effective accelerationism, prioritizing the advancement of artificial intelligence; the company’s Chief Scientist, Ilya Sutskever, focuses more on mitigating AI risks and advocates for the superalignment of AI. These are the essential driving and restraining forces in AI’s development. Now Altman has won the support of investors and almost all employees, giving him a significant upper hand, while Sutskever has switched sides.

After being removed, Altman enters the OpenAI office building with a visitor pass (Image source: Altman’s social media)

This is no longer just a quarrel and a game among a few young scientists. The interests of all the employees, of capital, and of America’s business and AI hegemony have come into play. The victory of effective accelerationism over superalignment is a clear and undeniable trend.

Superalignment means keeping artificial intelligence aligned with human safety, ensuring that as AI grows more powerful, it develops a kind of love for humanity, akin to a mother’s (the powerful AI’s) care for her baby (humanity). OpenAI was initially established as a non-profit organization, with the aim that its research would serve all of humanity and avoid monopolization by any single company. Elon Musk was one of its earliest investors. Later, in order to grow stronger, OpenAI became a capped-profit company. After releasing ChatGPT, its prospects opened up boundlessly, like a great river flowing into the sea. Once fully commercialized, it could become a top trillion-dollar publicly traded company. Can its original intentions still be upheld?

As it turned out, Sutskever and the board fired Altman, but the situation reversed within hours. Of OpenAI’s roughly 770 employees, over 700 signed a petition demanding Altman’s return, threatening otherwise to resign en masse and join him at Microsoft. This shows that, under immense financial incentives, superalignment is unlikely to hold in a commercial environment; its breach is all but inevitable.

Altman is not necessarily unconcerned about AI spinning out of control and harming humanity, but the effective accelerationism he promotes is a realistic choice that weighs various interests. Unless the U.S. government intervenes to prevent AI from advancing rapidly without adequate safety measures, superalignment cannot be upheld by market forces alone.

Sutskever, the staunchest supporter of superalignment, quickly switched sides and joined the petition for Altman’s return. This could be seen as a reluctant “defection” in the face of collective employee opposition and investor pressure, and it offers a glimpse of the fragility of idealism amid the tides of commerce. Yes, so many investors are waiting for OpenAI to fully commercialize and go public so they can make a fortune. Countless companies are waiting for OpenAI to create and open up large-scale applications that would enable their success across various fields.

Moreover, if OpenAI invests more resources in AI safety, would other private companies in the U.S. and elsewhere do the same? AI competition has become part of the competition among nations. Would the U.S. government want the overall pace of American companies’ AI iteration to slow down? Currently, one of the U.S.’s primary targets in its campaign against China is our AI development. I believe Washington would be pleased to see Altman’s approach prevail over Sutskever’s.

On October 17, local time, the U.S. Department of Commerce announced new restrictions on chip exports. U.S. Commerce Secretary Gina Raimondo stated that the purpose was to restrict China’s “access to advanced semiconductors that could drive breakthroughs in AI and precision computing.”

Watching this major personnel upheaval at OpenAI from a distance, and the drama within the drama, what comes to my mind is that OpenAI’s AI research has indeed progressed very quickly. Some say its AI has developed “consciousness.” This dispute over strategy shows that we have reached, or are close to, the critical point at which humanity must decide how much risk it is willing to take next.

I also think about how little discussion there is in Chinese internet chat groups and among our intellectual and research communities about whether China’s own AI research faces the same real risks described above. I believe this is not due to a lack of responsibility among Chinese intellectuals, but more likely because our AI research lags behind that of the U.S., so these risks have not yet created a sense of urgency in our intellectual community. As a result, everyone discusses OpenAI’s situation with relish, as if recounting someone else’s family treasures.

The AI era has only just begun, and its potential depth is hard to imagine. China must accelerate its AI development to deepen both our understanding of, and our ambitions in, AI’s deep space. This is not just about AI applications driving economic development; it is also about safeguarding basic national security and securing a significant stake in shaping the future of humanity.

Right now, we are watching a few young OpenAI entrepreneurs play out a “House of Cards,” as if the future direction of AI will be determined by their struggle. This is unacceptable. So, what is acceptable? Only by accelerating our own AI development can we gain understanding of, and a voice in, where this race is heading.

VIEWS BY

Chinese journalist, Former Editor-in-Chief of Global Times