
In the AI Race Against China, Public-Private Partnerships are Critical

The AI race is on, and America must forge public-private partnerships and reform AI research to stay ahead.



As Washington nervously observes the private sector to verify that its demands are being met on the Artificial Intelligence (AI) front, one company has attracted particular attention of late: Nvidia. CEO Jensen Huang recently assured Washington that Nvidia will comply with advanced AI chip export curbs after Commerce Secretary Gina Raimondo emphasized the importance of blocking China’s access to this technology. The fate of geopolitics in the Indo-Pacific is increasingly tied to whether companies such as Nvidia will uphold their promises in the tug of war between profit and federal regulation.

Parallel AI developments in Singapore signal that Huang’s assurances may hold water. On December 4, Singapore released its National AI Strategy 2.0, which will entail an enormous surge in personnel focusing on AI and further integration of smart technology into the public sector. This will likely be accompanied by a trickle-down effect to private companies that the Singaporean government has expressed interest in working with. Not coincidentally, Huang supported the idea of investing in an AI site in Singapore a few days later, arguing that the country holds the potential for extraordinary progress in AI. This suggests that Huang, alongside many other American executives, has recognized that other Asian countries present tantalizing opportunities for AI investment.

This should come as no surprise. In September, Singapore scored the highest by a significant margin in the 2023 Asia Pacific AI Readiness Index prepared by the U.S.-based software company Salesforce. Japan placed second, China third, and South Korea and Australia trailed just behind. The score was determined by a combination of AI infrastructure, workforce development, and effective integration into company workflows. This shift in investment away from Beijing was accelerated by the Biden administration's August executive order restricting private equity flows into China's AI sector. Earlier this month, Secretary of Defense Lloyd Austin announced AUKUS' plans to launch an Indo-Pacific Innovation Challenge with a focus on AI in addition to cyber and electronic warfare capabilities.

Although other Indo-Pacific countries have become more attractive markets for American companies, it would be naive to neglect the lead that China holds over its neighbors. It benefits from the highest number of AI patents, the second highest number of AI businesses behind the United States, and, perhaps most importantly, an almost nonexistent separation between the public and private sector that throws all guardrails aside. This allows both sectors to remain synchronized, whereas in the United States, private companies have no obligation to heed the government’s warnings.

The opposite trend seems to be taking place in the United States today, however. The American government is trying to interfere as little as possible in the private sector's fruitful AI developments while selectively cracking down on potential dangers. In doing so, it has adopted what can be called a "deploy and modify" approach. The start of this year saw a series of meetings between White House officials and the companies leading the AI race, which had voluntarily committed to safeguards for their own products. These commitments responded to mistakes and fears that emerged from the first deployment cycle. AI developers are aware of the huge risks their products present, yet they feel obligated to keep moving forward to stay ahead of their domestic and foreign competitors.


These White House meetings occurred after AI companies turned to the government. In the case of Nvidia, though, the government is turning to the private sector. Washington finds itself playing a different role based on the type of AI it is dealing with, but in both cases, the government is dependent on the choices made by private companies. The difficulty of building clear public-private partnerships is nothing new in world history, however.

For example, corporations have often felt insufficiently rewarded for restructuring their manufacturing and supply chains for uncertain ventures. This took place during the early years of the American Civil War. The private companies that controlled the telegraph industry were locked out of war council decisions despite their contributions to battlefield successes, and generals were left in the dark whenever those firms cut off communication lines emanating from Washington to make repairs. Despite strains that persisted throughout the war, the North eventually established a partnership between generals and civilian telegraphers more successfully than the South did. Similarly, efforts to forge a strong public-private partnership might be what keeps Nvidia from concluding that the Commerce Department's export controls on China will lead to unacceptable losses in profit.

Another historical tendency is that fear of monopolistic practices leads to squabbles among private companies that stymie fruitful cooperation between a preponderant firm and the government. In 1859, for instance, the English engineer William Armstrong enjoyed a close relationship with the government after handing his patents for one of the first breech-loading rifled guns to British officials. His competitors accused the government of neglecting rival designs, which led to the cancellation of its arrangements with the engineer. Britain scrambled to make up for lost time over a decade later, when it became clear that Armstrong's model was technically superior to the muzzle-loading guns of the time. In similar fashion, Google's service agreements, which prevent users from creating rival AI products and services, have come under scrutiny, though the existence of rivals such as OpenAI and Microsoft diminishes the probability that any one company will be singled out for monopolistic practices.

More often than not, the principal obstacle to cooperation is that governmental bureaucracy disincentivizes entrepreneurial, fast-moving companies from slowing down to abide by regulations. The sudden advent of large language models (LLMs) epitomizes this phenomenon. Government agencies have been rushing to integrate constantly evolving LLMs into the workplace despite the classification and data breach risks they present. AI-oriented companies, though, are hesitant to share their models with the government, which means the public sector must build analogues from the ground up.

The fundamental problem in today’s AI competition is that the public and private sector operate along different timelines. When a company forges a new innovation in AI, it is in its interest to push forward and further refine its discovery to outpace its competitors. The government, on the other hand, prefers pausing at every new stage of development to minimize the risk of tampering with unexplored technologies. This is no longer possible due to the unprecedented speed at which AI is being rolled out and the immense technical knowledge gap between politicians and software developers. The time it would take for a governmental oversight agency to sanction an AI project based on a comprehensive scrutiny of its proposed advantages and risks is unsustainable, so companies feel compelled to continue along their own path.


What is different in today’s era of technological innovation is that these very same companies are simultaneously turning toward the government as a result of the incredible uncertainty bred by the rise of AI. However, due to the technical knowledge gap between politicians and AI developers, the public sector must assume that the private sector will integrate recommendations regarding safety and disinformation without strict verification methods. For instance, the White House’s Office of Science and Technology Policy is red-teaming tools like LLMs and providing risk assessments to their creators, but in order not to fall behind Beijing, it is not mandating a complete pause on the advancement of these models.

In addition, President Biden’s October executive order on “Safe, Secure, and Trustworthy Artificial Intelligence” does not come with any enforcement mechanisms. Since AI is moving so quickly, regulation at an early stage can inhibit innovation. Chinese AI regulation from 2021-2023—which should be seriously studied for its strong articulation of performance standards and disclosure requirements—was likely introduced so quickly because it is in Beijing’s interest to shape early-stage technology in a way that aligns with its ideological goals.

The world is at a technological turning point, and this time, the keys are almost exclusively outside the U.S. government's reach. Private industry has shown a willingness to show officials the floor plan temporarily in order to seek advice, but both sides agree that it should not hand over the keys entirely, since this would slow progress. In China, the government holds a duplicate key that it may not know how to use, but the key nonetheless acts as leverage. Regulation will have to be imposed at some point, as it was in China, but the United States will have to do so in a way that addresses the dangers already appearing in generative AI and LLMs without setting itself too far back in the AI race. Washington must continue the dialogue between the public and private sectors, narrow the knowledge gap between the two, and collaborate with Indo-Pacific countries investing in AI.

This article was originally published by RealClearDefense and made available via RealClearWire.


Axel de Vernou is a junior at Yale University majoring in History and Global Affairs with a Certificate of Advanced Language Study in Russian. He is a Research Assistant at the Yorktown Institute.
