OpenAI on Monday published what it calls an "economic blueprint" for AI: a living document that lays out the policies the company believes it can build on with the US government and its allies.
The draft, which includes proposals from Chris Lehane, OpenAI’s vice president of global affairs, argues that the US must act to attract billions in funding for the chips, data, energy and talent needed to “win at AI.”
“Today, while some countries are bypassing AI and its economic potential,” Lehane wrote, “the U.S. government can pave the way for its AI industry to continue the country’s global leadership in innovation while protecting national security.”
OpenAI has repeatedly called on the US government to take more substantive action on AI and on the infrastructure needed to support the technology's development. The federal government has largely left the regulation of artificial intelligence to the states, a situation OpenAI describes in the draft as unsustainable.
In 2024 alone, state legislators introduced almost 700 AI-related bills, some of which conflict with one another. Texas's Responsible AI Governance Act, for example, imposes heavy liability requirements on developers of open source AI models.
OpenAI CEO Sam Altman has also criticized federal laws already on the books, such as the CHIPS Act, whose goal was to revitalize the US semiconductor industry by attracting domestic investment from the world's leading chipmakers. In a recent interview with Bloomberg, Altman said that the CHIPS Act "[was not] as effective as any of us had hoped" and that he thinks there is a "real opportunity" for the Trump administration to "do something much better as a follow-up."
"A thing I really deeply agree with [Trump] on is that it's wild how hard it has become to build things in the United States," Altman said in the interview. "Power plants, data centers, any of that. I understand how red tape piles up, but it generally does not help the country. It's especially unhelpful when you think about what needs to happen for the US to lead AI. And the US really needs to lead the way in AI."
To fuel the data centers needed to develop and run artificial intelligence, OpenAI's draft recommends "dramatically" increased federal spending on energy and data transmission, and substantial construction of "new energy sources," such as solar, wind and nuclear power. OpenAI — along with its AI rivals — has previously backed nuclear power projects, arguing that they are needed to meet the electricity demands of next-generation server farms.
Tech giants Meta and AWS have run into trouble with their nuclear efforts, albeit for reasons which have nothing to do with nuclear energy itself.
In the near term, OpenAI's draft proposes that the government "develop best practices" for deploying models in ways that protect against misuse, "streamline" the AI industry's engagement with national security agencies, and develop export controls that allow models to be shared with allies while "limit[ing]" their export to "adversary countries." In addition, the draft encourages the government to share certain national security-related information, such as threat briefings for the AI industry, with vendors, and to help vendors secure the resources they need to evaluate their models for risks.
"The federal government's frontier model safety and security approach should streamline requirements," the draft reads. "Responsible export of … models to our allies and partners will help them establish their own AI ecosystems, including their own communities of developers who innovate with AI and distribute its benefits, while also building AI on American technology, not technology funded by the Chinese Communist Party."
OpenAI already counts several US government departments as partners, and — if its blueprint gains traction among policymakers — more are on the cards. The company has contracts with the Pentagon for cybersecurity work and other related projects, and it has joined forces with defense startup Anduril to supply its AI technology for systems the US military uses to counter drone attacks.
In its draft, OpenAI calls for the creation of standards "recognized and respected" by other nations and international bodies on behalf of the American private sector, but the company stops short of endorsing mandatory rules or regulations. "[The government can create] a defined, voluntary pathway for companies that develop [AI] to work with the government to define model evaluations, test models and exchange information to support the companies' safeguards," the draft reads.
The Biden administration took a similar approach with its AI executive order, which sought to enact several voluntary, high-level AI safety and security standards. The executive order established the US AI Safety Institute (AISI), a federal government body that studies risks in AI systems and partners with companies, including OpenAI, to evaluate model safety. But Trump and his allies have promised to rescind Biden's executive order, putting its codification — and the AISI — at risk of being overturned.
The OpenAI draft also addresses copyright related to AI, a hot topic. The company claims that AI developers should be able to use “publicly available information,” including copyrighted content, to develop models.
OpenAI, along with many other AI companies, trains its models on public data from across the web. The company has licensing agreements in place with a number of platforms and publishers, and it offers limited ways for creators to "opt out" of having their content used for model development. But OpenAI has also said that it would be "impossible" to train AI models without using copyrighted materials, and a number of creators have sued the company for allegedly training on their works without permission.
"[O]ther actors, including developers in other countries, make no effort to respect or cooperate with intellectual property rights owners," the draft reads. "If the US and like-minded nations do not address this imbalance with sensible measures that help advance AI in the long term, the same content will continue to be used for AI training elsewhere, but to the benefit of other economies. [The government should ensure] that artificial intelligence has the ability to learn from universal, publicly available information, just as humans do, while protecting creators from unauthorized digital replicas."
It remains to be seen which parts of OpenAI's blueprint, if any, influence legislation. But the proposals are a signal that OpenAI intends to remain a key player in the race for a unified American AI policy.
In the first half of last year, OpenAI more than tripled its lobbying spending to $800,000, up from $260,000 in all of 2023. The company has also brought former government leaders into its executive ranks, including former Defense Department official Sasha Baker, former NSA chief Paul Nakasone, and Aaron Chatterji, formerly chief economist at the Commerce Department under President Joe Biden.
As it hires for and expands its global affairs division, OpenAI has become more vocal about which AI laws and rules it prefers, for example throwing its weight behind Senate bills that would establish a federal AI rulemaking authority and provide federal grants for AI research and development. The company has also opposed bills, most notably California's SB 1047, arguing that it would stifle AI innovation and crowd out talent.