The Trump administration is considering an executive order to establish a working group of government officials and tech industry executives to review new artificial intelligence (AI) models before their public release, according to a report by The New York Times. The proposal includes potential oversight by the National Security Agency (NSA), the White House Office of the National Cyber Director, and the director of national intelligence. However, the review would not necessarily block models from reaching the public.
This initiative marks a notable shift, coming after the administration's earlier dismantling of some Biden-era AI safety frameworks, and reflects growing recognition among federal officials of public concern over AI risks. Critics caution, however, that vetting models without enforcement power or genuine independence may not effectively address safety issues.
Industry Influence and Limited Oversight
The proposed working group would include representatives from AI companies, raising questions about its independence. Currently, most AI research expertise and computational resources are concentrated within private companies. Approximately 80 percent of global AI computing power is privately controlled, and nearly 70 percent of new AI PhDs enter the industry, limiting the capacity for truly independent evaluation.
Historically, universities led AI research, but today academics depend heavily on corporate partnerships for access to computing infrastructure, which influences research priorities. The administration’s focus on maintaining technological dominance over global competitors, especially China, has accelerated this consolidation within the private sector.
A federal review process staffed by intelligence officials and influenced by industry actors may perpetuate a system in which companies both develop AI systems and assess their own safety. This arrangement contrasts with other high-stakes sectors such as pharmaceuticals, where an independent regulator, the Food and Drug Administration, requires rigorous safety testing before approval.
Existing Government Efforts and Limitations
The Center for AI Standards and Innovation (CAISI), formerly the AI Safety Institute, was established to evaluate AI systems for the federal government. However, under the current administration, CAISI has reportedly been sidelined, functioning primarily through voluntary agreements with AI developers and concentrating on national security concerns such as cybersecurity and biosecurity rather than broader safety issues.
After news of the White House’s proposed working group emerged, CAISI announced expanded testing partnerships with Google DeepMind, Microsoft, and xAI, joining earlier collaborations with Anthropic and OpenAI. Even if its security-focused evaluations are rigorous, CAISI is not independent of industry influence, which limits its capacity for unbiased oversight.
Calls for Independent AI Safety Research
Advocates for AI safety recommend creating independent research institutions separate from both government intelligence communities and industry control. Organizations like METR, a nonprofit research group with access to some AI models and compute resources, conduct evaluations and publish findings to promote transparency. Yet, many independent researchers lack access to complete models or data, hindering their ability to verify safety claims or assess social impacts such as bias and misinformation risks.
An effective independent research network would include universities and nonprofits equipped with dedicated computing power and stable funding from diversified sources, including government and philanthropy. Such a framework could enable thorough pre-deployment and post-deployment testing and help develop standards and public benchmarks.
Why it matters
As AI technologies rapidly advance and integrate into society, independent and rigorous safety evaluations are critical to ensuring public trust and minimizing harm. The White House’s move toward federal AI model vetting is a step toward government involvement, but without clear enforcement mechanisms or independence, it may fall short of establishing effective safeguards. The reliance on companies to self-regulate AI safety raises concerns about conflicts of interest and insufficient oversight, especially given AI’s broad societal impact.
Sources
This article is based on reporting and publicly available information from The New York Times.
