AI Regulation

UK Online Safety Act Faces Test Amid 2026 Local Elections

The UK’s Online Safety Act came fully into force ahead of the May 7, 2026 local elections in England, Wales, and Scotland, presenting an early test of the country’s approach to regulating online harm and potential election interference. The act requires major tech companies such as Google, TikTok, and Meta to be more transparent about their content moderation and to address systemic risks on their platforms.

Ofcom, the British regulator tasked with enforcing the Online Safety Act, has expanded its staff by over 500 officials in recent years in order to meet the demands of the complex legislation. However, tensions are growing between Ofcom and UK lawmakers and advocates who feel the regulator’s enforcement efforts fall short of expectations.

Focus on Systemic Risks over Individual Content

The Online Safety Act was designed primarily to hold tech companies accountable for their overall trust and safety policies rather than giving regulators direct authority to remove specific pieces of harmful content. Ofcom’s mandate emphasizes mitigating “systemic risks” such as terrorist content, hate speech, and child sexual abuse material, without acting as a direct content censor for every offending post.

This distinction led to criticism from some politicians who want more immediate action against harmful online material, especially as misinformation and hate speech escalated in 2024 after violent incidents in northern England, where false accusations targeted Muslim and migrant communities. Despite calls from officials for stronger measures, Ofcom’s powers remain limited to overseeing company policies rather than intervening on individual cases.

Child Online Safety Measures and AI Challenges

In response to concerns over children’s exposure to harmful content, Ofcom has set requirements for platforms to conduct risk assessments and implement protections against pornography and other problematic material accessible to minors. The UK is also considering legislative amendments that could introduce social media bans for children, following examples from Australia and other jurisdictions.

Ofcom also faces challenges regulating emerging technologies like AI chatbots. In February 2026, it launched an investigation into Elon Musk’s Grok AI chatbot on the social media platform X after reports of sexualized deepfake images involving minors. Yet the regulator has clarified that AI chatbots may fall outside the Act’s scope if they do not meet its definition of interactive social services or do not generate prohibited content, limiting immediate regulatory action.

Election Period Online Safety Oversight

With the May local elections approaching, Ofcom sent a letter to social media companies reminding them of their duties to mitigate online harms during the voting period. However, the Act does not expressly classify misinformation or disinformation as harms requiring intervention, except where they overlap with categories covered by the legislation, such as content harmful to children or illegal content.

Consequently, enforcement during elections largely relies on companies’ internal controls rather than direct regulatory oversight. Observers anticipate that although problematic content may circulate online, the scale of interference or harassment during these elections will be limited.

Why it matters

The UK’s Online Safety Act represents one of the world’s most ambitious regulatory frameworks addressing online content risks. Its effectiveness is under close watch as the country navigates the balance between safeguarding free speech and protecting citizens from harm, especially in the politically sensitive context of elections. The May 2026 local elections serve as a practical indicator of Ofcom’s regulatory impact and the broader limitations faced by governments seeking to govern global social media platforms.


About the author

Giorgio Kajaia

Giorgio Kajaia is a writer at Goka World News covering world news, U.S. news, politics, business, climate, science, technology, health, security, and public-interest stories. He focuses on clear, factual, reader-first coverage drawn from credible reporting, official statements, publicly available information, and relevant source material.
