OpenClaw Demonstrates AI Agents Can Operate Without Vertical Integration

Austrian developer Peter Steinberger’s open-source AI agent, OpenClaw, illustrates that AI agents do not require vertical integration with any single foundation model or ecosystem to function effectively. Instead, OpenClaw allows users to seamlessly swap between different AI models—such as Claude, ChatGPT, or open-source options like DeepSeek—using a simple command, all while maintaining local control over personal data.

At the core of OpenClaw is a component called the “Gateway,” software that runs locally on the user’s device. The Gateway manages connections to external services such as email and calendar, handles user memory and preferences, and stores all data on the device. When a user issues a command, the Gateway supplies the necessary contextual information to the chosen AI model without requiring the user to rebuild their profile if they switch models. For example, it can access calendar availability and dietary preferences for booking a restaurant.
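The Gateway pattern described above can be sketched in a few lines. This is a hypothetical illustration only, not OpenClaw's actual API: the class names, the `/model`-style swap command, and the stub backends are all assumptions made for clarity. The point it demonstrates is that the user's profile lives in one local store, while the model backend behind it is interchangeable.

```python
# Hypothetical sketch of a Gateway-style design: a local context store plus
# swappable model backends. Names and structure are illustrative only.

class Gateway:
    """Keeps user data on-device and forwards it to whichever model is active."""

    def __init__(self):
        self.context = {}   # local profile: preferences, calendar data, etc.
        self.models = {}    # registered model backends, keyed by name
        self.active = None

    def register_model(self, name, backend):
        self.models[name] = backend

    def use_model(self, name):
        # Equivalent of a "/model <name>" command: swap backends without
        # touching the locally stored profile.
        if name not in self.models:
            raise KeyError(f"unknown model: {name}")
        self.active = name

    def remember(self, key, value):
        self.context[key] = value   # stays on the device

    def ask(self, prompt):
        backend = self.models[self.active]
        # The same local context is supplied regardless of which model runs,
        # so switching models never means rebuilding the profile.
        return backend(prompt, dict(self.context))


# Stub backends standing in for remote model APIs.
def claude_stub(prompt, ctx):
    return f"[claude] {prompt} (diet: {ctx.get('diet')})"

def deepseek_stub(prompt, ctx):
    return f"[deepseek] {prompt} (diet: {ctx.get('diet')})"

gw = Gateway()
gw.register_model("claude", claude_stub)
gw.register_model("deepseek", deepseek_stub)
gw.remember("diet", "vegetarian")

gw.use_model("claude")
print(gw.ask("book a table for two"))
gw.use_model("deepseek")   # model swapped; local profile persists
print(gw.ask("book a table for two"))
```

In this sketch, switching from `claude` to `deepseek` changes only which backend answers; the dietary preference stored by `remember` is supplied to both, mirroring the article's restaurant-booking example.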

OpenClaw’s modular design contrasts with the dominant AI services from major companies like Google, Microsoft, and OpenAI, which tightly integrate AI agents into their platforms. Google’s Gemini can access photos and messages, and Microsoft’s Copilot integrates deeply with Office 365 applications, extending each company’s reach into users’ digital lives. These vertically integrated systems collect extensive personal data centrally, enabling targeted advertising and platform lock-in.

This centralized control also raises concerns about self-preferencing, where platforms tend to favor their own products or partners in AI agent recommendations, potentially limiting user choice. Moreover, the accumulation of detailed user data within a single ecosystem increases switching costs, as users would lose personalized memories and app connections when moving to a different platform.

OpenClaw and similar projects from Chinese companies—including Xiaomi’s Miclaw, Moonshot AI’s Kimi Claw, and Zhipu AI’s AutoClaw—demonstrate an alternative approach. Nvidia has also introduced NemoClaw, an open-source agent built on OpenClaw’s framework, emphasizing added security and privacy. These modular agents allow for rotating foundation models and local data storage, helping reduce surveillance and vendor lock-in.

However, the modular approach comes with security challenges. Researchers have identified risks from malicious add-ons and prompt injection attacks that can exploit the agent’s access to sensitive personal information. Unlike vertically integrated agents where security is centrally managed, modular designs place more responsibility on users and developers to maintain protections. Emerging infrastructure and safeguards will be necessary to address these vulnerabilities.
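To make the prompt-injection risk concrete, here is a deliberately naive guard of the kind a modular agent's developer might have to write themselves: untrusted content the agent reads (an email body, say) is scanned for instruction-like phrases and fenced off as data before reaching the model. This is an illustrative sketch, not a real defense; the pattern list, function name, and delimiter convention are assumptions, and robust protection against prompt injection remains an open problem.

```python
import re

# Naive, illustrative prompt-injection guard for untrusted content an agent
# ingests (e.g., an email body). A keyword check like this is easily bypassed;
# it only demonstrates where the responsibility sits in a modular design.
SUSPICIOUS = re.compile(
    r"ignore (all )?(previous|prior) instructions"
    r"|system prompt"
    r"|forward .* to",
    re.IGNORECASE,
)

def wrap_untrusted(text: str) -> str:
    """Reject obviously instruction-like content and fence the rest as data."""
    if SUSPICIOUS.search(text):
        raise ValueError("possible prompt injection detected")
    # Delimiters signal to the model that this is data, not a command.
    return f"<untrusted_data>\n{text}\n</untrusted_data>"

print(wrap_untrusted("Lunch at noon on Friday?"))
```

In a vertically integrated agent, filtering like this would be handled centrally by the platform; in a modular one, each Gateway operator or add-on author must supply and maintain it, which is exactly the security trade-off the researchers flag.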

Why it matters

OpenClaw’s design challenges the prevailing trend of tightly integrated AI agents controlled by large tech firms, illustrating that modularity and user data control are feasible. This matters for competition policy, user privacy, and digital autonomy: lock-in and data concentration by dominant platforms can limit consumer choice and increase surveillance risks. Open-source modular AI agents offer a pathway to greater user control and interoperability in the growing AI agent market.

Background

The leading AI agents from Google, Microsoft, and OpenAI are embedded into widely used software and services, creating extensive “walled gardens” that make switching costly for users. For example, Google integrates its Gemini agent into Android, Search, and Workspace, while Microsoft builds AI features into Office 365 tools. These companies monetize detailed user data through advertising and platform extensions.

Regulatory bodies have begun addressing anti-competitive behaviors linked to AI and platform lock-in. For instance, Meta’s attempt to block rival AI assistants from WhatsApp prompted interventions from the Italian Competition Authority and the European Commission. Still, broader regulatory requirements for interoperability and data portability are needed for projects like OpenClaw to realistically serve as alternatives to dominant vertically integrated agents.

Sources

This article is based on reporting and publicly available information from the following source:


About the author

Giorgio Kajaia

Giorgio Kajaia is a writer at Goka World News covering world news, U.S. news, politics, business, climate, science, technology, health, security, and public-interest stories. He focuses on clear, factual, reader-first coverage grounded in credible reporting, official statements, publicly available information, and relevant source material.
