US News

AI Shopping Agents Raise Security and Trust Concerns Despite Growing Use

Artificial intelligence-based shopping agents are becoming more common, allowing consumers to automate purchases based on preferences and budgets. However, experts caution that relying on AI to make autonomous buying decisions exposes users to significant risks, such as costly errors and potential data theft.

AI agents, which can organize emails or shop for products, are still considered risky for handling financial transactions. Matt Kropp, an AI specialist at Boston Consulting Group, noted that while these systems can carry out complex purchases, safeguards are not yet strong enough to make entrusting them with credit card information wise.

Among major companies embracing AI commerce, American Express recently introduced new protections for customers who make purchases through approved AI agents, including identity verification and coverage against agent errors. Meanwhile, Amazon’s AI assistant “Rufus” tracks product prices and completes orders once a set price point is reached, and Walmart offers a conversational AI agent named “Sparky” to help shoppers find products and place orders.

Market research from Statista indicates about 25% of Americans aged 18 to 39 have tried AI tools for researching or shopping online. Despite this adoption, real-world experiences highlight ongoing challenges. For example, Sebastian Heyneman, a tech startup founder, instructed an AI agent to book a speaking engagement at the World Economic Forum in Davos, but the bot secured a $30,000 slot he could not afford, demonstrating how imprecise prompts can lead to costly mistakes.

Andrew Lee, founder of AI automation company Tasklet, which produces shopping-capable bots, stresses that while agentic AI can perform many consumer tasks, its use for autonomous shopping is not advisable at this stage. He explained the technology remains difficult to trust and recommended consumers retain control over financial decisions themselves.

Security concerns also persist. Bretton Auerbach, founder of a New York tech startup, warned that AI agents could be manipulated into divulging credit card information by malicious websites posing as legitimate ones, underscoring the vulnerability of outsourcing online payments to AI.

As AI commerce tools evolve, experts agree that enhanced guardrails and security measures are necessary before widespread autonomous shopping by AI agents becomes safe and dependable for consumers. Until then, caution is advised when giving AI unrestricted access to financial data and purchasing authority.

Why it matters

The increasing integration of AI agents in shopping introduces new convenience but also novel security and financial risks. Without robust safeguards, consumers face potential data breaches and unauthorized charges, raising questions about the readiness of AI commerce for mainstream adoption.

Background

AI agents can perform varied tasks autonomously, from managing communications to making selections online. Major retailers and financial companies are investing in AI tools to drive sales and improve customer experience, but experts emphasize that the technology’s rapid expansion outpaces safeguards needed for secure financial interactions. Recent incidents highlight the gap between AI capability and trustworthiness in commerce.


About the author

Giorgio Kajaia

Giorgio Kajaia is a writer at Goka World News covering world news, politics, business, climate, and public-interest stories. He focuses on clear, factual, and reader-first reporting based on credible reporting, official statements, and publicly available source material.
