The U.S. House Energy and Commerce Committee recently introduced the SECURE Data Act, a proposed federal privacy law led by House Republicans. The bill aims to create a uniform nationwide framework for consumer data protection, but experts warn it falls short of meaningful safeguards and could weaken existing state regulations.
Eric Null, director of the Privacy & Data Project at the Center for Democracy & Technology (CDT), described the bill as “a major step backward.” According to Null, the SECURE Data Act adopts an industry-friendly approach that limits key privacy protections found in stronger state laws such as those in California, Connecticut, and Virginia.
Key Shortcomings of the SECURE Data Act
The bill’s definition of sensitive data is significantly narrower than in many state laws. For example, it limits health data to diagnoses alone, excluding categories such as neural data and the contents of communications. It also drops state-law requirements for impact assessments and omits protections for data collected by devices such as smart TVs.
Another critical issue is the bill’s permissive data minimization standard. Under it, companies may continue their existing data collection and use so long as they disclose those practices in a privacy policy. In effect, this lets companies maintain current practices without meaningful limits.
The SECURE Data Act contains numerous exemptions that could allow companies to avoid compliance. These include allowances for data collected to provide individual services, data covered by contracts or terms of service, and data used for internal research and development — which is widely seen as exempting AI training datasets from regulation.
Context of U.S. Privacy Legislation Efforts
The SECURE Data Act follows two prior bipartisan bills that failed to advance: the 2022 American Data Privacy and Protection Act (ADPPA) and the 2024 American Privacy Rights Act (APRA). Both sought more comprehensive protections but never reached votes on the House floor.
The effort reflects ongoing challenges in passing federal privacy legislation, owing to divergent stakeholder interests and the complex, wide-ranging nature of data privacy issues across sectors. Meanwhile, several states have adopted stricter laws, creating a patchwork of protections.
Why It Matters
As artificial intelligence systems increasingly rely on vast data collections for training and decision-making affecting employment, credit, healthcare, and more, the absence of robust federal privacy standards raises concerns. Critics argue that without stronger protections, companies will continue data practices that consumers widely distrust, while attempts to streamline privacy rules through preemption could erode existing state safeguards.
Experts emphasize the need for precise data handling rules, transparency, and restrictions on manipulative consent practices known as “dark patterns.” The SECURE Data Act currently lacks such measures, potentially undermining consumer control over personal information in an era of expanding AI technology.