How US Health Insurance Platforms Exposed Citizenship and Race Data to Advertisers


Recent investigations have revealed that U.S. healthcare marketplace websites, including HealthCare.gov and several state-run exchanges, transmitted sensitive personal data—such as citizenship status and race—to advertising technology companies like Google, Meta (Facebook), and others. This occurred through the use of tracking pixels and scripts embedded on these sites, raising serious concerns about privacy violations and potential legal breaches. Below, we explore key questions about this data-sharing incident, its mechanisms, and its implications.

What exactly happened with U.S. healthcare marketplaces and user data?

In May 2026, reports emerged that federal and state health insurance marketplaces under the Affordable Care Act (ACA) were sending user data to ad tech firms. When individuals applied for or enrolled in health plans, third-party scripts from companies like Google, Meta, and others collected information such as IP addresses, browser details, and—most troublingly—citizenship status and race. This data was then used for advertising targeting, analytics, and profiling. The practice was discovered by security researchers who found tracking pixels on hundreds of pages across these platforms. While marketplaces aimed to improve user experience, the sharing of sensitive attributes violated both user trust and federal privacy guidelines.


Which ad tech giants received this data?

The primary recipients included Google (via its Analytics and Ads services), Meta (through Facebook Pixel), Amazon (via its advertising services), and several smaller ad intermediaries like Criteo and The Trade Desk. These firms received data packets that contained identifiers linked to users' application forms. For example, when a user selected their race on a dropdown menu, that choice was transmitted to Meta's servers—even if the user was not logged into Facebook. The data-sharing was not limited to a single company; multiple ad networks profited from the information, which could be used to build shadow profiles or serve targeted health insurance ads.

How did the marketplaces share this data technically?

Health insurance websites embedded third-party tracking scripts—commonly called "pixels"—directly into their pages. When a user visited an application page, these scripts executed in the background, sending HTTP requests to ad servers. The requests included event parameters that captured form fields, such as selected race or citizenship status. Additionally, browser cookies and fingerprinting techniques helped correlate data across sessions. The scripts often ran without explicit user consent, as they were part of the page's core functionality rather than optional analytics. This design choice meant that even a simple page load could leak sensitive data to dozens of advertisers, regardless of the user's privacy settings.
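To make the mechanism concrete, here is a minimal sketch of how a pixel-style script serializes form fields into an outbound request. The endpoint, parameter names, and field names are hypothetical (loosely modeled on how ad pixels pass event and custom-data parameters), not the actual payload format observed by investigators.

```python
from urllib.parse import urlencode

# Hypothetical ad-network endpoint -- for illustration only.
PIXEL_ENDPOINT = "https://tracker.example.com/tr"

def build_pixel_request(event: str, form_fields: dict, pixel_id: str) -> str:
    """Serialize selected form fields into a pixel-style GET request URL.

    Real trackers fire such a request (often as a 1x1 image fetch) on
    page loads or form interactions, attaching event parameters
    alongside cookies set on the ad network's domain.
    """
    params = {
        "id": pixel_id,   # site-specific tracker ID
        "ev": event,      # event name, e.g. "PageView" or "FormInteraction"
        # Custom-data parameters: this is where sensitive form values
        # (race, citizenship status) can leak if fields are captured wholesale.
        **{f"cd[{key}]": value for key, value in form_fields.items()},
    }
    return f"{PIXEL_ENDPOINT}?{urlencode(params)}"

url = build_pixel_request(
    "FormInteraction",
    {"race": "multiracial", "citizenship": "permanent_resident"},
    pixel_id="123456",
)
print(url)
```

Because the request is an ordinary image or script fetch, it bypasses the site's own backend entirely: the user's browser delivers the data straight to the ad server, along with its cookies.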

What specific types of data were leaked?

Investigators identified several categories of personally identifiable information (PII) that leaked: citizenship status (U.S. citizen, permanent resident, etc.), race and ethnicity (including multi-racial selections), income level (as a range used for subsidy calculations), age and gender, and application ID codes that could be linked back to specific individuals. In some cases, the data also revealed likely health conditions, inferred from visits to pages about pre-existing conditions or prescription drug coverage. Although financial information like Social Security numbers was not directly transmitted, the combination of the other attributes was enough to uniquely identify individuals. This level of granularity—especially race and citizenship—posed significant risks of discrimination in advertising, employment, or housing.
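The re-identification risk from combined attributes can be illustrated with a toy k-anonymity check: count how many records share each combination of quasi-identifiers, and treat any combination held by a single record as uniquely identifying. The records and values below are invented for illustration.

```python
from collections import Counter

# Invented toy records -- no names or SSNs, only the attribute
# categories reported to have leaked.
records = [
    {"citizenship": "citizen", "race": "white", "income": "30-40k", "age": 34, "gender": "F"},
    {"citizenship": "citizen", "race": "white", "income": "30-40k", "age": 34, "gender": "F"},
    {"citizenship": "lpr",     "race": "asian", "income": "40-50k", "age": 51, "gender": "F"},
    {"citizenship": "citizen", "race": "black", "income": "20-30k", "age": 28, "gender": "M"},
]

QUASI_IDENTIFIERS = ("citizenship", "race", "income", "age", "gender")

def equivalence_class_sizes(rows, keys):
    """Count how many records share each quasi-identifier combination.
    A class of size 1 means that combination singles out one person
    (k-anonymity with k = 1)."""
    return Counter(tuple(row[k] for k in keys) for row in rows)

sizes = equivalence_class_sizes(records, QUASI_IDENTIFIERS)
unique = sum(1 for n in sizes.values() if n == 1)
print(f"{unique} of {len(records)} records are uniquely identifiable")
```

Even in this tiny set, half the records are singled out by their attribute combination alone; in a real enrollment population, adding an application ID code makes every record unique.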

Why is this a violation of privacy laws?

The sharing conflicts with multiple U.S. regulations. Under the Health Insurance Portability and Accountability Act (HIPAA), covered entities must safeguard protected health information (PHI)—but healthcare marketplaces often fall outside the definition of a covered entity, creating a loophole. However, the Affordable Care Act includes privacy provisions that require marketplaces to limit data use to enrollment purposes. Furthermore, the Federal Trade Commission Act prohibits unfair or deceptive practices, and regulators have argued that failing to disclose tracking to users constitutes deception. Some states, like California, have laws requiring opt-in consent before sharing data with third parties for advertising. Legal experts also argue that transmitting citizenship status and race from federal sites violates the Privacy Act of 1974, as it constitutes unauthorized disclosure of personally identifiable information collected from individuals.


What has been the response from regulators and the affected companies?

Following the revelations, the U.S. Department of Health and Human Services (HHS) ordered immediate removal of all third-party tracking scripts from HealthCare.gov and requested state exchanges to do the same. HHS also opened an investigation into whether the data-sharing violated federal law. Several states—including California, New York, and Illinois—launched their own inquiries. Meanwhile, Meta and Google stated they would delete the improperly collected data and review their policies for handling healthcare-related information. However, privacy advocates argue that these steps are insufficient, as the data had already been leveraged for months. Lawsuits have been filed on behalf of affected users, claiming damages for breach of privacy and violation of state consumer protection laws.

What can consumers do to protect themselves?

Individuals who used HealthCare.gov or state marketplaces can take several steps: First, install privacy-focused browser extensions like uBlock Origin or Privacy Badger to block tracking scripts. Second, clear cookies and browser caches regularly, and use private browsing modes when applying for insurance. Third, submit requests to the marketplace to know what data was collected and ask for deletion under rights provided by laws like the California Consumer Privacy Act. Fourth, monitor for targeted ads or spam that might indicate misuse of data. Finally, contact advocacy groups like the Electronic Frontier Foundation for guidance on filing complaints with the FTC or state attorneys general. While these measures may not reverse past leaks, they reduce future exposure and put pressure on institutions to adopt stronger safeguards.
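The core idea behind blocking extensions like uBlock Origin is simple: refuse third-party requests whose domain appears on a known-tracker list. The sketch below illustrates that decision logic in simplified form; real extensions use full filter-list syntax (e.g. EasyPrivacy) with tens of thousands of rules, and the blocklist entries here are only examples.

```python
from urllib.parse import urlparse

# Tiny illustrative blocklist; real filter lists are far larger.
TRACKER_DOMAINS = {
    "facebook.com",
    "google-analytics.com",
    "doubleclick.net",
}

def should_block(request_url: str, page_url: str) -> bool:
    """Block a request if it is third-party (different host than the
    page) and its domain, or any parent domain, is on the blocklist."""
    req_host = urlparse(request_url).hostname or ""
    page_host = urlparse(page_url).hostname or ""
    if req_host == page_host:
        return False  # first-party requests are allowed
    # Check the host and every parent domain against the blocklist,
    # so "www.facebook.com" matches the entry "facebook.com".
    parts = req_host.split(".")
    candidates = {".".join(parts[i:]) for i in range(len(parts) - 1)}
    return bool(candidates & TRACKER_DOMAINS)

print(should_block("https://www.facebook.com/tr?id=1",
                   "https://www.healthcare.gov/apply"))       # blocked
print(should_block("https://www.healthcare.gov/static/app.js",
                   "https://www.healthcare.gov/apply"))       # allowed
```

Note the limitation this exposes: blocking works only when the tracker is a third-party request. If a site proxies tracking data through its own domain (first-party collection), client-side blockers cannot distinguish it from legitimate traffic.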

What does this mean for the future of healthcare data privacy?

The incident underscores the urgent need to close loopholes in HIPAA and other health privacy laws that leave marketplace websites unregulated when it comes to third-party tracking. Congress is now considering bills that would explicitly prohibit sharing health-related information with ad tech firms without user consent. Meanwhile, public trust in digital health services has eroded, which could deter people from enrolling in insurance through online platforms. The case also highlights the failure of self-regulation by the advertising industry, which has repeatedly collected sensitive data without adequate protections. Moving forward, we may see stricter enforcement by the FTC, more transparent consent mechanisms (like requiring users to actively opt in to tracking), and a shift toward privacy-first design in government websites—but only if public pressure continues.