European privacy advocates say the complex bidding process behind online behavioral advertising threatens consumers’ privacy. To place ads on webpages, companies widely broadcast what they know about a user visiting the page, including potentially sensitive data about the type of content that person watches, listens to, or reads.

New documents filed Monday with regulators in Poland, the UK, and Ireland claim that the way personal data is handled during the process of matching advertisements to ad slots does not comply with the European Union’s General Data Protection Regulation, a strict set of consumer privacy rules that went into effect in May.

The documents focus on the categories that key players in the ad-tech industry have adopted to instantly match advertisers with appropriate users or content. Although most categories are benign, like “Tesla motors” or “gadgets,” some are highly sensitive. For instance, the list of labels agreed upon by the Interactive Advertising Bureau, a trade group that establishes industry norms, includes categories like incest/abuse support, gay life, hate content, substance abuse, and AIDS/HIV.

The advocacy groups, led by Brave, a privacy-focused browser that competes with Google, allege that over time, those labels can be linked to users and incorporated into profiles, through cookies and other technology that track a user’s web browsing. “Labels about what you read and watch online stick to you for a long time,” says Johnny Ryan, chief policy officer at Brave, citing a December report from the New Economics Foundation, a British think tank, which estimated that ad-industry companies broadcast profiles of the typical UK internet user 164 times a day. The groups say those profiles are then passed around by players in the internet ad ecosystem without regard for GDPR’s strict privacy rules.

In an email to regulators, Ravi Naik, the lawyer representing Brave, said some guidelines from the IAB “suggest that personal identifiers ‘about the human user of the device,’ the ‘User’ attributes, are ‘strongly recommended’ to be involved in a bid request.” Naik also represents David Carroll, a professor of media design at The New School, in his high-profile quest to retrieve his data from Cambridge Analytica.
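The bid requests Naik describes are typically JSON documents following the IAB’s OpenRTB specification. As a rough sketch of the structure at issue, a request broadcast to prospective bidders might look something like the following; the field names follow OpenRTB 2.x, but the values, domain, identifiers, and page are all invented for illustration and are not drawn from the complaints.

```python
import json

# Illustrative OpenRTB-style bid request. Field names follow the IAB's
# OpenRTB 2.x spec; every value here is hypothetical.
bid_request = {
    "id": "auction-0001",  # the exchange's ID for this auction
    "imp": [{"id": "1", "banner": {"w": 300, "h": 250}}],  # the ad slot for sale
    "site": {
        "domain": "example-health-site.example",
        "page": "https://example-health-site.example/some-article",
        # IAB content-taxonomy codes describing the page. "IAB7" is the
        # top-level Health & Fitness label; the sensitive subcategories
        # cited in the complaints sit beneath top-level codes like this.
        "cat": ["IAB7"],
    },
    "device": {
        "ua": "Mozilla/5.0 ...",  # browser user agent
        "ifa": "6a1c8f2e-...",    # advertising/tracking identifier
        "geo": {"lat": 53.34, "lon": -6.26, "zip": "D02"},
    },
    # The "User" object Naik's email refers to: attributes about the human
    # using the device, which bidders can tie back to long-term profiles.
    "user": {"id": "exchange-user-123"},
}

# Copies of this payload go out to every company eligible to bid.
print(json.dumps(bid_request, indent=2))
```

The complaints’ core claim concerns exactly this combination: content category codes and user identifiers traveling together, broadcast to many companies at once.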

The IAB says the categories were established by the IAB Tech Lab, a partner organization, and developed in consultation with academics, ad measurement companies, and IAB members. In a blog post from November 2017, the lab said its goal in creating the current set of categories was to help content creators facilitate more “relevant, brand safe, and effective advertising,” in part to help with “audience analysis and segmentation.”

IAB’s list also includes labels for special needs kids, autism, incontinence, and infertility, as well as religious categories for Islam, Hinduism, and alternative religions. In an email, Dennis Buchheim, senior vice president and general manager of the IAB Tech Lab, said any legal obligation under GDPR falls on the ad-tech companies that use the categories, not on the categories themselves. The categories are “used by organizations, at their sole discretion, to categorize the type of content that a website contains,” Buchheim wrote.

Google maintains a similar list used to facilitate real-time bidding requests. It is part of Google’s Authorized Buyers program, the new name given last year to DoubleClick Ad Exchange, also known as AdX. Google’s categories do not include sexual abuse, but they do include categories for substance abuse, including steroids & performance-enhancing drugs and drug and alcohol treatment. Other categories include unwanted body & facial hair removal, sexually transmitted diseases, and male impotence, as well as right-wing and left-wing politics.

Publishers can opt out of Google’s list, called “Publisher Verticals,” which is generated automatically based on keywords on a webpage. The list is used for contextual ads, which are targeted based on the page a user is viewing rather than on personal information. It also helps advertisers avoid certain content, if, say, a liquor company did not want its ads appearing on pages about pregnancy, or if an advertiser wanted to steer clear of political sites altogether. In real-time bidding, Google uses these categories to give bidders an indication of the ad space up for auction.
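On the bidder’s side, this kind of avoidance can be as simple as checking a page’s category codes against an advertiser’s exclusion list before responding to an auction. A minimal sketch, reusing the illustrative request format above; the blocklist, codes, and function here are hypothetical, not Google’s actual API or implementation.

```python
# Hypothetical blocklist of content-category codes an advertiser wants to
# avoid, e.g. politics-related verticals. Codes are illustrative only.
BLOCKED_CATEGORIES = {"IAB11", "IAB11-4"}

def should_bid(bid_request: dict) -> bool:
    """Return False when the page's categories hit the advertiser's blocklist."""
    page_categories = set(bid_request.get("site", {}).get("cat", []))
    return page_categories.isdisjoint(BLOCKED_CATEGORIES)

# Example: a request for a page labeled with a blocked political code
# is simply never bid on.
request = {"site": {"cat": ["IAB11-4"]}}
assert should_bid(request) is False
```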

In a statement to WIRED, a spokesperson for Google wrote, “We have strict policies that prohibit advertisers on our platforms from targeting individuals on the basis of sensitive categories such as race, sexual orientation, health conditions, pregnancy status, etc. If we found ads on any of our platforms that were violating our policies and attempting to use sensitive interest categories to target ads to users, we would take immediate action.”

Brave initially filed complaints about the online advertising system in the UK and Ireland in September, alleging that the process could expose a user’s location and tracking identifiers, which can be used to build long-term profiles. The complaint said these profiles can also be combined with offline data, such as a user’s income bracket, social media influence, gender, political leaning, and sexual orientation.

To demonstrate what bid requests contain, Ryan pointed to a sample bid request on Google’s developer blog, which showed a person’s latitude and longitude, zip code, device details, and tracking ID.

Ryan says the categories illustrate the human dimension of behavioral targeting, a process so pervasive and opaque that it can seem abstract. The complaint in Poland, filed by Panoptykon Foundation, a Polish privacy-focused nonprofit, incorporates both the earlier allegations made in the UK and Ireland and the new content-category claims.

Last week, France’s privacy watchdog fined Google $57 million for violating GDPR, finding that the company had not properly obtained users’ consent for personalized advertising.