
A privacy check-in with noted privacy expert and lawyer Jessica Lee.
As privacy regulations evolve and enforcement deepens, AdMonsters hears the same questions and concerns from ad ops professionals we meet at conferences, in working groups and across our community. How much liability do publishers carry for downstream vendor behavior? Are probabilistic age estimates enough? What does “strictly necessary” really mean for measurement?
To help unpack the regulatory reality facing ad ops teams today, we brought those questions to Jessica Lee, chief privacy and security partner and chair of Privacy, Security & Data Innovations at Loeb & Loeb LLP.
AdMonsters: Many state privacy laws now have exceptions for “strictly necessary” data use. In your view, do things like ad measurement, frequency capping and cross-device deduplication qualify as strictly necessary under current legal interpretations?
JL: That’s the million-dollar question, and one we don’t yet have a full answer to. “Strictly necessary” language typically comes into play when we’re talking about sensitive data. For nonsensitive data, the laws rely more on purpose limitation and data minimization concepts: essentially, the idea that data should only be used for what it was collected for or what’s required to deliver the service.
So if the service is ad-supported content and the user is getting that content for free, then a certain amount of data has to be collected to support that ad experience. The gray area is how much data is “too much.” That line hasn’t been clearly drawn yet.
That said, measurement and frequency capping are generally seen as lower risk and necessary from a business operations standpoint. So, while the legal landscape isn’t entirely settled, those uses are less likely to be prioritized in enforcement, at least for now.
AdMonsters: Most publishers with general-audience websites try to estimate users’ ages instead of actually verifying them. What do you think about using probability models to gauge how old someone is, especially when ads are bought and sold programmatically?
JL: That’s a question that’s somewhat in flux, in part because some of the laws that require age verification are being challenged. Sites with general audiences are getting swept into the conversation about kids and teen data in two ways.
First, these general audience sites are now subject to laws that cover the 13-17 age range, which wasn’t the case before. We’re outside of the COPPA “directed to children” standard now. And some of these laws have moved the knowledge standard, so it’s no longer just “actual knowledge” but “willful disregard” of whether there are kids or teens on your site.
Then there are state laws that say if publishers know a visitor is under 16, they can’t sell or share their data without consent. So if the legal standard is “willful disregard” or “knowledge,” then probabilistic modeling could serve as a backstop, especially for publishers who don’t want to collect more personal data than necessary, like birthdates.
A lot of publishers don’t collect age data by default, so using inference models could actually reduce the need for more intrusive data collection. It’s not without controversy, even among kids’ advocates, but I do think probabilistic age gating is going to be one solution we see more of in the future.
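For ad ops readers who want to picture that backstop, here is a minimal sketch of gating data sale on a probabilistic age estimate. The threshold, interface names and fail-closed behavior are illustrative assumptions, not any particular vendor’s API or a statement of what the law requires.

```typescript
// Hypothetical sketch: using a probabilistic age estimate as a
// "knowledge" backstop. Model output, threshold, and field names
// are illustrative assumptions.

interface AgeEstimate {
  probUnder16: number; // model's estimated probability the visitor is under 16
}

interface AdRequestFlags {
  restrictSaleOrShare: boolean; // suppress "sell/share" downstream
  requireConsentPrompt: boolean;
}

// Fail closed: if the model sees a meaningful chance the visitor is
// under 16, restrict sale/share until consent is obtained. The 0.3
// cutoff is an arbitrary example, not a legal threshold.
const UNDER_16_THRESHOLD = 0.3;

function flagsForVisitor(estimate: AgeEstimate): AdRequestFlags {
  const likelyUnder16 = estimate.probUnder16 >= UNDER_16_THRESHOLD;
  return {
    restrictSaleOrShare: likelyUnder16,
    requireConsentPrompt: likelyUnder16,
  };
}
```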
AdMonsters: Many publishers pass opt-out signals via frameworks like TCF or US Privacy, but enforcement down the chain is inconsistent. If a vendor fails to honor an opt-out, how much legal exposure does the publisher face?
JL: In most state privacy laws, publishers are protected by a safe harbor—as long as they don’t know, and don’t have reason to know, that a downstream partner is ignoring the signal. That said, regulators are increasingly expecting a baseline level of accountability. That could mean periodic audits or other efforts to confirm that vendors are honoring consent signals. It’s not about verifying every partner in the chain, but publishers should be able to demonstrate they’ve taken reasonable steps to assess compliance and mitigate risk.
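One way to operationalize that kind of periodic audit is a spot-check over logged outbound bid requests. The log shape and helper below are assumptions made for illustration; the one concrete detail is that the IAB US Privacy string signals the opt-out of sale in its third character.

```typescript
// Illustrative audit: flag logged bid requests that went out for
// opted-out users without carrying the opt-out signal. The log
// structure is an assumption; the us_privacy format follows the
// IAB US Privacy spec (third character "Y" = opted out of sale).

interface LoggedBidRequest {
  userId: string;
  usPrivacy?: string; // e.g., "1YYN"
}

function findUnhonoredOptOuts(
  optedOutUsers: Set<string>,
  requests: LoggedBidRequest[],
): LoggedBidRequest[] {
  return requests.filter((req) => {
    if (!optedOutUsers.has(req.userId)) return false;
    const optOutSignaled = req.usPrivacy?.charAt(2) === "Y";
    return !optOutSignaled; // keep only requests missing the signal
  });
}
```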
AdMonsters: Do you see the IAB’s TCF and US Privacy frameworks becoming de facto national standards for consent compliance?
JL: I do. Not that they’ll necessarily be the only standard, but those strings are already widely adopted. If you’re making ad space available and using the OpenRTB protocol, it makes sense to use the IAB string and to work within that infrastructure.
From a technical solutions point of view, that’s the best, most seamless way to have these signals go back and forth in a chain that provides some level of auditability, as opposed to everyone using their own string or CMP that might not be interoperable. I think the strings offer a lot of opportunity for the industry to make this a lighter lift when sending consent signals downstream.
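For reference, here is roughly where those IAB strings travel in an OpenRTB bid request. Field placement varies by OpenRTB version and partner integration, so treat this as a sketch rather than a spec; all values are dummies.

```typescript
// Sketch of common IAB consent-signal placement in an OpenRTB bid
// request. GDPR and US signals are shown together only to illustrate
// placement; a real request carries the signals relevant to the user.

const bidRequest = {
  id: "example-request-id",
  regs: {
    ext: {
      gdpr: 1,            // GDPR-applies flag (OpenRTB 2.5 extension convention)
      us_privacy: "1YNN", // US Privacy (CCPA) string
    },
    gpp: "DBABM~dummy-gpp-payload", // GPP string (OpenRTB 2.6 attribute; dummy value)
    gpp_sid: [7],                   // GPP section ID(s) in effect (7 = US national)
  },
  user: {
    ext: {
      consent: "CPdummyTCstringAAAA", // TCF v2 TC string (dummy)
    },
  },
};
```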
AdMonsters: Got it. So, switching to AI tools, as ad ops teams use AI to create lookalike audiences or optimize creative delivery, what privacy rules apply? Do laws like California’s or Colorado’s treat these tools as automated decision-making?
JL: If personal data is involved, then yes, the existing state privacy laws kick in. Publishers need to assess whether the activity falls under profiling or automated decision-making provisions. AI-specific laws tend to focus on high-risk uses, such as targeting vulnerable populations with harmful content. That said, even if you’re not crossing that line, privacy regulations still apply and may require disclosures or opt-out mechanisms, depending on how the tools are used.
AdMonsters: New York’s Algorithmic Pricing Disclosure Act requires companies to be transparent about how they use algorithms to set consumer prices. But publishers also use algorithmic tools—for things like dynamic paywalls, auction floors and real-time ad personalization. Do those publisher applications fall under this law, too?
JL: No, they shouldn’t. One of the main requirements of the law is that companies disclose to consumers when a price has been set using personal data and algorithms. This is really meant to be consumer-facing, to give people visibility when they’re being shown a price that’s personalized using their data. It’s not meant to apply to business-to-business use cases, such as programmatic advertising or setting auction floors. Even though consumer data is involved, those kinds of behind-the-scenes tools aren’t what this law is targeting.
AdMonsters: Based on recent enforcement actions, what types of publisher practices are most likely to draw regulatory scrutiny today?
JL: We’ve moved past the first wave of enforcement, which focused on obvious gaps like missing “do not sell” links or outdated privacy policies. Now, regulators are digging into whether consumer choices are actually being honored. Are opt-outs working across browsers and devices? Are consent platforms properly configured?
Some of the recent California cases have looked closely at the technical implementation of opt-out signals and found that they weren’t functioning as expected. This is where it gets tricky—publishers are being held to legal expectations that, in some cases, outpace current technology. When users are anonymous and spread across multiple devices, enforcing a persistent opt-out becomes a serious challenge. That’s why regulators are increasingly asking not just what you’ve implemented but how it works and whether you can defend it.
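As one concrete example of the technical implementation regulators have been probing, here is a minimal sketch of honoring the Global Privacy Control (GPC) signal, which featured in recent California enforcement. The handler shape is an assumption; the Sec-GPC request header and navigator.globalPrivacyControl property come from the GPC specification.

```typescript
// Minimal sketch of honoring GPC server-side. The function shapes are
// assumptions; "Sec-GPC: 1" is the header defined by the GPC spec.

function userHasOptedOut(headers: Headers, storedOptOut: boolean): boolean {
  // Honor either a stored, account-level opt-out or the browser signal.
  const gpcSignaled = headers.get("Sec-GPC") === "1";
  return storedOptOut || gpcSignaled;
}

// Example: gate the "sale/share" path on the combined signal.
function buildAdRequestFlags(headers: Headers, storedOptOut: boolean) {
  return {
    allowSaleOrShare: !userHasOptedOut(headers, storedOptOut),
  };
}
```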
AdMonsters: Any parting thoughts on issues ad ops teams need to think about in the months ahead?
JL: There are three key areas, or pain points, that ad ops teams need to work through. First is making sure opt-out signals are passed and honored correctly. Second is ensuring that your technical tools, such as your consent platforms, actually function as expected. And third is staying aware of the elevated regulatory attention around sensitive data such as race, ethnicity and health.
Even if ad ops teams aren’t leading policy decisions, they’re on the front lines of implementation. The choices a company makes around data use must be defensible, which means being able to explain clearly what was done, why it was done and how it aligns with the law. The ability to document and operationalize compliance is going to be a key part of the job going forward.