Last week I was in Atlanta attending OneTrust’s Trustweek conference. At the same time, online, Google was hosting Google Marketing Live, which I caught on demand after the Trustweek sessions ended. Both conferences carried the same message: we’re shifting to a trust-based market, and consent, purpose and transparency will be critical going forward.
If I had any doubts about the above (I didn’t), they would have easily been laid to rest by the announcement of the CPRA Draft Regulations, which were also released last week. The draft regulations, if adopted, would turn a number of the things discussed at the conferences into legal requirements for businesses interacting with residents of California.
For those not keeping up, we’re rapidly entering a space where regulation and policy define a company’s data collection practices, how it may activate on that data, and what the punishment for failing to comply may be. How well a company adheres to these requirements will increasingly determine consumer trust, and it may begin to shift the market toward more trustworthy and transparent companies, as businesses that aren’t compliant start to lose out on contracts thanks to the due diligence clauses appearing in regulation such as the CPRA.
Let’s take a look at the sheer number of changes underway and how this rate of change can cause unforeseen side effects between unrelated systems over time.
The Shifting Privacy Quagmire
I don’t think I would be far off if I said some brands felt like they were spending enormous energy and effort on privacy simply to keep above water and stay mostly compliant. For those who may be unaware, a brief review of last year, this year and next year may prove helpful.
In 2021, we saw Apple begin to enforce data collection limits on the App Store, introducing App Tracking Transparency and the App Privacy Report. We saw the arrival of Mail Privacy Protection and the shielding of IP addresses from advertisers. The states of Virginia and Colorado passed data privacy laws, and we finished out the year with multiple companies losing billions of dollars to the changing privacy restrictions.
In 2022, we’ve seen two more states pass data privacy legislation (Connecticut and Utah). We’ve seen Android begin its own consent requirements. We’ve seen Google release EU-specific controls and require consent under its EU User Consent Policy, and we’re just a few weeks away from Apple requiring apps to adhere to its account deletion policy. We’re not even to June yet (at the time of writing), and I am sure more will be announced over the next seven months.
Then next year, in 2023, 5 separate state data privacy laws come into enforcement, with differing requirements, timelines for response, UX requirements for consent, and enforcement mechanics. This will spur a number of companies to take privacy seriously and to integrate consent management for the first time.
So for those keeping track, brands have had to (or will soon have to):
- Add Data Privacy Disclosures to the app download page on both the Apple App Store and Google Play Store
- Prompt for consent in a number of scenarios for apps distributed by the Apple App Store and Google Play Store
- Change which SDKs fire or collect data based on consent preferences for Apps in the Apple App Store and Google Play Store
- Evaluate Email Marketing Programs and related KPIs, then potentially retool what ‘success’ looks like
- Ensure EU residents are prompted for consent prior to sending their data to any Google Service
- Ensure that users can delete their accounts from within the app on the Apple App Store.
- Prepare to comply with 5 different state laws with varying requirements, enforcement mechanics and fine structures, which for many involves building out the privacy infrastructure.
Now, with the regulations, I feel this is a natural evolution of the changes we’ve been seeing over the years. Tech companies have been attempting to be proactive in an effort to stave off regulatory enforcement ever since the release of Europe’s General Data Protection Regulation. Companies that fail to respect the shifting requirements and narrative are becoming subject to massive fines, and we’re now seeing increased regulation across the globe. Compliance and transparency will be critical for brands moving forward as regulators finally catch up to the previous decade of technical advancement.
In effect, these changes mean that companies may need to build out a privacy program that works with marketing, legal and engineering to ensure everything aligns properly and won’t land the company in hot water if a regulator or platform opens an investigation. I think a well-functioning privacy department will be critical to most domestic (US) industry within 2-3 years, particularly for companies that interact directly with consumers.
However, if we zoom out from the specific requirements, the core mechanic running through all of the changes listed above is putting more control in the hands of the user over how their data is collected and used. Having the structure to do this is now table stakes in many parts of the world, and it can be very expensive to bolt onto an existing product after development. Leaders should plan accordingly.
Many of the laws require a user to consent before (or be able to opt out after) a business uses their personal data for non-service activities. This is normally done via a Consent Management Platform, in which users consent to specific data being used for specific purposes, and that’s where I think we may run into problems: some of the laws have, we’ll say, “potentially interesting effects” when combined with current privacy technical efforts.
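To make the purpose-based model concrete, here is a minimal sketch of the kind of purpose-keyed consent record a CMP typically maintains. The field names are illustrative assumptions, not any specific vendor’s schema.

```typescript
// Illustrative shape of a purpose-keyed consent record; field names are
// assumptions for this post, not a real vendor schema.
interface ConsentRecord {
  visitorId: string;                 // whatever identifier the site or CMP uses
  purposes: Record<string, boolean>; // e.g. { analytics: true, advertising: false }
  optedOutOfSaleOrSharing: boolean;  // the CPRA-relevant opt-out signal
  recordedAt: string;                // ISO timestamp of when the choice was made
}

// Example: a visitor who allowed analytics, declined advertising,
// and opted out of the sale or sharing of their personal information.
const example: ConsentRecord = {
  visitorId: "anon-1234",
  purposes: { analytics: true, advertising: false },
  optedOutOfSaleOrSharing: true,
  recordedAt: new Date().toISOString(),
};
```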
Let’s take a brief look at how we arrived here.
The Rise of Third Party Consent Management Platforms
When privacy regulation or policy arrives, companies are often faced with a choice: build a consent management system, or outsource it. For many companies, employing a third-party vendor for consent frees them up to focus on core efforts and lets a specialist handle all the edge cases around consent (assuming the integration is done properly). So, as you can imagine, this is a very popular choice.
This has been clearly advantageous to the makers of consent management platforms, who have seen immense success as companies en masse were required to come up with a consent management solution in order to comply with regulation while supporting their marketing and analytics activities.
Often these solutions are JavaScript driven and, via client-side cookies or other mechanics, integrate with a Tag Management System, which in turn controls the data collection activities of the marketing and analytics tags inside the tag container.
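As a rough illustration of that pattern, here is a minimal, vendor-neutral sketch of a consent cookie gating which tags load. The cookie name, category keys and loadTag helper are assumptions for illustration, not any specific CMP or tag manager’s API.

```typescript
// Minimal sketch of the client-side pattern described above.
type ConsentState = { analytics: boolean; marketing: boolean };

const CONSENT_COOKIE = "site_consent"; // hypothetical first-party preference cookie

// Persist the visitor's choices as a script-written cookie (what many CMPs do).
function saveConsent(state: ConsentState): void {
  const expires = new Date(Date.now() + 365 * 24 * 60 * 60 * 1000); // ~12 months
  document.cookie =
    `${CONSENT_COOKIE}=${encodeURIComponent(JSON.stringify(state))}; ` +
    `expires=${expires.toUTCString()}; path=/; SameSite=Lax`;
}

// Read the stored choices back, if the cookie still exists.
function readConsent(): ConsentState | null {
  const match = document.cookie.match(new RegExp(`${CONSENT_COOKIE}=([^;]+)`));
  return match ? (JSON.parse(decodeURIComponent(match[1])) as ConsentState) : null;
}

// The tag-management side: only inject a vendor script if its category is consented.
function loadTag(src: string): void {
  const el = document.createElement("script");
  el.src = src;
  el.async = true;
  document.head.appendChild(el);
}

const consent = readConsent();
if (consent === null) {
  // No stored preference: show the consent banner instead of firing tags.
} else {
  if (consent.analytics) loadTag("https://example.com/analytics.js");
  if (consent.marketing) loadTag("https://example.com/ads.js");
}
```

The vendors each have their own APIs for this, but the underlying mechanic, a script-written preference cookie consulted before tags fire, is the part that matters for what follows.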
This approach satisfied the requirements of many regulations and allowed companies to outsource the majority of consent management development to these vendors.
The Impact of Intelligent Tracking Prevention
I’ve written a lot about Intelligent Tracking Prevention, and, as with most JavaScript-based solutions, it adversely affected JavaScript-based Consent Management Platforms: the preference cookies they use to store consent selections are systematically expired by the browser after 7 days of inactivity in most cases, and after 24 hours in some marketing-related cases.
This is annoying for consumers, because they are re-prompted for consent the next time they return (assuming enough time has passed), even if the underlying site can still identify them by other means. The problem only escalated with iOS 14, in which ITP was added to all browsers on the platform via a modification to the WebView functionality in the operating system.
However, without a server-side persistence layer, client-based consent management platforms continue to run into this engineering limit when interacting with WebKit browsers. The systems are working as designed; it’s just that the constraints under which they were designed have changed. All in all, it’s not a great user experience, even if it is technically compliant with most current regulation.
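One way around the expiry cap, sketched below under my own assumptions (the endpoint path, cookie name and in-memory store are all illustrative), is to record the choice server-side and hand the browser an HTTP-set first-party cookie, since ITP’s 7-day cap targets script-writable storage rather than cookies set in a server response.

```typescript
// A minimal Node sketch of server-side consent persistence. The endpoint path,
// cookie name and in-memory Map are assumptions; a real system would use a
// durable store and an authenticated or pseudonymous identifier.
import { createServer } from "node:http";
import { randomUUID } from "node:crypto";

const consentStore = new Map<string, { optedOut: boolean; recordedAt: number }>();

const server = createServer((req, res) => {
  if (req.method === "POST" && req.url === "/consent/opt-out") {
    // Record the opt-out against a server-issued identifier rather than relying
    // solely on a script-written cookie that ITP may expire after 7 days.
    const id = randomUUID();
    consentStore.set(id, { optedOut: true, recordedAt: Date.now() });

    // Return the identifier as an HTTP-set, first-party cookie. Cookies set via
    // a Set-Cookie response header are not subject to the 7-day cap that ITP
    // applies to script-writable storage (as of current WebKit behavior).
    res.setHeader(
      "Set-Cookie",
      `consent_id=${id}; Max-Age=${60 * 60 * 24 * 365}; Path=/; HttpOnly; Secure; SameSite=Lax`
    );
    res.statusCode = 204;
    res.end();
    return;
  }
  res.statusCode = 404;
  res.end();
});

server.listen(3000);
```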
Which brings us to the issue with California, and how default privacy technical controls and privacy regulation may produce unforeseen interactions.
Unforeseen Interaction with the CPRA Requirements
A lot of laws talk about when consent is required. California has an interesting clause in the CPRA draft regulations which speaks to how quickly you can re-prompt for consent after a user opts out of data collection or sharing:
Except as allowed by these regulations, a business shall wait at least 12 months from the date the consumer’s request before asking a consumer who has opted out of the sale or sharing of their personal information to consent to the sale or sharing of their personal information.
https://cppa.ca.gov/meetings/materials/20220608_item3.pdf
So if a site uses a third-party, JavaScript-based consent management provider, it could end up in one of the following scenarios:
Scenario 1: The user uses a non-WebKit browser, and consent is remembered for as long as the cookie exists. The user is not re-prompted unless the cookie is deleted. By default, the consent cookie typically persists for 12 to 14 months on common platforms.
Scenario 2: The user uses a WebKit browser, and consent is remembered for a maximum of 7 days. The user is then re-prompted, even if they declined the first time. The site also does not remember them, so they appear as a new user to both the site and the consent platform. Not ideal, but at the very least understandable.
Scenario 3: The user uses a WebKit browser, and consent is remembered for a maximum of 7 days. The user is then re-prompted, even if they declined the first time. The site does remember them, because the mechanism the site uses isn’t JavaScript-based and so isn’t subject to Intelligent Tracking Prevention’s restrictions in the same way the consent cookie set by the consent platform is.
It’s this last scenario that gives me pause. It’s very possible a site may remember a user across sessions even if all of the JavaScript-based vendors do not. Users may not even know about Intelligent Tracking Prevention, which is enabled by default. In this scenario the company wouldn’t intend to, but it would prompt the user for consent again due to the behavior of the browser, unless this edge case is specifically accounted for. I wonder whether this may technically be a violation of the regulations, because, as I mentioned, the site may continue to identify the user under some architecture designs.
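Accounting for the edge case could look something like the sketch below: before re-showing the consent prompt, consult whatever durable, server-side record of the opt-out the site already holds and suppress the prompt until 12 months have passed. The lookup function and identifier here are hypothetical stand-ins.

```typescript
// Hedged sketch of suppressing re-prompts based on a server-side opt-out record
// rather than the CMP's client-side cookie. lookupOptOut and userId are
// hypothetical stand-ins for whatever durable identity the site already has.
interface OptOutRecord {
  optedOutAt: number; // epoch milliseconds of the recorded opt-out
}

const TWELVE_MONTHS_MS = 365 * 24 * 60 * 60 * 1000;

async function shouldRepromptForConsent(
  lookupOptOut: (userId: string) => Promise<OptOutRecord | null>,
  userId: string,
  now: number = Date.now()
): Promise<boolean> {
  const record = await lookupOptOut(userId);
  if (record === null) {
    return true; // no prior opt-out on file, so prompting is fine
  }
  // An opt-out is on file: only re-ask once at least 12 months have elapsed,
  // regardless of whether the CMP's preference cookie survived ITP.
  return now - record.optedOutAt >= TWELVE_MONTHS_MS;
}
```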
However, would the California Privacy Protection Agency find them in violation and fine the website? I’m not sure. It’s very possible, but I really don’t know. Let’s say it does fine them. Will we then see consent management vendors make further advances in how selections are persisted? Will we see widespread migration to in-house consent management solutions? Will nothing happen? Will good faith count for anything? I really don’t know, but it’s definitely something I think should be considered and have its risk evaluated.
What I do know is that this clause has the potential to dramatically reshape how consent management solutions functionally work after January 1st, 2023, that companies who believe they have solved consent management may be caught off guard, and that no one I’ve seen to date is talking about it.