The Future of Tag Management – Part 3: The Privacy War

In this edition we leave the swamp of security behind and stumble into a battlefield.  All we wish for is an analytical sanctuary where tag management always just works as intended. Instead, we find ourselves seeking cover, dodging shells and just trying to stay alive.

While the previous two posts in this series dealt with whether the tags would load at all, this one deals with how the tags actually work. Times are changing. Let go of what you think you know about your tagging solution and peer past the veil of browser changes that have impacted tagging this year.

The privacy war begins

In the wake of Cambridge Analytica, many browsers have begun taking steps to limit the data that can be collected by the sites users visit. Changes were enacted slowly over 2017 and 2018, but in 2019 everything started to go haywire, and the war between organizations seeking to collect data and browsers seeking to prevent the same entered full swing.

Let’s review 2019 – here are some (but not all) of the major events.

  • Firefox announces Enhanced Tracking Protection testing – January 29, 2019
  • Safari ITP 2.1 announced – February 21, 2019
  • WebKit stance on link click analytics – April 11, 2019
  • ITP 2.2 announced – April 24, 2019
  • iOS upgrade deploys ITP 2.2 to iOS devices – May 2019
  • Google announces changes to cookie attributes – May 7, 2019
  • WebKit proposal for privacy-preserving ad click attribution – May 22, 2019
  • Mozilla announces “Default settings matter” (applies to new installs) – June 4, 2019
  • Microsoft announces anti-tracking tech for Edge – June 27, 2019
  • WebKit stance on privacy – August 14, 2019
  • Google announces “Privacy Sandbox” and plans – August 22, 2019
  • Firefox default settings rolled out to all users – September 3, 2019
  • Firefox announces DNS over HTTPS – September 6, 2019
  • macOS upgrade deploys ITP 2.2 to desktop – September 20, 2019
  • ITP 2.3 released with Safari 13, aligning desktop and mobile
  • Google announces plans to use machine learning in ad targeting to work around the absence of a cookie
  • Opera releases a machine-learning-powered ad blocker – October 7, 2019
  • Firefox expands clarity around tracking tech – October 22, 2019 release
  • Google recommends shifting to a mixed media attribution model – October 24, 2019

Upcoming changes I know of as of the time of writing:

  • Chrome will enable the SameSite cookie standard change in February 2020. Firefox has announced intent to implement, Edge will run a test, and Safari is considering the change (see the sketch after this list).
  • Google has announced plans to require HTTPS for cookie transmission.
  • Google has announced plans to block all “mixed content” scenarios in 2020.
  • Safari and Google have announced intent to downgrade the referrer in all 3rd-party scenarios.
  • There is a proposal for a new API called isLoggedIn; if the user is not logged in, all client-side storage would be purged.
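
To make the SameSite change concrete, here is a minimal sketch of how a tag might declare the attribute explicitly instead of relying on the old default behavior. The cookie names and values are hypothetical; the rule being illustrated is that, after the change, a cookie without a SameSite attribute is treated as Lax, and a cookie intended for cross-site use must opt in with SameSite=None and be marked Secure.

```typescript
// Hypothetical cookie names, for illustration only.

// A cookie used only in a first-party context: declaring SameSite=Lax
// matches the new Chrome default and survives the February 2020 change.
document.cookie = "session_id=abc123; Path=/; SameSite=Lax";

// A cookie that must be sent in cross-site (3rd-party) requests:
// under the new standard it has to opt in with SameSite=None AND be
// marked Secure, which also requires the page be served over HTTPS.
document.cookie = "tracker_id=xyz789; Path=/; SameSite=None; Secure";
```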

What we’ve seen is either a major announcement or implementation every month of 2019 aside from March. 

What is important to note here is that each of these browsers changes things in different ways, and they rarely align in methods. Also worth noting is that not all of these changes were announced ahead of time: Safari’s ITP 2.3 was announced after it had already gone live and was affecting production websites. This gives companies little to no time to turn around changes in their platforms to ensure their solutions work as intended, and it forces companies to be proactive if they care about functionality and accuracy.

With that said, maybe this doesn’t apply to your specific setup…

Getting the lay of the land

This is why it’s important to understand how the tags work. Does the tag use cookies? That’s a signpost to investigate. Does it use any client-writable storage? That’s another marker. Is it loaded externally? Another sign that says: start here.

Sadly, this type of information isn’t often easily accessible. Vendors rarely include it in their documentation, and since tagging is often handled by non-developers, a skill gap may exist: those responsible for the tagging may lack the skills to determine whether a given tag is impacted and, if so, what that means for whatever it’s powering.

Still, any trek should start with a checklist and here is ours:

  • Does the tag load externally? May be impacted by ETP, Edge
  • Does the page URL contain name/value pairs (link decoration)? May be impacted by ITP
  • Does the tag use client-writable storage? May be impacted by ITP, ETP, Edge
  • Does the tag use 3rd-party cookies? May be impacted by ITP, ETP, Edge, Chrome
  • Do you need the referrer field? May be impacted by ITP, Chrome

But how do you know? That’s where it gets a bit tricky, and some developer skills may be required. I have written an overview of the different tooling here. A quick audit like the sketch below is one place to start.
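
As a starting point, here is a minimal console sketch that surfaces the checklist signals on any page: cookies visible to JavaScript, client-writable storage, externally loaded scripts, and name/value pairs on the URL. Nothing here is vendor-specific; it only reports what the page exposes.

```typescript
// Run in the browser console on the page under investigation.

// 1. Cookies visible to JavaScript (HttpOnly cookies will not appear here).
console.log("Cookies:", document.cookie.split("; ").filter(Boolean));

// 2. Client-writable storage in use.
console.log("localStorage keys:", Object.keys(localStorage));
console.log("sessionStorage keys:", Object.keys(sessionStorage));

// 3. Externally loaded scripts (served from a different origin than the page).
const externalScripts = Array.from(document.scripts)
  .filter((s) => s.src && new URL(s.src).origin !== location.origin)
  .map((s) => s.src);
console.log("External scripts:", externalScripts);

// 4. Name/value pairs on the URL (link decoration candidates).
console.log("URL parameters:", Array.from(new URLSearchParams(location.search)));
```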

Major Impacts

Depending on the respective tags, here are some major themes that could result from various changes.

  • New users increase, retention decreases. As browsers deny access to storage or delete it, users get classified as ‘new’ more often in the absence of some deterministic event (such as a log in). This can affect A/B testing software and recommendation engines as well as analytics platforms.
  • If the tag is determining advertising attribution, the advertising channel may either not get credit (resulting in ‘direct’ attribution) or the lookback window could be severely reduced (as low as 24 hours).
  • If the tag is determining commission, it may not work as expected, resulting in you paying less to your affiliates for traffic they should get credit for.
  • Audiences derived from cookies may become ‘ghosts’, effectively unmarketable, because the identifiers they are tagged with cease to work after a short period of time.
  • If the referrer is critical to functionality or reporting, it may not work as expected due to referrer downgrading.
  • If the tag requires 1st-party cookies, these may not exist due to Safari purging them after set time frames, resulting in behavior different from what you expect. This can affect things such as plug-and-play GDPR consent dialogs.
  • If the tag requires 3rd-party cookies, these may be blocked outright, blocked conditionally, or function differently depending on which browser is in use.
  • If client-writable storage is used (such as localStorage), scenarios exist where Safari will purge it after 7 days, which can change functionality (see the sketch after this list).
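
To illustrate the localStorage point, here is a minimal sketch, assuming you control the tag’s storage layer, that writes a timestamp alongside each value. The key names are hypothetical. Note the limitation it demonstrates: it can flag that something is missing, but client-side code alone cannot tell a purge apart from a genuinely new user.

```typescript
// Hypothetical key names; a sketch, not a hardened implementation.
const STAMP_KEY = "_storage_written_at";

function writeWithStamp(key: string, value: string): void {
  localStorage.setItem(key, value);
  // Record when we last wrote, so a later read can reason about purges.
  localStorage.setItem(STAMP_KEY, Date.now().toString());
}

function readWithPurgeCheck(key: string): string | null {
  const value = localStorage.getItem(key);
  if (value === null && localStorage.getItem(STAMP_KEY) === null) {
    // Either a genuinely new user, or the browser purged storage wholesale.
    // Client-side code alone cannot distinguish these; a server-set marker
    // (discussed later in this post) is needed to disambiguate.
    console.warn(`"${key}" missing: new user or purged storage.`);
  }
  return value;
}
```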

The changes do have an impact. German publishers, for example, found that the Firefox changes (Firefox accounts for 20-30% of traffic in Germany) resulted in a reduction of up to 15% in ad-based revenue. Collectively, the array of changes could easily wreak havoc with client-side reporting and attribution.

Sadly, which impacts apply to your site specifically depends on which browser is being used and on how your website is built and fires tags. I recommend working with your development team and engaging vendors as needed to determine the actual impact for your specific scenario, as the collective changes for 2019 are very likely already affecting your data and systems if they have not been addressed.

Won’t validation save us?

When I was attending a session at the DAAOne conference, this was brought up, and someone behind me explained that they believed validation was the answer. My take? Maybe, but unlikely. It comes down to how the validation works. Let me explain.

Validation is useful for determining when you have a valid scenario and when you have an invalid one. For example:

You expect “Cat”. The user enters “Dog”. You can easily tell those words are different, and if you were expecting “Cat” and got “Dog”, you could fail the validation, stop the data flow, fire an alert, etc.
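
As a sketch, that kind of check is trivial to express in code; the function and values below are purely illustrative.

```typescript
// Purely illustrative: basic value validation is easy when the
// expected and actual values are directly comparable.
function validateField(expected: string, actual: string): boolean {
  if (actual !== expected) {
    console.error(`Validation failed: expected "${expected}", got "${actual}"`);
    return false; // caller can stop the data flow, fire an alert, etc.
  }
  return true;
}

validateField("Cat", "Dog"); // fails, as it should
```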

Here’s why that’s complicated from a tagging point of view.

  1. The user arrives without a cookie; the code creates a cookie. Works as expected.
  2. The user returns with a cookie; the code doesn’t create a cookie. Works as expected.
  3. A returning user’s cookie has been deleted or blocked by the browser; the code thinks the user arrived without a cookie, so it issues one. Works as expected… but not as intended.

The intended behavior would be for that returning user to be identified by the original cookie and counted/processed correctly. However, due to changes in the browser, that may not happen, leading to the impacts listed above.

So basic validation doesn’t work, because scenarios #1 and #2 are both valid. There are times when new users visit the site and should get a cookie. There are times when returning users visit the site and should use the existing cookie. However, there are also times when a “new” user is actually a returning user, as in scenario #3. The sketch below shows why the code cannot tell the difference.
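
Here is a minimal sketch of the typical client-side logic; the cookie name is hypothetical. Scenario #1 and scenario #3 execute the exact same branch, and nothing observable to the script distinguishes them.

```typescript
// Hypothetical first-party identifier cookie. A genuinely new user
// (scenario #1) and a returning user whose cookie the browser deleted
// (scenario #3) both take the same branch; the script cannot tell them apart.
function getOrCreateVisitorId(): string {
  const match = document.cookie.match(/(?:^|; )visitor_id=([^;]*)/);
  if (match) {
    return match[1]; // scenario #2: returning user with an intact cookie
  }
  // Scenarios #1 and #3 both land here.
  const id = Math.random().toString(36).slice(2); // illustrative, not robust
  document.cookie = `visitor_id=${id}; Path=/; Max-Age=31536000; SameSite=Lax`;
  return id;
}
```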

This isn’t a new scenario; people log in from new devices or clear the cookies in their browser. The major difference is that those require action on the part of the user. It’s quite a different scenario when the computer does this automatically. So now your validation has to account for the possibility of the 3rd state far more often, which means it needs to account for time or for changes in some 3rd-party list, which is fragile and hard to automate. If you want to account for the 3rd state, you likely need some sort of server-set deterministic marker to link the sessions, then compare values across the sessions, as sketched below.
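
A minimal sketch of that server-set marker, assuming a Node.js backend; the cookie name and lifetime are hypothetical. Because the marker is set in an HTTP response and flagged HttpOnly, it is not client-writable storage, so it sits outside the script-storage purges discussed above and can be used to link sessions whose client-side identifiers were deleted.

```typescript
import { createServer } from "http";
import { randomBytes } from "crypto";

createServer((req, res) => {
  const cookies = req.headers.cookie || "";
  if (!cookies.includes("server_marker=")) {
    // Issue a deterministic, server-set marker as an HttpOnly cookie.
    const marker = randomBytes(16).toString("hex");
    res.setHeader(
      "Set-Cookie",
      `server_marker=${marker}; Path=/; Max-Age=31536000; HttpOnly; Secure; SameSite=Lax`
    );
    // Persist the marker server-side so later sessions can be linked to it.
  }
  res.end("ok");
}).listen(8080);
```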

So maybe alerting is the answer? Again, on its own, likely not. The reason is that browser upgrades roll out over time, and the day-to-day variance may not be enough to fire your alerts. You’d have to compare differences on a longer-than-daily scale and hope the volume of traffic is sufficient to fire your alerting; the result is that the time to identify and resolve the problem increases. The sketch below illustrates the idea.
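
To illustrate, here is a sketch of alerting at a week-over-week grain rather than day over day; the metric, the weekly window, and the 10% threshold are all arbitrary choices for the example.

```typescript
// Illustrative only: compares the mean of the last 7 days against the
// prior 7 days, because a gradual browser rollout won't trip daily alerts.
function weekOverWeekAlert(dailyNewUserShare: number[], threshold = 0.1): boolean {
  if (dailyNewUserShare.length < 14) return false; // need two full weeks
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const lastWeek = mean(dailyNewUserShare.slice(-7));
  const priorWeek = mean(dailyNewUserShare.slice(-14, -7));
  const relativeChange = (lastWeek - priorWeek) / priorWeek;
  if (relativeChange > threshold) {
    console.warn(
      `New-user share up ${(relativeChange * 100).toFixed(1)}% week over week`
    );
    return true;
  }
  return false;
}
```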

The Path Forward

So what to do?  Abandon all hope?  Mourn the loss of accuracy?  Well, here are a few suggestions to consider.

  • Work with vendors and development to figure out your risk impact.
  • Figure out if you can deploy the tags differently to remove scenarios in which they’d fail in ways you’d rather avoid.
  • Monitor the various browser blogs to stay ahead of changes.
  • Test the website/application with beta browser builds.
  • After new announcements, validate the impact on your site/application via testing. Manually, if needed.

But where should you go to learn about changes?  I have created a handy reference chart: here

While it is hard to keep up with the changes, I recommend that someone do so, or you risk suffering unexpected issues (as Microsoft found testing the Chrome changes) or a loss of data accuracy.

It should also be mentioned that Google and WebKit have posted notices on how they would like to preserve ad click attribution, which may come to pass and require further development work. Google has also started encouraging the use of mixed media attribution reporting and has released a statement on how it will use modeling for ad targeting and reporting. This can shift reporting from deterministic to probabilistic, which may make a difference in how results are communicated.

Combined, we’re likely to see shifts in both collection and reporting play out into 2020.  I would expect the most obvious place you’ll see impact to be year over year channel attribution reports, but know some of this can be mitigated provided you are willing to put in the development effort and are not prevented by law from doing so.

Conclusion 

There is a lot to take in. Browsers are changing the very nature of how the internet works, security teams are insisting that changes be made in the name of security, and tag management is caught in the crossfire from multiple angles.

How should you structure your team, and what processes should you use to best handle the challenges as I have defined them? Tune in next time for the final post in the series – Part 4: Tag Management Redefined.

Published in: Browser Updates, Privacy, Tag Management

2 Comments

  1. mikeharmanos

    So many questions… but I will hold off until Part 4 is out.

    Validation will clearly not save us. It is not in the interest of vendors to provide it because it largely runs afoul of their incentives. What can be done to overcome that?

    • Cory Underwood

      Such a complex question, alas no easy answer exists. I’d always recommend validating claims if possible. It’s not a trust issue as much as ensuring a fair transaction takes place for both parties. Should the vendor not want to buy in, that says a lot about the relationship. Then you’d have to decide if that’s the type of relationship you’d want to continue.
