The ePrivacy Regulation, the companion to the GDPR that will govern personal data in electronic communications, is moving forward in the European Parliament. It has recently entered the trilogue stage, and as it gets closer to passage, more and more questions are arising about its impact and the changes it will bring for digital marketers.
That’s why we want to share some of our thoughts and answer the most common questions concerning personalization, marketing automation, and re-marketing.
There seems to be a lot of confusion about the notions of “first-party” and “third-party”, as these questions keep coming up. This probably stems from the “cookie wall” drama of 2009–2011 (a.k.a. the “cookie consent havoc”) and the fact that the ePrivacy Regulation will repeal Directive 2002/58/EC.
There are three aspects we should focus on here: cookies, parties, and storage.
Interestingly, references to this distinction are absent from the current ePrivacy proposal: there is no mention of first- or third-party cookies in the current draft of the Regulation.
But this doesn’t mean we don’t know anything.
The previous version of the ePrivacy Regulation required consent for so-called third-party cookies, the ones mainly used for advertising purposes.
Several issues came up, dating back to 2009-2011. One of them was the following question: “what should consent look like?” There are multiple answers:
- Implicit consent: at least some form of cookie banner, meaning we assume it is enough to inform the user via that banner that cookies are being used (the United Kingdom is an example here)
Another issue was the widespread practice of cloaking third-party cookies as first-party ones. This practice has been partially addressed by Apple in the latest Safari 11 and iOS 11 releases, which employ “Intelligent Tracking Prevention” logic.
If you want to learn more about first-party and third-party cookies, be sure to check out this article.
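The distinction above comes down to whose domain a cookie belongs to relative to the site being visited. A minimal sketch of that classification (the function name is hypothetical, and a real implementation would compare registrable domains using the Public Suffix List rather than this simple suffix check):

```python
from urllib.parse import urlparse

def is_third_party(page_url: str, cookie_domain: str) -> bool:
    """Rough illustration: a cookie is 'third-party' when it is set
    for a domain other than the site the user is visiting."""
    page_host = urlparse(page_url).hostname or ""
    cookie_domain = cookie_domain.lstrip(".")
    return not (page_host == cookie_domain
                or page_host.endswith("." + cookie_domain))

# A cookie set by the visited site itself is first-party...
print(is_third_party("https://shop.example.com/cart", "example.com"))       # False
# ...while one set for an ad network's domain is third-party.
print(is_third_party("https://shop.example.com/cart", "tracker.adnet.io"))  # True
```

This is also why “cloaking” works: pointing the tracker at a subdomain of the visited site makes the check above report first-party, which is exactly the practice Safari’s Intelligent Tracking Prevention targets.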
Find out if your analytics solution guarantees data accuracy and privacy, including GDPR compliance: Download FREE Guide
This aspect is partially reflected today in the idea of first-, second-, and third-party data. How the parties involved map onto this notion is best answered by the GDPR’s rebalancing of obligations between controllers and processors. In this context, first-party data belongs to the data controller, whose responsibilities are now shared to a larger extent with service providers. These providers support data processing activities by offering, e.g., analytics services. They could be described as external third parties, and under the GDPR they are often data processors.
Even though the notion of second-party data doesn’t exist in the GDPR, Article 26 of the regulation introduces the novel concept of joint controllers, which would probably cover current data-selling practices.
Note the rather clear transparency obligations under paragraph 2 of that article: “The essence of the arrangement shall be made available to the data subject” applies to companies engaged in second-party data buying or selling when “processing of personal data of data subjects who are in the Union” (Art. 3(2), Territorial scope).
The joint-controllers concept initially addressed the power imbalance faced by data controllers, who were often presented with “take it or leave it” Terms and Conditions by certain powerful processors. The only options were to accept those unbalanced terms or not use the service.
Chapter IV (Controller and processor, Articles 24 to 31) focuses on rebalancing this relationship and its obligations to support the accountability principle laid down in Article 5(2): “The controller shall be responsible for, and be able to demonstrate compliance with, paragraph 1 (‘accountability’).”
Recently, the push for more transparency about data usage has grown in proportion to the level of risk associated with data uses. And as that risk increases, data controllers have found themselves stuck with gluttonous SDKs that cannot be altered, effectively creating friction between the controllers’ obligations towards the data subject and those stipulated within the T&C.
The changes in obligations with respect to controllers and processors are most clearly visible when comparing the territorial scope of the Regulation to that of the current Directive.
When it comes to cloud and SaaS, the issue of storage becomes relevant for both first and third parties. They have to decide whether to use internal hosting (first-party) or SaaS (third-party). And this, of course, is directly connected to the processing obligations associated with each choice.
As the issue of storage is mainly about data transfers – which alone would require consent if personal data is involved – we have to look at Chapter V of the GDPR on transfers of personal data to third countries or international organisations. This gives us insight into what to do if the data is stored outside the European Union.
Article 44 sets out the general principle for transfers, while Article 45 covers transfers on the basis of an adequacy decision.
The discussion often revolves around transferring data to the United States. Because of the invalidation of the Safe Harbor agreement back in October 2015 (you can learn more here), the frameworks available to assure the lawfulness of transfers are the following:
- Privacy Shield
- Standard contractual clauses, also called model contractual clauses
- Binding Corporate Rules (covered in Article 47 of the GDPR).
The easiest choice seems to be the Privacy Shield, as it’s only a one-time effort. Signing contractual clauses forces companies to reevaluate them for every “business relationship”, which, of course, is a bigger hassle. The last option typically takes longer than the others, but has one very important advantage: it assures companies that they will be compliant with EU law, which neither the Privacy Shield nor contractual clauses can guarantee in the future (even though they remain a relatively safe bet today).
Content personalization rests on a two-step logic: identification followed by personalization rules. Identification is possible either because the individual logged in or because a cookie was dropped and is subsequently read.
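That two-step logic can be sketched as follows; all names here are hypothetical, and the consent check anticipates the consent requirement discussed below:

```python
# Step 1: identify the visitor (login or cookie).
# Step 2: apply personalization rules only when a recorded,
#         retractable consent flag exists for that visitor.

def identify(request: dict):
    # A logged-in user is known by user id; otherwise fall back to a cookie.
    return request.get("user_id") or request.get("cookies", {}).get("visitor_id")

def personalize(request: dict, consents: dict) -> str:
    visitor = identify(request)
    if visitor and consents.get(visitor, False):  # explicit, withdrawable consent
        return f"personalized content for {visitor}"
    return "default content"  # unidentified visitor, or no consent on record

consents = {"anna": True}  # withdrawing consent means flipping this to False
print(personalize({"user_id": "anna"}, consents))                 # personalized
print(personalize({"cookies": {"visitor_id": "xyz"}}, consents))  # default
```

The key design point is that personalization defaults to off: an identified visitor without a consent record gets the same content as an anonymous one.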
In a previous blog post on the lawfulness of processing under the GDPR, we touched upon privacy principles as well as the mechanisms available for ensuring lawful personal data processing. We highlighted the fact that “legitimate interest” doesn’t appear in the current ePrivacy text (unlike in the GDPR). As a result, and to the great dismay of the advertising industry, collecting consent from the user remains the only option.
Imagine shopping on your favorite Spanish designer’s website, where you set up an account to deal with payment and shipping. Within this set-up, asking for consent related to sizes and colors should be feasible.
The problem occurs when we’re talking about accessing, for example, a publisher’s content, where, under the currently proposed amendment, the following should be taken into consideration:
Reading from amendments 85 and 89, and focusing specifically on point (d), the analytics exception, now drafted around “reach” rather than “audience” measurement:
- The use of processing and storage capabilities of terminal equipment and the collection of information from end-users’ terminal equipment, including about its software and hardware, other than by the user concerned shall be prohibited, except on the following grounds:
- it is strictly necessary for the sole purpose of carrying out the transmission of an electronic communication over an electronic communications network, or
- the user has given his or her specific consent; or
- it is strictly technically necessary for providing an information society service specifically requested by the user; or
- if it is technically necessary for measuring the reach of an information society service requested by the user, provided that such measurement is carried out by the provider, or on behalf of the provider, or by a web analytics agency acting in the public interest including for scientific purpose; that the data is aggregated and the user is given a possibility to object; and further provided that no personal data is made accessible to any third party and that such measurement does not adversely affect the fundamental rights of the user; Where audience measuring takes place on behalf of an information society service provider, the data collected shall be processed only for that provider and shall be kept separate from the data collected in the course of audience measuring on behalf of other providers.
There’s a reference to data leakage and the decoupling of data – something the analytics industry is unfamiliar with. Data leakage is addressed here: “further provided that no personal data is made accessible to any third party”, which means the data stays with the controller. It also means that hashing and encrypting are not enough: processing data to make it less personal before passing it along to another tool does not make data protection obligations disappear. Therefore the tools have to be decoupled.
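A small sketch of why hashing doesn’t remove data protection obligations: a hash is deterministic, so the same input always produces the same digest, and two “separate” tools holding hashed records can still join them on that digest. The data is pseudonymized, not anonymous.

```python
import hashlib

def pseudonymize(email: str) -> str:
    # Deterministic digest of a normalized email address.
    return hashlib.sha256(email.lower().encode()).hexdigest()

tool_a = pseudonymize("jane@example.com")   # record held by tool A
tool_b = pseudonymize("Jane@Example.com")   # record held by a second tool

print(tool_a == tool_b)  # True: the two datasets remain linkable
```

Because the records stay linkable across tools, passing hashed identifiers along still counts as making personal data accessible, which is why the draft pushes towards genuinely decoupled tools instead.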
Another important implication is hinted at in this phrase: “the user is given a possibility to object”, which refers to Art. 21 of the GDPR. Here we also see the consequences of having to first inform the data subject about processing, and then be able to stop processing when they object:
The controller shall no longer process the personal data unless the controller demonstrates compelling legitimate grounds for the processing which override the interests, rights and freedoms of the data subject or for the establishment, exercise or defense of legal claims.
In a nutshell, personalizing content by default is no longer an option. Permission to do so should be retractable and based on consent terms in line with the GDPR. Opting in to content personalization will become part of the standard user options you would expect from a data controller.
But I hear you asking:
“What about forcing users to consent to get access to the content? Or forcing them to disable an AdBlock?”
There were discussions about not showing content or refusing access if ad blockers were being used. Amendment 92 with respect to Article 8(1a) addresses that by stating:
No user shall be denied access to any information society service or functionality, regardless of whether this service is remunerated or not, on grounds that he or she has not given his or her consent under Article 8(1)(b) to the processing of personal information and/or the use of processing or storage capabilities of his or her terminal equipment that is not necessary for the provision of that service or functionality.
So consent can be asked for, but withholding it doesn’t mean access can be denied. Again, these amendments are currently rather complex and under discussion. This might not be the final version of the ePrivacy Regulation, and this particular interpretation may not “win”.
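Under that reading of Amendment 92, a service’s logic would look roughly like the sketch below (names hypothetical): access is granted regardless, and consent only toggles the non-essential processing.

```python
# Access to the service must not be conditioned on consent to
# processing that isn't necessary for the service itself.

def serve(request: dict) -> str:
    content = "the article"                # access granted unconditionally
    if request.get("tracking_consent"):    # optional, separately requested
        content += " + personalized ads"
    return content

print(serve({"tracking_consent": False}))  # the article
print(serve({"tracking_consent": True}))   # the article + personalized ads
```

A “consent wall” would instead return an error when consent is refused, which is exactly what the amendment rules out.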
This is where not only might consent be difficult to get, but transparency will also be required to assure purposes are correctly aligned and respected by the involved parties. On top of the GDPR obligations arising from data subjects’ rights like access, rectification, and erasure, there is another problem: what should happen when the platform responsible for marketing automation and email marketing wants to re-use the data?
The fact that someone subscribes to an email service and agrees to receive further communication from that company, either as a customer or a prospect, doesn’t necessarily mean they think they will be receiving information related to other products or services.
It all depends on what is described within the consent notice when registering for the email in the first place, and how far the meaning of that wording can be stretched. That’s why consent should be recorded accurately, as described in the ICO’s Consent Guidance released several months ago.
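What an accurate consent record might contain can be sketched as a simple data structure (the field names are our own illustration, not a prescribed schema): who consented, when, the exact wording shown, how consent was given, and for which purposes.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    subject_id: str
    timestamp: datetime
    notice_wording: str            # the exact text shown at the time
    method: str                    # e.g. "unticked checkbox on signup form"
    purposes: list = field(default_factory=list)
    withdrawn: bool = False

record = ConsentRecord(
    subject_id="user-123",
    timestamp=datetime.now(timezone.utc),
    notice_wording="Receive our monthly newsletter about new arrivals.",
    method="unticked checkbox on account creation",
    purposes=["newsletter"],
)

# Re-use for a different purpose falls outside the recorded scope,
# so fresh consent would be needed:
print("partner offers" in record.purposes)  # False
```

Keeping the exact notice wording alongside the purposes is what makes the “how far can we stretch it” question answerable later.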
It’s best to tackle this using practical examples:
- A complaint to a major supermarket about intrusive Wi-Fi connections in their stores resulted in the complainant being added to their newsletter. Obviously, the purpose of communicating with that supermarket was not to be updated on their marketing activities! Should the store be allowed to use my consent this way?
- Using online banking and then logging off to browse mortgages: can/should that information be re-used to target that specific product, without being creepy or overriding the data subject’s reasonable expectations related to data uses?
As we know, companies might be pushing marketers to upload their email lists while assuring them there are no GDPR issues. And while these examples mainly relate to internal processes, the initial mention of platforms is inherently connected to the privacy-by-default options available within the platform.
In response to this, some functionalities and contracts have been revisited to make sure the required options exist. However, it’s rarely “by default” as it requires a special mention within these contracts.
The impact on re-marketing (re-targeting campaigns, launching pixels to set third-party cookies so that users can be re-targeted on Google Search/Google Display Network or DSPs such as Adform) is a natural follow-up topic to the previous issue.
The main challenges to overcome here are the complexity of the systems involved and the opacity of data flows and exchanges, which are often server-side operations.
In order to understand legal liabilities, we need to remember that each case is different. Our focus is on the “customer journey” on the one hand, and the data trail and tools used on the other. Ask yourself one question: does the data flow align with what was promised in the contracts?
Additionally, there has been a lot of discussion about Article 30 of the GDPR on Records of processing activities to help define what this should encompass. And while, for example, I understand and recommend Nymity’s simplified stance, I would also like to highlight that monitoring the terms and conditions of digital tools used to support digital transformation can be a full time job in light of ePrivacy!
Finding ways to align legal obligations with data uses, independent of how ePrivacy pans out, is the challenge we face right now.
GDPR and ePrivacy lead to the emergence of two opposing forces within companies:
- the so-called customer activation platform stack of those involved with digital transformation vs.
- risk and compliance, a DPO who will look for a risk mitigation stack to tackle security alerts, consent management, data subject rights management, anonymization functionalities, accountability based workflows, etc.
Discussing these types of challenges with companies such as Immuta forces us to reboot, to think about digital data in challenging and innovative ways. One topic such discussions could cover is that of purpose.
The re-marketing data privacy challenge is about defining data flows, checking what was promised in documents, and making sure accountability is dealt with by ideally going through a Data Protection Impact Assessment as required under Article 35 of the GDPR.
This was further reinforced for our industry by the Article 29 Data Protection Working Party (WP29) in its recent updates. The guidelines on Data Protection Impact Assessments (DPIAs) have been updated to help companies determine whether processing is “likely to result in high risk”, triggering the need for a DPIA to assure compliance.
So, what is high risk according to the data protection authorities? Criteria #6 might sound familiar:
“Matching or combining datasets, for example originating from two or more data processing operations performed for different purposes and/or by different data controllers in a way that would exceed the reasonable expectations of the data subject”.
Hence, many of today’s practices of DMPs and data providers should be thoroughly examined, questioned, remediated, and documented through a Data Protection Impact Assessment to ensure your company can prove accountability (Art. 5(2) of the GDPR) and limit risk.
The questions now are:
- Who is going to step up inside companies and among website owners to ring the alarm bell about data privacy risks, and
- How will intermediaries, i.e. data processors, align with such requirements?
When it comes to data privacy, the devil is in the details. It’s about asking the tough questions, because not only is your company’s reputation at stake, the bottom line is at risk too: fines in the ePrivacy Regulation are the same as in the GDPR, up to 20 million euros or 4% of global turnover.