GDPR - Your questions answered on Data Protection Day
Dr W Kuan Hon, director in the Privacy, Security & Information Law team at Fieldfisher, answers IT leaders’ enduring questions about GDPR around data erasure, subscriber data and Schrems II
The General Data Protection Regulation (GDPR) came into effect on May 25th 2018, and there are still many areas of confusion for firms, especially around which types of data to delete and how long they are permitted to retain necessary information. Computing gathered these questions and put them to Dr W Kuan Hon, director of the Privacy, Security & Information Law team at Fieldfisher.
With January 28th being Data Protection Day, or Data Privacy Day outside Europe, this is the perfect time to share that expertise.
Question: An organisation collects personally identifiable information from the public. That includes a set of fields with personal information, and free-text comment fields which may also contain personal information. What are the rules around that data? Does it need to be part of their erasure process?
Answer: In short, yes, comments made by X or about X are X's personal data (although there's an exemption for purely personal processing of personal data). And it's not just an erasure/deletion issue (though the deletion process should certainly extend to those comments too). Policies/processes should be put in place to ensure inappropriate types of comments are not even recorded in the first place, as per the example fine below.
In France, Futura Internationale was fined €500,000 by the CNIL, the French data protection regulator.
One of the things the company did (machine translation of the CNIL's decision):
"...customer data was processed in the Progibos customer management software, in which teleoperators could record comments on customers intended for employees of the company FUTURA INTERNATIONALE. The [regulator] noted, among these comments, comments relating to the state of health of the people approached as well as insulting comments against them.
"The restricted committee [regulator] observed that offensive terms relating to the state of health of persons were found in the Progibos software allowing the management of the company's customers. It considers that, by their very nature, offensive comments are inadequate with regard to the purpose for which the data are processed and that there is no justification, in this case, for the presence of data relating to the health of individuals in the software of customer and prospect management. It notes in this regard that the excessive nature of this data is not questioned by the company.
"The restricted committee also notes that the company has not demonstrated that it had deleted the excessive or inadequate comments at the expiry of the period granted in the formal notice and that, consequently, the breach of the obligation not to deal with that adequate, relevant and limited data to what is necessary in relation to the purposes for which they are processed was established on that date…"
The orders made against the company included a requirement to:
"...take measures to effectively prevent excessive comments from being recorded in the PROGIBOS software, for example by setting up an automatic detection system for words that are inadequate, irrelevant and excessive with regard to the purpose of the processing , in order to exclude them from comment areas or prevent them from being entered."
Question: There is a need under GDPR to remove details of subscribers who no longer subscribe to your service. Do you also need to remove that data from your backups, or from your email? If a company has a policy of deleting all backup data after six years, is that sufficient under GDPR?
Answer: Yes, they must be removed from backups etc. too. Without wishing to be legalistic here, regarding the six years it really depends - e.g. is it a controller or a processor, and why does it think it needs to keep the data in backups for as long as six years? (Probably because six years is the limitation period for being sued for various matters in the UK, e.g. by customers under a contract.)
Some controllers have produced documents, many pages long, for their own reference, setting out different retention periods for different types of data and purposes, together with the justifications for those periods.
After termination of their processing contract, processors are required to delete or (at the customer's option) "return" to their controller customer all the personal data they process as processor for that controller. The only exception is where continued retention of the data by the processor is required by EU or Member State law (or UK law, of course, in the case of the UK GDPR).
A practical trap is that many archiving systems don't allow for easy location and deletion of personal data that is no longer needed. In Germany, property company Deutsche Wohnen was fined some €14.5 million because, to store tenants' personal data (including financial and other sensitive data, some many years old), it used an archive system that did not provide for the possibility of removing data that was no longer required. (Also, tenants' personal data was stored without checking whether the archiving was permissible or even necessary). The regulator termed these systems "data graveyards", and of course data graveyards pose additional risks for organisations because they contain unnecessary data which could be breached or compromised (but wouldn't have been if the data had been deleted when it should have been).
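As a hedged illustration of how to avoid such data graveyards (not something drawn from the Deutsche Wohnen decision itself), a documented retention schedule can be made machine-readable so that archived records past their justified retention period are located and deleted automatically. The categories, retention periods and archive API below are hypothetical placeholders.

```python
# Illustrative sketch: apply a documented retention schedule to archived records
# so that data past its justified retention period is deleted rather than left
# in a "data graveyard". Categories, periods and the archive API are hypothetical.
from datetime import datetime, timedelta, timezone

RETENTION = {
    "contract_records": timedelta(days=6 * 365),   # e.g. UK limitation period
    "marketing_consents": timedelta(days=2 * 365),
    "support_tickets": timedelta(days=365),
}


def purge_expired(archive, now=None):
    """Delete archived records whose justified retention period has expired."""
    now = now or datetime.now(timezone.utc)
    for record in archive.all_records():            # hypothetical archive API
        limit = RETENTION.get(record.category)
        if limit and now - record.created_at > limit:
            archive.delete(record.id)               # hard delete, and from backups where feasible
```

The key design point is that each retention period is tied to a documented justification, so the schedule in the code mirrors the retention document the controller already maintains.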
Question: Schrems II dictates that organisations shouldn't make new arrangements involving data if it touches the US - does that mean you can't go with a US software or infrastructure provider if they're going to see or manage your data?
Answer: No, the Court of Justice of the EU in Schrems II didn't go as far as to say that if it touches the US you can't do it; it didn't say you can't use a US software or infrastructure provider. Again in France, a court pointed out that the CJEU didn't actually say that you can't use a US cloud provider per se. It's partly an issue of geographical hosting location, partly EU customers'/individuals' (and regulators') nervousness about the use of US providers.
Ch.V of the GDPR sets out the infamous restriction on "transfers" to third countries outside the EEA (like the US, and now the UK) or to international organisations (like the UN). It also lays down the conditions under which these transfers are permitted, i.e. what is acceptable to legitimise a transfer - such as where the transfer is to a country declared "adequate" by the European Commission, or if the transfer is made under so-called "model clauses", standard contractual clauses (SCCs) approved by the Commission.
The court in Schrems II made two key points. Firstly, it invalidated the EU-US Privacy Shield, which was previously "adequate" for transfers to US organisations that had signed up to it. Secondly, SCCs survived by the skin of their teeth - the court said that SCCs were still valid as a transfer mechanism, but only if "additional safeguards" were (where necessary) implemented to prevent excessive access to the transferred personal data by the recipient third country, i.e. government/authorities' access going beyond what's necessary and proportionate in a democratic society. These safeguards, which privacy regulators have termed "supplementary measures", could be technical, organisational (policies/processes) and/or contractual (i.e. additional terms to supplement the SCCs). Regulators have provided some recommendations on further measures that could be taken to legitimise "transfers" of personal data, but some of those could make certain transfers, or even ways of doing business, wholly impracticable or impossible. Those recommendations are still in draft form, so we'll see what the final version says.
There are some major reasons why "transfers" or data export/data localisation issues are so problematic, particularly post-Schrems II. One big reason is that the GDPR, like its predecessor the Data Protection Directive, is based on outdated 20th century assumptions regarding technology. In particular, it implicitly assumes that the physical/geographical location of personal data equates to access to intelligible data. In other words, the country where data is physically located can access, and control access to, that data in intelligible form. On that assumption, data localisation - hosting/storing the data only in particular "safe" countries or regions - can stop other countries getting their hands on that data. Of course, as we all know, that assumption no longer holds true. The link between data's physical location, on the one hand, and the ability to access intelligible data, on the other, has been broken by the availability of two things: remote access over the Internet/other networks; and encryption. (All this is discussed in detail in my book on data localisation, which Computing reviewed in 2018.)
Another problem is that neither the GDPR nor the Directive defines what is meant by "transfer". Most now take that to mean transmitting personal data so that it ends up physically in a non-EEA country, or allowing someone located in a non-EEA country to have remote access to personal data (even if the data is physically hosted in the EEA). Actually, the risk isn't so much about the geographical location of personal data, but rather about what I call "effective jurisdiction" over intelligible personal data. What I mean by that is: which country or countries can legally force access to intelligible data to be given, regardless of where the data is physically hosted? If you look at the issues through the lens of effective jurisdiction, then concerns about using US companies are more understandable: US authorities can make US companies hand over data under their control, wherever it is located, because those companies are subject to US jurisdiction and US laws.
However, the GDPR focuses on "transfers", not effective jurisdiction. Of course, focusing on effective jurisdiction could also be problematic, because taken to its logical extreme it would mean that even UK/EEA companies should not do business in non-EEA countries in order to protect their personal data from undesirable third-country authorities - which makes little policy, economic or social sense. The underlying problem there is really what lawyers call conflict of laws: under the law of country A you can't disclose the data, but under the law of country B you have to hand it over. So, if you do business in both countries and there's a clash of laws between them, you're stuck between a rock and a hard place.
What would make the most sense - but that is not what the GDPR says - is to focus not on the geographical location of personal data, but instead on implementing proper technical, organisational and contractual security measures to protect that data from unauthorised access (including by undesired third country government authorities), regardless of the data's geographical location. However, abolishing all data localisation/transfers rules and focusing instead on security is probably just a pipe dream, as this is perhaps more of a political than legal issue: witness the many references, in debates on the topic, to data "sovereignty", with its emotive connotations regarding nationalism and national control of citizens'/residents' data, even though that phrase is not in fact used in the laws themselves!
The GDPR does already require appropriate security measures generally, and there is at least more emphasis now (including by regulators) on encryption as a technical measure to enable international transfers of personal data post-Schrems II. Regulators have even mentioned MPC (multi-party computation) as a possible measure to allow transfers. But, given concerns about whether the third-country recipient has the decryption key (which it needs to have for many cloud services), personally I think there should also be more emphasis on the use of TEEs (trusted execution environments)/confidential computing, especially in cloud.
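As a minimal sketch of that kind of technical supplementary measure (an illustration under assumptions, not a regulator-endorsed recipe), personal data can be encrypted client-side before being sent to a third-country provider, with the key generated and held only on the exporter's side in the UK/EEA, so the provider only ever stores ciphertext. The example uses the Python cryptography library's Fernet interface; the storage_client object is a hypothetical stand-in for a provider's API.

```python
# Illustrative sketch: encrypt personal data client side before sending it to a
# third-country storage provider, keeping the key with the UK/EEA exporter so
# the provider only ever holds ciphertext. storage_client is a hypothetical stand-in.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # generated and kept only on the exporter's side
fernet = Fernet(key)


def upload_record(storage_client, record_id: str, personal_data: bytes) -> None:
    ciphertext = fernet.encrypt(personal_data)    # provider never sees plaintext
    storage_client.put(record_id, ciphertext)     # hypothetical provider API


def download_record(storage_client, record_id: str) -> bytes:
    ciphertext = storage_client.get(record_id)
    return fernet.decrypt(ciphertext)             # decryption only where the key lives
```

This approach only works for storage-style use cases where the provider never needs the plaintext; where the recipient must process the data in intelligible form, that is where measures such as MPC or confidential computing come into play.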
In summary, UK, EEA and other countries' rules on data exports/international data transfers will still need to be considered carefully. If using US providers, look at primary hosting locations, backup locations, any remote access for support/maintenance purposes, etc., and what supplementary technical as well as contractual and organisational measures can be implemented if there will be "transfers" to the US. (For US software installed on-prem, not provided as SaaS, there shouldn't be data protection issues unless the software "phones home" to the software provider or others with personal data in some way, or access to personal data stored in on-prem systems is given to the vendor's staff for support/maintenance purposes.)