Elizabeth Denham's latest blog busts the myths for UK small and medium sized businesses transferring personal data to and from the EEA
Like everyone in the UK right now, we are following the twists and turns of the Brexit negotiations. The sharing of customers’, citizens’ and employees’ personal data between EU member states and the UK is vital for business supply chains to function and public authorities to deliver effective public services.
At the moment personal data flow is unrestricted because the UK is an EU member state. If the proposed EU withdrawal agreement is approved, businesses can be assured that personal data will continue to flow until 2020 while a longer term solution can be put in place.
However, in the event of ‘no deal’, EU law will require additional measures to be put in place by UK companies when personal data is transferred from the European Economic Area (EEA) to the UK, in order to make those transfers lawful.
With less than two months to go until the UK leaves the EU, we recognise that businesses and organisations are concerned. My latest myth busting blog challenges some of the misconceptions about what a ‘no deal’ Brexit will mean for UK companies transferring personal data to and from the EEA.
Myth #1: Brexit will stop me from transferring personal information from the UK to the EU altogether.
In a ‘no deal’ situation the UK Government has already made clear its intention to enable data to flow from the UK to EEA countries without any additional measures. But transfers of personal data from the EEA to the UK will be affected.
The key question around the flow of personal data is whether your data goes only from the UK to the EEA, or is exchanged both ways. If you are unsure, start by mapping your data flows and establish where the personal data you are responsible for is going.
All businesses operating in the EEA should consider whether they need to take action now. Read our guidance pages to establish whether you need to prepare for data transfers in the event of ‘no deal’.
Myth #2: I have regular customers from Europe who come to my family’s hotel every year – I’ll need a special agreement set up to deal with their personal details.
When a customer passes their own personal data directly to a company, it is not considered to be a data transfer and can continue without additional measures.
However, there may be other ways you transfer data, for example a booking agency transferring a list of customers; in this case you may need additional measures. If you are unsure, please check the ICO’s guidance pages, where we have a range of tools and advice to help.
Myth #3: Brexit will only affect data transfers of UK companies actually exporting goods or services to the EU.
Personal data transfers are not about whether your business is exporting or importing goods. You need to assess whether your business involves transfers of personal data, such as names, addresses, emails and financial details, to and from the EEA, and whether this will be lawful in the case of ‘no deal’.
It is the responsibility of every business to know where the personal data it processes is going, and that a proper legal basis for such transfers exists. Our guidance, ‘Leaving the EU – six steps to take’, will help.
Myth #4: My business will be fine because there will be a European Commission adequacy decision on exit day on 29 March 2019 to ensure the uninterrupted exchanges of personal data between the UK and the EU.
‘Adequacy’ is the status given to countries outside the EU whose data protection measures are deemed essentially equivalent to European standards. Companies and organisations operating within countries with adequacy agreements enjoy uninterrupted flow of personal data with the EU. But an assessment of adequacy can only take place once the UK has left the EU. These assessments and negotiations have usually taken many months.
Although it is the ambition of the UK and EU to eventually establish an adequacy agreement, it won’t happen yet. Until an adequacy decision is in place, businesses will need a specific legal transfer arrangement in place for transfers of personal data from the EEA to the UK, such as standard contractual clauses.
Myth #5: Our parent company in Europe keeps all our personal data records centrally so I don’t need to worry about sorting any new agreements.
Don’t presume you are covered by the structure of your company. In the case of ‘no deal’, UK companies transferring personal information to and from companies and organisations based in the EEA will be required by law to put additional measures in place. You will need to assess whether you need to take action.
There are many mechanisms companies can use to legitimise the transfer of personal data with the EEA, and standard contractual clauses are one of them. We have produced an online tool to help organisations put contract terms in place providing the lawful basis for data transfers. Companies that need to act would also benefit from our ‘Leaving the EU – six steps to take’ guidance for more information.
You know your organisation best and will be able to use our guidance to assess if and how you need to prepare. Alternative data transfer mechanisms exist but it can take time to put those arrangements in place.
It is in everyone’s interests that appropriate exchanges of personal data continue whatever the outcome of Brexit. The ICO will carry on co-operating internationally to ensure protections are in place for personal data and organisations have the right advice and guidance.
The Bundeskartellamt has imposed on Facebook far-reaching restrictions in the processing of user data.
According to Facebook's terms and conditions, users have so far only been able to use the social network on the condition that Facebook can collect user data outside of the Facebook website, elsewhere on the internet or in smartphone apps, and assign these data to the user’s Facebook account. All data collected on the Facebook website, by Facebook-owned services such as WhatsApp and Instagram, and on third party websites can be combined and assigned to the Facebook user account.
The authority’s decision covers different data sources:
(i) Facebook-owned services like WhatsApp and Instagram can continue to collect data. However, assigning the data to Facebook user accounts will only be possible subject to the users’ voluntary consent. Where consent is not given, the data must remain with the respective service and cannot be processed in combination with Facebook data.
(ii) Collecting data from third party websites and assigning them to a Facebook user account will also only be possible if users give their voluntary consent.
If consent is not given for data from Facebook-owned services and third party websites, Facebook will have to substantially restrict its collection and combining of data. Facebook is to develop proposals for solutions to this effect.
Andreas Mundt, President of the Bundeskartellamt: “With regard to Facebook’s future data processing policy, we are carrying out what can be seen as an internal divestiture of Facebook’s data. In future, Facebook will no longer be allowed to force its users to agree to the practically unrestricted collection and assigning of non-Facebook data to their Facebook user accounts. The combination of data sources substantially contributed to the fact that Facebook was able to build a unique database for each individual user and thus to gain market power. In future, consumers can prevent Facebook from unrestrictedly collecting and using their data. The previous practice of combining all data in a Facebook user account, practically without any restriction, will now be subject to the voluntary consent given by the users. Voluntary consent means that the use of Facebook’s services must not be subject to the users’ consent to their data being collected and combined in this way. If users do not consent, Facebook may not exclude them from its services and must refrain from collecting and merging data from different sources.”
Facebook is the dominant company in the market for social networks
In December 2018, Facebook had 1.52 billion daily active users and 2.32 billion monthly active users. The company has a dominant position in the German market for social networks. With 23 million daily active users and 32 million monthly active users Facebook has a market share of more than 95% (daily active users) and more than 80% (monthly active users). Its competitor Google+ recently announced it was going to shut down its social network by April 2019. Services like Snapchat, YouTube or Twitter, but also professional networks like LinkedIn and Xing only offer parts of the services of a social network and are thus not to be included in the relevant market. However, even if these services were included in the relevant market, the Facebook group with its subsidiaries Instagram and WhatsApp would still achieve very high market shares that would very likely be indicative of a monopolisation process.
Abuse of market power based on the extent of collecting, using and merging data in a user account
The extent to which Facebook collects, merges and uses data in user accounts constitutes an abuse of a dominant position.
The Bundeskartellamt’s decision is not about how the processing of data generated by using Facebook’s own website is to be assessed under competition law. As these data are allocated to a specific service, users know that they will be collected and used to a certain extent. This is an essential component of a social network and its data-based business model.
However, many users are not aware of this: among other conditions, private use of the network is subject to Facebook being able to collect an almost unlimited amount of any type of user data from third party sources, allocate these to the users’ Facebook accounts and use them for numerous data processing operations. Third party sources are Facebook-owned services such as Instagram or WhatsApp, but also third party websites which include interfaces such as the “Like” or “Share” buttons. Where such visible interfaces are embedded in websites and apps, the data flow to Facebook starts as soon as these are called up or installed. It is not even necessary, for example, to scroll over or click on a “Like” button: calling up a website with an embedded “Like” button will start the data flow. Millions of such interfaces can be encountered on German websites and in apps.
Even if no Facebook symbol is visible to users of a website, user data will flow from many websites to Facebook. This happens, for example, if the website operator uses the “Facebook Analytics” service in the background in order to carry out user analyses.
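The mechanism described above, that a data flow begins the moment a page with an embedded interface is loaded, follows from the fact that a browser fetches every embedded resource URL on page load. The sketch below (hypothetical page content, not the Bundeskartellamt’s methodology) shows how embedded third-party endpoints can be detected in a page’s HTML using only the Python standard library; the host list and sample page are illustrative assumptions.

```python
# A minimal sketch: scan a page's HTML for embedded third-party endpoints
# that a browser would request automatically on page load.
# The watched hosts and the sample page below are hypothetical examples.
from html.parser import HTMLParser
from urllib.parse import urlparse

THIRD_PARTY_HOSTS = {"facebook.com", "facebook.net"}

class EmbedScanner(HTMLParser):
    """Collects src/href URLs whose host belongs to a watched third party."""
    def __init__(self):
        super().__init__()
        self.hits = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value:
                host = urlparse(value).netloc.lower()
                if any(host == h or host.endswith("." + h) for h in THIRD_PARTY_HOSTS):
                    self.hits.append((tag, value))

page = """
<html><body>
  <script src="https://connect.facebook.net/en_US/sdk.js"></script>
  <iframe src="https://www.facebook.com/plugins/like.php?href=example"></iframe>
  <img src="/local/logo.png">
</body></html>
"""
scanner = EmbedScanner()
scanner.feed(page)
print(scanner.hits)  # the two embedded interfaces; loading the page fires both requests
```

Merely rendering such a page triggers the requests, which is why no click on the button is needed for data to flow.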
Andreas Mundt: “By combining data from its own website, company-owned services and the analysis of third party websites, Facebook obtains very detailed profiles of its users and knows what they are doing online.”
European data protection provisions as a standard for examining exploitative abuse
Facebook’s terms of service and the manner and extent to which it collects and uses data are in violation of the European data protection rules to the detriment of users. The Bundeskartellamt closely cooperated with leading data protection authorities in clarifying the data protection issues involved.
In the authority’s assessment, Facebook’s conduct represents above all a so-called exploitative abuse. Dominant companies may not use exploitative practices to the detriment of the opposite side of the market, i.e. in this case the consumers who use Facebook. This applies above all if the exploitative practice also impedes competitors that are not able to amass such a treasure trove of data. This approach based on competition law is not a new one, but corresponds to the case-law of the Federal Court of Justice under which not only excessive prices, but also inappropriate contractual terms and conditions constitute exploitative abuse (so-called exploitative business terms).
Andreas Mundt: “Today data are a decisive factor in competition. In the case of Facebook they are the essential factor for establishing the company’s dominant position. On the one hand there is a service provided to users free of charge. On the other hand, the attractiveness and value of the advertising spaces increase with the amount and detail of user data. It is therefore precisely in the area of data collection and data use where Facebook, as a dominant company, must comply with the rules and laws applicable in Germany and Europe.”
The Bundeskartellamt’s decision is not yet final. Facebook has one month to appeal the decision to the Düsseldorf Higher Regional Court.
On this international Data Privacy Day, and after a year of severe abuses, it is worth reflecting on why it is essential to protect privacy.
Privacy is often cast as an abstract or undervalued concept associated with a desire to keep secret certain aspects of our activities or our personality that we prefer to keep to ourselves.
This is a very narrow outlook. In fact, privacy is nothing less than a prerequisite for freedom: the freedom to live and develop independently as a person, away from the watchful eye of a surveillance state or commercial enterprises, while still participating voluntarily and actively in the regular, day-to-day activities of a modern society.
Data-driven technologies undoubtedly bring great benefits to individuals. They can be fun and convenient but they can also be powerful tools for personal development. They open the door to huge opportunities for improving health care and hold the promise for a future built on artificial intelligence (AI) in which the possibilities seem endless.
On the other hand, these technologies also create new risks. For example, some AI applications, which rely on the massive accumulation of personal data, also put other fundamental rights at risk.
One such risk is the potential for discrimination against people resulting from decisions made by artificial intelligence systems. These systems are generally non-transparent and some have been found to rely on data sets that contain inherent bias, in violation of privacy principles. Such discrimination could potentially result in the restriction of availability of certain services, or result in the exclusion of people from certain aspects of personal, social and professional life, including employment.
In December, AI ethics researchers released the Montreal Declaration for the Responsible Development of Artificial Intelligence – a set of 10 principles for developers and organizations that implement AI, as well as the individuals subject to it.
While this ethical framework marks an important, made-in-Canada development that should help guide this emerging sector, I would agree with the Declaration’s authors who say it is only a first step, and that public authorities now need to act. Governments and legislators in particular have an important role to play in drawing on ethical principles to create an enforceable legal framework for AI that formally requires relevant actors to act fairly and responsibly.
We have also seen in recent years, and in particular in 2018, how privacy breaches can adversely impact the exercise of our democratic rights. The massive accumulation of personal data by certain state and commercial actors makes our democracies vulnerable to manipulation, including by foreign powers. It is unfortunate that the 2019 federal election will take place without any significant strengthening of our personal data protection laws.
In 2019, as the federal government and legislators consider what should be Canada’s national data strategy and laws for the modern age, it is important as a society to remember privacy’s role in protecting other fundamental rights and values, including freedom and democracy. If this happens, we will have drawn the right lessons from 2018.
You check out Facebook to see if one of your friends or someone in your family has done something interesting. Your attention is drawn to a holiday advert. That’s a coincidence, you think, because just before you went to Facebook you had been searching the internet for a holiday destination. But this is no coincidence: dozens of parties are looking over your shoulder to see what you are getting up to on the internet, and this influences which adverts you see and where. PhD Candidate Robbert van Eijk investigated this process and the observance of privacy legislation in European countries. He will defend his doctoral thesis on 29 January.
The technology which facilitates online advertising is called 'real-time bidding' (RTB). When you visit a website, within a few tenths of a second the advert space on that page is ‘auctioned’: on the basis of data saved in cookies it is determined what kind of adverts are most relevant for you. The advertiser who places the highest bid for this kind of advert ‘wins’ and, upon payment of course, is given the space to promote its product. 'The motive to write this doctoral thesis came from the desire to investigate real-time bidding at the intersection between technology and the law’, Van Eijk explains. 'I wanted to find out more about what happens when, as a visitor to a website, you get to see adverts which appear to be tracking your steps. This topic is relevant in light of the application of the General Data Protection Regulation (GDPR) and the current cookie legislation and its rules, which are laid down, among others, in Article 11.7a of the (Dutch) Telecommunications Act.'
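The auction described above can be sketched in a few lines. This is a deliberately simplified illustration, not Van Eijk's model or a real exchange: production RTB uses standardized bid requests (e.g. the OpenRTB protocol) and often second-price rules, and the bidder names and pricing strategies below are hypothetical.

```python
# A simplified sketch of an RTB auction: each demand-side partner prices
# the impression based on cookie-derived interests; the highest bid wins.
# Bidder names and strategies are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Bid:
    bidder: str
    amount_cpm: float  # price per thousand impressions

def run_auction(user_interests, bidders):
    """Collect a bid from every partner and return the highest one."""
    bids = [Bid(name, strategy(user_interests)) for name, strategy in bidders.items()]
    bids = [b for b in bids if b.amount_cpm > 0]
    return max(bids, key=lambda b: b.amount_cpm) if bids else None

# A travel-focused partner bids high when the cookie profile shows
# interest in travel; a generic retail partner bids a flat price.
bidders = {
    "travel_dsp": lambda interests: 2.40 if "travel" in interests else 0.0,
    "retail_dsp": lambda interests: 0.80,
}
winner = run_auction({"travel"}, bidders)
print(winner.bidder)  # the travel partner wins and its advert is displayed
```

This is why the holiday advert is no coincidence: the cookie profile built from earlier browsing determines who bids highest for your attention.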
In his research Van Eijk demonstrates that this privacy component can be measured. 'I combine law and data science in my research by applying mathematical algorithms to the network traffic picked up between the browser and the websites visited. Taking a network-science perspective on the privacy component of RTB is new: it makes it possible to distinguish the networks of partners involved in an advertisement system when an advert is displayed on the website an internet user visits. These advertisement networks partly overlap one another. This new way of observing the process also shows which role partners have in an advertisement network in collecting and sharing the data of website visitors.'
Van Eijk demonstrates in the research that two kinds of algorithms make the mutual collaboration arrangements (the betweenness) transparent. 'These are cluster edge betweenness and node betweenness. The first is a measure based on the shortest paths between the partners in an advertisement network. The algorithm solves an important issue: which RTB partners are clustered in an RTB system? The second solves another important issue: who are the dominant companies in a network of RTB partners? Node betweenness helps us to distinguish between the companies.'
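The two betweenness measures can be illustrated on a toy graph. The sketch below is not Van Eijk's pipeline: it uses the networkx library on a small, entirely hypothetical network of publishers, ad exchanges and demand-side partners (DSPs), with Girvan-Newman clustering (which repeatedly removes the highest-betweenness edge) standing in for cluster edge betweenness, and node betweenness centrality identifying the dominant intermediary.

```python
# A minimal sketch of the two betweenness measures on a toy RTB partner
# graph. Node names and edges are hypothetical; requires networkx.
import networkx as nx
from networkx.algorithms.community import girvan_newman

G = nx.Graph()
G.add_edges_from([
    ("publisher_a", "exchange_1"), ("publisher_b", "exchange_1"),
    ("exchange_1", "dsp_1"), ("exchange_1", "dsp_2"),
    ("dsp_1", "tracker_x"), ("dsp_2", "tracker_x"),
    ("publisher_c", "exchange_2"), ("exchange_2", "dsp_3"),
    ("exchange_2", "tracker_y"), ("dsp_3", "tracker_y"),
    ("exchange_1", "exchange_2"),  # the only bridge between the two groups
])

# 1) Cluster edge betweenness (Girvan-Newman): which RTB partners are
#    clustered together? Removing the highest-betweenness edge (the
#    bridge) splits the graph into its two partner clusters.
clusters = [sorted(c) for c in next(girvan_newman(G))]
print(clusters)

# 2) Node betweenness: which company sits on the most shortest paths
#    between other partners, i.e. is dominant in the network?
centrality = nx.betweenness_centrality(G)
dominant = max(centrality, key=centrality.get)
print(dominant)
```

On this toy graph the first split recovers the two overlapping advertisement networks, and the dominant node is one of the exchanges through which all cross-cluster traffic flows.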
In addition, the researcher provides transparency concerning various differences between European countries. 'I show that a Graph-Based Methodological Approach (GBMA) can indicate the situation concerning differences in consent in 28 European countries; for example, differences in cookie notifications and cookie walls. In Europe we see two consent mechanisms: an implicit consent mechanism (where tracking cookies are already installed before the end user has given consent) and a strict consent mechanism (where the legal requirements are implemented to the extent that no tracking cookies are (allowed to be) installed on the end user's equipment, and no information read from it, when he visits a webpage). In this way, countries with implicit mechanisms can be compared to countries where strict mechanisms predominate. This leads to unequal rights.'
The Commission has adopted today its adequacy decision on Japan, allowing personal data to flow freely between the two economies on the basis of strong protection guarantees.
This is the last step in the procedure launched in September 2018, which included the opinion of the European Data Protection Board (EDPB) and the agreement from a committee composed of representatives of the EU Member States. Together with its equivalent decision adopted today by Japan, it will start applying as of today.
Věra Jourová, Commissioner for Justice, Consumers and Gender Equality said: “This adequacy decision creates the world's largest area of safe data flows. Europeans' data will benefit from high privacy standards when their data is transferred to Japan. Our companies will also benefit from a privileged access to a 127 million consumers' market. Investing in privacy pays off; this arrangement will serve as an example for future partnerships in this key area and help setting global standards.”
The key elements of the adequacy decision
Before the Commission adopted its adequacy decision, Japan put in place additional safeguards to guarantee that data transferred from the EU enjoy protection guarantees in line with European standards. This includes:
A set of rules (Supplementary Rules) that will bridge several differences between the two data protection systems. These additional safeguards will strengthen, for example, the protection of sensitive data, the exercise of individual rights and the conditions under which EU data can be further transferred from Japan to another third country. These Supplementary Rules will be binding on Japanese companies importing data from the EU and enforceable by the Japanese independent data protection authority (PPC) and courts.
Assurances given by the Japanese government to the Commission regarding safeguards concerning the access of Japanese public authorities for criminal law enforcement and national security purposes, ensuring that any such use of personal data would be limited to what is necessary and proportionate and subject to independent oversight and effective redress mechanisms.
A complaint-handling mechanism to investigate and resolve complaints from Europeans regarding access to their data by Japanese public authorities. This new mechanism will be administered and supervised by the Japanese independent data protection authority.
The adequacy decisions also complement the EU-Japan Economic Partnership Agreement, which will enter into force in February 2019. European companies will benefit from free data flows with a key commercial partner, as well as from privileged access to the 127 million Japanese consumers. The EU and Japan affirm that, in the digital era, promoting high privacy and personal data protection standards and facilitating international trade must and can go hand in hand.
The adequacy decision – as well as the equivalent decision on the Japanese side – will start applying as of today.
After two years, a first joint review will be carried out to assess the functioning of the framework. This will cover all aspects of the adequacy finding, including the application of the Supplementary Rules and the assurances for government access to data. Representatives of the European Data Protection Board will participate in the review regarding access to data for law enforcement and national security purposes. Subsequently a review will take place at least every four years.
The mutual adequacy arrangement with Japan is a part of the EU strategy in the field of international data flows and protection, as announced in January 2017 in the Commission's Communication on Exchanging and Protecting Personal Data in a Globalised World.
The EU and Japan successfully concluded their talks on reciprocal adequacy on 17 July 2018 (see press release). They agreed to recognise each other's data protection systems as adequate, allowing personal data to be transferred safely between the EU and Japan.
In July 2017, President Juncker and Prime Minister Abe committed to adopting the adequacy decision, as part of the EU and Japan's shared commitment to promote high data protection standards on the international scene (see statement).
The processing of personal data in the EU is based on the General Data Protection Regulation (GDPR), which provides for different tools to transfer personal data to third countries, including adequacy decisions. The European Commission has the power to determine whether a country outside the EU offers an adequate level of data protection. The European Parliament and the Council can request the European Commission to maintain, amend or withdraw these decisions.
The early fines against American tech firms will reveal another level of guidance from the Data Protection Authorities. First you should read the LAW. Then seek clarity from the official guidance documents. Then, finally, look to the details of the violations. WHAT they fine for is critical information for operations people setting new practices. HOW MUCH they fine is critical for business risk analysis.
The recent fine from CNIL for GOOGLE is based on the finding that "the infringements observed deprive the users of essential guarantees regarding processing operations that can reveal important parts of their private life since they are based on a huge amount of data, a wide variety of services and almost unlimited possible combinations."
GDPR is all about TRANSPARENCY. INFORMATION. CONSENT.
CNIL claims that when setting up an Android device (GOOGLE), consent for processing data must:
be easy to understand,
communicate a legal basis for processing, and
involve a positive action on the part of the data subject.
Specifically, when signing up for an Android account, the stated purpose of processing your personal data is far too generic and described in a "vague manner". The same could be said for communicating "the CATEGORIES of data processing for various purposes". With more than 20 different service offerings, GOOGLE's consent requests are not easily understood. It must be clear for each type of consent which legal basis for processing is being claimed AND how long GOOGLE plans to keep that information.
Therefore the consent GOOGLE believes it has is not considered valid by the CNIL. While it "is possible to configure the display of personalized ads", CNIL determined that users were "not sufficiently informed regarding the extent of the consents requested". GOOGLE's requests were neither "specific" nor "unambiguous". In fact, many of those consents were not easily accessed, and when they were, the boxes were pre-checked, therefore no positive action was required on the part of the data subject.
It is important to note: "the violations are continuous breaches of the Regulation as they are still observed to date. It is not a one-off, time-limited, infringement." Look for GOOGLE to be fined again for these very same activities if these practices are not corrected immediately.
"I got involved with the company more than a decade ago and have taken great pride and joy in the company’s success … until the past few months. Now I am disappointed. I am embarrassed. I am ashamed. With more than 1.7 billion members, Facebook is among the most influential businesses in the world. Whether they like it or not–whether Facebook is a technology company or a media company–the company has a huge impact on politics and social welfare. Every decision that management makes can matter to the lives of real people. Management is responsible for every action. Just as they get credit for every success, they need to be held accountable for failures. Recently, Facebook has done some things that are truly horrible, and I can no longer excuse its behavior."
Nine days before the November 2016 election, I sent the email above to Facebook founder Mark Zuckerberg and chief operating officer Sheryl Sandberg. It was the text for an op-ed I was planning to publish about problems I was seeing on Facebook. Earlier in the year, I noticed a surge of disturbing images, shared by friends, that originated on Facebook Groups ostensibly associated with the Bernie Sanders campaign, but it was impossible to imagine they came from his campaign. I wanted to share with Sandberg and Zuckerberg my fear that bad actors were exploiting Facebook’s architecture and business model to inflict harm on innocent people.
I am a longtime tech investor and evangelist. Tech has been my career and my passion. I had been an early adviser to Zuckerberg–Zuck, to many colleagues and friends–and an early investor in Facebook. I had been a true believer for a decade. My early meetings with Zuck almost always occurred in his office, generally just the two of us, so I had an incomplete picture of the man, but he was always straight with me. I liked Zuck. I liked his team. I was a fan of Facebook. I was one of the people he would call on when confronted with new or challenging issues. Mentoring is fun for me, and Zuck could not have been a better mentee. We talked about stuff that was important to Zuck, where I had useful experience. More often than not, he acted on my counsel.
When I sent that email to Zuck and Sheryl, I assumed that Facebook was a victim. What I learned in the months that followed–about the 2016 election, about the spread of Brexit lies, about data on users being sold to other groups–shocked and disappointed me. It took me a very long time to accept that success had blinded Zuck and Sheryl to the consequences of their actions. I have never had a reason to bite Facebook’s hand. Even at this writing, I still own shares in Facebook. My criticism of the company is a matter of principle, and owning shares is a good way to make that point. I became an activist because I was among the first to see a catastrophe unfolding, and my history with the company made me a credible voice.
This is a story of my journey. It is a story about power. About privilege. About trust, and how it can be abused.
The massive success of Facebook eventually led to catastrophe. The business model depends on advertising, which in turn depends on manipulating the attention of users so they see more ads. One of the best ways to manipulate attention is to appeal to outrage and fear, emotions that increase engagement. Facebook’s algorithms give users what they want, so each person’s News Feed becomes a unique reality, a filter bubble that creates the illusion that most people the user knows believe the same things. Showing users only posts they agree with was good for Facebook’s bottom line, but some research showed it also increased polarization and, as we learned, harmed democracy.
To feed its AI and algorithms, Facebook gathered data anywhere it could. Before long, Facebook was spying on everyone, including people who do not use Facebook. Unfortunately for users, Facebook failed to safeguard that data. Facebook sometimes traded the data to get better business deals. These things increased user count and time on-site, but it took another innovation to make Facebook’s advertising business a giant success.
From late 2012 to 2017, Facebook perfected a new idea–growth hacking–where it experimented constantly with algorithms, new data types and small changes in design, measuring everything. Growth hacking enabled Facebook to monetize its oceans of data so effectively that growth-hacking metrics blocked out all other considerations. In the world of growth hacking, users are a metric, not people. Every action a user took gave Facebook a better understanding of that user–and of that user’s friends–enabling the company to make tiny “improvements” in the user experience every day, which is to say it got better at manipulating the attention of users. Any advertiser could buy access to that attention. The Russians took full advantage. If civic responsibility ever came up in Facebook’s internal conversations, I can see no evidence of it.
The people at Facebook live in their own bubble. Zuck has always believed that connecting everyone on earth was a mission so important that it justified any action necessary to accomplish it. Convinced of the nobility of their mission, Zuck and his employees seem to listen to criticism without changing their behavior. They respond to nearly every problem with the same approach that created the problem in the first place: more AI, more code, more short-term fixes. They do not do this because they are bad people. They do this because success has warped their perception of reality. They cannot imagine that the recent problems could be in any way linked to their designs or business decisions. It would never occur to them to listen to critics–How many billion people have the critics connected?–much less to reconsider the way they do business. As a result, when confronted with evidence that disinformation and fake news had spread over Facebook and may have influenced a British referendum or an election in the U.S., Facebook followed a playbook it had run since its founding: deny, delay, deflect, dissemble. Facebook only came clean when forced to, and revealed as little information as possible. Then it went to Plan B: apologize, and promise to do better.
Thanks to Facebook’s extraordinary success, Zuck’s brand in the tech world combines elements of rock star and cult leader. He is deeply committed to products and not as interested in the rest of the business, which he leaves to Sandberg. According to multiple reports, Zuck is known for micromanaging products and for being decisive. He is the undisputed boss. Zuck’s subordinates study him and have evolved techniques for influencing him. Sheryl Sandberg is brilliant, ambitious and supremely well organized. Given Zuck’s status as the founder, the team at Facebook rarely, if ever, challenged him on the way up and did not do so when bad times arrived. (A Facebook spokesperson replies: “People disagree with Mark all the time.”)
You would think that Facebook’s users would be outraged by the way the platform has been used to undermine democracy, human rights, privacy, public health and innovation. Some are, but nearly 1.5 billion people use Facebook every day. They use it to stay in touch with distant relatives and friends. They like to share their photos and their thoughts. They do not want to believe that the same platform that has become a powerful habit is also responsible for so much harm. Facebook has leveraged our trust of family and friends to build one of the most valuable businesses in the world, but in the process, it has been careless with user data and aggravated the flaws in our democracy while leaving citizens ever less capable of thinking for themselves, knowing whom to trust or acting in their own interest. Bad actors have had a field day exploiting Facebook and Google, leveraging user trust to spread disinformation and hate speech, to suppress voting and to polarize citizens in many countries. They will continue to do so until we, in our role as citizens, reclaim our right to self-determination.
We need to begin to reform Facebook and Big Tech in these key areas:
Security and risk management leaders, including CISOs and privacy professionals, must track maturing privacy regulations to ensure privacy-friendly operations.
Privacy is a business-critical discipline for many organizations, enforced by multiple regulations. Most recently, the European Union’s General Data Protection Regulation (GDPR) has driven a global movement of maturing privacy and data protection laws with stricter requirements.
“Privacy requirements dramatically impact an organization’s strategy, purpose and methods for processing personal data”
“Multiple countries are implementing regulations inspired by the GDPR principles, a movement that is likely to continue into the foreseeable future,” says Bart Willemsen, Senior Director Analyst, Gartner. “These privacy requirements dramatically impact an organization’s strategy, purpose and methods for processing personal data. Furthermore, breaches of these requirements carry financial, reputational and regulatory implications.”
Security and risk management leaders must take note of the following Gartner 2019 predictions for privacy to ensure transparency and customer assurance.
By 2020, the backup and archiving of personal data will represent the largest area of privacy risk for 70% of organizations, up from 10% in 2018
Today, organizations hold backups of large volumes of personal data that is both sensitive and vulnerable with no clear intentions of using it. Because the sensitivity is a constant characteristic and the vulnerability is arguably equivalent, the volume dictates the level of risk, and represents the largest area of privacy risk today. Additionally, privacy regulations have introduced penalties and stiff fines for violations, making the risk of holding unused personal data potentially very expensive.
Over the next two years, organizations that don’t revise data retention policies to reduce the overall data held, and by extension the data that is backed up, will face a huge sanction risk for noncompliance as well as the impacts associated with an eventual data breach. GDPR, for example, introduced regulatory fines of up to 4% of annual global turnover or €20 million, whichever is greater, for noncompliance.
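The GDPR fine ceiling described above is the greater of a fixed amount or a percentage of turnover, which is worth seeing concretely. This is an illustrative sketch only (not legal advice); the function name and figures are assumptions drawn from the Article 83 upper tier cited in the text.

```python
# Illustrative sketch: the GDPR upper-tier fine ceiling is the greater of
# EUR 20 million or 4% of annual global turnover (function name is ours).
def gdpr_max_fine(annual_global_turnover_eur: float) -> float:
    """Return the upper-tier GDPR fine ceiling for a given turnover."""
    return max(20_000_000, 0.04 * annual_global_turnover_eur)

# A company with EUR 1 billion turnover faces a ceiling of EUR 40 million;
# a small firm with EUR 10 million turnover still faces the EUR 20 million floor.
print(gdpr_max_fine(1_000_000_000))  # 40000000.0
print(gdpr_max_fine(10_000_000))     # 20000000.0
```

Note that the "whichever is greater" rule means the fixed floor dominates for all but the largest organizations, which is why holding unused personal data is expensive at any company size.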
By 2022, 75% of public blockchains will suffer “privacy poisoning” — inserted personal data that renders the blockchain noncompliant with privacy laws
Blockchain is a promising technology; however, businesses looking to implement blockchain technology must determine whether the data being used is subject to any privacy laws. For example, public blockchains need an immutable data structure, meaning once data is recorded, it cannot easily be modified or deleted. Privacy rights granted to individuals include the option for customers to invoke the “right to be forgotten.” In many such cases, personal data processed about them must be deleted.
This raises immediate concerns, as entries in a public blockchain poisoned with personal data can't be replaced, anonymized or structurally deleted. Therefore, businesses are unable to reconcile the need to keep records with their obligations to comply with privacy laws. Organizations that implement blockchain systems without managing privacy issues by design will run the risk of storing personal data that can't be deleted without compromising chain integrity.
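The reason deletion compromises chain integrity follows from how blocks are linked: each block stores the hash of its predecessor, so altering or removing one entry invalidates every later link. The minimal sketch below illustrates the principle with a toy hash chain; the function names are ours and do not correspond to any real blockchain API.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's canonical JSON form so any change to its contents
    # (including the personal data it carries) changes the hash.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_chain(entries):
    # Each block records the hash of the previous block ("prev"),
    # starting from an all-zero genesis reference.
    chain, prev = [], "0" * 64
    for data in entries:
        block = {"data": data, "prev": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def is_valid(chain) -> bool:
    # Re-derive each hash and check it matches the next block's "prev" link.
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["order #1", "email: alice@example.com", "order #2"])
assert is_valid(chain)

# "Anonymizing" the personal data in block 1 breaks every subsequent link:
chain[1]["data"] = "REDACTED"
assert not is_valid(chain)
```

This is exactly the tension the prediction describes: the property that makes the ledger trustworthy (tamper-evidence) is the property that blocks erasure.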
By 2023, over 25% of GDPR-driven proof-of-consent implementations will involve blockchain technology, up from less than 2% in 2018
Although GDPR guidelines have been in effect since 25 May 2018, organizations are at different levels of compliance. The pressure to fully comply is increasing, driving organizations in or doing business with the EU to further evaluate their data collection processes. However, most are struggling with integration costs and technologies that can help speed up compliance.
“The application of blockchain to consent management is an emerging scenario at an early stage of experimentation,” says Willemsen. “Various organizations have started exploring the use of blockchain for consent management because the potential immutability and tracking of orthodox blockchains could provide the necessary tracking and auditing required to comply with data protection and privacy legislation.”
We previously reported on Brexit's impact on data protection here and here.
Shortly before Christmas, the draft Data Protection, Privacy and Electronic Communications (Amendments etc) (EU Exit) Regulations 2019 ("Exit Regulations", available here) were laid before Parliament. In this blog, we outline the changes under the Exit Regulations and consider what impact they will have if Brexit leaves the UK in a no-deal scenario.
Preparing for a no-deal Brexit
As a brief reminder of the current legislative landscape in the UK, so long as the UK is in the EU the GDPR has direct effect. The Data Protection Act 2018 ("DPA") must be read alongside the GDPR and has multiple functions. Firstly, it supplements the GDPR and contains derogations that are permitted by the GDPR (such as providing for additional conditions around the processing of special categories of personal data and exemptions in respect of data subject rights). The DPA also applies a broadly equivalent regime to certain data processing activities which fall outside the scope of the GDPR. These relate to, for example, the processing of personal data for immigration purposes and manual unstructured data held by a public authority covered by the Freedom of Information Act 2000. Finally, the DPA covers processing by law enforcement bodies and UK intelligence services.
In the event that the UK leaves the EU without a withdrawal agreement, the GDPR will form part of UK domestic law as 'retained EU law' ("UK GDPR") by virtue of section 3 of the EU (Withdrawal) Act 2018 ("EUWA"). However, in its current form the UK GDPR will not function effectively on the day that the UK leaves the EU ("exit day", currently scheduled for 29 March 2019) due to the numerous references to EU laws and institutions and the fact that the UK will cease to be a Member State of the EU. The Exit Regulations, made under powers conferred in the EUWA, will be required to ensure that the UK's legal framework for data protection continues to function.
If there is a withdrawal agreement, then the Exit Regulations will not come into force and the UK will instead enter the "transition period" where EU law will continue to apply as if the UK were still an EU Member State (subject to certain exceptions).
1. Territorial scope and UK representative
By virtue of the Exit Regulations, the UK GDPR will apply to any controllers and processors based in the UK as well as those outside the UK but which offer goods and services in the UK or monitor the behaviour of UK individuals. The UK GDPR will therefore continue to have extra-territorial effect in the same way as the GDPR currently does.
Many controllers and processors, whether based outside of the EEA or in one of the remaining EEA countries, will have already considered whether they are subject to UK data protection law. However, they will now also need to consider the requirement to appoint a representative in the UK. This will impact both non-EEA controllers and processors who may have already appointed a representative in an EU Member State other than the UK, as well as EEA controllers and processors who don't have a UK presence (and for whom this will represent an entirely new obligation).
Equally, the ICO has also indicated that a UK-based controller or processor that does not have any offices or establishments in the EEA but offers goods or services to or monitors the behaviour of EEA individuals will need to consider appointing a European representative.
2. The merger of the GDPR and "applied GDPR"
As previously mentioned, the DPA currently provides for two separate regimes for general processing: one for processing that falls within the scope of the GDPR and a separate, broadly equivalent regime for processing that falls outside the scope of the GDPR (the so-called "applied GDPR"). Given that EU law will not apply in the UK after Brexit, there will no longer be a need to distinguish between processing within the competence of EU law and that which is governed solely by UK law. The Exit Regulations will therefore merge these two regimes on exit day to create a single, unified regime for all general processing activities.
However, it is worth mentioning that this will not be a complete merger. Under s6 of the EUWA, any question around the interpretation of retained EU law (including the GDPR) must be decided in accordance with EU case law and general principles of EU law as they apply immediately before the UK leaves the EU. However, the Exit Regulations indicate that this may not be the case for processing under the applied GDPR, which governs the processing of personal data in areas where the EU has no competence.
3. Data transfers outside the UK
Currently, any transfer of personal data from the UK to a country outside the EEA may only be made if that country has been granted adequacy status by the EU Commission or by using one of the "appropriate safeguards" described under Article 46 of the GDPR (i.e., the EU Commission's standard contractual clauses or approved BCR).
The Exit Regulations maintain the same restrictions for data transfers outside the UK (whether to a non-EEA country or a remaining member of the EEA) but ensure that data flows are not disrupted on exit day. They specify that certain countries and bodies are considered to have adequate status: these include all of the remaining EEA countries as well as Gibraltar, non-EEA countries which have already been granted adequacy status by the EU Commission or are granted adequacy status prior to exit day, and the EU institutions and bodies. The Exit Regulations also provide that the EU's authorised standard contractual clauses and approved BCR may continue as potential mechanisms for transfers outside the UK, whether in their original form or as modified for a UK-specific context. Finally, the existing derogations under Article 49 of the GDPR will continue to be available.
Once the UK has left the EU, the Secretary of State will have sole authority to grant adequacy status (by way of regulations) in respect of transfers outside the UK and will be required to publish a list of those countries and territories it has deemed adequate. The ICO will continue to authorise BCR and will also be able to issue new UK-only standard clauses.
4. Co-operation and consistency
From exit day, the ICO will no longer be able to take part in the existing co-operation mechanism between EU supervisory authorities. Equally, the European Commission and European Data Protection Board will not have competence over the regulation of personal data in the UK. Unsurprisingly, therefore, Chapter VII - which lays the foundations of the co-operation and consistency mechanism - will be redundant and is removed entirely from the UK GDPR. Article 50, which addresses broader aims of international co-operation and mutual assistance in the area of data protection, will be retained.
Another expected amendment is the removal of provisions addressing the co-operation of Member State courts. Currently, under Article 81 of the GDPR, where proceedings are issued in a UK court against the same controller or processor and in relation to the same subject matter as a case already pending in another EU Member State, the UK court may either decline jurisdiction or suspend those proceedings until the other court has made its determination. Arguably, the removal of these provisions increases the possibility of concurrent claims in the UK and the EU.
5. ICO fines
The Exit Regulations confirm that the ICO will continue to be able to issue the same level of fines as provided under the GDPR. In particular, they state that from exit day the ICO will be able to administer fines of up to £8.7m or 2% of total worldwide annual turnover (whichever is higher) for less serious breaches, and up to £17.5m or 4% (whichever is higher) for more serious breaches.
What will the new year bring in privacy and data protection legislation? Well, to name just a few highlights, we've got a handful of EU member states still needing to pass laws addressing the General Data Protection Regulation, India is in the midst of debate over a new law, Brazil's law will get the enforcement body it has been lacking, and there are talks of a U.S. federal privacy law. But that's just the tip of the iceberg. This week's Privacy Tracker roundup consists of contributions from IAPP members across the globe outlining their expectations (and occasionally their hopes) for privacy legislation in the coming year. With more than 30 contributions, this year's global legislative predictions issue is our most comprehensive yet.
By Pablo Palazzi
The year 2019 will see Argentina reach an important landmark in its history of data protection law. In September 2018, the government sent to Congress a data protection bill based on the EU General Data Protection Regulation. It is now up to Congress (first the Senate, then the House of Representatives) to openly debate the bill and approve it. Argentina was the first country in Latin America to adopt a full-fledged EU-style data protection law and the first country to be considered adequate by the EU. Now, nearly 20 years later, Argentina again has the chance to follow EU law.
By Tim Van Canneyt, CIPP/E
2019 will be another important year for data protection in Belgium. First, we should finally see the appointment of the directors of the new data protection authority. At the moment, Belgium has an interim supervisory authority which is pretty much forced to act on a day-to-day management basis. When the directors of the DPA are appointed in 2019, the authority will be able to adopt its strategic vision, publish more guidelines to help companies and offer better protection to citizens. In addition, we should hopefully see the implementation of the NIS Directive into national law. Furthermore, the Belgian Supreme Court will have to assess the lawfulness of the recent government decision obliging every Belgian resident to provide their fingerprints for inclusion on the ID card's chip. Finally, it will be interesting to follow the class action brought against Facebook by consumer protection body TestAankoop.
By Renato Leite Monteiro, CIPP/E, CIPM
2019: The year of compliance and the Brazilian Data Protection Authority!
What a year! Nobody could have guessed at the beginning of 2018 that Brazil would finally have its own general data protection law, known as the LGPD (I have written this column for the last three years, and I always thought my predictions were only a wild guess!). And then, out of the blue, it was approved in August. However, the president vetoed one of its pillars: the creation of a national data protection authority, even though some provisions would only take effect once the authority was created. This lack of a DPA left the LGPD weak.
Then, at the close of the year, on Dec. 28, 2018, Executive Order n° 869/18 made several alterations to the law. One of the most important was the creation of the Brazilian National Data Protection Authority (ANPD). It also extended the vacatio legis period for the LGPD to 24 months, moving the enforcement date from February 2020 to August 2020. During this period, the ANPD must exercise collaborative and consultative functions, aiming to assist organizations in the process of complying with the new law. With the creation of the DPA, businesses will know whom to turn to and what to look for; they will have a direct channel for communication. The ANPD will provide for a much more stable application of the law and, consequently, more legal certainty, which will probably spur technological and economic development.
Nonetheless, even with the new DPA, enforcement actions by other bodies might continue. The Federal District and Territories Public Prosecution Office has been actively conducting investigations into data breaches and other issues involving personal data. Recently, the Minas Gerais Public Prosecution Office fined a drugstore chain for exchanging customers' personal taxpayer ID numbers for product discounts, which is in fact a common practice in Brazil. The fine totaled R$7,930,801.72 (BRL), the largest related to data protection yet. The case was widely reported in national and international mainstream media. Such actions are likely to continue.
Also, since the LGPD will enter into force in August 2020, 2019 will be the year companies rush to become compliant, a demand that has already created a new niche market. Consulting companies and law firms are investing heavily in personnel and privacy software to take advantage of the escalating demand. As proof, the IAPP has partnered with its first official training center in Brazil: Data Privacy Brasil will provide training courses for both the CIPP/E and CIPM certifications.
Therefore, we can say that 2019 will be an interesting year for the data protection scenario in Brazil!