PIPEDA - 10 Core Principles

 Feb 25, 2019 9:00 AM
by Derek Lackey

PIPEDA is broken down into 10 core principles. These principles set out how a business is required to handle personal information and ensure that best practices are in place and used. Following is an overview of each principle, along with guidance on how it relates to cloud service providers.

1. Accountability 

An organization is required to accept responsibility for any and all personal information under its control. This is accomplished by designating a representative who is accountable and responsible for the organization’s compliance. The business is further required to use various means, including contractual ones, to ensure that third parties handling personal information on its behalf remain compliant. It also has a responsibility to uphold PIPEDA by developing and implementing relevant policies and procedures.

Organizations should include contractual obligations that uphold PIPEDA, including reporting procedures, security policies, non-disclosure, and limitations.

 

2. Identifying Purposes 

An organization is responsible for identifying and documenting its purposes for collecting personal information. It is required to notify its customers, clients, users, visitors, and guests, before using the information, if it intends to use that information for any purpose that was not identified at the time of collection.

Organizations should share their policies and procedures, particularly as they relate to the purpose of collecting personal data.

Businesses should evaluate how they are required to handle personal information and ensure that best practices are in place and used.

 

3. Consent 

An organization is responsible for obtaining the informed consent of individuals when it collects personal information or data, except where such knowledge and consent would be inappropriate.

Organizations should share their policies and outlook regarding how sensitive data is handled.

 

4. Limiting Collection 

An organization is responsible for limiting the collection of personal information to only what is necessary for purposes identified by the organization. All collection methods should be fair and compliant with all applicable laws.

Organizations should follow best practices for securely storing personal information on behalf of the business.

 

5. Limiting Use, Disclosure, and Retention 

An organization is responsible for never using or disclosing personal information for any purpose other than that for which it was collected. It is to retain any personal information collected only for as long as is necessary to fulfill the intent or purpose of the collection.

Organizations should follow best practices for securely handling the destruction or disposal of data that is no longer needed and no longer required to be stored. They should also have policies in place regarding third-party disclosure.

 

6. Accuracy 

An organization is responsible for ensuring that all information is accurate, complete, and up to date. It should be only what is necessary or required for the purpose or intent of use.

Organizations should share their principles on the accuracy of the data that is collected.

 

7. Safeguards 

An organization is responsible for protecting personal information by ensuring that reliable security safeguards that are appropriate for the level of the information’s sensitivity are in place.

Cloud service providers should have policies in place for safeguarding the data they host on an organization’s behalf. Organizations should have access to all security policies describing how their cloud service provider protects the collected data from loss and theft as well as unauthorized access, copying, modification, disclosure, and use.

 

8. Openness 

An organization is responsible for complete transparency regarding its policies and management of collected personal information. The policies should explain in detail how the organization manages personal information, and they should be readily available to both employees and clients.

Organizations should be transparent regarding their data management policies. They should be able to provide a copy of these policies to their clients upon request.

 

9. Individual Access 

An organization is responsible for informing individuals, upon written request, of the existence, use, and disclosure of their personal information. It must also give those individuals access to the information that has been collected, and they must be given the opportunity to challenge its accuracy and have it amended as appropriate.

Service providers should have policies in place that are in line with the organization’s own policies regarding access to information.

 

10. Challenging Compliance 

An organization is responsible for providing a means for individuals to challenge its compliance with the core principles of PIPEDA. The designated individual or team that handles an organization’s compliance will be the point of contact for individuals who are challenging that compliance.

Organizations should have the appropriate policies and procedures in place to receive, investigate, and respond to complaints regarding the way data is handled.

  

Guiding principles for a more transparent consent process in Canada

 Feb 25, 2019 9:00 AM
by Derek Lackey

The Privacy Commissioners of Canada, Alberta and British Columbia have jointly issued guidelines to help organizations obtain meaningful consent from individuals for the collection, use and disclosure of their personal information. The Guidelines came into effect in January 2019 and are now applied by the Commissioners when evaluating organizational conduct.

The Guidelines set out seven guiding principles for meaningful consent:

1. Emphasize key elements

The Guidelines state that organizations must identify for individuals what personal information is being, or may be, collected about them and for what purposes. This must be done with sufficient precision for individuals to meaningfully understand what they are consenting to. Disclosure to third parties must also be clearly explained.

Further, individuals must be able to understand the consequences of the collection, use or disclosure to which they are consenting. Meaningful risks must be identified: the organization should identify any risk whose likelihood falls below the balance of probabilities but is more than a minimal or mere possibility.

 

2. Allow individuals to control the level of detail they get and when

The Guidelines state that information must be provided to individuals in manageable and easily accessible ways, potentially including layers. This is because one person may be comfortable with a quick review of summary information, but others may need a “deeper dive.”

The Guidelines go on to state that the information should remain available to individuals as they engage with the organization, because consent choices are not made just once. At any time, individuals should be able to reconsider whether they wish to maintain or withdraw their consent. Full information should be available to them as they make those decisions.

 

3. Provide individuals with clear options to say "yes" or "no"

The Guidelines emphasize that individuals cannot be required to consent to the collection, use or disclosure of personal information beyond what is necessary to provide the product or service. They must be given a choice about unnecessary collections, uses and disclosures. Previous Commissioner decisions indicate that the term “necessary” does not mean absolutely necessary (i.e. in the sense that it is literally not possible to provide the product/service without collecting, using or disclosing the personal information). Rather, the term “necessary” essentially means “reasonably necessary,” taking all relevant and legitimate factors into account.

For a collection, use or disclosure to be a valid condition of service, it must be integral to the provision of that product or service such that it is required to fulfill its explicitly specified and legitimate purposes.

 

4. Be innovative and creative

The Guidelines say that organizations should design and/or adopt innovative consent processes that can be implemented just-in-time, are specific to the context, and are appropriate to the type of interface used.

While innovation and creativity are clearly worthy goals, it seems unlikely that the Commissioners would chastise an organization or find the organization to be in breach of the consent requirements in their respective legislation simply because the consent was not obtained in an innovative or creative manner. Accordingly, we suggest that organizations see this portion of the Guidelines as an encouragement or “challenge,” but not a strict legal requirement (indeed, the Guidelines note that some statements are intended to communicate “best practices”).

That said, the Guidelines make the fair point that mobile devices present an amplified communication challenge: individuals’ time and attention are at a premium and the medium does not lend itself to lengthy explanations. Accordingly, organizations need to highlight privacy issues at particular decision points in the user experience where people are likely to pay attention in order to obtain informed and meaningful consent from individuals.

 

5. Consider the consumer’s perspective

The Guidelines point out that consent is only valid where the individual can understand that to which they are consenting. Accordingly, an organization’s consent processes must take into account the consumer’s perspective to ensure that the processes are user-friendly and that the information provided is generally understandable from the point of view of the organization’s target audience. In order to do this effectively, the Guidelines suggest that organizations consider:

  • consulting with users and seeking their input when designing a consent process;
  • pilot testing or using focus groups to ensure individuals understand what they are consenting to;
  • involving user interaction/user experience designers in the development of the consent process;
  • consulting with privacy experts and/or regulators when designing a consent process; and/or
  • following an established "best practice" standard or other guideline in developing a consent process.

 

6. Make consent a dynamic and ongoing process

The Guidelines emphasize that informed consent is an ongoing process that evolves as circumstances change. Organizations should not rely on a static moment in time but, rather, treat consent as a dynamic and interactive process. Thus, ensuring the effectiveness of individual consent does not end with the posting of a privacy policy or notice.

For example, when an organization plans to introduce significant changes to its privacy practices, it must notify users and obtain consent prior to the changes coming into effect. The Commissioners recommend that organizations consider periodically reminding individuals about their privacy options and inviting them to review these options.

 

7. Be accountable – stand ready to demonstrate compliance

The Guidelines state that in order for an organization to demonstrate that it has obtained valid consent, it must be able to do more than point to a line buried in a privacy policy. Instead, organizations should be able to demonstrate – either in the case of a complaint from an individual or a practice query from a privacy regulator – that they have a process in place to obtain consent from individuals and that such process is compliant with the consent obligations set out in the applicable legislation.

 

Other considerations

In addition to the seven guiding principles described above, the Guidelines ask organizations to keep in mind the following:

Organizations need to consider the most appropriate form for consent – in other words, organizations must ask themselves: “Should the consent in this particular situation be express or implied?” While express consent is generally required, there are certain circumstances under which implied consent may be adequate.

The purposes for which an organization collects and uses personal information must be appropriate and defined. Consent is not everything.

Individuals have the right to withdraw consent, subject to legal or contractual restrictions. A withdrawal of consent may mean that data held by an organization about an individual should be deleted, depending on the circumstances.

Organizations must obtain consent from a parent or guardian for any individual unable to provide meaningful consent themselves. (The federal commissioner takes the position that, in all but exceptional circumstances, this means anyone under the age of 13).

  

How will personal data continue to flow after Brexit?

 Feb 7, 2019 1:00 PM
by Derek Lackey

Elizabeth Denham's latest blog busts the myths for UK small and medium sized businesses transferring personal data to and from the EEA

Like everyone in the UK right now, we are following the twists and turns of the Brexit negotiations. The sharing of customers’, citizens’ and employees’ personal data between EU member states and the UK is vital for business supply chains to function and public authorities to deliver effective public services.

At the moment personal data flow is unrestricted because the UK is an EU member state. If the proposed EU withdrawal agreement is approved, businesses can be assured that personal data will continue to flow until 2020 while a longer term solution can be put in place. 

However, in the event of ‘no deal’, EU law will require UK companies to put additional measures in place when personal data is transferred from the European Economic Area (EEA) to the UK, in order to make those transfers lawful.

With less than two months to go until the UK leaves the EU, we recognise that businesses and organisations are concerned. My latest myth busting blog challenges some of the misconceptions about what a ‘no deal’ Brexit will mean for UK companies transferring personal data to and from the EEA.

Myth #1: Brexit will stop me from transferring personal information from the UK to the EU altogether.

Fact

In a ‘no deal’ situation the UK Government has already made clear its intention to enable data to flow from the UK to EEA countries without any additional measures. But transfers of personal data from the EEA to the UK will be affected.

The key question around the flow of personal data is whether your data goes from the UK to the EEA only, or is exchanged both ways. If you are unsure, start by mapping your data flows and establish where the personal data you are responsible for is going.

All businesses operating in the EEA should consider whether they need to take action now. Read our guidance pages to establish whether you need to prepare for data transfers in the event of ‘no deal’.

 

Myth #2: I have regular customers from Europe who come to my family’s hotel every year – I’ll need a special agreement set up to deal with their personal details.

Fact

When a customer passes their own personal data directly to a company, it is not considered to be a data transfer and can continue without additional measures.

However, there may be other ways you transfer data; for example, a booking agency transferring a list of customers. In this case you may need additional measures. If you are unsure, please check the ICO’s guidance pages, where we have a range of tools and advice to help.

 

Myth #3: Brexit will only affect data transfers of UK companies actually exporting goods or services to the EU.

Fact

Personal data transfers are not about whether your business is exporting or importing goods. You need to assess whether your business involves transfers of personal data, such as names, addresses, emails and financial details to and from the EEA and if this is going to be lawful in the case of ‘no deal’.

It is the responsibility of every business to know where the personal data it processes is going, and to ensure that a proper legal basis for such transfers exists. Our guidance, ‘Leaving the EU – six steps to take’, will help.

 

Myth #4: My business will be fine because there will be a European Commission adequacy decision on exit day on 29 March 2019 to ensure the uninterrupted exchanges of personal data between the UK and the EU.

Fact

‘Adequacy’ is the term given to countries outside the EU that have data protection measures that are deemed essentially equivalent to European standards. Companies and organisations operating within countries with adequacy agreements enjoy uninterrupted flow of personal data with the EU. But an assessment of adequacy can only take place once the UK has left the EU. These assessments and negotiations have usually taken many months.  

Although it is the ambition of the UK and EU to eventually establish an adequacy agreement, it won’t happen yet. Until an adequacy decision is in place, businesses will need a specific legal transfer arrangement in place for transfers of personal data from the EEA to the UK, such as standard contractual clauses.

 

Myth #5: Our parent company in Europe keeps all our personal data records centrally so I don’t need to worry about sorting any new agreements.

Fact

Don’t presume you are covered by the structure of your company. In the case of ‘no deal’, UK companies transferring personal information to and from companies and organisations based in the EEA will be required by law to put additional measures in place. You will need to assess whether you need to take action.

There are many mechanisms companies can use to legitimise the transfer of personal data with the EEA, and standard contractual clauses are one of them. We have produced an online tool to help organisations put contract terms in place providing the lawful basis for the data transfers. Companies that need to act would also benefit from our ‘Leaving the EU – six steps to take’ guidance for more information.

You know your organisation best and will be able to use our guidance to assess if and how you need to prepare. Alternative data transfer mechanisms exist but it can take time to put those arrangements in place.

It is in everyone’s interests that appropriate exchanges of personal data continue whatever the outcome of Brexit. The ICO will carry on co-operating internationally to ensure protections are in place for personal data and organisations have the right advice and guidance.

 

ICO Blog

 

  

Bundeskartellamt prohibits Facebook from combining user data from different sources

 Feb 7, 2019 10:00 AM
by Derek Lackey

Date of issue: 07.02.2019

The Bundeskartellamt has imposed on Facebook far-reaching restrictions in the processing of user data.

According to Facebook's terms and conditions, users have so far only been able to use the social network under the precondition that Facebook can collect user data outside of the Facebook website, elsewhere on the internet or in smartphone apps, and assign these data to the user’s Facebook account. All data collected on the Facebook website, by Facebook-owned services such as WhatsApp and Instagram, and on third party websites can be combined and assigned to the Facebook user account.

The authority’s decision covers different data sources:

(i)     Facebook-owned services like WhatsApp and Instagram can continue to collect data. However, assigning the data to Facebook user accounts will only be possible subject to the users’ voluntary consent. Where consent is not given, the data must remain with the respective service and cannot be processed in combination with Facebook data.

(ii)    Collecting data from third party websites and assigning them to a Facebook user account will also only be possible if users give their voluntary consent.

If consent is not given for data from Facebook-owned services and third party websites, Facebook will have to substantially restrict its collection and combining of data. Facebook is to develop proposals for solutions to this effect.

Andreas Mundt, President of the Bundeskartellamt: “With regard to Facebook’s future data processing policy, we are carrying out what can be seen as an internal divestiture of Facebook’s data. In future, Facebook will no longer be allowed to force its users to agree to the practically unrestricted collection and assigning of non-Facebook data to their Facebook user accounts. The combination of data sources substantially contributed to the fact that Facebook was able to build a unique database for each individual user and thus to gain market power. In future, consumers can prevent Facebook from unrestrictedly collecting and using their data. The previous practice of combining all data in a Facebook user account, practically without any restriction, will now be subject to the voluntary consent given by the users. Voluntary consent means that the use of Facebook’s services must not be subject to the users’ consent to their data being collected and combined in this way. If users do not consent, Facebook may not exclude them from its services and must refrain from collecting and merging data from different sources.”

Facebook is the dominant company in the market for social networks

In December 2018, Facebook had 1.52 billion daily active users and 2.32 billion monthly active users. The company has a dominant position in the German market for social networks. With 23 million daily active users and 32 million monthly active users Facebook has a market share of more than 95% (daily active users) and more than 80% (monthly active users). Its competitor Google+ recently announced it was going to shut down its social network by April 2019. Services like Snapchat, YouTube or Twitter, but also professional networks like LinkedIn and Xing only offer parts of the services of a social network and are thus not to be included in the relevant market. However, even if these services were included in the relevant market, the Facebook group with its subsidiaries Instagram and WhatsApp would still achieve very high market shares that would very likely be indicative of a monopolisation process.

Andreas Mundt: “As a dominant company Facebook is subject to special obligations under competition law. In the operation of its business model the company must take into account that Facebook users practically cannot switch to other social networks. In view of Facebook’s superior market power, an obligatory tick on the box to agree to the company’s terms of use is not an adequate basis for such intensive data processing. The only choice the user has is either to accept the comprehensive combination of data or to refrain from using the social network. In such a difficult situation the user’s choice cannot be referred to as voluntary consent.”

Abuse of market power based on the extent of collecting, using and merging data in a user account

The extent to which Facebook collects, merges and uses data in user accounts constitutes an abuse of a dominant position.

The Bundeskartellamt’s decision is not about how the processing of data generated by using Facebook’s own website is to be assessed under competition law. As these data are allocated to a specific service users know that they will be collected and used to a certain extent. This is an essential component of a social network and its data-based business model.

However, this is what many users are not aware of: Among other conditions, private use of the network is subject to Facebook being able to collect an almost unlimited amount of any type of user data from third party sources, allocate these to the users’ Facebook accounts and use them for numerous data processing operations. Third-party sources are Facebook-owned services such as Instagram or WhatsApp, but also third party websites which include interfaces such as the “Like” or “Share” buttons. Where such visible interfaces are embedded in websites and apps, the data flow to Facebook starts as soon as these are called up or installed. It is not even necessary, e.g., to scroll over or click on a “Like” button; simply calling up a website with an embedded “Like” button will start the data flow. Millions of such interfaces can be encountered on German websites and apps.

Even if no Facebook symbol is visible to users of a website, user data will flow from many websites to Facebook. This happens, for example, if the website operator uses the “Facebook Analytics” service in the background in order to carry out user analyses.

Andreas Mundt: “By combining data from its own website, company-owned services and the analysis of third party websites, Facebook obtains very detailed profiles of its users and knows what they are doing online.”

European data protection provisions as a standard for examining exploitative abuse

Facebook’s terms of service and the manner and extent to which it collects and uses data are in violation of the European data protection rules to the detriment of users. The Bundeskartellamt closely cooperated with leading data protection authorities in clarifying the data protection issues involved.

In the authority’s assessment, Facebook’s conduct represents above all a so-called exploitative abuse. Dominant companies may not use exploitative practices to the detriment of the opposite side of the market, i.e. in this case the consumers who use Facebook. This applies above all if the exploitative practice also impedes competitors that are not able to amass such a treasure trove of data. This approach based on competition law is not a new one, but corresponds to the case-law of the Federal Court of Justice under which not only excessive prices, but also inappropriate contractual terms and conditions constitute exploitative abuse (so-called exploitative business terms).

Andreas Mundt: “Today data are a decisive factor in competition. In the case of Facebook they are the essential factor for establishing the company’s dominant position. On the one hand there is a service provided to users free of charge. On the other hand, the attractiveness and value of the advertising spaces increase with the amount and detail of user data. It is therefore precisely in the area of data collection and data use where Facebook, as a dominant company, must comply with the rules and laws applicable in Germany and Europe.”

The Bundeskartellamt’s decision is not yet final. Facebook has one month to appeal the decision to the Düsseldorf Higher Regional Court.

Further information on the proceeding can be found in a background paper.

Press release (pdf)

  

Freedom and democracy cannot exist without privacy

 Jan 28, 2019 4:00 PM
by Derek Lackey

On this international Data Privacy Day, and after a year of severe abuses, it is worth reflecting on why it is essential to protect privacy.  

Privacy is often cast as an abstract or undervalued concept associated with a desire to keep secret certain aspects of our activities or our personality that we prefer to keep to ourselves.

This is a very narrow outlook. In fact, privacy is nothing less than a prerequisite for freedom:  the freedom to live and develop independently as a person, away from the watchful eye of a surveillance state or commercial enterprises, while still participating voluntarily and actively in the regular, day-to-day activities of a modern society.

Data-driven technologies undoubtedly bring great benefits to individuals.  They can be fun and convenient but they can also be powerful tools for personal development.  They open the door to huge opportunities for improving health care and hold the promise for a future built on artificial intelligence (AI) in which the possibilities seem endless.

On the other hand, these technologies also create new risks. For example, some AI applications, which rely on the massive accumulation of personal data, also put other fundamental rights at risk.

One such risk is the potential for discrimination against people resulting from decisions made by artificial intelligence systems. These systems are generally non-transparent and some have been found to rely on data sets that contain inherent bias, in violation of privacy principles. Such discrimination could potentially result in the restriction of availability of certain services, or result in the exclusion of people from certain aspects of personal, social and professional life, including employment.

In December, AI ethics researchers released the Montreal Declaration for the Responsible Development of Artificial Intelligence – a set of 10 principles for developers and organizations that implement AI, as well as the individuals subject to it.

While this ethical framework marks an important, made-in-Canada development that should help guide this emerging sector, I would agree with the Declaration’s authors who say it is only a first step, and that public authorities now need to act.  Governments and legislators in particular have an important role to play in drawing on ethical principles to create an enforceable legal framework for AI that formally requires relevant actors to act fairly and responsibly.

We have also seen in recent years, and in particular in 2018, how privacy breaches can adversely impact the exercise of our democratic rights. The massive accumulation of personal data by certain state and commercial actors makes our democracies vulnerable to manipulation, including by foreign powers. It is unfortunate that the 2019 federal election will take place without any significant strengthening of our personal data protection laws.

In 2019, as the federal government and legislators consider what should be Canada’s national data strategy and laws for the modern age, it is important as a society to remember privacy’s role in protecting other fundamental rights and values, including freedom and democracy. If this happens, we will have drawn the right lessons from 2018.

Link to original 

 

  

PhD Candidate Robbert van Eijk measures privacy component in online advertising

 Jan 28, 2019 10:00 AM
by Derek Lackey

You check Facebook to see if one of your friends or someone in your family has done something interesting. Your attention is drawn to a holiday advert. That’s a coincidence, you think, because just before you went to Facebook you had been searching the internet for a holiday destination. But this is no coincidence: dozens of parties are looking over your shoulder to see what you are getting up to on the internet, and this influences which adverts you get to see and where. PhD Candidate Robbert van Eijk investigated this process and the observance of privacy legislation in European countries. He will defend his doctoral thesis on 29 January.

The technology which facilitates online advertising is called 'real-time bidding' (RTB). When you visit a website, the advert space on that page is ‘auctioned’ within a few tenths of a second: on the basis of data saved in cookies, it is determined what kind of adverts are most relevant for you. The provider who places the highest bid for this kind of advert ‘wins’ and is given - upon payment of course - the space to promote his product. 'The motivation for this doctoral thesis came from the desire to investigate real-time bidding at the intersection between technology and the law’, Van Eijk explains. 'I wanted to find out more about what happens when, as a visitor to a website, you get to see adverts which appear to be tracking your steps. This topic is relevant in light of the application of the General Data Protection Regulation (GDPR) and the current cookie legislation, whose rules are laid down, among other places, in Article 11.7a of the (Dutch) Telecommunications Act.'
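
To make the auction mechanics concrete, here is a minimal sketch in Python of the "highest bid wins" flow described above. It assumes a simplified first-price model; the bidder names, cookie profile and prices are invented for illustration, and real RTB exchanges involve bid requests, user-ID syncing and more elaborate pricing rules.

```python
# Minimal sketch of the RTB flow described above. The bidder names,
# cookie profile and first-price ("highest bid wins") rule are
# illustrative assumptions, not a real exchange's protocol.
from dataclasses import dataclass

@dataclass
class Bid:
    bidder: str    # advertising partner placing the bid
    amount: float  # price offered for this single ad impression

def run_auction(profile: dict, bidders: dict) -> Bid:
    """Auction one ad slot: each partner prices the impression from the
    visitor's cookie profile; the highest bid wins."""
    bids = [Bid(name, price(profile)) for name, price in bidders.items()]
    return max(bids, key=lambda b: b.amount)

# Each bidder values the impression based on interests saved in cookies.
bidders = {
    "travel-ads.example":  lambda p: 2.50 if "holiday" in p["interests"] else 0.10,
    "generic-ads.example": lambda p: 0.50,
}

visitor = {"interests": ["holiday", "news"]}  # derived from tracking cookies
winner = run_auction(visitor, bidders)
print(winner.bidder, winner.amount)  # travel-ads.example 2.5
# The travel advertiser wins because the cookie profile shows a recent
# holiday search -- which is why the advert seems to "follow" the visitor.
```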

In his research Van Eijk demonstrates that this privacy component can be measured. 'I combine law and data science in my research by applying mathematical algorithms to the network traffic captured between the browser and the websites visited. Taking a network-science perspective on the privacy component of RTB is new: it makes it possible to distinguish the networks of partners involved in an advertisement system when an advert is displayed on the website an internet user visits. These advertisement networks partly overlap one another. This new way of observing the process also shows which role partners play in an advertisement network in collecting and sharing the data of website visitors.'

Dominant companies

Van Eijk demonstrates in the research that two kinds of algorithms enable transparency in the mutual collaboration arrangements (the betweenness). 'These are cluster edge betweenness and node betweenness. The first is a measure based on the shortest paths between the partners in an advertisement network. The algorithm answers an important question: which RTB partners are clustered together in an RTB system? The second answers another important question: who are the dominant companies in a network of RTB partners? Node betweenness helps us to distinguish between the companies.'
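
As a rough illustration of these two measures, the sketch below uses the networkx library on an invented toy graph of RTB partners. It is not the thesis's actual data or implementation; the Girvan-Newman routine is used as a standard stand-in for clustering by edge betweenness.

```python
# Toy illustration of the two betweenness measures quoted above, using
# networkx on an invented graph of RTB partners (an edge means two
# partners exchange visitor data). Names are hypothetical.
import networkx as nx
from networkx.algorithms.community import girvan_newman

G = nx.Graph([
    ("publisher", "ssp"), ("ssp", "exchange"),
    ("exchange", "dsp1"), ("exchange", "dsp2"),
    ("dsp1", "advertiserA"), ("dsp2", "advertiserB"),
])

# Node betweenness: which partner sits on the most shortest paths,
# i.e. acts as the dominant intermediary in the network?
centrality = nx.betweenness_centrality(G)
print(max(centrality, key=centrality.get))  # -> "exchange"

# Edge betweenness drives Girvan-Newman clustering: removing the
# highest-betweenness edges splits the network into partner clusters.
first_split = next(girvan_newman(G))
print([sorted(c) for c in first_split])
```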

In addition, the researcher provides transparency concerning various differences between European countries. 'I show that a Graph-Based Methodological Approach (GBMA) can indicate the situation concerning differences in permission in 28 European countries; for example, differences in cookie notifications and cookie walls. In Europe we see two mechanisms in relation to permission: an implicit permission mechanism (where tracking cookies are already installed before the end user has given permission) and a strict permission mechanism (where the legal requirements are implemented to the extent that no tracking cookies are, or are allowed to be, installed on the end user's equipment, nor information read from it, when they visit a webpage). In this way, countries with implicit mechanisms can be compared to countries where strict mechanisms predominate. This leads to unequal rights.'

Through his research...

Read The Full Article

 

 

  

European Commission adopts adequacy decision on Japan

 Jan 23, 2019 12:00 PM
by Derek Lackey

The Commission has adopted today its adequacy decision on Japan, allowing personal data to flow freely between the two economies on the basis of strong protection guarantees.

This is the last step in the procedure launched in September 2018, which included the opinion of the European Data Protection Board (EDPB) and the agreement from a committee composed of representatives of the EU Member States. Together with its equivalent decision adopted today by Japan, it will start applying as of today.

Věra Jourová, Commissioner for Justice, Consumers and Gender Equality said: “This adequacy decision creates the world's largest area of safe data flows. Europeans' data will benefit from high privacy standards when their data is transferred to Japan. Our companies will also benefit from a privileged access to a 127 million consumers' market. Investing in privacy pays off; this arrangement will serve as an example for future partnerships in this key area and help setting global standards.” 

The key elements of the adequacy decision

Before the Commission adopted its adequacy decision, Japan put in place additional safeguards to guarantee that data transferred from the EU enjoy protection guarantees in line with European standards. These include:

  A set of rules (Supplementary Rules) that will bridge several differences between the two data protection systems. These additional safeguards will strengthen, for example, the protection of sensitive data, the exercise of individual rights and the conditions under which EU data can be further transferred from Japan to another third country. These Supplementary Rules will be binding on Japanese companies importing data from the EU and enforceable by the Japanese independent data protection authority (PPC) and courts.

  The Japanese government also gave assurances to the Commission regarding safeguards concerning the access of Japanese public authorities for criminal law enforcement and national security purposes, ensuring that any such use of personal data would be limited to what is necessary and proportionate and subject to independent oversight and effective redress mechanisms.

  A complaint-handling mechanism to investigate and resolve complaints from Europeans regarding access to their data by Japanese public authorities. This new mechanism will be administered and supervised by the Japanese independent data protection authority. 

The adequacy decision also complements the EU-Japan Economic Partnership Agreement, which will enter into force in February 2019. European companies will benefit from free data flows with a key commercial partner, as well as from privileged access to the 127 million Japanese consumers. The EU and Japan affirm that, in the digital era, promoting high privacy and personal data protection standards and facilitating international trade must and can go hand in hand.

Next steps

The adequacy decision – as well as the equivalent decision on the Japanese side – will start applying as of today.

After two years, a first joint review will be carried out to assess the functioning of the framework. This will cover all aspects of the adequacy finding, including the application of the Supplementary Rules and the assurances for government access to data. Representatives of the European Data Protection Board will participate in the review regarding access to data for law enforcement and national security purposes. Subsequently, a review will take place at least every four years.

Background

The mutual adequacy arrangement with Japan is a part of the EU strategy in the field of international data flows and protection, as announced in January 2017 in the Commission's Communication on Exchanging and Protecting Personal Data in a Globalised World.

The EU and Japan successfully concluded their talks on reciprocal adequacy on 17 July 2018 (see press release). They agreed to recognise each other's data protection systems as adequate, allowing personal data to be transferred safely between the EU and Japan.

In July 2017, President Juncker and Prime Minister Abe committed to adopting the adequacy decision, as part of the EU and Japan's shared commitment to promote high data protection standards on the international scene (see statement).

The processing of personal data in the EU is based on the General Data Protection Regulation (GDPR), which provides for different tools to transfer personal data to third countries, including adequacy decisions. The European Commission has the power to determine whether a country outside the EU offers an adequate level of data protection. The European Parliament and the Council can request the European Commission to maintain, amend or withdraw these decisions. 

For More Information 

 

  

WTF!?! CNIL Fines GOOGLE $57 Million under GDPR

 Jan 22, 2019 10:00 AM
by Derek Lackey

The early fines to American tech firms will reveal another level of guidance from the Data Protection Authorities. First you should read the LAW. Then seek clarity from the official guidance documents. Then finally, look to the details of the violations. WHAT they fine for is critical information for operations people to set new practices. HOW MUCH they fine for is critical for business risk analysis.

The recent fine from CNIL for GOOGLE is based on the finding that "the infringements observed deprive the users of essential guarantees regarding processing operations that can reveal important parts of their private life since they are based on a huge amount of data, a wide variety of services and almost unlimited possible combinations."

GDPR is all about TRANSPARENCY. INFORMATION. CONSENT.

CNIL claims that when a user sets up an Android device, GOOGLE's consents for processing data must be:

  • clear, 
  • unambiguous, 
  • easy to understand, 
  • easily accessible, 
  • communicate a legal basis for processing,  and 
  • must show a positive action on the part of the data subject.

Specifically, when signing up for an Android account, the stated purposes for processing your personal data are far too generic and described in a "vague manner". The same could be said for communicating "the CATEGORIES of data processing for various purposes". With more than 20 different service offerings, GOOGLE's consent requests are not easily understood. It must be clear for each type of consent which legal basis for processing is being claimed AND how long GOOGLE plans to keep that information.

Therefore the consent GOOGLE believes it has is not considered valid by the CNIL. While it "is possible to configure the display of personalized ads", CNIL determined that users were "not sufficiently informed regarding the extent of the consents requested". The consent obtained was neither "specific" nor "unambiguous". In fact, many of those consents were not easily accessed, and when they were, the boxes were pre-checked, therefore no positive action was required on the part of the data subject.

Finally, in order to set up an account, the user must agree to the Terms of Service - "I agree to Google's Terms of Service" as well as "I agree to the processing of my information as described above and further explained in the Privacy Policy". GDPR requires "specific consent". It is only valid if it is "provided distinctly for each purpose."
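
For readers implementing consent flows, the sketch below shows one hypothetical way to record consent distinctly for each purpose, together with the legal basis and retention period the CNIL decision says must be made clear. The schema and field names are illustrative assumptions, not CNIL's or Google's format.

```python
# Hypothetical per-purpose consent record reflecting the requirements
# described above: one record per purpose, an explicit legal basis and
# retention period, and a positive action with no pre-ticked boxes.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ConsentRecord:
    purpose: str          # one record per processing purpose; no bundling
    legal_basis: str      # e.g. "consent" (GDPR Art. 6(1)(a))
    retention: timedelta  # how long data is kept for this purpose
    given_at: datetime    # timestamp of the data subject's positive action
    pre_checked: bool = False  # must stay False: pre-ticked boxes are invalid

ads = ConsentRecord(
    purpose="ads personalisation",
    legal_basis="consent",
    retention=timedelta(days=395),
    given_at=datetime(2019, 1, 21, 10, 0),
)
print(ads.purpose, ads.legal_basis, ads.retention.days)
```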

It is important to note: "the violations are continuous breaches of the Regulation as they are still observed to date. It is not a one-off, time-limited, infringement." Look for GOOGLE to be fined again for these very same activities if these practices are not corrected immediately.

 

  

I Mentored Mark Zuckerberg. I Loved Facebook. But I Can't Stay Silent About What's Happening.

 Jan 19, 2019 8:00 AM
by Derek Lackey

Written by Roger McNamee in TIME magazine:

"I am really sad about Facebook.

I got involved with the company more than a decade ago and have taken great pride and joy in the company’s success … until the past few months. Now I am disappointed. I am embarrassed. I am ashamed. With more than 1.7 billion members, Facebook is among the most influential businesses in the world. Whether they like it or not–whether Facebook is a technology company or a media company–the company has a huge impact on politics and social welfare. Every decision that management makes can matter to the lives of real people. Management is responsible for every action. Just as they get credit for every success, they need to be held accountable for failures. Recently, Facebook has done some things that are truly horrible, and I can no longer excuse its behavior."

 

Nine days before the November 2016 election, I sent the email above to Facebook founder Mark Zuckerberg and chief operating officer Sheryl Sandberg. It was the text for an op-ed I was planning to publish about problems I was seeing on Facebook. Earlier in the year, I noticed a surge of disturbing images, shared by friends, that originated on Facebook Groups ostensibly associated with the Bernie Sanders campaign, but it was impossible to imagine they came from his campaign. I wanted to share with Sandberg and Zuckerberg my fear that bad actors were exploiting Facebook’s architecture and business model to inflict harm on innocent people.

I am a longtime tech investor and evangelist. Tech has been my career and my passion. I had been an early adviser to Zuckerberg–Zuck, to many colleagues and friends–and an early investor in Facebook. I had been a true believer for a decade. My early meetings with Zuck almost always occurred in his office, generally just the two of us, so I had an incomplete picture of the man, but he was always straight with me. I liked Zuck. I liked his team. I was a fan of Facebook. I was one of the people he would call on when confronted with new or challenging issues. Mentoring is fun for me, and Zuck could not have been a better mentee. We talked about stuff that was important to Zuck, where I had useful experience. More often than not, he acted on my counsel.

When I sent that email to Zuck and Sheryl, I assumed that Facebook was a victim. What I learned in the months that followed–about the 2016 election, about the spread of Brexit lies, about data on users being sold to other groups–shocked and disappointed me. It took me a very long time to accept that success had blinded Zuck and Sheryl to the consequences of their actions. I have never had a reason to bite Facebook’s hand. Even at this writing, I still own shares in Facebook. My criticism of the company is a matter of principle, and owning shares is a good way to make that point. I became an activist because I was among the first to see a catastrophe unfolding, and my history with the company made me a credible voice.

This is a story of my journey. It is a story about power. About privilege. About trust, and how it can be abused.

The massive success of Facebook eventually led to catastrophe. The business model depends on advertising, which in turn depends on manipulating the attention of users so they see more ads. One of the best ways to manipulate attention is to appeal to outrage and fear, emotions that increase engagement. Facebook’s algorithms give users what they want, so each person’s News Feed becomes a unique reality, a filter bubble that creates the illusion that most people the user knows believe the same things. Showing users only posts they agree with was good for Facebook’s bottom line, but some research showed it also increased polarization and, as we learned, harmed democracy.

To feed its AI and algorithms, Facebook gathered data anywhere it could. Before long, Facebook was spying on everyone, including people who do not use Facebook. Unfortunately for users, Facebook failed to safeguard that data. Facebook sometimes traded the data to get better business deals. These things increased user count and time on-site, but it took another innovation to make Facebook’s advertising business a giant success.

From late 2012 to 2017, Facebook perfected a new idea–growth hacking–where it experimented constantly with algorithms, new data types and small changes in design, measuring everything. Growth hacking enabled Facebook to monetize its oceans of data so effectively that growth-hacking metrics blocked out all other considerations. In the world of growth hacking, users are a metric, not people. Every action a user took gave Facebook a better understanding of that user–and of that user’s friends–enabling the company to make tiny “improvements” in the user experience every day, which is to say it got better at manipulating the attention of users. Any advertiser could buy access to that attention. The Russians took full advantage. If civic responsibility ever came up in Facebook’s internal conversations, I can see no evidence of it.

The people at Facebook live in their own bubble. Zuck has always believed that connecting everyone on earth was a mission so important that it justified any action necessary to accomplish it. Convinced of the nobility of their mission, Zuck and his employees seem to listen to criticism without changing their behavior. They respond to nearly every problem with the same approach that created the problem in the first place: more AI, more code, more short-term fixes. They do not do this because they are bad people. They do this because success has warped their perception of reality. They cannot imagine that the recent problems could be in any way linked to their designs or business decisions. It would never occur to them to listen to critics–How many billion people have the critics connected?–much less to reconsider the way they do business. As a result, when confronted with evidence that disinformation and fake news had spread over Facebook and may have influenced a British referendum or an election in the U.S., Facebook followed a playbook it had run since its founding: deny, delay, deflect, dissemble. Facebook only came clean when forced to, and revealed as little information as possible. Then it went to Plan B: apologize, and promise to do better.

Thanks to Facebook’s extraordinary success, Zuck’s brand in the tech world combines elements of rock star and cult leader. He is deeply committed to products and not as interested in the rest of the business, which he leaves to Sandberg. According to multiple reports, Zuck is known for micromanaging products and for being decisive. He is the undisputed boss. Zuck’s subordinates study him and have evolved techniques for influencing him. Sheryl Sandberg is brilliant, ambitious and supremely well organized. Given Zuck’s status as the founder, the team at Facebook rarely, if ever, challenged him on the way up and did not do so when bad times arrived. (A Facebook spokesperson replies: “People disagree with Mark all the time.”)

You would think that Facebook’s users would be outraged by the way the platform has been used to undermine democracy, human rights, privacy, public health and innovation. Some are, but nearly 1.5 billion people use Facebook every day. They use it to stay in touch with distant relatives and friends. They like to share their photos and their thoughts. They do not want to believe that the same platform that has become a powerful habit is also responsible for so much harm. Facebook has leveraged our trust of family and friends to build one of the most valuable businesses in the world, but in the process, it has been careless with user data and aggravated the flaws in our democracy while leaving citizens ever less capable of thinking for themselves, knowing whom to trust or acting in their own interest. Bad actors have had a field day exploiting Facebook and Google, leveraging user trust to spread disinformation and hate speech, to suppress voting and to polarize citizens in many countries. They will continue to do so until we, in our role as citizens, reclaim our right to self-determination.

We need to begin to reform Facebook and Big Tech in these key areas:

Read The Full Article in TIME

 

  

Gartner Predicts for the Future of Privacy 2019

 Jan 18, 2019 12:00 PM
by Derek Lackey

Security and risk management leaders, including CISOs and privacy professionals, must recognize maturing privacy regulations to ensure a privacy-friendly operation.

Privacy is a business-critical discipline for many organizations, enforced by multiple regulations. Most recently, the European Union’s General Data Protection Regulation (GDPR) has driven a global movement of maturing privacy and data protection laws with stricter requirements.

“Multiple countries are implementing regulations inspired by the GDPR principles, a movement that is likely to continue into the foreseeable future,” says Bart Willemsen, Senior Director Analyst, Gartner. “These privacy requirements dramatically impact an organization’s strategy, purpose and methods for processing personal data. Furthermore, breaches of these requirements carry financial, reputational and regulatory implications.”

Security and risk management leaders must take note of the following Gartner 2019 predictions for privacy to ensure transparency and customer assurance.

 

By 2020, the backup and archiving of personal data will represent the largest area of privacy risk for 70% of organizations, up from 10% in 2018

 

Today, organizations hold backups of large volumes of personal data that is both sensitive and vulnerable with no clear intentions of using it. Because the sensitivity is a constant characteristic and the vulnerability is arguably equivalent, the volume dictates the level of risk, and represents the largest area of privacy risk today. Additionally, privacy regulations have introduced penalties and stiff fines for violations, making the risk of holding unused personal data potentially very expensive.

Over the next two years, organizations that don’t revise data retention policies to reduce the overall data held, and by extension the data that is backed up, will face a huge sanction risk for noncompliance as well as the impacts associated with an eventual data breach. GDPR, for example, introduced regulatory fines of up to 4% of annual global turnover or €20 million, whichever is greater, for noncompliance.
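
As a quick worked example of that ceiling, the snippet below computes the maximum fine under the "4% of annual global turnover or €20 million, whichever is greater" rule; the turnover figures are illustrative.

```python
# The GDPR ceiling quoted above: 4% of annual global turnover or
# EUR 20 million, whichever is greater. Turnover figures are illustrative.
def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    return max(0.04 * annual_global_turnover_eur, 20_000_000)

print(max_gdpr_fine(3_000_000_000))  # 120000000.0 -> the 4% branch applies
print(max_gdpr_fine(100_000_000))    # 20000000    -> the EUR 20M floor applies
```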

 

By 2022, 75% of public blockchains will suffer “privacy poisoning” — inserted personal data that renders the blockchain noncompliant with privacy laws

 

Blockchain is a promising technology; however, businesses looking to implement blockchain technology must determine whether the data being used is subject to any privacy laws. For example, public blockchains need an immutable data structure, meaning once data is recorded, it cannot easily be modified or deleted. Privacy rights granted to individuals include the option for customers to invoke the “right to be forgotten.” In many such cases, personal data processed about them must be deleted.

This raises immediate concerns, as entries in a public blockchain poisoned with personal data can’t be replaced, anonymized or structurally deleted. Therefore, businesses are unable to reconcile their need to keep records with their obligation to comply with privacy laws. Organizations that implement blockchain systems without managing privacy issues by design will run the risk of storing personal data that can’t be deleted without compromising chain integrity.
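
To see why a poisoned entry cannot simply be erased, the toy hash chain below shows that editing one block's payload invalidates every subsequent block. It is a deliberately minimal model; real blockchains layer consensus, signatures and Merkle trees on top of the same hash-linking principle.

```python
# Toy hash chain illustrating "privacy poisoning": erasing a record
# that contains personal data invalidates every later block.
import hashlib

def block_hash(prev_hash: str, payload: str) -> str:
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

# Build a three-block chain; the middle block is "poisoned" with
# personal data that a data subject later asks to have deleted.
payloads = ["tx: a->b 5", "name: Jane Doe <jane@example.com>", "tx: b->c 2"]
chain, prev = [], "0" * 64  # genesis hash
for p in payloads:
    h = block_hash(prev, p)
    chain.append({"payload": p, "prev": prev, "hash": h})
    prev = h

# Attempt a "right to be forgotten" erasure of the middle payload:
chain[1]["payload"] = "[erased]"

# Re-verification fails from the edited block onward: stored hashes no
# longer match the recomputed chain, so integrity is broken.
prev = "0" * 64
for b in chain:
    ok = b["prev"] == prev and block_hash(prev, b["payload"]) == b["hash"]
    prev = block_hash(prev, b["payload"])
    print(f'{b["payload"]:35} {"valid" if ok else "INVALID"}')
```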

By 2023, over 25% of GDPR-driven proof-of-consent implementations will involve blockchain technology, up from less than 2% in 2018

Although the GDPR has been in effect since 25 May 2018, organizations are at different levels of compliance. The pressure to fully comply is increasing, driving organizations in or doing business with the EU to further evaluate their data collection processes. However, most are struggling with integration costs and technologies that can help speed up compliance.

“The application of blockchain to consent management is an emerging scenario at an early stage of experimentation,” says Willemsen. “Various organizations have started exploring the use of blockchain for consent management because the potential immutability and tracking of orthodox blockchains could provide the necessary tracking and auditing required to comply with data protection and privacy legislation.”

Read The Full Report at Gartner (membership required)

 

  