CCPA Explained: Part 3 - The Right to Opt-Out and Offering Financial Incentives

 Oct 12, 2019 12:00 PM
by Derek Lackey

Still in Article 2 - Notices to Consumers, this chapter deals with the notices required to inform consumers of their right to opt-out, as well as the notices required when offering a financial incentive in exchange for the sale of their data. While it is legal to offer such incentives, certain notices and transparency are required.

§ 999.306. Notice of Right to Opt-Out of Sale of Personal Information

A Purpose and General Principles
(1) to inform consumers of their right to restrict the sale of their data
(2) an easy-to-understand opt-out must be offered. The notice must:
a. use plain, straightforward language - no legal jargon.
b. use a format that draws attention to the notice
c. present it in the languages the site normally uses.
d. be accessible to consumers with disabilities.

B A business that sells personal information must provide a clear notice of the right to opt-out and make it easy to do so.
(1) An obvious "Do Not Sell My Info" button or link, with the categories of information that could be sold linked to that section of the privacy policy.
(2) Develop an offline method of informing consumers as well.
(3) The offline method must meet all of the same obligations.

C Language to include in your opt-out notice
(1) a description of the consumer's right to opt-out
(2) a link to the webpage where they can opt-out
(3) clear and simple instructions on how to opt-out
(4) create an audit trail of opt-outs (a minimal sketch follows below)
(5) a link to the Privacy Policy
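
Neither the statute nor the regulations prescribe what an audit trail must look like, but a business should be able to prove when and how each opt-out was received and honored. A minimal sketch in Python; the file name and fields are purely illustrative, not prescribed by the CCPA:

# Minimal sketch of an opt-out audit trail. The CCPA regulations do not
# prescribe a format; the file name and fields below are illustrative.
import csv
from datetime import datetime, timezone

AUDIT_LOG = "opt_out_audit_log.csv"  # hypothetical location

def record_opt_out(consumer_id: str, channel: str) -> None:
    """Append one opt-out event so the business can later demonstrate
    when and how the request was received and honored."""
    with open(AUDIT_LOG, "a", newline="") as f:
        csv.writer(f).writerow([
            consumer_id,
            channel,                                 # e.g. "webform", "phone", "mail"
            datetime.now(timezone.utc).isoformat(),  # when the request was received
        ])

record_opt_out("customer-12345", "webform")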

D Exemptions
(1) if your organization does not and will not sell personal information, and
(2) this is stated in the Privacy Policy. It is important to note here that a consumer whose personal data is collected during a period when a "Do Not Sell My Info" notice is NOT posted is deemed to have opted out of their data being sold.

E Opt-Out Button or logo
(1) an example will be provided
(2) this Button or logo should be linked to the Privacy Policy and the webpage that captures their preference.

 

§ 999.307. Notice of Financial Incentive

A   Purpose and General Principles
(1) the purpose of this notice is to explain the value proposition to the consumer so they can make an informed decision.
(2) shall be easy to read and understand
a. use plain, straightforward language - no technical or legal jargon.
b. use a format that draws the consumer's attention.
c. make it available in the languages used on the website.
d. be accessible to consumers with disabilities.
e. place the notice where people can read it prior to opting in, both online and offline.
(3) the description can be a link to the section of the Privacy Policy that describes these incentives and the value proposition (in plain language).

B Elements to include in the Notice of Financial Incentive
(1) a clear, succinct summary of the offer
(2) details of the material terms, including the categories of personal information affected.
(3) easy directions on how to opt-in, now or in the future.
(4) inform consumers of their right to withdraw from the incentive and how to do so.
(5) an explanation of why the financial incentive is permitted under the CCPA, including:
a. a good faith estimate of the value of the consumer's data that forms the basis for the transaction.
b. how that value is calculated (an illustrative calculation follows below).
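
The regulations do not mandate any particular formula for the value of a consumer's data; one commonly discussed approach divides the revenue attributable to consumer data by the number of consumers. A rough sketch, with entirely made-up figures:

# Rough, illustrative sketch of one way a good faith estimate might be
# produced (a revenue-per-consumer method). The figures are made up and
# the CCPA regulations do not prescribe this or any other formula.
annual_revenue_from_data = 2_500_000.00  # e.g. revenue attributable to consumer data
number_of_consumers = 500_000

value_per_consumer = annual_revenue_from_data / number_of_consumers
print(f"Estimated annual value of one consumer's data: ${value_per_consumer:.2f}")
# -> Estimated annual value of one consumer's data: $5.00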

 

In our next chapter we will provide what you need to know to craft a Privacy Notice in this new data-protection environment.

 

CCPA Explained: Article 1 General Provisions - Part 1 - Scope and Definitions

CCPA Explained: Article 2 - Notices to Consumers - Part 2 - Notice at Collection

 

 

  

CCPA Explained: Article 2 - Notices to Consumers - Part 2 - Notice at Collection

 Oct 12, 2019 11:00 AM
by Derek Lackey

We write these chapters to assist organizations in effectively and efficiently IMPLEMENTING new practices designed to take care of your prospects and customers while meeting the standards set by this new law. It begins with understanding your obligations under the new CCPA.

Article 2. Notices to Consumers

§ 999.305 Notice at Collection of Personal Information

A. Purpose and General Principles
1. Inform consumers of the categories of Personal Information collected and why you are collecting it.
2. Easy to read and understandable to an average consumer.
a) Use plain, straightforward language and avoid technical or legal jargon
b) a format that draws attention
c) in languages the business usually uses
d) accessible to consumers with disabilities
3. PI cannot be used for any purpose other than the stated purpose. If scope is revised, new permission must be requested.
4. Cannot collect more categories of PI than you are disclosing.
5. No notice. No collection.

B. Include the following in its notice at collection
(1) a list of the categories about to be collected, written in a way that can be understood.
(2) each category and a statement of how it will be used.
(3) if the business sells personal information, a "Do Not Sell My Info" link must be added
(4) a link to the privacy policy

C. Notice at collection may be a link to the section of the privacy policy that contains the required information.

D. If you did not collect the Personal Information yourself, you should:
(1) Contact the person with a notice to opt-out, or
(2) Contact the source of the information to:
a. confirm a Notice at Collection was executed originally.
b. obtain a written description of how Notice was provided, with an example of the notice. This should be kept for at least 2 years.

This is easy to grasp and, with paragraph C, very easy to implement. All you need to do is add these categories and use statements to your privacy policy and place a link to them BEFORE the fields on your webform. For every category of data it collects, an organization should write a paragraph in its privacy policy describing why it is being collected and how it will be used (a sketch of this approach follows below). As we can see where this is all heading, add how long you intend to keep the data and you will be ready for the next wave of data protection laws, which you can bet will follow the CCPA shortly.
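
One way to keep those paragraphs consistent is to maintain the category-to-purpose mapping as structured data and render the privacy-policy section from it. A minimal sketch; the categories, purposes, and retention periods below are hypothetical examples, not CCPA-mandated wording:

# Minimal sketch: keep each category of PI alongside its purpose and
# retention period, then render the privacy policy section from it.
# Categories, purposes, and retention periods are hypothetical examples.
CATEGORIES = [
    {"category": "identifiers (name, email address)",
     "purpose": "to create and manage your account",
     "retention": "for the life of the account plus 2 years"},
    {"category": "commercial purchasing information",
     "purpose": "to process orders and handle returns",
     "retention": "for 7 years, for tax and audit purposes"},
]

def render_privacy_policy_section(categories: list) -> str:
    paragraphs = [
        f"We collect {c['category']} {c['purpose']}. We retain it {c['retention']}."
        for c in categories
    ]
    return "\n\n".join(paragraphs)

print(render_privacy_policy_section(CATEGORIES))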

You may also wish to read:

CCPA Explained: Article 1 General Provisions - Part 1 - Scope and Definitions

 

  

CCPA Explained: Article 1 General Provisions - Part 1 - Scope and Definitions

 Oct 12, 2019 10:00 AM
by Derek Lackey

I remember one of my teachers in grade school asking the question, "How do you eat an elephant?" His answer, "One bite at a time," was a revelation to me at the time. Looking back over my life, it is mind-boggling how often that single statement has provided access to action in my life. A big, intimidating topic like the California Consumer Privacy Act (2018), complete with its significant fines and private right of action, requires an access to action, and I suggest it be considered one section at a time.

The Attorney General of California released the text of the Proposed Regulations on Oct 11, 2019, just in time for organizations to completely re-think their management of data (collection, storage, sharing, selling, etc.) prior to the law coming into force on Jan 1, 2020. Let's break the 24 pages down and add some comments/editorial to help organizations head down this road.

The first section is definitions. An obvious omission is the definition of Personal Information in the context of California law. Clarip describes it as follows:

The California Consumer Privacy Act protects the personal information of California residents, referred to by the privacy law as consumers. To clarify to businesses precisely what they need to protect, the CCPA contains a definition of personal information. However, the breadth of the definition means that businesses need to protect a broad range of information and may need to make judgment calls about information on the periphery.

The definition of personal information in the CCPA includes 11 categories, which can be summarized as:

1) Identifiers
2) Select Information in Customer Records
3) Legally Protected Characteristics
4) Commercial Purchasing Information
5) Biometric Information
6) Internet or Network Activity
7) Geolocation
8) Information Typically Detected by the Senses
9) Employment Information
10) Education Information
11) Inferences from Above Used to Profile

However, this is really only the beginning as the definition of personal information is not limited to these categories. Personal information includes anything that identifies, relates to, describes, is capable of being associated with, or could be reasonably linked, directly or indirectly, with a particular consumer or household.

There are several exclusions from the definition of personal information. One of them is that it does not include deidentified or aggregate consumer information. It also does not include publicly available information, which is defined as information lawfully made available from federal, state, or local government records.

Some of the specific data identified as personal information:

1) Identifiers:

– real name,
– alias,
– postal address,
– unique personal identifier,
– online identifier,
– Internet Protocol address,
– email address,
– account name,
– social security number,
– driver’s license number,
– passport number, or
– other similar identifiers.

2) Information in Customer Records:

– name,
– signature,
– social security number,
– physical characteristics or description,
– address,
– telephone number,
– passport number,
– driver’s license or state identification card number,
– insurance policy number,
– education,
– employment,
– employment history,
– bank account number,
– credit card number,
– debit card number,
– financial information,
– medical information,
– health insurance information.

6) Internet Activity:

– browsing history,
– search history,
– information regarding a consumer’s interaction with an Internet Web site, application, or advertisement.

8) Information Detected by the Senses (this moniker is not listed in the CCPA but seems to fit well):

– audio,
– electronic,
– visual,
– thermal,
– olfactory,
– similar information.

The Attorney General has the power to add additional categories of personal information in regulations issued after the solicitation of broad public participation in order to address changes in technology, data collection practices, obstacles to implementation, and privacy concerns.

Areas of Controversy

There are at least several major areas of controversy around the definition of personal information at the moment.

The first is that there has been testimony in legislative hearings from proponents of the bill after the CCPA passed that IP address alone is not considered as personal information. This has been in response to criticism that there really is no true carve out for small businesses since many businesses with less than $25 million in revenue will hit the data collection threshold. If a consumer’s IP address alone is considered personal information under the bill, then a website server which is collecting that information in its server logs will only need 137 visitors a day from California to reach the CCPA threshold and become a covered business. Many people read the CCPA as covering all collection of IP addresses, but some proponents have testified otherwise.

The second is that the personal information of employees is included within the scope of the law. AB-25 is a proposed bill in the California Assembly that would remove employees from the definition of consumer and end the controversy. We will be closely following the bill to see whether it is approved by the California legislature. In the latest version of AB-25 up for a possible Senate floor vote, employee data is excluded from the CCPA for a one year period so that consumer advocates and business groups can determine how to appropriately protect employee data privacy.

Another area of controversy has been the inclusion of households within the definition of personal information. Businesses have publicly noted the problem with this for the right to access and right to delete and asked for guidance during the rulemaking process from the California Attorney General.

Now let's review Article 1, including more definitions:

 

Article 1 - General Provisions

§ 999.300. Title and Scope

A The California Consumer Privacy Act (CCPA) and these regulations do not limit any other rights a consumer may have.

B Violations of these regulations could result in fines or a private right of action, as described.

 

§ 999.301. Definitions
In addition to the definitions set forth in Civil Code section 1798.140, for purposes of these regulations:


(a) “Affirmative authorization” means an action that demonstrates the intentional decision by the consumer to opt-in to the sale of personal information. Within the context of a parent or guardian acting on behalf of a child under 13, it means that the parent or guardian has provided consent to the sale of the child’s personal information in accordance with the methods set forth in section 999.330. For consumers 13 years and older, it is demonstrated through a two-step process whereby the consumer shall first, clearly request to opt-in and then second, separately confirm their choice to opt-in.


(b) “Attorney General” means the California Attorney General or any officer or employee of the California Department of Justice acting under the authority of the California Attorney General.


(c) “Authorized agent” means a natural person or a business entity registered with the Secretary of State that a consumer has authorized to act on their behalf subject to the requirements set forth in section 999.326.


(d) “Categories of sources” means types of entities from which a business collects personal information about consumers, including but not limited to the consumer directly, government entities from which public records are obtained, and consumer data resellers.


(e) “Categories of third parties” means types of entities that do not collect personal information directly from consumers, including but not limited to advertising networks, internet service providers, data analytics providers, government entities, operating systems and platforms, social networks, and consumer data resellers.


(f) “CCPA” means the California Consumer Privacy Act of 2018, Civil Code sections 1798.100 et seq.


(g) “Financial incentive” means a program, benefit, or other offering, including payments to consumers as compensation, for the disclosure, deletion, or sale of personal information.


(h) “Household” means a person or group of people occupying a single dwelling.


(i) “Notice at collection” means the notice given by a business to a consumer at or before the time a business collects personal information from the consumer as required by Civil Code section 1798.100(b) and specified in these regulations.


(j) “Notice of right to opt-out” means the notice given by a business informing consumers of their right to opt-out of the sale of their personal information as required by Civil Code sections 1798.120 and 1798.135 and specified in these regulations.


(k) “Notice of financial incentive” means the notice given by a business explaining each financial incentive or price or service difference subject to Civil Code section 1798.125(b) as required by that section and specified in these regulations.


(l) “Price or service difference” means (1) any difference in the price or rate charged for any goods or services to any consumer, including through the use of discounts, financial payments, or other benefits or penalties; or (2) any difference in the level or quality of any goods or services offered to any consumer, including denial of goods or services to the consumer.


(m) “Privacy policy” means the policy referred to in Civil Code section 1798.130(a)(5), and means the statement that a business shall make available to consumers describing the business’s practices, both online and offline, regarding the collection, use, disclosure, and sale of personal information and of the rights of consumers regarding their own personal information.


(n) “Request to know” means a consumer request that a business disclose personal information that it has about the consumer pursuant to Civil Code sections 1798.100, 1798.110, or 1798.115. It includes a request for any or all of the following:
(1) Specific pieces of personal information that a business has about the consumer;
(2) Categories of personal information it has collected about the consumer;
(3) Categories of sources from which the personal information is collected;
(4) Categories of personal information that the business sold or disclosed for a business purpose about the consumer;
(5) Categories of third parties to whom the personal information was sold or disclosed for a business purpose; and
(6) The business or commercial purpose for collecting or selling personal information.


(o) “Request to delete” means a consumer request that a business delete personal information about the consumer that the business has collected from the consumer, pursuant to Civil Code section 1798.105.


(p) “Request to opt-out” means a consumer request that a business not sell the consumer’s personal information to third parties, pursuant to Civil Code section 1798.120(a).


(q) “Request to opt-in” means the affirmative authorization that the business may sell personal information about the consumer required by Civil Code section 1798.120(c) by a parent or guardian of a consumer less than 13 years of age, or by a consumer who had previously opted out of the sale of their personal information.


(r) “Third-party identity verification service” means a security process offered by an independent third party who verifies the identity of the consumer making a request to the business. Third-party verification services are subject to the requirements set forth in Article 4 regarding requests to know and requests to delete.


(s) “Typical consumer” means a natural person residing in the United States.


(t) “URL” stands for Uniform Resource Locator and refers to the web address of a specific website.


(u) “Verify” means to determine that the consumer making a request to know or request to delete is the consumer about whom the business has collected information.


Note: Authority cited: Section 1798.185, Civil Code. Reference: Sections 1798.100-1798.199, Civil Code.

While not exhaustive, this sets a context for the language used in the CCPA Proposed Regulations.

Our next chapter deals with Article 2 - Notices to Consumers - Part 2 - Notice at Collection.

 

  

The Canadian Centre for Cyber Security Releases Baseline Controls

 Jul 4, 2019 7:00 AM
by Derek Lackey

The Canadian government’s Canadian Centre for Cyber Security (“CCCS”) has released Baseline cybersecurity controls for small and medium organizations in an effort to help small and medium-sized businesses improve their cybersecurity practices and their overall resiliency to cybersecurity threats.

Small and medium-sized businesses face a range of cyber threats in the form of cybercrime, often with immediate financial or privacy implications, such as compromised customer, financial and proprietary information. With this in mind, the baseline controls provide a condensed set of advice, guidance and security controls on how such businesses can maximize the effectiveness of their cybersecurity investments. In this bulletin, we review key features of the guidance.

Assessment

Cybersecurity ostensibly depends on a multitude of factors. As such, businesses are encouraged to apply the controls that are most appropriate for their circumstances and that best suit their cybersecurity needs. Businesses should conduct a five-step assessment to appraise those needs:

1) Size assessment: The proposed baseline controls are intended for businesses with fewer than 499 employees.

2) Information systems and assets that fall within the scope of cyber-protection: Information systems and assets refer to all computers, servers, network devices, mobile devices, information systems, applications, services and cloud applications that are used to conduct business. It is strongly recommended that all information systems and assets be considered within the scope for baseline controls.

3) Value of information systems and assets: The injury level related to the confidentiality, integrity and availability of information systems and/or data should be assessed. The baseline controls are intended for situations where all potential injuries are at or below a medium threat level.

4) Primary threat of concern: If a business operates in a strategic sector of the economy or faces more advanced cybersecurity threat levels, it should invest in more comprehensive cybersecurity measures.

5) Primary cybersecurity investment levels: Someone in a leadership role who is specifically responsible for IT security should be identified. Then, the business’s financial spending levels as well as internal staffing levels for IT and IT security should be assessed.

Identification of Baseline Controls

Once the five-part assessment has been conducted, a business is in a position to determine which baseline controls are relevant to implement to reduce the risk of cybersecurity incidents and data breaches.

The CCCS proposes the following thirteen baseline controls:

1) Develop an incident response plan: Businesses should have a basic plan for how to respond to incidents of varying severity, namely a written incident response plan (both in soft and hard copy) that details who is responsible for handling incidents including any relevant contact information for communicating to external parties, stakeholders, and regulators. Businesses should also consider purchasing a cybersecurity insurance policy and implementing a security information and event management system.

2) Automatically patch operating systems and applications: Businesses should enable automatic updates for all software and hardware or establish full vulnerability and patch management solutions, as well as conduct risk assessment activities.

3) Enable security software: Businesses should configure and enable anti-virus and anti-malware software that updates and scans automatically, on all connected devices.

4) Securely configure devices: Businesses should implement secure configurations for all devices, namely changing default passwords, turning off unnecessary features, and enabling relevant security features.

5) Use strong user authentication: Businesses should implement two-factor authentication wherever possible and require it for important accounts, namely financial accounts, system administrators, cloud administration, privileged users, and senior executives. Businesses should also have clear policies on password protection and only enforce password changes on suspicion or evidence of compromise.
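
As one concrete illustration of the two-factor step, time-based one-time passwords (TOTP) are a common second factor. A minimal sketch using the third-party pyotp library; the account and issuer names are hypothetical, and secret handling is simplified for illustration:

# Minimal sketch of time-based one-time-password (TOTP) verification as a
# second factor, using the third-party pyotp library (pip install pyotp).
# In practice the secret is provisioned once per user and stored securely.
import pyotp

secret = pyotp.random_base32()  # shared with the user's authenticator app
totp = pyotp.TOTP(secret)

# URI the user scans into an authenticator app (names are hypothetical):
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCo"))

# At login, after the password check succeeds:
code = totp.now()  # in reality, typed in by the user from their app
print("Second factor accepted:", totp.verify(code))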

6) Provide employee awareness training: As a first line of defence, businesses should train employees on basic security practices and focus on practical and easily implementable measures, such as effective password policies, identification of malicious emails and links, approved software, appropriate usage of the Internet, and safe use of social media.

7) Backup and encrypt data: Businesses should backup systems that contain essential business information at a secure offsite location and ensure that recovery mechanisms operate as expected. Backups should be stored in an encrypted state, with restricted access for testing or restoration activities only.
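
As an illustration of storing backups in an encrypted state, here is a minimal sketch using the third-party cryptography library; the file names are hypothetical and key management is deliberately simplified:

# Minimal sketch of encrypting a backup at rest, using the third-party
# "cryptography" library (pip install cryptography). File names are
# hypothetical; in practice the key must be stored separately from the
# backups, with access restricted to testing or restoration activities.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # store this key securely and separately
fernet = Fernet(key)

with open("customer_db_dump.sql", "rb") as f:
    plaintext = f.read()

with open("customer_db_dump.sql.enc", "wb") as f:
    f.write(fernet.encrypt(plaintext))

# Recovery check - ensure the backup can actually be restored:
with open("customer_db_dump.sql.enc", "rb") as f:
    assert fernet.decrypt(f.read()) == plaintext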

8) Secure mobility: Businesses should implement a mobility management solution for all mobile devices and decide on an ownership model for mobile devices. Whether mobile devices are business or employee-owned, there should be a separation between work and personal data, including apps, email accounts, and contacts. Businesses should ensure that employees download mobile apps from a list of trusted sources and require that all mobile devices store sensitive information in a secure, encrypted state. Businesses should also require users to disable automatic connections to open networks, avoid connecting to unknown Wi-Fi networks, limit the use of Bluetooth and NFC for the exchange of sensitive information, and use corporate Wi-Fi or cellular data network connectivity rather than public Wi-Fi.

9) Establish basic perimeter defences: Businesses should have a dedicated firewall, with a DNS firewall for outbound DNS requests to the Internet, and activate software firewalls on devices within their networks. Businesses should require secure connectivity to all corporate IT resources and VPN connectivity with two-factor authentication for all remote access into corporate networks. Only secure Wi-Fi, never public Wi-Fi networks, should be used. Businesses should follow the Payment Card Industry Data Security Standard for all point-of-sale terminals and financial systems and further isolate these systems from the Internet and should ensure the implementation of DMARC on all of the business’s email services.

10) Secure cloud and outsourced IT services: Businesses should require that all their cloud service providers comply with Trust Service Principles or, alternatively, evaluate their comfort level with how and where their outsourced IT providers handle, access, store, and use their sensitive information. Businesses should also ensure that their IT infrastructure and users communicate securely with the cloud.

11) Secure websites:..

Read The Full Article

 

  

GDPR: Anonymisation and pseudonymisation

 Mar 26, 2019 9:00 AM
by Derek Lackey

European citizens have a fundamental right to privacy, so it is important for organisations which process personal data to be cognisant of this right. When carried out effectively, anonymisation and pseudonymisation can be used to protect the privacy rights of individual data subjects and allow organisations to balance this right to privacy against their legitimate goals.

Read this guide to find out about using these techniques.

Key points

Irreversibly and effectively anonymised data is not “personal data” and the data protection principles do not have to be complied with in respect of such data. Pseudonymised data remains personal data.

If the source data is not deleted at the same time that the ‘anonymised’ data is prepared, where the source data could be used to identify an individual from the ‘anonymised’ data, the data may be considered only ‘pseudonymised’ and thus still ‘personal data’, subject to the relevant Data Protection legislation.

Data can be considered “anonymised” from a data protection perspective when data subjects are not identified or identifiable, having regard to all methods reasonably likely to be used by the data controller or any other person to identify the data subject, directly or indirectly.

What is personal data?

Personal data means any information relating to an identified or identifiable individual. This individual is also known as a ‘data subject’.

An identifiable individual is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that individual.

The definition above reflects the wording of both the General Data Protection Regulation (GDPR) and the Irish Data Protection Act 2018. Accordingly, data about living individuals which has been anonymised such that it is not possible to identify the data subject from the data or from the data together with certain other information, is not governed by the GDPR or the Data Protection Act 2018, and is not subject to the same restrictions on processing as personal data.

What is anonymisation?

"Anonymisation" of data means processing it with the aim of irreversibly preventing the identification of the individual to whom it relates. Data can be considered effectively and sufficiently anonymised if it does not relate to an identified or identifiable natural person or where it has been rendered anonymous in such a manner that the data subject is not or no longer identifiable.

There is a lot of research currently underway in the area of anonymisation, and knowledge about the effectiveness of various anonymisation techniques is constantly changing. It is therefore impossible to say that a particular technique will be 100% effective in protecting the identity of data subjects, but this guidance is intended to assist with identifying and minimising the risks to data subjects when anonymising data. In the case of anonymisation, by 'identification' we mean the possibility of retrieving a person's name and/or address, but also the potential identifiability by singling out, linkability and inference.

What is pseudonymisation?

"Pseudonymisation" of data means replacing any identifying characteristics of data with a pseudonym, or, in other words, a value which does not allow the data subject to be directly identified.

The GDPR and the Data Protection Act 2018 define pseudonymisation as the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that (a) such additional information is kept separately, and (b) it is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable individual.

Although pseudonymisation has many uses, it should be distinguished from anonymisation, as it only provides a limited protection for the identity of data subjects in many cases as it still allows identification using indirect means. Where a pseudonym is used, it is often possible to identify the data subject by analysing the underlying or related data.
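
To make the distinction concrete, here is a minimal sketch of one common pseudonymisation technique, keyed hashing, where the key is the "additional information" that must be kept separately. The field names and values are illustrative:

# Minimal sketch of pseudonymisation by keyed hashing (HMAC-SHA256).
# The key is the "additional information" that must be kept separately
# under technical and organisational safeguards. Anyone holding the key
# (or the source data) can re-link pseudonyms to individuals, so the
# output remains pseudonymised personal data, NOT anonymised data.
import hashlib
import hmac
import secrets

PSEUDONYMISATION_KEY = secrets.token_bytes(32)  # store apart from the dataset

def pseudonymise(identifier: str) -> str:
    return hmac.new(PSEUDONYMISATION_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "mary@example.com", "purchase": "kettle", "amount": 39.99}
record["email"] = pseudonymise(record["email"])  # same input -> same pseudonym
print(record)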

Uses of anonymisation and pseudonymisation

Data which has been irreversibly anonymised ceases to be “personal data”, and processing of such data does not require compliance with the Data Protection law. In principle, this means that organisations could use it for purposes beyond those for which it was originally obtained, and that it could be kept indefinitely.

In some cases, it is not possible to effectively anonymise data, either because of the nature or context of the data, or because of the use for which the data is collected and retained. Even in these circumstances, organisations might want to use anonymisation or pseudonymisation techniques:

As part of a "privacy by design" strategy to provide improved protection for data subjects.

As part of a risk minimisation strategy when sharing data with data processers or other data controllers.

To avoid inadvertent data breaches occurring when your staff is accessing personal data.

As part of a “data minimisation” strategy aimed at minimising the risks of a data breach for data subjects.

Even where anonymisation is undertaken, it does retain some inherent risk. As mentioned, pseudonymisation is not the same as anonymisation and should not be equated as such – the information remains personal data. Even where effective anonymisation takes place, other regulations may apply – for instance the ePrivacy directive applies in many regards to information rather than personal data. And finally, even where effective anonymisation can be carried out, any release of a dataset may have residual privacy implications, and the expectations of the concerned individuals should be accounted for.

Identification – the test under the Data Protection Acts

In order to determine whether data has been sufficiently anonymised to bring it outside the scope of Data Protection law, it is necessary to consider the second element of the definition, relating to the identification of the data subject, in greater detail.

The Article 29 Working Party on Data Protection (now replaced by the European Data Protection Board, or 'EDPB') has previously suggested the following test for when an individual is identified or identifiable:

“In general terms, a natural person can be considered as “identified” when, within a group of persons, he or she is "distinguished" from all other members of the group. Accordingly, the natural person is “identifiable” when, although the person has not been identified yet, it is possible to do it…”

Thus, a person does not have to be named in order to be identified. If there is other information enabling an individual to be connected to data about them, which could not be about someone else in the group, they may still “be identified”.


In determining whether a person can be distinguished from others in a group, it is important to consider what “identifiers” are contained in the information held. Identifiers are pieces of information which are closely connected with a particular individual, which could be used to single them out. Such identifiers can be “direct”, like the data subject’s name or image, or “indirect”, like their phone number, email address or a unique identifier assigned to the data subject by the data controller. As a result, removing direct identifiers does not render data sets anonymous. Data which are not identifiers may also be used to provide context which may lead to identification or distinction between users – e.g. a series of data about their location, or perhaps their shopping or internet search history. Indeed, these kinds of data series on their own may be sufficient to distinguish and identify an individual.

However, just because data about individuals contains identifiers does not mean that the data subjects will be identified or identifiable. This will depend on contextual factors. Information about a child’s year of birth might allow them to be singled out in their family, but would probably not allow them to be distinguished from the rest of their school class, if there are a large number of other children with the same year of birth. Similarly, data about the family name of an individual may distinguish them from others in their workplace, but might not allow them to be identified in the general population if the family name is common.

On the other hand, data which appears to be stripped of any personal identifiers can sometimes be linked to an individual when combined with other information, which is available publicly or to a particular individual or organisation. This occurs particularly in cases where there are unique combinations of connected data. In the above case, for instance, if there was only one child with a particular year of birth in the class, then that information alone allows identification. A simple check for this kind of singling out is sketched below.
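
A minimal sketch of that check: count how many records share each combination of quasi-identifiers, and flag combinations that occur only once. This is the intuition behind k-anonymity; the column names and data are illustrative:

# Minimal sketch of a singling-out check: count how many records share each
# combination of quasi-identifiers. A combination seen only once singles out
# an individual (k = 1). Column names and data are illustrative.
from collections import Counter

records = [
    {"year_of_birth": 2011, "family_name": "Smith", "town": "Cork"},
    {"year_of_birth": 2011, "family_name": "Byrne", "town": "Cork"},
    {"year_of_birth": 2012, "family_name": "Smith", "town": "Cork"},
    {"year_of_birth": 2011, "family_name": "Smith", "town": "Cork"},
]

quasi_identifiers = ("year_of_birth", "family_name", "town")
counts = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)

for combination, k in counts.items():
    if k == 1:
        print("Singles out one individual:", combination)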

Identifiability and anonymisation

The concept of “identifiability” is closely linked with the process of anonymisation. Even if all of the direct identifiers are stripped out of a data set, meaning that individuals are not “identified” in the data, the data will still be personal data if it is possible to link any data subjects to information in the data set relating to them.

Recital 26 of the GDPR provides that when determining whether an individual is identifiable or not “[…] account should be taken of all the means reasonably likely to be used, such as singling out, either by the controller or by another person to identify the natural person directly or indirectly” and that when determining whether means are ‘reasonably likely to be used’ to identify the individual “[…] account should be taken of all objective factors, such as the costs of and the amount of time required for identification, taking into consideration the available technology at the time of the processing and technological developments.” Recital 26 also clarifies that the principles of data protection do not apply to anonymous information.

Therefore, to determine when data is rendered anonymous for data protection purposes, you have to examine what means and available datasets might be used to re-identify a data subject. Organisations don’t have to be able to prove that it is impossible for any data subject to be identified in order for an anonymisation technique to be considered successful. Rather, if it can be shown that it is unlikely that a data subject will be identified given the circumstances of the individual case and the state of technology, the data can be considered anonymous.

Some different ways that re-identification can take place are discussed below.

If the source data is not deleted at the time of the anonymisation, the data controller who retains both the source data and the anonymised data will normally be in a position to identify individuals from the anonymised data. In such cases, the anonymised data must still be considered to be personal data while in the hands of the data controller, unless the anonymisation process would prevent the singling out of an individual data subject, even to someone in possession of the source data.

Identification risks...

Read The Full Article from the Irish DPA

 

  

PIPEDA - 10 Core Principles

 Feb 25, 2019 9:00 AM
by Derek Lackey

PIPEDA is broken down into 10 core principles. They reflect and evaluate how a business is required to handle personal information and to ensure that best practices are in place and used. Following is an overview of each of these principles, as well as some guidance on how they relate to cloud service providers.

1. Accountability 

An organization is required to accept responsibility for any and all personal information that is under its control. This is accomplished by designating a representative who is accountable and responsible for the organization’s compliance. The business is further required to use various means, including contractual, to ensure that it remains compliant with third parties. It also has a responsibility to uphold PIPEDA by developing and implementing relevant policies and procedures.

Organizations should include contractual obligations that uphold PIPEDA including reporting procedures, security policies, non-disclosure, and limitations.

 

2. Identifying Purposes 

An organization is responsible for identifying and documenting their purpose for collecting personal information. They are required to notify their customers, clients, users, visitors, and guests if they intend to use the information for any purpose that was not identified at the time of collection prior to using that information.

Organizations should share their outlook on policies and procedures, particularly as it relates to the purpose of collecting personal data.

Businesses should evaluate their requirements to handle personal information and to ensure that best practices are in place and used.

 

3. Consent 

An organization is responsible for obtaining the informed consent of individuals when it is engaged in the practice of collection of personal information or data, except where such knowledge and consent is inappropriate.

Organizations should share their policies and outlook regarding how sensitive data is handled.

 

4. Limiting Collection 

An organization is responsible for limiting the collection of personal information to only what is necessary for purposes identified by the organization. All collection methods should be fair and compliant with all applicable laws.

Organizations should follow best practices for securely storing personal information on behalf of the business.

 

5. Limiting Use, Disclosure, and Retention 

An organization is responsible for never using or disclosing personal information for any purpose other than that for which it was collected. They are to retain any personal information collected for only as long as is necessary to fulfill the intent or purpose of the collection.

Organizations should follow best practices for securely handling the destruction or disposal of data that is no longer needed and storage is no longer required. They should also have policies in place regarding third party disclosure.

 

6. Accuracy 

An organization is responsible for ensuring that all information is accurate, complete, and up to date. It should be only what is necessary or required for the purpose or intent of use.

Organizations should share their principles on the accuracy of the data that is collected.

 

7. Safeguards 

An organization is responsible for protecting personal information by ensuring that reliable security safeguards that are appropriate for the level of the information’s sensitivity are in place.

Organizations should have policies in place for safeguarding the data being hosted. They should also have access to all security policies regarding how their cloud service provider protects the collected data from loss and theft, as well as unauthorized access, copying, modification, disclosure, and use.

 

8. Openness 

An organization is responsible for complete transparency regarding its policies and management of collected personal information. The policies should be very detailed in explaining how it manages personal information and these policies should be readily available for both employees and clients.

Organizations should be transparent regarding their data management policies. They should be able to provide a copy of these policies to their clients upon request.

 

9. Individual Access 

An organization is responsible for disclosing, upon written request, the existence, use, and disclosure of an individual’s personal information. It must also give those individuals access to the information that has been collected, and they must be given the opportunity to challenge the accuracy of it and have it amended appropriately.

Organizations should have policies in place that are in line with the organization’s policies regarding access to information.

 

10. Challenging Compliance 

An organization is responsible for providing a means for individuals to challenge its compliance with the core principles. The designated individual or team that handles an organization’s compliance will be the point of contact for individuals who are challenging compliance issues.

Organizations should have the appropriate policies and procedures in place to receive and address complaints regarding the way data is handled.

  

Guiding principles for a more transparent consent process in Canada

 Feb 25, 2019 9:00 AM
by Derek Lackey

The Privacy Commissioners of Canada, Alberta and British Columbia have jointly issued guidelines to help organizations obtain meaningful consent from individuals for the collection, use and disclosure of their personal information. The Guidelines came into effect in January 2019 and are now applied by the Commissioners when evaluating organizational conduct.

The Guidelines set out seven guiding principles for meaningful consent:

1. Emphasize key elements

The Guidelines state that organizations must identify for individuals what personal information is being, or may be, collected about them and for what purposes. This must be done with sufficient precision for individuals to meaningfully understand what they are consenting to. Disclosure to third parties must also be clearly explained.

Further, individuals must be able to understand the consequences of the collection, use or disclosure to which they are consenting. Meaningful risks must be identified, which means a risk that falls below the balance of probabilities but is more than a minimal or mere possibility should be identified by the organization.

 

2. Allow individuals to control the level of detail they get and when

The Guidelines state that information must be provided to individuals in manageable and easily accessible ways, potentially including layers. This is because one person may be comfortable with a quick review of summary information, but others may need a “deeper dive.”

The Guidelines go on to state that the information should remain available to individuals as they engage with the organization, because consent choices are not made just once. At any time, individuals should be able to reconsider whether they wish to maintain or withdraw their consent. Full information should be available to them as they make those decisions.

 

3. Provide individuals with clear options to say "yes" or "no"

The Guidelines emphasize that individuals cannot be required to consent to the collection, use or disclosure of personal information beyond what is necessary to provide the product or service. They must be given a choice about unnecessary collections, uses and disclosures. Previous Commissioner decisions indicate that the term “necessary” does not mean absolutely necessary (i.e. in the sense that it is literally not possible to provide the product/service without collecting, using or disclosing the personal information). Rather, the term “necessary” essentially means “reasonably necessary,” taking all relevant and legitimate factors into account.

For a collection, use or disclosure to be a valid condition of service, it must be integral to the provision of that product or service such that it is required to fulfill its explicitly specified and legitimate purposes.

 

4. Be innovative and creative

The Guidelines say that organizations should design and/or adopt innovative consent processes that can be implemented just-in-time, are specific to the context, and are appropriate to the type of interface used.

While innovation and creativity are clearly worthy goals, it seems unlikely that the Commissioners would chastise an organization or find the organization to be in breach of the consent requirements in their respective legislation simply because the consent was not obtained in an innovative or creative manner. Accordingly, we suggest that organizations see this portion of the Guidelines as an encouragement or “challenge,” but not a strict legal requirement (indeed, the Guidelines note that some statements are intended to communicate “best practices”).

That said, the Guidelines make the fair point that mobile devices present an amplified communication challenge: individuals’ time and attention are at a premium and the medium does not lend itself to lengthy explanations. Accordingly, organizations need to highlight privacy issues at particular decision points in the user experience where people are likely to pay attention in order to obtain informed and meaningful consent from individuals.

 

5. Consider the consumer’s perspective

The Guidelines point out that consent is only valid where the individual can understand that to which they are consenting. Accordingly, an organization’s consent processes must take into account the consumer’s perspective to ensure that the processes are user-friendly and that the information provided is generally understandable from the point of view of the organization’s target audience. In order to do this effectively, the Guidelines suggest that organizations consider:

  • consulting with users and seeking their input when designing a consent process;
  • pilot testing or using focus groups to ensure individuals understand what they are consenting to;
  • involving user interaction/user experience designers in the development of the consent process;
  • consulting with privacy experts and/or regulators when designing a consent process; and/or
  • following an established "best practice" standard or other guideline in developing a consent process.

 

6. Make consent a dynamic and ongoing process

The Guidelines emphasize that informed consent is an ongoing process that evolves as circumstances change. Organizations should not rely on a static moment in time but, rather, treat consent as a dynamic and interactive process. Thus, ensuring the effectiveness of individual consent does not end with the posting of a privacy policy or notice.

For example, when an organization plans to introduce significant changes to its privacy practices, it must notify users and obtain consent prior to the changes coming into effect. The Commissioners recommend that organizations consider periodically reminding individuals about their privacy options and inviting them to review these options.

 

7. Be accountable – stand ready to demonstrate compliance

The Guidelines state that in order for an organization to demonstrate that it has obtained valid consent, it must be able to do more than point to a line buried in a privacy policy. Instead, organizations should be able to demonstrate – either in the case of a complaint from an individual or a practice query from a privacy regulator – that they have a process in place to obtain consent from individuals and that such process is compliant with the consent obligations set out in the applicable legislation.

 

Other considerations

In addition to the seven guiding principles described above, the Guidelines ask organizations to keep in mind the following:

Organizations need to consider the most appropriate form for consent – in other words, organizations must ask themselves: “Should the consent in this particular situation be express or implied?” While express consent is generally required, there are certain circumstances under which implied consent may be adequate.

The purposes for which an organization collects and uses personal information must be appropriate and defined. Consent is not everything.

Individuals have the right to withdraw consent, subject to legal or contractual restrictions. A withdrawal of consent may mean that data held by an organization about an individual should be deleted, depending on the circumstances.

Organizations must obtain consent from a parent or guardian for any individual unable to provide meaningful consent themselves. (The federal commissioner takes the position that, in all but exceptional circumstances, this means anyone under the age of 13).

  

How will personal data continue to flow after Brexit?

 Feb 7, 2019 1:00 PM
by Derek Lackey

Elizabeth Denham's latest blog busts the myths for UK small and medium sized businesses transferring personal data to and from the EEA

Like everyone in the UK right now, we are following the twists and turns of the Brexit negotiations. The sharing of customers’, citizens’ and employees’ personal data between EU member states and the UK is vital for business supply chains to function and public authorities to deliver effective public services.

At the moment personal data flow is unrestricted because the UK is an EU member state. If the proposed EU withdrawal agreement is approved, businesses can be assured that personal data will continue to flow until 2020 while a longer term solution can be put in place. 

However in the event of ‘no deal’, EU law will require additional measures to be put in place by UK companies when personal data is transferred from the European Economic Area (EEA) to the UK, in order to make them lawful.

With less than two months to go until the UK leaves the EU, we recognise that businesses and organisations are concerned. My latest myth busting blog challenges some of the misconceptions about what a ‘no deal’ Brexit will mean for UK companies transferring personal data to and from the EEA.

Myth #1: Brexit will stop me from transferring personal information from the UK to the EU altogether.

Fact

In a ‘no deal’ situation the UK Government has already made clear its intention to enable data to flow from the UK to EEA countries without any additional measures. But transfers of personal data from the EEA to the UK will be affected.

The key question around the flow of personal data is whether your data is going from the UK to the EEA or is exchanged both ways. If you are unsure, start by mapping your data flows and establish where the personal data you are responsible for is going.

All businesses operating in the EEA should consider whether they need to take action now. Read our guidance pages to establish whether you need to prepare for data transfers in the event of ‘no deal’.

 

Myth #2: I have regular customers from Europe who come to my family’s hotel every year – I’ll need a special agreement set up to deal with their personal details.

Fact

When a customer passes their own personal data to a company in the EEA, it is not considered to be a data transfer and can continue without additional measures.

However, there may be other ways you transfer data, for example a booking agency transferring a list of customers, in this case you may need additional measures. If you are unsure please check the ICO’s guidance pages where we have a range of tools and advice to help.

 

Myth #3: Brexit will only affect data transfers of UK companies actually exporting goods or services to the EU.

Fact

Personal data transfers are not about whether your business is exporting or importing goods. You need to assess whether your business involves transfers of personal data, such as names, addresses, emails and financial details to and from the EEA and if this is going to be lawful in the case of ‘no deal’.

It is the responsibility of every business to know where the personal data it processes is going, and that a proper legal basis for such transfers exists. Our guidance – Leaving the EU – six steps to take will help.

 

Myth #4: My business will be fine because there will be a European Commission adequacy decision on exit day on 29 March 2019 to ensure the uninterrupted exchanges of personal data between the UK and the EU.

Fact

‘Adequacy’ is the term given to countries outside the EU that have data protection measures that are deemed essentially equivalent to European standards. Companies and organisations operating within countries with adequacy agreements enjoy uninterrupted flow of personal data with the EU. But an assessment of adequacy can only take place once the UK has left the EU. These assessments and negotiations have usually taken many months.  

Although it is the ambition of the UK and EU to eventually establish an adequacy agreement, it won’t happen yet. Until an adequacy decision is in place, businesses will need a specific legal transfer arrangement in place for transfers of personal data from the EEA to the UK, such as standard contractual clauses.

 

Myth #5: Our parent company in Europe keeps all our personal data records centrally so I don’t need to worry about sorting any new agreements.

Fact

Don’t presume you are covered by the structure of your company. In the case of ‘no deal’, UK companies transferring personal information to and from companies and organisations based in the EEA will be required by law to put additional measures in place. You will need to assess whether you need to take action.

There are many mechanisms companies can use to legitimise the transfer of personal data with the EEA, and standard contractual clauses are one of them. We have produced an online tool to help organisations put contract terms in place providing the lawful basis for the data transfers. Companies that need to act would also benefit from the Leaving the EU - six steps to take guidance for more information.

You know your organisation best and will be able to use our guidance to assess if and how you need to prepare. Alternative data transfer mechanisms exist but it can take time to put those arrangements in place.

It is in everyone’s interests that appropriate exchanges of personal data continue whatever the outcome of Brexit. The ICO will carry on co-operating internationally to ensure protections are in place for personal data and organisations have the right advice and guidance.

 

ICO Blog

 

  

Bundeskartellamt prohibits Facebook from combining user data from different sources

 Feb 7, 2019 10:00 AM
by Derek Lackey

Date of issue: 07.02.2019

The Bundeskartellamt has imposed on Facebook far-reaching restrictions in the processing of user data.

According to Facebook's terms and conditions users have so far only been able to use the social network under the precondition that Facebook can collect user data also outside of the Facebook website in the internet or on smartphone apps and assign these data to the user’s Facebook account. All data collected on the Facebook website, by Facebook-owned services such as e.g. WhatsApp and Instagram and on third party websites can be combined and assigned to the Facebook user account.

The authority’s decision covers different data sources:

(i)     Facebook-owned services like WhatsApp and Instagram can continue to collect data. However, assigning the data to Facebook user accounts will only be possible subject to the users’ voluntary consent. Where consent is not given, the data must remain with the respective service and cannot be processed in combination with Facebook data.

(ii)    Collecting data from third party websites and assigning them to a Facebook user account will also only be possible if users give their voluntary consent.

If consent is not given for data from Facebook-owned services and third party websites, Facebook will have to substantially restrict its collection and combining of data. Facebook is to develop proposals for solutions to this effect.

Andreas Mundt, President of the Bundeskartellamt: “With regard to Facebook’s future data processing policy, we are carrying out what can be seen as an internal divestiture of Facebook’s data. In future, Facebook will no longer be allowed to force its users to agree to the practically unrestricted collection and assigning of non-Facebook data to their Facebook user accounts. The combination of data sources substantially contributed to the fact that Facebook was able to build a unique database for each individual user and thus to gain market power. In future, consumers can prevent Facebook from unrestrictedly collecting and using their data. The previous practice of combining all data in a Facebook user account, practically without any restriction, will now be subject to the voluntary consent given by the users. Voluntary consent means that the use of Facebook’s services must not be subject to the users’ consent to their data being collected and combined in this way. If users do not consent, Facebook may not exclude them from its services and must refrain from collecting and merging data from different sources.”

Facebook is the dominant company in the market for social networks

In December 2018, Facebook had 1.52 billion daily active users and 2.32 billion monthly active users. The company has a dominant position in the German market for social networks. With 23 million daily active users and 32 million monthly active users Facebook has a market share of more than 95% (daily active users) and more than 80% (monthly active users). Its competitor Google+ recently announced it was going to shut down its social network by April 2019. Services like Snapchat, YouTube or Twitter, but also professional networks like LinkedIn and Xing only offer parts of the services of a social network and are thus not to be included in the relevant market. However, even if these services were included in the relevant market, the Facebook group with its subsidiaries Instagram and WhatsApp would still achieve very high market shares that would very likely be indicative of a monopolisation process.

Andreas Mundt: “As a dominant company Facebook is subject to special obligations under competition law. In the operation of its business model the company must take into account that Facebook users practically cannot switch to other social networks. In view of Facebook’s superior market power, an obligatory tick on the box to agree to the company’s terms of use is not an adequate basis for such intensive data processing. The only choice the user has is either to accept the comprehensive combination of data or to refrain from using the social network. In such a difficult situation the user’s choice cannot be referred to as voluntary consent.”

Abuse of market power based on the extent of collecting, using and merging data in a user account

The extent to which Facebook collects, merges and uses data in user accounts constitutes an abuse of a dominant position.

The Bundeskartellamt’s decision is not about how the processing of data generated by using Facebook’s own website is to be assessed under competition law. As these data are allocated to a specific service, users know that they will be collected and used to a certain extent. This is an essential component of a social network and its data-based business model.

However, this is what many users are not aware of: among other conditions, private use of the network is subject to Facebook being able to collect an almost unlimited amount of any type of user data from third party sources, allocate these to the users’ Facebook accounts and use them for numerous data processing operations. Third party sources are Facebook-owned services such as Instagram or WhatsApp, but also third party websites which embed interfaces such as the “Like” or “Share” buttons. Where such visible interfaces are embedded in websites and apps, the data flow to Facebook starts as soon as the page is called up or the app is installed. It is not even necessary, for example, to hover over or click on a “Like” button: merely calling up a website with an embedded “Like” button starts the data flow. Millions of such interfaces can be encountered on German websites and apps.

Even if no Facebook symbol is visible to users of a website, user data will flow from many websites to Facebook. This happens, for example, if the website operator uses the “Facebook Analytics” service in the background in order to carry out user analyses.
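To make this mechanism concrete, the sketch below shows, in TypeScript, roughly what an embedded third-party script can do the moment a page is called up: it reads an identifier cookie and reports the visited page to the third party’s server, with no click or hover required. The domain, cookie name and parameters are invented for illustration; this is a minimal sketch of the general technique, not Facebook’s actual code.

    // Hypothetical sketch of a third-party tracking script that runs on
    // page load. The domain "tracker.example", the cookie "visitor_id"
    // and the parameter names are all invented for illustration.

    // Read a previously set identifier cookie, if one exists.
    function readCookie(name: string): string | null {
      const match = document.cookie.match(new RegExp(`(?:^|; )${name}=([^;]*)`));
      return match ? decodeURIComponent(match[1]) : null;
    }

    // Runs as soon as the embedding page loads: sends the page URL, the
    // referrer and any identifier cookie to the third party. No user
    // interaction with any visible button is needed.
    function reportPageVisit(): void {
      const params = new URLSearchParams({
        page: window.location.href,                 // which page was visited
        referrer: document.referrer,                // where the visitor came from
        visitorId: readCookie("visitor_id") ?? "",  // links visits to one browser
      });
      // A tiny image request is a common transport for such beacons.
      const pixel = new Image();
      pixel.src = "https://tracker.example/collect?" + params.toString();
    }

    reportPageVisit();

Because the script executes on load, embedding the widget, or a background analytics tag as described above, is by itself enough to start the data flow, whether or not any symbol is visible to the user.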

Andreas Mundt: “By combining data from its own website, company-owned services and the analysis of third party websites, Facebook obtains very detailed profiles of its users and knows what they are doing online.”

European data protection provisions as a standard for examining exploitative abuse

Facebook’s terms of service and the manner and extent to which it collects and uses data are in violation of the European data protection rules to the detriment of users. The Bundeskartellamt closely cooperated with leading data protection authorities in clarifying the data protection issues involved.

In the authority’s assessment, Facebook’s conduct represents above all a so-called exploitative abuse. Dominant companies may not use exploitative practices to the detriment of the opposite side of the market, i.e. in this case the consumers who use Facebook. This applies above all if the exploitative practice also impedes competitors that are not able to amass such a treasure trove of data. This approach based on competition law is not a new one, but corresponds to the case-law of the Federal Court of Justice under which not only excessive prices, but also inappropriate contractual terms and conditions constitute exploitative abuse (so-called exploitative business terms).

Andreas Mundt: “Today data are a decisive factor in competition. In the case of Facebook they are the essential factor for establishing the company’s dominant position. On the one hand there is a service provided to users free of charge. On the other hand, the attractiveness and value of the advertising spaces increase with the amount and detail of user data. It is therefore precisely in the area of data collection and data use where Facebook, as a dominant company, must comply with the rules and laws applicable in Germany and Europe.”

The Bundeskartellamt’s decision is not yet final. Facebook has one month to appeal the decision to the Düsseldorf Higher Regional Court.

Further information on the proceeding can be found in a background paper.

Press release (pdf)

  

Freedom and democracy cannot exist without privacy

 Jan 28, 2019 4:00 PM
by Derek Lackey

On this International Data Privacy Day, and after a year of severe abuses, it is worth reflecting on why it is essential to protect privacy.

Privacy is often cast as an abstract or undervalued concept associated with a desire to keep secret certain aspects of our activities or our personality that we prefer to keep to ourselves.

This is a very narrow outlook. In fact, privacy is nothing less than a prerequisite for freedom:  the freedom to live and develop independently as a person, away from the watchful eye of a surveillance state or commercial enterprises, while still participating voluntarily and actively in the regular, day-to-day activities of a modern society.

Data-driven technologies undoubtedly bring great benefits to individuals.  They can be fun and convenient but they can also be powerful tools for personal development.  They open the door to huge opportunities for improving health care and hold the promise for a future built on artificial intelligence (AI) in which the possibilities seem endless.

On the other hand, these technologies also create new risks. For example, some AI applications, which rely on the massive accumulation of personal data, also put other fundamental rights at risk.

One such risk is the potential for discrimination against people resulting from decisions made by artificial intelligence systems. These systems are generally non-transparent, and some have been found to rely on data sets that contain inherent bias, in violation of privacy principles. Such discrimination could restrict the availability of certain services, or exclude people from certain aspects of personal, social and professional life, including employment.

In December, AI ethics researchers released the Montreal Declaration for the Responsible Development of Artificial Intelligence – a set of 10 principles for developers and organizations that implement AI, as well as the individuals subject to it.

While this ethical framework marks an important, made-in-Canada development that should help guide this emerging sector, I would agree with the Declaration’s authors who say it is only a first step, and that public authorities now need to act.  Governments and legislators in particular have an important role to play in drawing on ethical principles to create an enforceable legal framework for AI that formally requires relevant actors to act fairly and responsibly.

We have also seen in recent years, and in particular in 2018, how privacy breaches can adversely impact the exercise of our democratic rights. The massive accumulation of personal data by certain state and commercial actors makes our democracies vulnerable to manipulation, including by foreign powers. It is unfortunate that the 2019 federal election will take place without any significant strengthening of our personal data protection laws.

In 2019, as the federal government and legislators consider what should be Canada’s national data strategy and laws for the modern age, it is important as a society to remember privacy’s role in protecting other fundamental rights and values, including freedom and democracy. If this happens, we will have drawn the right lessons from 2018.

Link to original 

 

  