Data Protection update – May 2023 – Stephenson Harwood

Welcome to the Stephenson Harwood Data Protection bulletin, covering the key developments in data protection law from May 2023.

May marked the fifth anniversary of the EU GDPR, and the occasion was commemorated with a bang.

Just days before the GDPR's official birthday, Meta was served a record €1.2 billion fine for data protection breaches. The fine, the largest ever imposed under the GDPR, came after Ireland's Data Protection Commission found the tech giant had violated the law by transferring personal data of EU Facebook users to the US without appropriate safeguards.

Meta has six months to remediate the unlawful processing, including storage, in the US of personal data of European users. Andrea Jelinek, chair of the European Data Protection Board, said the unprecedented fine sends a strong signal to organisations that serious infringements have far-reaching consequences.

Still, Meta hasn't taken the decision lying down. In a statement, the tech company vowed to appeal the ruling, which it says could have implications for the thousands of businesses that rely on the ability to transfer data between the EU and US in order to operate.

May also saw the Court of Justice of the European Union hand down four pivotal preliminary rulings related to the application of the EU GDPR. The rulings clarified the law in relation to four legal issues: the accountability principle, the right of access under Article 15 EU GDPR, compensation under Article 82 EU GDPR and joint controllers.

In this month's issue:

On 22 May, the Irish Data Protection Commission ("DPC") announced that Meta has been fined €1.2 billion, the largest fine to date issued under the EU General Data Protection Regulation ("EU GDPR").

The DPC's decision against Meta has three parts:

With the EU-US draft adequacy agreement still not in place (the European Parliament voted against the proposed agreement in a non-binding resolution earlier in May), the DPC's decision leaves Meta's US-EU data transfers in a difficult, uncertain position. The decision also has profound ramifications for anyone transferring personal data to the US under the EU GDPR, as it demonstrates that it may be very difficult to do so lawfully under any of the existing legal mechanisms and derogations, given the incompatibility of US law with European fundamental rights. The issue is especially acute for transfers to any electronic communications service provider (such as Meta) that may be required to hand over European data to US national security agencies under the US federal law FISA.

For further analysis of the DPC's decision and what it means for any business making overseas transfers, look out for our upcoming Insight deep dive on our data protection hub.

On 4 May, the Court of Justice of the European Union ("CJEU") handed down four preliminary rulings relating to the application of the EU GDPR.

The CJEU considered:

For more information on these decisions, read our Insight.

On 15 May, the UK government announced that it is scaling back the Retained EU Law (Revocation and Reform) Bill ("REUL Bill"). The government published a revised list outlining which pieces of legislation are being revoked, with a justification provided for each.

Since Brexit, over 1,000 EU laws have been revoked or reformed in the UK. The REUL Bill will revoke a further 600 laws, in addition to the 500 pieces of legislation that will be revoked by the Financial Services and Markets Bill and the Procurement Bill. The government justifies this decision by stating that it will lighten the regulatory burden on businesses and encourage economic growth.

This decision reflects a scaled-down promise in contrast to the government's initial plans to scrap thousands of EU laws by the end of this year. However, in its press release, the government outlined plans to continue reviewing the remaining EU laws in order to identify further opportunities for reform. The REUL Bill creates a mechanism that enables this ongoing aim of revoking EU law.

Some minor pieces of data protection legislation will be revoked by the REUL Bill, such as the Data Retention and Acquisition Regulations 2018. More significantly, however, the government has stated that it will remove the current interpretive principles and the structure providing for the supremacy of all EU law. This means UK courts could be permitted to overrule EU precedents, and there will be significant uncertainty as to how to interpret terms derived from retained EU law. In the context of data protection, there may be uncertainty as to the supremacy and interpretation of the UK General Data Protection Regulation ("UK GDPR").

The REUL Bill will return to the House of Commons after the House of Lords concludes its debate.

Stay tuned for further updates on how post-Brexit regulatory reform will affect data protection in the UK.

On 17 April, the Data Protection and Digital Information (No. 2) Bill ("DPDI Bill") had its second reading in the House of Commons. This provided us with our first opportunity to hear what MPs had to say about the DPDI Bill. Their primary concerns centred on retaining the UK's adequacy status with the EU and the struggle to balance the interests of big tech and consumers. For more information on the second reading, read our Insight.

Following this, the DPDI Bill moved to Committee stage. This stage involves a House of Commons committee hearing evidence and conducting a detailed examination of a bill. On 10 May, a House of Commons committee heard evidence from 23 witnesses. John Edwards, the UK Information Commissioner, was among those providing evidence.

Edwards assisted the committee with a forensic analysis of the wording of the DPDI Bill. He noted that phrases such as 'high-risk activities' do not give decision-makers sufficient clarity when interpreting legislation, and argued that the ICO and other decision-makers would welcome further, clear criteria to assist them in issuing guidance and interpreting the legislation. In his view, removing as much uncertainty as possible from the DPDI Bill should be the aim, as this will enable greater efficiency. Edwards also outlined his concerns about the future role of ministers: the current DPDI Bill gives ministers scope to overrule the ICO and to refuse to publish its statutory codes, threatening to undermine the regulator's independence.

Other witnesses expressed concerns relating to the DPDI Bill's provisions on automated decision-making and its impact on the UK retaining adequacy with the EU.

The DPDI Bill will now move to its third reading, representing the House of Commons' final chance to debate the contents of the bill and vote on its approval. If approved, the DPDI Bill will move to the House of Lords for consideration.

On 4 May, leaders of some of Europe's largest technology companies wrote to the European Commission outlining their concerns regarding the EU's forthcoming Data Act.

As we previously reported, the Data Act will bring in a new landscape for data access, data portability and data sharing. It includes provisions that introduce common rules on the sharing of data generated by connected products or related services, and will compel data holders to make data available to public bodies without undue delay where there is an exceptional need for the public body to use the data. The European Commission is adamant that the Data Act will ensure fairness in the digital environment, stimulate a competitive data market, open up opportunities for data-driven innovation and make data more accessible to all.

However, the concerns raised in this letter from the technology companies suggest that not all stakeholders agree that the Data Act is on track to achieve its aims. The letter was organised by DigitalEurope and is signed by the chief executives of Siemens, Siemens Healthineers, SAP, Brainlab and Datev. It expresses concerns about supporting European competitiveness and protecting businesses against cyber attacks and data breaches, and outlines three key concerns:

Executives at SAP say that they welcome the objectives of the Data Act to create a common EU regulatory framework and facilitate data sharing. However, they insist that the Data Act needs further amendments in order to preserve contractual freedom, allowing providers and customers to agree on terms that reflect business needs.

The letter asks the European Commission to pause the process to enable changes to the proposed Act. Time will tell whether the Data Act will be further delayed in the face of these concerns. The Swedish presidency entered into trilogue negotiations with the European Parliament on the final version of the Data Act in March, and further trilogues are expected to take place in May and beyond.

The ICO, the UK Data Protection Authority ("DPA"), has issued new guidance for businesses and employers on Employee Data Subject Access Requests ("DSARs").

Data subjects have the right of access under the UK GDPR, meaning they can request a copy of their personal information from organisations. This right is often exercised by employees against their employers or former employers. Employees can request any personal data held by the employer, such as attendance details, sickness records, or personal development and other HR records. The ICO reported in its press release that it received 15,848 complaints relating to DSARs between April 2022 and March 2023. In light of this, it has now released new, enhanced guidance on how employers should respond to DSARs.

The new guidance covers key issues, including the following points:

For more information, you can access the ICO's full guidance here.

Amid growing anxiety across the tech industry about the potential impact of AI, and some stark warnings from industry experts, including Geoffrey Hinton (the so-called "godfather of AI"), that the recent rapid development in the capabilities of AI may pose an existential risk to humankind unless urgent action is taken, Prime Minister Rishi Sunak appears to be contemplating an alternative approach to the UK's regulation of AI, with reports that the government is considering tighter regulation and talk of a new global regulator (or at least the creation of a new UK AI-focused watchdog).

Back in March, we reported that the UK government had published a white paper outlining its plans to regulate AI (the "AI White Paper"). The government's intention was for the AI White Paper to foster a pro-innovation approach to AI regulation, one which focuses on AI's benefits and potential whilst avoiding unnecessary burdens on business and economic growth. The AI White Paper is currently open for consultation, which is set to conclude on 21 June, although industry figures have warned that it is already out of date.

The government may concede that there has been a shift in its approach since the AI White Paper was published, with government insiders reportedly insisting that they "want to stay nimble, because the technology is changing so fast" and expressing a wish to avoid a product-by-product regulatory regime, such as the one envisaged by the EU's AI Act.

It appears that Sunak may also be applying pressure on the UK's allies, seeking to construct an international agreement on how AI capabilities should be developed, which could entail the establishment of a global regulator. Given that the EU has been unable to reach agreement since the draft AI Act was published over two years ago, Sunak's plan to formulate and secure such an international agreement in a short period of time appears somewhat optimistic.

Domestically, MPs from both the Conservative and Labour parties are calling for an AI bill to be passed, which might set certain conditions for companies seeking to create and develop AI in the UK and lead to the creation of a UK regulator. It remains to be seen what approach the government will take to regulating AI in the UK and what aspiration it has to lead on such regulation on the global stage.

Over in the US, American lawmakers are arguing that federal regulation of AI is necessary for innovation. Speaking at an event in Washington, DC, on 9 May, US Representative Jay Obernolte said that regulation to mitigate potential harms and provide customer protection is something which "is very clearly necessary when it comes to AI." Obernolte further stressed that regulation of data privacy and AI must coincide, given the vast amounts of information AI models require to learn and AI's ability to pierce digital data privacy, reaggregate personal data and build behavioural models to predict and influence behaviour.

In early May, the Biden Administration (the "Administration") announced new actions which it says are set to further promote responsible American innovation in AI and protect people's rights and safety. Emphasising the need to place people and communities at the centre of AI development by supporting responsible innovation that serves the public good, the Administration said that companies have a fundamental responsibility to ensure that their products are safe before they are deployed for public use.

The Administration has also announced an independent commitment from leading AI developers including Google, Microsoft, NVIDIA and OpenAI to participate in a thorough public evaluation of AI systems. These actions all contribute to a broader and ongoing effort for the Administration to engage with a variety of stakeholders on critical AI issues.

Belinda Dennett, Microsoft Corporate Affairs Director, spoke to members of Australia's parliament at a parliamentary hearing on 3 May, to communicate her view that the government should collaborate with industry and society on principles-based measures or co-regulation with regard to AI, rather than taking a more targeted and direct regulatory response.

Dennett's comments reflect Microsoft's view that there is a risk in seeking to regulate what is known today about generative AI technologies, as that knowledge can rapidly go out of date. Any policy seeking to regulate generative AI would soon find itself trailing behind the development of the technology being regulated.

In making her remarks, Dennett specifically referred to the recent rapid enhancement in the capabilities of generative AI technologies such as ChatGPT and explained that "this was innovation we weren't expecting for another ten years." Dennett also praised calls which have been made for summits and various other discussions around the generative AI boom on the basis that, for AI, "society needs to decide where those guardrails should be."

Microsoft's comments come as Australia joins other jurisdictions needing to act quickly to determine how best to regulate AI and generative AI in particular, which we considered in our April 2023 bulletin.

In October 2022, Joseph Sullivan, Uber Technologies' former security chief, was convicted of obstruction of a federal proceeding and of concealing or failing to report a felony. Sullivan's conviction arose in connection with a 2016 cyber breach that affected 57 million Uber drivers and riders. In response to the breach, Sullivan devised a scheme by which the hackers who had breached Uber's network were paid $100,000 through the company's 'bug bounty' scheme and induced to sign a non-disclosure agreement, so that Uber's legal team and US Federal Trade Commission officials would not find out.

Sentenced in early May, Sullivan was handed a three-year term of probation and ordered to pay a fine of $50,000. Although Sullivan has avoided time in prison, US District Judge William Orrick made clear that if he were to preside over a similar case in the future, "even if the character is that of Pope Francis, they should expect custody." Sullivan's case illustrates that chief information security officers ("CISOs") should work with lawyers to establish whether a breach has occurred and whether it should be reported. It has also accelerated a shift towards CISOs reporting breaches more directly to their organisation's senior executives.

Consequently, companies should now be reconsidering their processes for identifying breaches and documenting decisions about them, in order to develop more robust breach response procedures. This will allow companies to cultivate a culture of shared responsibility for decisions associated with cybersecurity breaches, which will, in turn, help CISOs avoid personal liability.

Following a year-long inquiry into the abuse of spyware in the EU, the European Parliament's Committee of Inquiry has adopted its final report and recommendations. The inquiry investigated the use of surveillance spyware such as "Pegasus", which can be covertly installed on mobile phones and is capable of reading text messages, tracking a device's location, accessing its microphone and camera, and harvesting information from apps.

MEPs stated that the use of spyware in Hungary constitutes "part of a calculated and strategic campaign to destroy media freedom and freedom of expression by the government", and in Poland the use of Pegasus has been part of "a system for the surveillance of the opposition and critics of the government designed to keep the ruling majority and the government in power". To remedy these major violations of EU law, the MEPs called on Hungary and Poland to comply with European Court of Human Rights ("ECHR") judgments, restore judicial independence and oversight institutions as well as launch credible investigations into abuse cases to help ensure citizens have access to proper legal redress. In Greece, where spyware "does not seem to be part of an integral authoritarian strategy, but rather a tool used on an ad hoc basis for political and financial gains", MEPs called on the government to repeal export licences that are not in line with EU export control legislation. Elsewhere across the EU in Spain, although the country has "an independent justice system with sufficient safeguards", MEPs called on Spanish authorities to ensure "full, fair and effective" investigations.

In order that illicit spyware practices are stopped immediately, MEPs recommended that spyware should only ever be used by member states in which allegations of spyware abuse have been thoroughly investigated, national legislation is in line with recommendations of the Venice Commission and CJEU and ECHR case law, Europol is involved in investigations, and export licences not in line with export controls are repealed. MEPs further recommended that the Commission should assess whether these conditions are met by member states by December 2023. In order to prevent attempts to justify abuses, the MEPs also called for a common legal definition of 'national security' as grounds for surveillance.

MEPs adopted the report and recommendations, and the text is expected to be voted on by the full Parliament during the plenary session starting on 12 June.

In a statement released earlier this month, Toyota Motor Corporation ("Toyota") confirmed that a human error rendered the vehicle data of around 2.15 million customers publicly accessible in a period spanning almost a decade from November 2013 to April 2023.

The incident, which Toyota states was caused by a "misconfiguration of the cloud environment" (the cloud system had accidentally been set to public rather than private), meant that data including vehicle identification numbers and vehicle location data was potentially accessible to the public. Toyota has said that the accessible data alone was not sufficient to identify the affected data subjects and that there have been no reports of malicious use of the data.

Although Toyota has confirmed that the data in question is confined to that of its Japanese customers, the number of potentially affected customers constitutes almost the entirety of the customer base that has signed up for its main cloud service platforms since 2012, which are essential to its autonomous driving and other AI-based offerings. Affected customers include users of the T-Connect service, which provides a range of services such as AI voice driving assistance, and users of G-Link, a similar service for owners of Lexus vehicles.

The incident was only recently discovered by Toyota as it targets an expansion of its connectivity services. Toyota attributed the failure to identify the issue earlier to a "lack of active detection mechanisms, and activities to detect the presence or absence of things that become public". It has stated that it will take a series of measures to prevent a recurrence of the incident, including implementing a system to audit cloud settings, establishing a system to continuously monitor settings, and educating employees on data handling rules.
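Toyota has not disclosed which cloud platform was involved or how its audit will work, but purely as an illustration of the kind of automated settings check it describes, a minimal sketch, assuming an AWS S3 environment and the boto3 SDK (both assumptions, not details from Toyota's statement), might look like this:

```python
# Illustrative cloud-settings audit (hypothetical): flags S3 buckets whose
# configuration may allow public access, so a human can review them.
import boto3
from botocore.exceptions import ClientError

# ACL grantee URIs that make a bucket readable by the public at large.
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def audit_buckets() -> list[str]:
    """Return the names of buckets that warrant manual review."""
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            block = s3.get_public_access_block(Bucket=name)
            fully_blocked = all(block["PublicAccessBlockConfiguration"].values())
        except ClientError:
            # No public-access block configured at all: treat as suspect.
            fully_blocked = False
        grants = s3.get_bucket_acl(Bucket=name)["Grants"]
        publicly_granted = any(
            g["Grantee"].get("URI") in PUBLIC_GROUPS for g in grants
        )
        if publicly_granted or not fully_blocked:
            flagged.append(name)
    return flagged

if __name__ == "__main__":
    for name in audit_buckets():
        print(f"Review bucket settings: {name}")
```

Run on a schedule and paired with alerting, a check of this kind is essentially the "continuous monitoring" Toyota refers to: a public/private setting is machine-readable, so its correctness can be verified automatically rather than left to manual configuration.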

The Japanese Personal Information Protection Commission has been informed of the incident but has not provided comment at this stage. However, the Japanese automaker subsequently announced that customer information in some countries throughout Oceania and Asia may also have been left publicly accessible from October 2016 to May 2023. In this instance, potentially leaked customer data may include names, addresses, phone numbers and email addresses.

You can read Toyota's statement in Japanese here.

The High Court has brought Prismall v Google and DeepMind to an early conclusion, ruling that Andrew Prismall and the 1.6 million class members he represents cannot go to trial.

Andrew Prismall sued Google and DeepMind under the tort of misuse of private information on behalf of 1.6 million NHS patients, after it was revealed in 2016 that the patients' data had been transferred to DeepMind without their knowledge or consent. To make this claim, Prismall was required to show that the class of patients had a reasonable expectation of privacy and that DeepMind deliberately and without justification obtained and used the data. Prismall also had to show that all members of the class had the same interest. This follows the principle set out in Lloyd v Google that a representative action cannot succeed if it requires an individualised assessment of class members' loss.

Prismall argued that, without any need for an individualised assessment, he could show that each class member had a reasonable expectation of privacy in relation to the relevant personal data, that this expectation was unjustifiably interfered with, and that such interference entitled them to an award of more than trivial damages. However, the court ruled that there was no realistic prospect of the class members meeting these requirements. The court found that:

Mrs Justice Williams struck out the case and ruled that a summary judgment should be entered in favour of Google and DeepMind.

The case was one of the few opt-out class actions that continued after the Lloyd v Google ruling narrowed the options for bringing such claims under the UK GDPR. It appears that misuse of private information was not a viable alternative in this case.

For more information, you can access the full judgment here.

A Belgian data subject complained to the Belgian DPA after being informed by his bank of his obligations under the US Foreign Account Tax Compliance Act ("FATCA"). The Belgian DPA has now ordered Belgium's Federal Public Service Finance to stop processing the complainants' data in relation to FATCA transfers, finding that such transfers breach the EU GDPR.

FATCA aims to combat tax fraud, money laundering and tax evasion, and 87 countries have entered into FATCA agreements with the US. Under FATCA, non-US banks must send information about any accounts held by American citizens to their own government, which then shares the information with the US Internal Revenue Service (the "IRS"). This information constitutes personal data under the EU GDPR.

The Belgian DPA originally decided that the FATCA transfers did not breach the EU GDPR and that Schrems II did not apply. However, the Belgian DPA's litigation arm disagreed. It found that data subjects are not able to understand the purposes of processing in relation to FATCA transfers, and concluded that FATCA transfers breach the EU GDPR's purpose limitation, data minimisation and proportionality principles. The Federal Public Service Finance had also failed to carry out a data protection impact assessment in relation to the transfers, which were likewise found not to be subject to appropriate safeguards. As a result, the Belgian DPA ordered that transfers of personal data to the US under the FATCA regime must cease.

This is not the only challenge to FATCA. A US-born data subject now residing in the UK has brought a High Court claim arguing that FATCA transfers are disproportionate and breach her rights under the EU GDPR. However, the practical impact of ceasing FATCA transfers is questionable. American Citizens Abroad, a non-profit organisation, commented that the Belgian DPA's decision will not get rid of US tax problems for expats: the IRS has an obligation to enforce US tax laws, and if the required information cannot be provided via FATCA transfers, it will come to light another way.

The US Federal Trade Commission ("FTC") filed a complaint against Meta in 2011, resulting in a 2012 privacy order barring Meta from misrepresenting its privacy practices. Following a subsequent FTC complaint, relating to Meta's misrepresentations that fed into the Cambridge Analytica scandal, Meta agreed to another privacy order in 2020, under which it paid a $5 billion penalty.

In a press release dated 3 May, the FTC claims that Meta has now violated the privacy promises that it made in the 2020 privacy order. The FTC's claim is based on the following points:

As a result, the FTC proposes to make the following changes and extensions to the privacy order:

The FTC has requested that Meta respond to these claims within 30 days. Meta has pledged to fight the action robustly, labelling it a political stunt.

May saw the latest enforcement action against Clearview AI, following numerous recent sanctions against the facial recognition platform.

On 9 May, the Austrian DPA found that Clearview AI was not complying with the EU GDPR. Following an access request, a data subject discovered that their image data had been processed by Clearview AI. The Austrian DPA found that Clearview AI had processed the personal data in breach of the lawfulness, fairness and transparency principles, and had breached data retention rules by storing the data permanently. In addition, Clearview AI's processing of the data served a different purpose from that of the original publication of the data subject's personal data. The Austrian DPA ordered Clearview AI to erase the complainant's personal data and to designate a representative in the EU.

In another decision handed down in May, the Australian Administrative Appeals Tribunal ruled that Clearview AI's collection of Australian facial images without consent breached the country's privacy standards. As a result, the Australian authority ordered Clearview AI to leave the country and delete all Australian images that it had gathered.

This follows action taken against Clearview AI in April. The French DPA fined Clearview AI €5.2 million for its failure to comply with the DPA's earlier order to stop collecting and processing personal data of individuals located in France.

This wave of enforcement action reflects the ongoing battle of applying data protection requirements to ever-evolving AI technologies.

We reported in March that Marc Van der Woude, president of the EU's General Court, warned that a wave of Digital Markets Act ("DMA") litigation was looming. The DMA places obligations on Big Tech platforms (referred to as "Gatekeepers") to create a fairer environment for business users and to ensure that consumers can access better services and easily switch providers.

The first step of the DMA's implementation kicked off on 2 May, focusing on the classification of certain platforms as Gatekeepers. Any platform given this designation will be prohibited from certain behaviours and practices. Three main criteria are involved in deciding whether a platform is a Gatekeeper:

Any organisation designated as a Gatekeeper will be subject to the DMA's list of dos and don'ts. For example, Gatekeepers must not prevent consumers from linking up to businesses outside the Gatekeeper's platform, or prevent users from uninstalling any pre-installed software or app if they wish to.

By 3 July, potential Gatekeepers that meet the DMA's thresholds must notify the European Commission of their core platform services. Following such a notification, the European Commission has 45 working days to assess whether the organisation is a Gatekeeper. Any designated Gatekeeper will then have six months to comply with the DMA's requirements.

Each month, we bring you a round-up of notable data protection enforcement actions.

| Company | Authority | Fine | Comment |
| --- | --- | --- | --- |
| Meta Ireland | Irish DPA | €1.2 billion | See our coverage of the Irish DPA's decision above. |
| GSMA | Spanish DPA | €200,000 | GSMA failed to carry out a data protection impact assessment in relation to a data subject's special category data. |
| B2 Kapital | Croatian DPA | €2.26 million | Representing the Croatian DPA's highest EU GDPR fine to date, B2 Kapital was fined for failing to prevent data security breaches. |
| Clearview AI | | | |
