
Data protection and Covid-19: beyond data privacy

The pandemic has exposed confusing inconsistencies in privacy laws, and once again highlighted the need to scrutinise AI, writes Ronald Yu.

 

 

Covid-19 has brought about radical disruptions both to people’s lives and routines and to corporate behaviour and national policy.

It has also prompted several technology companies and governments to respond. For example:

  • On April 10, 2020, Apple and Google announced a partnership to develop APIs and operating system-level technology to assist in contact tracing
  • A network of infectious disease epidemiologists launched the Covid-19 Mobility Data Network in partnership with Facebook, Camber Systems and Cuebiq to provide aggregated data sets for epidemiologists to use
  • Unacast launched a pro bono Covid-19 Toolkit for public health experts, policymakers, academics, community leaders and businesses
  • Facebook announced an expansion of its Data for Good initiative to use aggregated data to provide Covid-19 related tools for policymakers
  • Google announced it is providing mobility reports for Covid-19 efforts, to show movement trends over time by geography.

But there were also unpredictable developments.

New developments, new consequences

Who would have thought that Apple and Google would join forces to use Bluetooth technology to help governments and health agencies reduce the spread of Covid-19 worldwide, only to find themselves at loggerheads with France, once one of the staunchest advocates of personal data privacy? The dispute turned on security technicalities in the companies’ operating systems and on how contact tracing data should be stored, with France asking Google and Apple to relax their privacy rules for contact tracing purposes.

Covid-19 also sent compliance and HR departments scrambling to make sense of inconsistent national privacy laws, discovering, for example, that:

  • While you can ask employees whether they have symptoms in Australia, Denmark, Germany or Spain, you can only do so with limitations in Poland or Italy, and you cannot do so in France
  • In the Netherlands you cannot ask employees or gig economy or agency workers if they have symptoms nor can you ask about their travel history, but you can ask visitors if they have symptoms and ask about their travel history
  • While you can take the temperature of employees, gig economy or agency workers, and visitors in the UAE, you cannot do the same in Sweden or Finland. You can, however, take the temperature of employees and gig economy or agency workers with limitations in Hungary, though Hungarian law bans you from taking the temperature of visitors
  • You can ask about any symptoms in an employee’s, gig worker’s, agency worker’s, or visitor’s household in Hong Kong and Singapore, and you can do the same in China and Germany, though with limitations, but you are barred from doing so in France. In the Netherlands, while you cannot ask an employee, gig worker or agency worker about symptoms in his or her household, you can ask a visitor such questions.

While most companies and individuals have examined reactions to Covid-19 primarily from a data privacy standpoint, other very important developments have been largely overlooked.

Non-standard standard setting

Shortly after the blow-up between France, Google and Apple (and Germany) regarding the storage of contact tracing data, the UK trialled its own contact tracing app, which pooled the data collected into a single database operated by the National Health Service (NHS), arguing this would provide greater insight into the spread of Covid-19 and allow the NHS to decide which users are most at risk. However, as this app was incompatible with the contact tracing system developed by Google and Apple, the UK system did not work as advertised.

Not only was this embarrassing; it also highlighted the ability of technicians and technocrats in California to affect policy initiatives in Europe.

Furthermore, this instance of de facto standard setting was accomplished without the lengthy industry-wide consultation that typically accompanies standards-setting exercises, but simply through the agreement of two major technology players. Though the ostensible purpose was public health, the antitrust implications, as well as the ability of tech giants to effectively shape policy abroad and, by extension, the operation of companies using their platforms, should not be ignored.
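To make the underlying technical disagreement concrete, the purely illustrative Python sketch below contrasts the two data models at issue: a decentralised design in which rolling identifiers stay on the handset and exposure matching happens locally, and a centralised design in which contact logs are pooled in a single operator-run database. It is not the Apple/Google exposure notification framework or the NHS app; all class and function names here are assumptions.

import secrets

def new_rolling_id() -> str:
    """A random, frequently rotated identifier broadcast over Bluetooth."""
    return secrets.token_hex(16)

# --- Decentralised model (Apple/Google-style): matching happens on the device ---
class DecentralisedPhone:
    def __init__(self):
        self.my_ids = [new_rolling_id() for _ in range(3)]  # identifiers stay on the phone
        self.heard_ids = set()                              # identifiers seen nearby

    def observe(self, other_id: str):
        self.heard_ids.add(other_id)

    def check_exposure(self, published_infected_ids: set) -> bool:
        # The health authority publishes only the identifiers of confirmed cases;
        # each phone compares them locally against what it has heard.
        return bool(self.heard_ids & published_infected_ids)

# --- Centralised model (NHS-trial-style): contact logs are pooled on a server ---
class CentralServer:
    def __init__(self):
        self.contact_graph = {}  # user -> set of identifiers they encountered

    def upload(self, user: str, heard_ids: set):
        self.contact_graph.setdefault(user, set()).update(heard_ids)

    def users_exposed_to(self, infected_ids: set) -> list:
        # The operator, not the device, decides who was exposed.
        return [u for u, heard in self.contact_graph.items() if heard & infected_ids]

if __name__ == "__main__":
    alice, bob = DecentralisedPhone(), DecentralisedPhone()
    alice.observe(bob.my_ids[0])                    # phones exchange rolling identifiers
    print(alice.check_exposure(set(bob.my_ids)))    # True: matched on the device

    server = CentralServer()
    server.upload("alice", {bob.my_ids[0]})
    print(server.users_exposed_to(set(bob.my_ids))) # ['alice']: matched server-side

Even at this level of simplification the trade-off is visible: in the centralised model the operator, not the user’s device, holds the contact graph and decides who counts as exposed, which is precisely what the storage dispute was about.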

Upsetting AI

Second, the Covid-19 crisis has caused major shifts in behaviour: e-commerce sales have risen in many places, more students are taking classes online, more meetings are being conducted online, more people are using telemedicine, more people are having food and groceries delivered to their homes and more people are working from home. The permanence of these trends remains to be seen, but while much has been made of the supply-chain disruptions brought about by Covid-19, these changes have also begun affecting AI systems: machine-learning models trained on normal human behaviour have discovered that ‘normal’ has changed. This has caused hiccups for the algorithms that run behind the scenes in inventory management, fraud detection, marketing and more.
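These hiccups are essentially a data-drift problem: models trained on pre-pandemic behaviour are now scoring inputs drawn from a different distribution. As a purely illustrative Python sketch, where the numbers, threshold and feature are assumptions rather than any standard method, a company might monitor for this kind of shift with a check along the following lines.

import statistics

def drift_score(baseline, recent):
    """Crude drift measure: shift in the mean, expressed in baseline standard deviations."""
    mu, sigma = statistics.mean(baseline), statistics.stdev(baseline)
    return abs(statistics.mean(recent) - mu) / sigma if sigma else float("inf")

# Pre-pandemic daily online-order volumes vs. lockdown-era volumes (made-up numbers)
pre_covid_orders = [102, 98, 110, 95, 105, 99, 101]
lockdown_orders = [180, 205, 190, 220, 215, 198, 230]

score = drift_score(pre_covid_orders, lockdown_orders)
if score > 3.0:  # arbitrary alert threshold
    print(f"Input drift detected (score={score:.1f}); flag model for review or retraining")

When a check like this fires, the affected model would typically be queued for review or retrained on more recent data.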

Companies will now have to update their AI systems with the latest data so that those systems remain competitive and their operations are not compromised, while at the same time maintaining cybersecurity vigilance against bad actors who might try to inject bad data in an attempt to poison a company’s datasets. Companies must therefore constantly oversee their AI systems to make sure they do not misbehave.
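A hedged sketch of what that oversight can look like in practice, with field names, ranges and the list of trusted sources all assumed for illustration: a simple admission gate that lets fresh data into a retraining set while filtering out malformed or implausible records that could poison it.

# Illustrative only, not a production defence against data poisoning.
TRUSTED_SOURCES = {"pos_terminals", "web_store"}
VALID_RANGE = (0.0, 10_000.0)   # plausible order values for this hypothetical business

def accept_record(record: dict) -> bool:
    """Admit a record into the retraining set only if it passes basic sanity checks."""
    if record.get("source") not in TRUSTED_SOURCES:
        return False                                  # unknown origin: possible injection
    value = record.get("order_value")
    if not isinstance(value, (int, float)):
        return False                                  # malformed payload
    return VALID_RANGE[0] <= value <= VALID_RANGE[1]  # reject implausible outliers

incoming = [
    {"source": "web_store", "order_value": 149.0},
    {"source": "unknown_bot", "order_value": 149.0},   # dropped: untrusted source
    {"source": "pos_terminals", "order_value": 9e9},   # dropped: implausible value
]
clean_batch = [r for r in incoming if accept_record(r)]
print(f"{len(clean_batch)} of {len(incoming)} records accepted for retraining")

Real-world defences go further, covering provenance tracking, anomaly detection on model outputs and human review, but the principle is the same: new data should earn its way into the training set.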


Constant oversight is needed

While the primary objective of such oversight is to ensure these systems deliver their expected design outcomes, avoiding liability is also important: a growing list of governments and companies have faced, or are facing, public backlash over poorly implemented AI, and in some cases legal action resulting in damages in the millions of dollars, or even legislative investigation.

Governments have already begun laying the groundwork for closer scrutiny of AI systems. For example, in December 2019 the Australian Human Rights Commission released its Human Rights and Technology Discussion Paper, produced as part of a broader project examining the human rights impacts of emerging technologies, algorithmic bias, artificial intelligence, big data and the fourth industrial revolution. In the paper, the Commission proposed that AI should be accountable in how it is used, that the Australian Law Reform Commission conduct an inquiry into the accountability of AI-informed decision making, and that new legislation require an individual to be informed where AI is materially used in a decision that has a legal, or similarly significant, effect on the individual’s rights. It also proposed mandating algorithmic explicability.

Should Australia’s initiative, and others like it, ultimately translate into new laws, companies will have yet another set of legal concerns to worry about.

Navigating complexity, vigilance, swift intervention

Ultimately, companies and their counsel will have to navigate this increasingly complex labyrinth of laws and even technical standards, keeping in mind not just the potential legal liabilities but also the impact that events such as the Covid-19 crisis can have on business competitiveness and operational integrity.

Covid-19 and the behavioural changes it has ushered in show that timely access to data is more important than ever, given our increasing dependence on AI systems. Yet the need for vigilance, swift intervention and oversight in the face of unpredictable events remains.

___________________________

This is an excerpt from a speech given by Ronald Yu as part of the inaugural Empowering Transformation webinar series examining the impact of cross border data flows. Click here for more.

 


___________________________

Official Publication: Asian-mena Counsel. Click here to read the full issue of Asian-mena Counsel: Counsels of the Year 2020.