
The techlash is coming

Published in Asian-mena Counsel: Cyber Crime & Data Protection Special Report 2019

Introduce artificial intelligence systems with great care, or suffer the consequences, writes Ronald Yu.

 

For many organisations, artificial intelligence has arrived or will be coming soon, bringing all sorts of new challenges for counsel, especially with respect to cross-border data flows.

While most discussion regarding international data transfers has been focused on privacy and cybercrime, these are just two aspects of an increasingly complicated set of converging issues.

To understand how complicated these matters could become, let us start by demolishing any notion that current AI thinks like a human being. It does not. There is still so much we do not know about the human mind that replicating its operation by artificial means is essentially precluded.

AI is, however, good at analysing and extracting patterns from massive amounts of data and deriving generalisations from these patterns.
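To make this concrete, here is a minimal sketch, in Python with scikit-learn and entirely invented toy data, of what "extracting patterns and generalising" means in practice: the model fits a rule to labelled examples and then applies that rule to an input it has never seen. The features, labels and thresholds are all hypothetical.

```python
# A minimal sketch (hypothetical data) of how a statistical model
# "generalises": it fits patterns in training examples and applies
# them to inputs it has never encountered.
from sklearn.tree import DecisionTreeClassifier

# Invented toy training data: [weight_kg, has_claws] -> label
X_train = [[300, 1], [250, 1], [5, 0], [8, 0], [400, 1], [3, 0]]
y_train = ["dangerous", "dangerous", "harmless", "harmless",
           "dangerous", "harmless"]

model = DecisionTreeClassifier().fit(X_train, y_train)

# The model extrapolates the learned pattern (heavy + claws = danger)
# to an animal it never saw during training.
print(model.predict([[350, 1]]))  # -> ['dangerous']
```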

The limitations arise

We are only now starting to more broadly recognise the limitations of this approach.

For instance, what if we developed an AI system to identify potentially dangerous animals but only trained it to recognise potentially dangerous or large mammals located in Europe?

Such a system would not be robust enough to recognise the danger that kangaroos hopping across a road pose to a fast-moving car, and using it in a self-driving car in Australia without properly adapting it would be irresponsible. This was, in fact, the very problem Volvo encountered in 2017, when it discovered that its autonomous vehicle prototypes, which had been trained to recognise the potential dangers of large Swedish moose and elk, did not know how to react to Australian kangaroos.
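The mechanism behind the kangaroo problem can be sketched in a few lines of hypothetical code: a classifier can only answer with the labels present in its training data, so an animal outside that distribution is forced into the nearest known category, however wrong. The features and labels below are again invented for illustration.

```python
# Hypothetical illustration: a model trained only on European animals
# cannot even express the answer "kangaroo" -- that label does not
# exist anywhere in its training data.
from sklearn.neighbors import KNeighborsClassifier

# Invented features: [shoulder_height_m, moves_by_hopping (0 or 1)]
X_train = [[1.9, 0], [2.1, 0], [0.5, 1], [0.4, 1]]
y_train = ["moose", "moose", "rabbit", "rabbit"]  # European labels only

model = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

# A 1.5m hopping kangaroo must be shoehorned into "moose" or "rabbit";
# whichever the model picks, the driving logic downstream is misinformed.
print(model.predict([[1.5, 1]]))
```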

AI systems, of course, can learn new tricks, though not always the ones their developers expect or want, as Microsoft famously learned when its Tay chatbot began posting offensive tweets through its Twitter account, forcing Microsoft to shut down the service within 24 hours of its launch. Microsoft blamed this on a coordinated attack by people who had exploited a vulnerability in Tay, though it later admitted it had made a critical oversight regarding this specific attack.

Controversy and court action

Though AI systems are often touted for their putative benefits of mitigating human bias and error while promising cost efficiency, accuracy and reliability, poor implementation, particularly in health care, criminal justice, education, employment, benefits disbursement and other areas, has resulted in numerous problems and challenges, both in the courts of public opinion and in actual courts.

For example, as more police departments employ predictive policing systems to forecast criminal activity and allocate police resources, such systems are increasingly being challenged by critics who claim they are built on data produced during documented periods of flawed, racially biased and sometimes unlawful practices and policies, and that they therefore perpetuate controversial policing practices. Critics also dismiss as insufficient vendors' assurances that their systems adequately mitigate or segregate this data.

There has also been litigation in America and Europe over the use of AI in the disbursement of medical benefits, public school teacher employment and juvenile criminal risk assessment.

That’s not all

But these are not the only problems with AI systems. Modern AI systems are often criticised for their lack of creativity or adaptability compared to a human (an AI system can be updated with new training data, though not instantaneously).
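As a rough sketch of why such updates are not instantaneous, consider the hypothetical scikit-learn example below, reusing the invented toy data from earlier: new examples do nothing until the model is actually retrained and the retrained model is redeployed.

```python
# Hypothetical sketch: adding new training data does not change a
# deployed model; the update only takes effect after retraining.
from sklearn.neighbors import KNeighborsClassifier

X = [[2.0, 0], [0.4, 1]]          # invented toy features, as before
y = ["moose", "rabbit"]
deployed = KNeighborsClassifier(n_neighbors=1).fit(X, y)

X.append([1.5, 1])                # new Australian training example...
y.append("kangaroo")
# ...yet `deployed` knows nothing about kangaroos until retrained:
updated = KNeighborsClassifier(n_neighbors=1).fit(X, y)

print(deployed.predict([[1.4, 1]]))  # still a European label
print(updated.predict([[1.4, 1]]))   # -> ['kangaroo'] after retraining
```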

AI systems also still have communications limitations, which is why some AI companies employ armies of humans to review conversations recorded by their devices (eg, Amazon's Alexa) for accuracy, a practice that has drawn all sorts of privacy-related criticism.

Finally, AI systems are limited in metacognition: they cannot really think about how they think. Thus, if an AI system encounters a problem, it will revert to what it has been trained to do and will continue doing so, as it is unable to step back, consider what it is doing wrong, analyse the problem and try a different (and hopefully successful) approach.

While we might deride such behaviour in a human as insanity (doing the same thing over and over again while expecting a different result), AI systems simply do not know any better.
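A final hypothetical sketch makes the metacognition gap visible: a trained model is a fixed function, so presenting the same troublesome input repeatedly yields exactly the same answer, with no reflection or change of strategy in between. The data is the same invented toy set as above.

```python
# Hypothetical sketch of the metacognition gap: the model cannot step
# back and reconsider -- the same failing input fails identically,
# every single time.
from sklearn.neighbors import KNeighborsClassifier

model = KNeighborsClassifier(n_neighbors=1).fit(
    [[2.0, 0], [0.4, 1]], ["moose", "rabbit"])  # toy data, as before

for attempt in range(3):
    # No learning happens between calls; nothing analyses the failure.
    print("attempt", attempt, "->", model.predict([[1.5, 1]]))
```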

Implications

The implications are that we will have to live with communications-challenged AI systems for a while longer, and that human intervention will be required where AI systems encounter something they do not expect. The latter means that AI-based products may still require considerable additional development to adapt to local conditions, something tech companies developing AI products will need to consider (along with the related data-transfer issues), and that companies introducing AI systems must recognise that those systems may be wholly inappropriate for their business environments.

Given the aforementioned public backlash and legal action against poorly implemented AI, this is no small consideration.

And one more thing…

This growing resistance to AI and related technologies such as facial recognition will likely mean that companies have less flexibility in employing new technologies, need to be more careful in how they introduce and implement AI systems, or both. How this will affect longer-term growth and progress remains to be seen.

However, as if the potentially serious issues with AI systems trained on biased or limited data, built on problematic algorithms or sloppily implemented, or the concerns about future worker displacement by AI, were not enough, there is also the present problem of disaffection among tech workers who increasingly feel left out of the mainstream. This is exemplified by the protests against ride-sharing companies (which employ AI and data analysis extensively) by livery drivers in the New York metropolitan area, and by the collective bargaining actions undertaken by African workers against Amazon in Minnesota over the past year.

The techlash is just beginning…

Official Publication: Asian-mena Counsel. Click here to read the full issue of Asian-mena Counsel: Cyber Crime & Data Protection Special Report 2019.