AI Innovation vs. Regulation

Will regulators ever be able to keep pace with AI innovation? It will likely always be a struggle, so perhaps the better answer lies not in new laws, but in new principles for artificial intelligence.

A previous version of this article was originally posted at The AI Groupie, a site run by Tariq and devoted to big-picture views on AI, written for everyone.

All the hullabaloo about the privacy issues surrounding face recognition technology is… distressing. And not the hullabaloo part.

AI is going to present us with a firehose of ethical challenges over a compressed timescale the likes of which we have never before encountered. The real issue, I believe, will be that AI innovation will vastly outpace regulatory innovation. In fact, so fast is the pace of AI advancement that a new piece of legislation or industry regulation risks being outdated by the time it comes into effect.

This constant sense of playing catch-up (which will become a source of malaise amongst legislators and regulators if it isn’t already) is a risk in itself – possibly provoking crude and far-reaching attempts to legislate away the problem in response to public concern. Some of the calls for outright bans on face recognition probably already fall into this camp.

Recent AI Innovation Buzz

To illustrate the point about the speed of AI innovation, here are a few forward-looking applications that are just around the corner. There are no doubt huge potential benefits here! But in the same breath we should pause and think through the ethical (and sometimes moral) questions they will raise.

  • A Clearview AI face recognition patent application describes potential use cases: “to know more about a person that they meet, such as through business, dating, or other relationship”, to “grant or deny access for a person to a facility, a venue, or a device”, to identify “a sex offender” or “homeless people,” or to determine whether someone has a “mental issue or handicap”
  • In August 2020 an AI system beat an experienced US fighter pilot 5-0 in a simulated dogfight. In December 2020 the US Air Force flew the first real-world test flight with human and AI co-pilots. The Pentagon intends to conduct live trials pitting tactical aircraft controlled by AI against human pilots by 2024
  • Deep Nostalgia from MyHeritage.com takes portrait photos and gives them realistic head movements and facial expressions. Within two years (my view), with a photo + a sample voice recording + a large(ish) sample of a person’s written text, AI will be able to create realistic interactive simulacra of the living and of the departed
  • Neuralink is an admittedly impressive company attempting to create a workable computer-brain interface. Elon Musk, some guy you may have heard of, reported in January that they had implanted a chip into a monkey’s brain and that it was now playing video games with only its mind. This slope is more slippery than the leftover peels from that monkey’s lunch.

The AI ethics debate to date has focused on what companies need to do to ensure that they use AI ethically. That’s necessary… but not sufficient.

Concern Among Business Leaders

KPMG just published the results of a January 2021 survey of US business and IT leaders at companies with revenues above $1bn: more than half of business leaders in Tech, Retail and Industrial Manufacturing say that AI is moving faster than it should in their industry. Business leaders are conscious that controls are needed and overwhelmingly believe the government has a role to play in regulating AI technology. Pegasystems found something similar in their 2021 Technology Trends survey (conducted across 12 countries): three quarters of respondents felt the current level of external AI governance isn’t sufficient to manage its explosive growth.

Principles, Not Just Laws

What we need – and sooner rather than later – is a set of principles for AI (or inviolable rights for humans), agreed at a trans-national level, on the basis of which any prospective application can be judged.

Turns out Isaac Asimov, with his Three Laws of Robotics, got it right. The need for this is going to become very apparent very quickly. The Partnership on AI and the World Economic Forum’s recently formed Global AI Action Alliance are steps in the right direction, but we need institutions of authority – representing both the public and private sectors – to establish and police AI deployment principles.

Perhaps these institutions should take Janus, the Roman god of beginnings and transitions, as inspiration. One face focused on facilitating the myriad opportunities and benefits that the new world of AI will make possible, the second intent upon navigating humanity through the ethical challenges that AI will inevitably bring.

