
Managing the Risks of AI

If there’s anything science fiction movies have taught us, it’s to stay one step ahead of the robots. So how do you help ensure your company’s AI experience is less SkyNet and more WALL-E?

Adoption of AI within business has taken off like a rocket over the past couple of years.

From remote work models and employee turnover to global supply chain disruptions, the pandemic era has challenged operational continuity like never before – and accelerated the adoption of enterprise AI beyond traditional front-office applications.

While governments scramble (albeit slowly) to assemble regulatory frameworks, AI speeds ahead at a breakneck pace, attempting to address these operational challenges. Implementers find themselves caught in between, needing to adapt and innovate quickly without becoming the cautionary tale that inspires a new generation of Asimovs.

So in this limbo, how can you tap into the power of AI while mitigating risk?

1. Understand the uniqueness of AI

Even organizations with robust risk management plans and processes – banks, we’re talking about you – should evaluate their existing frameworks through the lens of AI.

Removing human intervention can alleviate talent shortages and lower costs. But by its nature, it also strips out the subjective judgment that often helps avoid pitfalls. The latest trends in AI present ongoing safety and ethical risks that, if not properly mitigated, can damage not just utility and quality, but also your reputation with customers.

Ask these questions to deepen your understanding of AI risks:

  • No-code/Low-code solutions – How can you encode risk controls into systems that are intended to be used by non-experts?
  • Behavioral intent prediction – Are your customers comfortable having their emotions analyzed based on voice and facial data? And is the AI’s assessment actually accurate?
  • AI autonomy – How will you teach the model to optimize against certain metrics without compromising others? Real systems don’t always have a test environment – how will you “train” them without affecting production? (See the sketch below for one way to make this concrete.)

There are many more questions along these lines to ponder. Applying that AI lens to your current risk assessments and methodologies will help you understand the unique challenges AI brings.
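To make the “AI autonomy” question above a little more concrete, here is a minimal sketch – in Python, with entirely hypothetical metric names and thresholds – of one way a team might gate model changes: a candidate model is promoted only if it improves the primary metric without breaching guardrail ceilings on the others.

```python
# Minimal sketch, not a prescription: the metric names and ceilings below are
# hypothetical stand-ins for whatever your organization actually measures.

GUARDRAILS = {
    "false_positive_rate": 0.05,     # ceiling: must stay at or below
    "demographic_parity_gap": 0.10,  # ceiling on a simple fairness measure
    "p95_latency_ms": 250,           # ceiling on response time
}

def safe_to_promote(candidate: dict, baseline: dict) -> bool:
    """Promote only if the primary metric improves AND every guardrail holds."""
    improves = candidate["primary_accuracy"] >= baseline["primary_accuracy"]
    within_guardrails = all(
        candidate[name] <= ceiling for name, ceiling in GUARDRAILS.items()
    )
    return improves and within_guardrails

# Example: better accuracy, but the latency guardrail is breached.
baseline = {"primary_accuracy": 0.91, "false_positive_rate": 0.04,
            "demographic_parity_gap": 0.08, "p95_latency_ms": 200}
candidate = {"primary_accuracy": 0.93, "false_positive_rate": 0.04,
             "demographic_parity_gap": 0.07, "p95_latency_ms": 310}

print(safe_to_promote(candidate, baseline))  # False
```

The specifics will differ for every organization and every model. The point is that “optimize one metric without compromising others” becomes an explicit, testable rule rather than a hope.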

2. Establish and entrench a set of guiding principles for AI

Once you’ve developed an understanding of the risks, it should become clearer how they overlap with your organization’s goals and culture. The next step therefore is to establish a set of guardrails, or guiding principles, that applies this understanding of AI risk to your business operating models.

BMW is a great case study for this.

It may come as no surprise that the home of the Ultimate Driving Machine is a leader in strategic AI management. BMW’s 7 Principles for the Use of Artificial Intelligence guide the auto group’s development and application of AI solutions across its business.

What are the AI principles that define how your business operates?

3. Design a robust end-to-end framework that activates these principles

Assessments and guiding principles are important – and a lot of consulting firms tend to stop there. But the magic happens in the application of those principles.

For example, BMW did not call it a day with its fine list of principles. It next established a Center of Excellence, which stays abreast of trends and constantly iterates on tools and processes, all while regularly communicating with colleagues in a way that balances excitement and caution. All of this sends a strong signal throughout the organization that AI is a leadership focus and comes with the governance and accountability needed for success.

Other keys to success include offering formal and informal AI training, promoting AI certification programs, and aligning AI principles with incentives and employee recognition.

Bottom line, your framework should:

  • Provide insights into AI applications to ensure they’re effective
  • Expose safety and ethics risks to make them manageable
  • Create a feedback loop to ensure any vulnerabilities are addressed (one lightweight example is sketched below)
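As one illustration of that feedback loop – not a prescription, and with purely hypothetical names – a lightweight risk register that captures flagged AI outputs and keeps unresolved findings visible might look something like this in Python:

```python
# Minimal sketch of a "feedback loop": record flagged AI outputs so safety and
# ethics issues surface for review instead of quietly disappearing.
# All names here are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RiskFinding:
    system: str        # which AI application produced the output
    category: str      # e.g. "safety", "ethics", "quality"
    description: str   # what was observed
    logged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    resolved: bool = False

class RiskRegister:
    """Collects findings and keeps the unresolved ones visible for review."""

    def __init__(self) -> None:
        self._findings: list[RiskFinding] = []

    def log(self, finding: RiskFinding) -> None:
        self._findings.append(finding)

    def open_findings(self) -> list[RiskFinding]:
        return [f for f in self._findings if not f.resolved]

register = RiskRegister()
register.log(RiskFinding("chat-assistant", "ethics",
                         "Sentiment model misread a frustrated customer as hostile"))
for finding in register.open_findings():
    print(f"[{finding.category}] {finding.system}: {finding.description}")
```

Whether it lives in code, a ticketing system, or a spreadsheet, the mechanics matter less than the habit: issues get recorded, reviewed, and closed – not lost.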

Knowledge + Application = Effective Risk Management

Effectively and securely managing risk within your AI platforms requires a deep understanding of those risks, combined with a top-to-bottom set of processes and controls that can be applied to real-life business processes. Do this right, and you might just stay ahead of the robots.

A previous version of this article was originally posted at Navigate AI – a site run by Tariq and devoted to big-picture views on AI, written for everyone.


Further Advisory helps major banks and other financial institutions think through the strategies and operational realities of applying AI to deliver services and experiences for both customers and colleagues. From partnership strategy and due diligence to implementation and go-to-market, we make the vision a reality.
