Rebuilding Trust in Banking With Artificial Intelligence

Why the ‘why way’ is the right way to restore trust in AI


By Dr. Alain Briançon, VP of Data Science and CTO for Cerebri AI.

“Why? – Because I am your mother, that’s why.” – My mom/your mom/everyone’s mom.

Artificial Intelligence is growing in sophistication, autonomy, and market reach, offering transformational opportunities for businesses and their customers.

AI relies on the collection and smart processing of personal information to function.

However, the privacy scandals of social media and recent breaches of consumer data have eroded consumer confidence: not only around how data is used, but around the implications of its seeming omnipotence.

We live in an age of maximum customer empowerment and, as a result, maximum business anxiety. Today customers use multiple channels to interact with your brand, spend more, and have access to more information about you than ever before. Customers are two swipes away from a list of reasons why they should switch to your competitor.

Banks, global electronics and vehicle OEMs, service providers, and governments used to wait for and react to customer interactions once their customers visited their stores, dealerships, branches, or offices.

Today’s world is one where customers expect proactive experiences. Enterprises cannot afford to cast a broad promotional net, sit back, and hope for the best. They must observe and adapt to each customer.

In this era, organizations cannot afford to be perceived as rigid or dogmatic. For these reasons, many organizations have been relying on AI.

Besides enabling the obvious benefits in personalization, AI also inherently creates ‘looseness’ or ‘fuzziness’, because it uses multiple factors, events, variables, and features.

When leveraging ‘fuzziness’, brands and businesses need to tread carefully, avoid harming consumer goodwill… and manage the all-important trust.

The goodwill consequences of AI deployment have been minimal because, so far, the most visible use cases of AI have been in the realm of “consumer lifestyle” (CLS).

There are many types of AI systems affecting CLS: there are recommendations (‘you should like this’ or ‘take the second exit on the roundabout’); interpretations (‘this is a picture of your aunt Helen in your picture collection’); recognition (‘this is what you said’); and more.

Things change as AI deployment reaches the “Consumer Life Critical” (CLC) stage: when businesses make hard decisions about their customers that affect those customers’ life patterns. As a gigabyte of prevention is worth a terabyte of cure, AI must protect itself against an impending backlash of consumer concerns and the resulting politicians’ reactions.

THE key defense here is explainability. Unlike dealing with our mothers, in any customer-facing activity – whether it is retail, finance, healthcare, or hiring and firing – when AI is used to classify types/genres and to restrict choices and transactions that impact CLC, goodwill must be earned, and trust kept.

Explaining explainability

Within the European Union, any AI decision that has an impact on customers is legally required to be ‘explainable’ (GDPR Recital 71). In the US, the Equal Credit Opportunity Act, Title 12, requires sharing the reasons for an adverse action. Regardless of the regulatory landscape, AI outcomes must be explained.

Explainability must not be designed as a single-audience add-on. Data scientists use machine learning solutions to create value, increase engagement, and drive better key performance indicators (KPIs) for their business.

Too often, data scientists slap feature importance on as a substitute for explainability. “This is what drove the decision” is a cheap line.

As an aside, care should be taken with continuous versus discrete variables, because, at times, bad encoding is really what “drove the decision.”
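To make that aside concrete, here is a minimal sketch (illustrative only, not from the article) of how encoding can masquerade as signal: impurity-based feature importance in a random forest is biased toward high-cardinality features, so a label-encoded, pure-noise identifier can appear to have “driven the decision.”

```python
# A minimal sketch (illustrative, not from the article): a label-encoded,
# pure-noise "branch_id" steals importance from the genuinely predictive
# "income", because trees find many spurious split points in a
# high-cardinality integer code.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 5000
income = rng.normal(60_000, 15_000, size=n)   # genuinely predictive
branch_id = rng.integers(0, 200, size=n)      # pure noise, 200 levels
y = (income + rng.normal(0, 10_000, size=n) > 60_000).astype(int)

X = pd.DataFrame({"income": income, "branch_id": branch_id})
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(dict(zip(X.columns, model.feature_importances_.round(3))))
# branch_id earns non-trivial "importance" despite carrying no signal --
# quoting it as "what drove the decision" would mislead.
```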

When dealing with regulators, machine learning models are at times mapped to decision trees (albeit, at times, very large and deep ones). Decision trees are deemed “interpretable.” A subject matter expert can look at one and trace the path of decisions the AI made to arrive at its output. But is interpretability the same as explainability? Explaining explainability can be challenging.

Those steps are necessary, but far from enough:

Some decision trees might just as well be a forest (picture courtesy Alain Briancon).
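One common way to produce such a tree (the article does not prescribe a specific method, so this is a hedged sketch) is a surrogate model: fit a shallow decision tree to the black-box model’s own predictions, then let an SME trace its paths.

```python
# A hedged sketch of one common mapping (not the article's prescribed
# method): train a shallow "surrogate" decision tree on the black-box
# model's own predictions, then print its rules for an SME to trace.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Fit the surrogate to the black box's outputs, not to the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how faithfully the readable tree mimics the black box.
print("fidelity:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(6)]))
```

Capping the surrogate at depth 3 keeps it readable; chasing higher fidelity is exactly what balloons it into the “forest” pictured above.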

Data scientists’ designs must be built in terms of business impact, not model impact. If you cannot attach a dollar or market-impact measurement to your results, you have not done enough for quality or explainability.

Explainability must be designed as a quality metric AND an audit method. Model quality is important: every model deployed should be measured against more than a dozen technical performance metrics.

The impact of missing data, data-source quality, expected and affected customers, and the usage of variables and features, along with the lifecycle of deployment and the valuation of impact, should all be part of the design from the get-go.
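As a rough illustration of what such a metric battery could look like in practice (the specific metrics below are assumptions, not the author’s list), one can score a deployed model against several quality measures and record input health alongside them:

```python
# An illustrative audit sketch (metric choices are assumptions, not the
# author's list): score one model against a battery of technical quality
# metrics and log the missing-data rate of its inputs in the same record.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, brier_score_loss, f1_score,
                             log_loss, roc_auc_score)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]
pred = model.predict(X_te)
audit = {
    "accuracy": accuracy_score(y_te, pred),
    "roc_auc": roc_auc_score(y_te, proba),
    "f1": f1_score(y_te, pred),
    "brier": brier_score_loss(y_te, proba),  # calibration quality
    "log_loss": log_loss(y_te, proba),
}
print({k: round(v, 3) for k, v in audit.items()})

# Input health belongs in the same audit record (all zeros on synthetic
# data; real banking feeds will not be so lucky).
print(pd.DataFrame(X_te).isna().mean().to_dict())
```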

While subject matter expert (SME) reviews are the first line of defense for ensuring the rationale behind AI-powered decisions is sound, the explanation must live outside the language of AI geeks.

Explaining as quality control on the Cerebri Values CX platform (picture courtesy Alain Briancon)

A sound level of understanding must be tailored to CEOs, IT, SMEs, and, of course, customers.

For CEOs and CTOs, understanding the impact of missing and imputed data, protected and private attributes, and the balance of multiple KPIs should be integral to the rollout of AI across their businesses.

Care needs to be taken to ensure model training is unbiased and fair (whether it is supervised, unsupervised, or uses reinforcement learning) and avoids forcing a fit to preconceived behaviors.

Large-scale organizations often gather customer data in one “customer journey per function”, and then analyze these journeys for insights that drive engagement and financial results. However, true customer journeys cut across sales, marketing, support, and all other functions that touch the customer. That means that any AI-driven decisions impact multiple departments and P&L centers.

Goodwill within an organization is important as well. Robbing Q4 product sales for Q3 service sales might be OK if everyone knows about it.

The importance of UX

Audit means inspection and traceability of decisions. It implies a friendly user interface integrated with normal AI operation. Designing for explainability can require trading away some performance. Data scientists must strive for best-in-class design, and then pull back performance just enough to provide explainability.

Focusing on explainability as a quality metric has additional benefits that compensate for potential performance losses, especially when dealing with systems that leverage customer journeys, in contrast with factor-based or demographic-based systems that only look at static variables.

Explaining is inherently a causal interaction. New techniques are emerging to deal with causality that, in turn, improve the performance of customer journey models. They include Shapley analysis, do-calculus, interventional logic, counterfactual analysis, Granger causality, and graph inference. These techniques can be used for feature engineering and can improve modeling significantly.
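Shapley analysis, for instance, is available off the shelf in the open-source shap package; here is a minimal sketch (dataset and model are placeholders, not the article’s system) of turning a score into per-feature contributions:

```python
# A minimal sketch of Shapley analysis with the open-source `shap` package
# (dataset and model are placeholders). Per-customer attributions turn
# "the model scored you 0.73" into "feature 2 added +4.1, feature 0
# subtracted 1.3" -- the raw material for a customer-facing explanation.
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=1000, n_features=5, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])   # attributions for one customer
print("base value:", explainer.expected_value)
print("per-feature contributions:", shap_values[0].round(2))
```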

There are significant benefits in building explainability and interpretability into an AI system.

Alongside helping to address business pressures, adopting good practices around accountability and ethics improves confidence in AI, thus hastening deployment for CLS applications.

An enterprise will be in a stronger position to foster innovation and move ahead of its competitors in developing and adopting new AI-driven capabilities and insights.

For AI to be adopted thoroughly, the backlash against the obvious privacy abuses of social media (in the Western world) and the ‘slap-happy’ approach to data security has to be worked out.

To succeed in the long term, AI must be impact/outcome-centric. That means being stakeholder-explanation-centric. Above all, AI must be customer-centric, and that means explainability embedded from the beginning.

“Why? – Because I am your customer, that is why.”

Bio: Dr. Alain Briançon, Chief Technology Officer and VP Data Science of Cerebri AI, is a serial entrepreneur and inventor (over 250 patents worldwide) with vast experience in data science, enterprise software, and the mobile space.


Source: https://www.kdnuggets.com/2019/10/restoring-trust-ai.html

AI & Financial Services Can Work Together to Rebuild Consumer Trust


Artificial intelligence (AI) and financial services have one thing in common—consumers don’t trust them.

According to the 2019 Edelman Trust Barometer, finance is still the least trusted industry globally.

Consumers are also on the fence when it comes to putting their trust in AI, mainly because of the lack of clarity regarding the processes, performance, and intentions of the provider.

So how can two very different industries—banking and AI—use their unique resources and skills to rebuild consumer trust?

Engagement fosters trust (and vice versa)

Forrester’s 2018 Customer Experience Index found that the banking industry is struggling to create and maintain a human connection with customers. This is understandable since this endeavor is new to banks. In the past, banks had a captive audience—people joined the same bank as their parents…and stayed with them forever. But times have changed.

We’re living in an era where our Netflix account chooses our entertainment and our Amazon account knows when we’re running low on toilet paper. It’s hard to believe that our banks—the institutions we trust with our livelihoods—are the most silent of all the companies we interact with every day. A recent study by Celent found that only 44 percent of people feel that their bank knows them.

Millennials have matured in an age where personalized, responsive services—those delivered by Uber, Amazon, and Google—are the norm. It’s no wonder they’re 2 to 3 times more likely to switch banks than people in other age groups.

As banks pivot to meet the demands of the evolved consumer, they’re looking to AI to increase customer engagement through personalized services. The end result: a virtual financial assistant that acts as a conduit between the bank and its users.

As with any new relationship, there’s a period of trust-building that needs to happen before value can be exchanged. Before virtual assistants can become the status quo, banks need to learn the language of trust—and their conversational AI needs to enable it.

Building trust in virtual assistants

It takes time to build trust. Virtual assistants are capable of performing everyday banking tasks for customers, but it all depends on one thing—the trust of the user.

If the virtual assistant begins to make decisions about your finances without your consent, you will be uncomfortable—and rightly so. You haven’t reached the right level of comfort with the bot to trust it to take such actions. But if the bot starts by giving you useful advice that helps you reach your financial goals, you’ll slowly begin to trust that it has your best interests in mind.

Customer trust journey with a virtual financial assistant

Level 1: You interacted with the virtual assistant. You opened a dialogue because you trusted it would be worth your time.

Level 2: Depending on what channel you’re in, you permit the assistant to access your account to see what’s going on.

Level 3: You listen to insights from the assistant such as, “Your spending is on track this week, think about saving some money.”

Level 4: You allow the assistant to take action with your approval, for example, “You’ve just been paid, can I move $50 into saving?”

Level 5: The assistant knows you so well, it completes pre-approved tasks for you proactively.

Achieving the privacy–value exchange

In the modern economy, consumers expect to exchange a certain amount of personal privacy with organizations that deliver value. However, privacy is inherently trust-based.

And after several decades of complacency, the world is now hyper-focused on privacy. The steady flood of data breaches has washed away people’s trust in organizations.

People no longer trust the system to self-regulate, so they want the power to regain control of their data at the push of a button.

Governments around the world have stepped up to meet these consumer demands by implementing sweeping regulations like the EU General Data Protection Regulation (GDPR) that give people more control over the personal information being collected about them—and allow them the right to be forgotten. These regulations are good for society. They’ve made us think about the value that others extract from our data and the value that we get in return.

As we build relationships with bots (and banks), the privacy–value exchange becomes key since value exchange is completely dependent on trust.

Banks must guide customers on a journey to build trust in the virtual financial assistant.

For the assistant to deliver true value, the customer must trust that their privacy is respected and that the bot has their best interests in mind—and not the interests of the bank.

Finn AI has defined five key principles—or tenets—of trust. If a user is confident that the bot follows these principles, they’ll be more likely to trust it to take valuable actions on their behalf.

Finn AI tenets of trust

“I trust you to be competent”

If you tell the assistant to move $50 from checking, the bot does this correctly.

“I trust you to be well intentioned”

The assistant is not sneaky. It is working for you, and only you.

“I trust you to know me”

The assistant understands your unique needs and only recommends actions, products, or services that will benefit you financially.

“I trust you to be reliable”

The assistant is available whenever you need it.

“I trust you to be discreet”

The assistant only uses the information you’ve shared for the purposes for which it was shared. It will not use this information in the future, for example, to prevent you from being approved for a loan.

Quantifying trust and making it actionable

The ability to quantify trust will be the biggest differentiator in banking in the future. Finn AI is working with customers to achieve this. Leveraging data from user behavior, we are building a framework that banks can use to determine the trust level of a given set of users—and that Finn AI can use to build better products.

Going beyond customer surveys and CSAT measurements, data scientists can correlate behaviors with trust so that banks can surface insights like:

  • “If a user expresses this sentiment in this context, it demonstrates that trust levels are declining and we should do something to improve this.”
  • “If a user takes this action, it demonstrates that trust levels are increasing and they may be ready to try feature X.”

In this way, banks can intuit a user’s needs based on how they’re behaving with their virtual assistant.
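The article does not publish the framework itself, so the following is a purely hypothetical sketch of the idea: map observed behavioral signals onto the coarse trust levels described earlier, and let the level gate which actions the assistant offers next. Every signal name and rule below is invented for illustration.

```python
# A purely hypothetical sketch (Finn AI's actual framework is not published
# here): map behavioral signals onto the trust levels described above, so
# the level can gate what the assistant offers next. All signal names and
# rules are invented for illustration.
from dataclasses import dataclass

@dataclass
class BehaviorSignals:
    opened_dialogue: bool         # Level 1: engaged at all
    granted_account_access: bool  # Level 2: shared account visibility
    acted_on_insight: bool        # Level 3: responded to a nudge
    approved_bot_action: bool     # Level 4: approved an action, e.g. a $50 move

def trust_level(s: BehaviorSignals) -> int:
    """Return a coarse trust level (0 = none, up to 4 here; Level 5,
    proactive pre-approved tasks, would need a longer history)."""
    level = 1 if s.opened_dialogue else 0
    if level >= 1 and s.granted_account_access:
        level = 2
    if level >= 2 and s.acted_on_insight:
        level = 3
    if level >= 3 and s.approved_bot_action:
        level = 4
    return level

user = BehaviorSignals(True, True, True, False)
print(trust_level(user))  # -> 3: ready to be offered approval-gated actions
```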

In the experience economy, profitability and happy customers are not mutually exclusive. Smart banks will work to bridge the trust gap with personalized services, delivered via virtual assistants, to help customers achieve their financial goals. In turn, banks will increase retention and customer lifetime value. Trust is a win-win for everyone and banks must invest heavily in it.

Read the Celent study on Raising the Customer Experience Bar: How to Close the Trust Gap in Retail Banking for more details.

Source: https://www.finn.ai/ai-in-banking-can-rebuild-consumer-trust/
