Banks Must Prevent Themselves From Becoming An Accomplice To Financial Crime
Author: James Ollerenshaw  |  Published: January 30, 2017

Criminals exploiting broken Know-Your-Customer processes are costing banks and their customers millions. Now, several leading banks are using Artificial Intelligence to fix the problem.

As fraud and cyber-crime overtake theft to become “the most commonly experienced offence” in the United Kingdom, a story in the Sunday Times newspaper has shone a light on a problem within the banks that we all use to look after our cash. With victims of scams collectively losing millions, a leading banker has argued that banks which have allowed themselves to be hoodwinked by criminals should pick up the bill.

The problem lies in broken Know-Your-Customer (KYC) processes – the mechanism that is supposed to protect banks from exploitation by financial criminals. The newspaper report describes how banks have opened accounts used by crooks, accepting their forged documents as sufficient proof of identity. Confidence tricksters use the fraudulent accounts to accept stolen funds and then rapidly move them beyond the reach of the authorities.

“An accomplice to fraud”

The concern for banks is that inadequate checks on customer identities could render them liable for costly refunds. The problem is described by Sir Peter Burt, the former boss of Bank of Scotland: “A bank that opens an account where money is being transferred by a fraudster is an accomplice to a fraud. It is not a question of whether or not a bank can recover the money… it is a question of whether or not a bank is permitting a fraud to take place by not knowing who their customer really is.”

Burt recommends that banks should not limit identity verification to documents, such as passports and letters from utility companies, but should “do external checks and maintain an audit trail with reliable and independent sources.” The challenges facing banks are deciding what those external sources should be and how to perform checks thoroughly without delaying legitimate customers. Banks could insist that documents are signed by a notary, but this too is open to forgery and bribery.

Catching crooks in a big data net

Analysis of large external data sources can help to reveal hidden identities and relationships. Information about a prospective customer – their interests and connections – can be gleaned from social media, news media, electronic communications, credit and criminal records, and other consumer and public records. Unfortunately, the legacy technology used to support KYC processes lacks the capability to automatically collect and holistically analyze multiple sources of data.

A further difficulty is resolving entity information across and between such vast and varied collections of data. Conventional computers are unable to determine precisely who’s who, especially when names are abbreviated or aliases are used. In response, banks have been forced to build labor-intensive processes that require knowledge workers to make manual checks on customers deemed to be higher risk. The process is slow and expensive, and it overlooks those who have not triggered a raised risk score. As criminals find ways to disguise their identity and intent, accounts continue to be opened fraudulently and banks become unwitting facilitators of the scammers’ illicit schemes.
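To make the entity-resolution difficulty concrete, here is a minimal, hypothetical Python sketch (standard library only, and not Digital Reasoning’s implementation) showing why exact matching misses abbreviated names, and why a raw similarity score alone cannot settle who’s who.

```python
# Hypothetical sketch: why exact-match KYC name checks miss aliases and
# abbreviations, and why naive fuzzy matching is not enough on its own.
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lowercase a name, strip punctuation and collapse whitespace."""
    cleaned = "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace())
    return " ".join(cleaned.split())

def similarity(a: str, b: str) -> float:
    """Character-level similarity between two normalized names (0.0 to 1.0)."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

watchlist = ["Jonathan A. Smith", "Maria del Carmen Ruiz"]  # made-up names
applicant = "Jon Smith"  # abbreviated form of a listed name

for listed in watchlist:
    exact = normalize(applicant) == normalize(listed)
    fuzzy = similarity(applicant, listed)
    print(f"{listed!r}: exact={exact}, similarity={fuzzy:.2f}")

# Exact matching misses the abbreviation entirely, and a similarity score
# by itself cannot say whether "Jon Smith" is the same person or someone
# else entirely, which is why manual review, or richer context from other
# data sources, is still needed.
```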

Turning big data into smart insights

Now, several major banks are looking at how artificial intelligence (AI) can provide a solution to this growing problem. Despite sounding futuristic, in most cases this is a simple extension of proven cognitive computing technology that many banks already have in place. Cognitive analytics, a form of AI, is now widely used to increase the effectiveness of programs that monitor employee communications for noncompliant or illegal behavior. To enhance KYC processes, the same approach is applied to the big data records that can expose financial crime.

Computers are good at processing large volumes of data at high speed, but conventional machines are unable to make sense of ordinary human communication. Cognitive systems are different. They analyze text, voice and even image data in a similar way to people, using semantics and context to work out what is being communicated, to whom, and why. Holistic analysis of multiple data sources makes it possible to glean insights that would not otherwise be apparent: concealed identities, criminal involvements, and hidden networks and relationships.
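As a rough illustration of the kind of text analysis described here, the sketch below uses the open-source spaCy library to pull the people, organizations and places out of a short, invented news snippet. It stands in for the general idea only, not the cognitive platform discussed in this post, and it assumes the small English model has been installed.

```python
# Minimal sketch of semantic text analysis with the open-source spaCy
# library. Illustrative only; assumes the small English model is present:
#   python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

snippet = (
    "Regulators say John Doe, also known as J. D. Carter, moved funds "
    "through Acme Holdings shortly after opening an account in London."
)

doc = nlp(snippet)

# Extract the named entities mentioned in the text: raw material for
# linking an applicant to aliases, organizations and locations.
for ent in doc.ents:
    print(ent.text, ent.label_)
```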

The insights can be accumulated into profiles on individual customers, with a clear audit trail of the data source and content. Unlike traditional KYC, there is effectively no limit to the number of customers or scale of data that can be used to perform checks and populate these profiles. Every new customer can be assessed to the same exacting degree, not only at the time of account opening but on an ongoing basis. This means that any new insights are brought to light and KYC profiles remain up to date. 
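One hypothetical way to picture such a profile is as a simple record of insights, each tagged with its source and timestamp so the audit trail is preserved. The sketch below illustrates the concept only; it is not any particular product’s data model.

```python
# Hypothetical sketch: accumulating sourced insights into a customer
# profile while keeping an audit trail of where each insight came from.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Insight:
    summary: str            # e.g. "possible alias: J. D. Carter"
    source: str             # where the insight came from (news article, record, ...)
    recorded_at: datetime   # when it was added to the profile

@dataclass
class CustomerProfile:
    customer_id: str
    insights: list[Insight] = field(default_factory=list)

    def add_insight(self, summary: str, source: str) -> None:
        """Append a new insight with its source, preserving the audit trail."""
        self.insights.append(Insight(summary, source, datetime.now(timezone.utc)))

profile = CustomerProfile("CUST-0001")  # placeholder identifier
profile.add_insight(
    "News report links customer to alias 'J. D. Carter'",
    "https://example.com/news/article-123",  # placeholder source URL
)
for item in profile.insights:
    print(item.recorded_at.isoformat(), item.source, "->", item.summary)
```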

Should there be cause for concern – perhaps a news report that links an alias of the customer to past records of fraud – banks can be confident that they will be alerted and can act. As fraud continues to rise and consumers are advised to sue banks for refunds, the pressure on banks to fix KYC and demonstrate proactivity to regulators is mounting. Leveraging the AI capabilities that are already a feature in many large banks looks like one of the most promising ways to enact Burt’s recommendations.

Want to learn more?

Learn more about how Digital Reasoning can work for you by exploring our case studies, white papers and additional resources.

Who to talk to

For all media inquiries, please contact:

Jason Beck
Director, Communications
615-567-8633
jason.beck@digitalreasoning.com