Why AI Needs Humans
It’s not surprising that the emerging role of artificial intelligence (AI) in the financial sector was a hot topic at Money20/20 this year. Thought leaders, experts, and disruptors came together to explore how AI is being applied to financial services today and how it could be used in the future.
IDology’s chief product officer, Heidi Hunter, presented “Why AI Needs Humans,” a talk that explored the risks and rewards associated with AI and a path forward for organizations to leverage it responsibly. While there’s no doubt that AI will be transformative for financial services, Hunter’s talk highlighted the need for human fraud analysts to mitigate the risks associated with AI’s lack of transparency.
Understanding AI’s Risks Before Reaping its Rewards
AI brings flexibility and scalability to identity verification: it can quickly scrutinize vast volumes of digital data and uncover patterns of suspicious activity. Where it falls short is transparency, offering little visibility into why decisions were reached. Without understanding AI’s reasoning and decisioning process, financial services providers can’t explain to regulators why a decision was made or produce an auditable trail showing that policies were followed during onboarding.
Bias is another obstacle. Without transparency, any bias in the machine learning models that power AI goes unchecked and perpetuates over time. When the data, algorithms, or decision-making processes used to train an AI system are biased or under-representative, the result can be unfair or discriminatory outcomes. The machines don’t have a moral compass; they simply do what they’re asked based on the information they’ve been given.
This lack of transparency is why AI needs humans.
A Multi-layered Ecosystem
When implemented with human supervision and intelligent verification technology, AI can become invaluable. A fraud analyst can step in when an AI system rejects a legitimate ID, determine how the error occurred, and teach the system how to handle similar cases in the future. With this continuous feedback, machine learning models improve over time. An AI system without oversight, on the other hand, will treat uncorrected bad behavior as accurate and continue making the same decisions, exacerbating the problem.
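To make the feedback loop concrete, here is a minimal, purely illustrative sketch of human-in-the-loop review. The class and field names are hypothetical, not IDology’s actual system: an analyst overrides an incorrect model decision, and the corrected example is retained as labeled training data for the next model refresh.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """A single automated verification decision (illustrative names)."""
    transaction_id: str
    model_label: str   # what the model decided, e.g. "reject" or "approve"
    confidence: float

@dataclass
class ReviewQueue:
    """Collects analyst corrections to feed back into retraining."""
    corrected: list = field(default_factory=list)

    def analyst_review(self, decision: Decision, true_label: str) -> str:
        # If the analyst disagrees with the model, keep the corrected
        # example so the next training run learns from the mistake.
        if decision.model_label != true_label:
            self.corrected.append((decision.transaction_id, true_label))
        return true_label

# A legitimate ID was wrongly rejected; the analyst corrects it.
queue = ReviewQueue()
d = Decision("txn-001", model_label="reject", confidence=0.91)
final = queue.analyst_review(d, true_label="approve")
```

Without the override step, the wrong "reject" label would flow back into training unchallenged, which is exactly the self-reinforcing failure described above.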
This type of failure can draw scrutiny from regulators, result in fines and sanctions, and lead to reputational harm.
AI also struggles to detect new forms of fraud because it analyzes patterns in historical data and assumes future activity will follow those same patterns. A trained fraud analyst will catch novel threats AI systems miss and, through continuous feedback, help the learning model adapt to them.
At IDology, we believe in the power of AI but know it is best used as part of a multi-layered ecosystem that balances the advantages of a transparent, data-driven decisioning engine with the ability of machine learning models to detect risk at scale. Our Fraud Analysts are experts and an important manual review layer: they confirm that machine learning models are making the correct assertions, safeguard against new fraud threats, and feed product teams the insights needed to drive continuous improvement, optimization, and development of new solutions.
As part of this multi-layered approach, fraud analysts monitor transactional activity across our vast Consortium Network of customers, giving users both a cross-industry view of fraud and visibility into fraud specific to their business. With this information on hand, users are empowered to make informed fraud prevention decisions. Customers work directly with IDology Fraud Analysts to review suspicious transactions and prevent fraudulent activity through a direct communication channel that provides a continuous feedback loop.
By layering human intelligence onto AI, we can analyze large amounts of data at scale while leveraging the intuition and expertise of our team of Fraud Analysts to detect novel fraud, govern AI models to mitigate bias, and provide closed-loop data transparency.
IDology’s Fraud Analysts have a 99.8% success rate of identifying fraud trends and recommending configuration settings to help customers stop the most fraud while maintaining high onboarding rates. To learn more, read our eBook on Why AI Needs to Be Smarter.