AI & Data Science in Finance – But Are We Ready?
What are our Future FinTech Champions learning at the moment? And what are their thoughts on the current development of FinTech? Read through this submission from one of our FFCs, Dosieah Khushi, currently studying Software Engineering @ University of Mauritius [Submission made in September 2025].
The trading floor was alive. Traders hopping from desk to desk, numbers spiraling up the charts, but what struck me most was the “dead silence”. A single AI system, working ceaselessly, was executing countless decisions in the blink of an eye. No breaks. No second-guessing. Only cold, calculated precision, and an uneasy feeling that the smartest “person” in the room wasn’t human.
As a software engineering student exploring the wonders of technology, my bond with AI is both academic and personal. During lab sessions, I have built and trained small machine learning models to predict stock trends, recommend portfolios, and even uncover anomalies, sometimes with impressively high accuracy and sometimes with humbling errors.
In theory, it is true that:
More Data + Smarter Algorithms = Better outcomes.
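To make the lab-session picture concrete, here is a minimal sketch of the kind of toy “stock trend” model I have in mind. It assumes Python with NumPy and scikit-learn, and the price data is purely synthetic, so it is illustrative rather than a real trading experiment:

```python
# A toy "stock trend" classifier of the kind built in lab sessions.
# Everything here is synthetic and illustrative, not a trading strategy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Simulate daily returns for a fictional stock (random walk with a tiny drift).
returns = rng.normal(loc=0.0005, scale=0.01, size=2000)

# Features: the previous 5 days of returns. Label: did the next day go up?
window = 5
X = np.array([returns[i:i + window] for i in range(len(returns) - window)])
y = (returns[window:] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False  # keep time order, no look-ahead
)

model = LogisticRegression().fit(X_train, y_train)
print(f"Out-of-sample accuracy: {accuracy_score(y_test, model.predict(X_test)):.2%}")
# On pure noise this hovers around 50% -- a reminder that "more data +
# smarter algorithms" only helps when there is real signal to learn.
```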
However, in practice, the real world is far more complicated. In 2025, the financial industry is already awash in data: market feeds, customer profiles, and trillions of transactions. AI and data science are no longer experimental tools; they are now the cornerstone of modern finance. Yet, every time I come across a report on “AI readiness”, I see the same pattern: big dreams, little execution.
According to Deloitte (2024, p.2), 86% of financial services leaders consider AI a driving business force in the coming years. Founded in 2009, Zest AI offers machine learning software and services that help lenders make more accurate and fairer credit underwriting decisions (AI Magazine, 2022, p.2). JP Morgan’s LOXM, designed to execute large trades with minimal market disruption, is said to have achieved substantial cost savings and to have far outperformed existing manual and automated trading methods in trials (Best Practice AI, no date, p.2). What took 360,000 hours now happens in mere seconds. An article published by Business Insider (2025, p.4-6) reported that Mastercard’s AI systems monitor 159 billion transactions each year, boosting fraud detection by up to 300% and reducing false declines by 22%. Likewise, C3 AI Anti-Money Laundering uses advanced machine learning to increase the accurate identification of suspicious activity while greatly reducing false-positive alerts (C3.ai, 2025, p.2). A regulatory framework (FREE-AI) is even being developed by the Reserve Bank of India to promote ethical AI adoption, aiming to balance innovation with risk mitigation (Fintech Global, 2025, p.2). And the list goes on.
The ambition is clear. AI can process huge volumes of data at the speed of light, detect patterns beyond human reach, and make decisions with remarkable accuracy. In
markets where milliseconds can mean millions, the benefits are too significant to overlook.
Yet, beneath the excitement lies a sobering reality. An MIT-affiliated report (2025, p.2) showed that, despite investing $35–40 billion into generative AI tools, 95% of American companies have seen little to no meaningful return. This leaves only a rare 5% that managed to scale AI successfully, highlighting the gap between bold investment and real impact.
Additionally, structural barriers deepen the challenge. Capgemini (2024, p.1-3) found that only 6% of retail banks have built an enterprise roadmap to drive AI-driven transformation at scale. Legacy systems, regulatory uncertainty, and fears of job displacement hold the rest back. Navigating complex frameworks such as the GDPR and the EU AI Act slows progress further, especially for institutions already wrestling with adaptability. But the problem runs deeper than regulation or technology. As Hale (2025, p.2) observed, just 2% of enterprises are truly prepared to harness AI, while 77% are only moderately ready and 21% trail far behind. Weak governance, unreliable data, and fragile security systems mean that many organizations are building AI on crumbling ground.
Workforce readiness is another roadblock. A Kyndryl survey (2025, p.2) found that 71% of business leaders acknowledge that their employees are not fully prepared for AI. Skill shortages (51%) and cultural pushback (45%) often leave new tools unused or rejected. Without reskilling and a genuine shift in mindset, AI remains an expensive experiment.
In the same vein, finance is a trust-based industry. If an AI declines a loan or flags a transaction as fraud, we need to be able to ask why. As TrustPath (2025, p.2) notes, black-box models, no matter how accurate, can erode customer confidence and regulatory acceptance. Without explainability, deploying AI is, simply put, like driving a car blindfolded.
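To show what even a basic form of explainability can look like, here is a purely illustrative sketch: a hypothetical logistic-regression “credit model” trained on made-up data with invented feature names (income, debt_ratio, missed_payments, account_age_years). It assumes Python with scikit-learn, and real-world explainability tooling is of course far more sophisticated:

```python
# A minimal sketch of explainability: for a simple, hypothetical credit model,
# show how much each feature pushed one applicant's score up or down.
# Feature names and data are entirely made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "missed_payments", "account_age_years"]

# Fabricated training data standing in for a real loan book.
X = rng.normal(size=(500, 4))
# Fabricated labelling rule: high debt and missed payments hurt, income helps.
y = ((X[:, 0] - X[:, 1] - 1.5 * X[:, 2] + 0.5 * X[:, 3]
      + rng.normal(size=500)) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

# Explain one decision: each feature's additive contribution to the log-odds.
applicant = scaler.transform(X[:1])
contributions = model.coef_[0] * applicant[0]
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>20}: {c:+.2f}")
print(f"{'intercept':>20}: {model.intercept_[0]:+.2f}")
# A customer (or a regulator) can at least see *why* the score moved,
# which a black-box model does not give you for free.
```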
Therefore, the answer to the question “Are we really ready?” is that, from where I stand, as a software engineering student and a keen observer of this evolving world, I would say we are almost there, but not quite. Right now, we are in the “pre-launch checklist” stage. The engines are built, the fuel is loaded, but we are still double-checking that everything will hold together once we take off.
To me, being “ready” is not defined by how advanced our technology is, but by how deeply we understand it. We have mastered the mechanics. We know how to train models, fine-tune their precision, and deploy them at scale, yet readiness goes beyond technical expertise. It is about our ability to coexist with these systems, to interpret their decisions, to trust their outcomes, and to hold them accountable when they fail. True readiness, in my eyes, lies in striking that delicate balance between technical brilliance and moral responsibility, the point where code finally meets conscience.
I also believe readiness is deeply human. Technology evolves fast, but people take time to adapt. Many still believe that AI will replace them, when in reality the future of finance is built on collaboration rather than competition. Humans bring creativity, empathy, and moral judgment, qualities that cannot be coded into algorithms. The financial institutions that will thrive are those that view AI not as a substitute but as an ally, one that amplifies human insight rather than erases it.
But then comes emotional readiness and honesty, perhaps the hardest of all. Finance is a realm where every figure tells a human story. A declined loan is not just a statistical entry; it is a dream put on hold. A flagged transaction is not merely a security concern; it is someone’s livelihood under scrutiny. Would I personally trust an AI to decide whether I’m creditworthy? Probably not yet. Not until I understand its reasoning. Not until it earns my confidence the same way a trusted banker or advisor would. That’s the human dilemma of AI. We can program intelligence, but can we inspire trust? This is why explainability, transparency, and responsibility must sit at the core of every system we build.
So, are we ready? Technically, yes. Strategically, we are learning. But ethically and culturally, we are still finding our footing. Readiness is not just a destination. It is a
journey of continual learning, governance, and humility.
I believe the future of finance will surely be AI-driven, but it should never be AI-controlled. The smartest “person” in the room might not be human, but the wisest decisions must always have a human touch. Because technology may power the system, but trust will always power the world of finance.
And maybe that’s the real answer: we are getting ready; one algorithm, one policy, and one mindset at a time.
REFERENCES
Deloitte, 2025. The future of AI in banking. Deloitte US. p.2. https://www.deloitte.com/us/en/services/consulting/articles/ai-in-banking.html
Business Insider, 2025. From fighting fraud to fueling personalization, AI at scale is redefining how commerce works online. p.4-6. https://www.businessinsider.com/sc/how-ai-at-scale-is-shaping-the-future-of-commerce?r=US&IR=T
Fintech Global, 2025. RBI panel calls for AI framework in finance sector. p.2. https://fintech.global/2025/08/18/rbi-panel-calls-for-ai-framework-in-finance-sector/
AI Magazine, 2022. How Zest AI enables fair and transparent lending with AI. p.2. https://aimagazine.com/ai-applications/how-zest-ai-enables-fair-and-transparent-lending-with-ai
Best Practice AI, no date. AI Case Study. p.2. https://www.bestpractice.ai/ai-case-study-best-practice/jpmorgan’s_new_ai_program_for_automatically_executing_equity_trades_in_real-time_out-performed_current_manual_and_automated_methods_in_trial
C3.ai, 2025. Improved Money Laundering Detection with Predictive Analytics. p.2. https://c3.ai/products/c3-ai-anti-money-laundering/
News.com.au, 2025. ‘Caught out’: CBA sensationally backflips on AI job cuts. p.3. https://www.news.com.au/finance/work/at-work/caught-out-cba-sensationally-backflips-on-ai-job-cuts/news-story/f23275aa2cdf55a33b64cd565c0b39d8
TechRadar, 2025. American companies have invested billions in AI initiatives – but have basically nothing to show for it. p.2. https://www.techradar.com/pro/american-companies-have-invested-billions-in-ai-initiatives-but-have-basically-nothing-to-show-for-it
TechRadar, 2025. A quarter of applications now include AI, but enterprises are still not ready to reap the benefits. p.2. https://www.techradar.com/pro/a-quarter-of-applications-now-include-ai-but-enterprises-still-arent-ready-to-reap-the-benefits
Capgemini, 2024. Only 6% of retail banks have built an enterprise roadmap to drive AI-driven transformation at scale. p.1-3. https://www.capgemini.com/news/press-releases/only-6-of-retail-banks-have-built-an-enterprise-roadmap-to-drive-ai-driven-transformation-at-scale/
Technology Magazine, 2025. Kyndryl: 71% of Workforces Unprepared for AI Deployment. p.2. https://technologymagazine.com/articles/kyndryl-71-of-workforces-unprepared-for-ai-deployment
TrustPath, 2025. The AI black box problem: How financial organizations can ensure AI explainability and transparency. https://www.trustpath.ai/blog/how-financial-organizations-can-ensure-ai-explainability-and-transparency