
I like to think of AI in the context of wealth management in terms of advice. Can AI move beyond cold automation when it comes to generating tailored investment advice at scale?

I refer in particular to the work of Andrew Lo to illustrate the challenge.

Andrew Lo is a renowned thought leader at MIT, director of the MIT Laboratory for Financial Engineering and principal investigator at the Computer Science & Artificial Intelligence Laboratory. Lo provocatively argues that, in the context of wealth management, we need artificial humanity more than artificial intelligence. He presented this view to a wider audience in a widely read blog post published by The Wall Street Journal on 6 June 2016, entitled “Imagine if Robo Advisers Could Do Emotions”.

For many years, the wealth management business has been built on the notion of the cold, rational investor. To address the shortcomings of that approach, and to recognise the human nature of the investor, many service providers place an advisor alongside the client. The challenge in delivering tailored investment advice at scale, says Lo, is to manage the emotional component of investing through technology. Yes, this means building digital processes and algorithms. But crucially, these algorithms should understand and anticipate human behaviour better than the classic paradigm of rationality does.

I join Andrew Lo in applying insights from behavioural economics to improve the business contribution of digital investment processes. The pioneering work of Nobel laureates such as Daniel Kahneman and Richard Thaler is helping us refine our understanding of people’s risk preferences, better understand the return distributions of products, and ultimately build portfolios that people stick with, by combining products in a way that matches those preferences.
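To make the behavioural angle concrete, one can score a product’s return distribution the way an investor perceives it, rather than by its mean alone, using the prospect-theory value function of Kahneman and Tversky, in which losses weigh more heavily than gains. The sketch below is purely illustrative, not a description of any production model: the parameter values are the original Kahneman–Tversky estimates, and the two return samples are made up.

```python
def prospect_value(r, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function: concave for gains,
    steeper for losses (loss aversion, lam > 1)."""
    return r ** alpha if r >= 0 else -lam * ((-r) ** beta)

def perceived_score(returns):
    """Average prospect-theory value over a sample of period returns."""
    return sum(prospect_value(r) for r in returns) / len(returns)

# Two hypothetical products with the SAME mean return (1% per period):
steady   = [0.02, 0.01, 0.00, 0.01, 0.01]    # low volatility
volatile = [0.10, -0.06, 0.05, -0.05, 0.01]  # high volatility

# Loss aversion makes the volatile product "feel" worse despite equal means,
# which is one reason investors abandon portfolios they rationally "should" hold.
print(perceived_score(steady) > perceived_score(volatile))  # True
```

A rational mean-based score cannot distinguish the two products; a preference-aware score can, which is the kind of insight that helps build portfolios clients actually stick with.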

There are several reasons why I believe this has the potential to transform the wealth management business. First, success in digitising advice will help reduce operational costs, for example in dealing with the tidal wave of regulatory must-do’s. Greed for profit and fear of regulation tend to be powerful drivers of improvement. Second, and more on a missionary note, success in digitising investment services improves the chances of getting everyone invested and keeping them invested. Digitising the advisor and humanising the investor has a social impact by improving overall financial well-being.

A key element of success is trust. Investors should trust the model and, in particular, the advice it provides. Trusting the model is not a given. There is plenty of research documenting “algorithm aversion”; see for example a recent article in the Financial Times, “Algorithmic analyst aversion”, 17 September 2024, reporting on research by my former colleague Gertjan Verdickt. The current research agenda of Lo and his team is, however, focused less on overcoming this aversion and more on developing AI that generates trustworthy output in the first place. The criterion they use is that of “fiduciary duty”, the legal obligation of human advisors to make decisions on behalf of the investor and to put the investor’s interests first. The test for AI in the context of providing trusted investment advice is, so to speak, to pass the fiduciary test. This means, for example, training the AI model on a dataset that includes all relevant regulatory requirements, lawsuits about things that have gone wrong, and so on.

Readers interested in learning more can watch the master himself, Andrew Lo, explain this research agenda in a recent MIT video: “MIT Professor on How AI & LLMs are Shaping Financial Advice, Analysis, & Risk Management: Part 1”.

Enjoy! And feel free to share your comments in a private message.

About the author:

Jurgen Vandenbroucke, PhD, is Managing Director of everyoneINVESTED, the wealth-tech spin-off of KBC Group; Expert General Manager at KBC Group; Lecturer in Financial Engineering at the University of Antwerp; and Lecturer in Digital Household Finance at KU Leuven.
