Two thirds of AI users already take money advice from chatbots – Wealth Professional Canada

Clients quietly act on chatbot retirement tips as most firms still just ‘experiment’ with AI
Clients are already using artificial intelligence to rewire their finances — often without telling their advisers — while most wealth firms still use the technology for little more than polishing emails and replacing Google searches.  
According to an Intuit Credit Karma poll cited by CNBC, 66 percent of Americans who have used generative AI say they have used it for financial advice, jumping to 82 percent for millennials and Generation Z, and about 85 percent of those users acted on the recommendations.  
The New York Times reports that nearly one‑third of people who turned to generative AI for money guidance did so for retirement planning, and that 47 percent of Americans now feel comfortable using AI in their financial lives.  
The Times describes how a 32‑year‑old worker discovered her 401(k) was invested as if she had retired in 2015; after listening to a podcast, she used ChatGPT to reset her target‑date fund to 2060 and shift her allocation to 80 percent equities, then encouraged her husband to adjust his own retirement account.  
Another Times case details a data scientist who moved from Britain to Phoenix and turned to ChatGPT to decode terms like “401(k)” and compare account types; after feeding in her age, goals, current investments and desired retirement age, she chose to invest more heavily in stocks and accept higher risk.  
The Times also profiles a former mental health practice chief executive who asked ChatGPT how to retire by 45.  
The chatbot referred her to a financial adviser, then suggested paying down $25,000 of debt, building an emergency fund covering six to 12 months of expenses and exploring passive income streams.  
Two years later, she invests weekly into her individual retirement accounts and regularly consults an adviser, but still follows the broad plan the chatbot helped her frame.  
Courtney Alev, a consumer financial advocate at Intuit Credit Karma, told The Times that AI is “filling a gap for millions of people who may not have access to traditional financial guidance,” and, if used thoughtfully, can help them start planning earlier, set clearer goals and make more informed decisions.  
According to CNBC, Andrew Lo, a finance professor and director of the Laboratory for Financial Engineering at the MIT Sloan School of Management, said the issue is “not whether AI has enough expertise,” since “AI has the [financial] expertise.”  
He argued that the real problem is that large language models "don't have that fiduciary duty" and cannot face the same consequences for errors as a human adviser.  
CNBC notes that human advisers who violate fiduciary responsibility can face regulatory penalties, civil liabilities and criminal charges.  
Lo warned that the notion of putting a client’s interest first “has no teeth” without responsibility or legal liability.  
Sebastian Benthall, a senior research fellow at New York University School of Law’s Information Law Institute, told CNBC that people increasingly rely on ChatGPT, Claude, and Gemini for financial advice.  
He warned that it remains "a big open regulatory question" who is responsible for such advice, and whether people should trust a product when no corporation behind it owes a fiduciary duty. 
The same article also reports that law professor Jiaying Jiang of the University of Florida believes AI companies currently do not receive compensation for retail investment advice and therefore are not fiduciaries.  
However, she said advisers who do owe a fiduciary duty could breach it by using AI to deliver recommendations that are not in a client’s best interest, in which case the adviser — not the AI firm — would be liable.  
Lo told CNBC that AI can be “really good” at planning tasks such as explaining complex concepts and providing overviews of programs like Medicare.  
But he warned consumers not to trust it with “very, very specific calculations” about their finances, pointing out that large language models sound confident even when incorrect and are generally weak at financial calculations, especially on tax‑related questions. 
CNBC reports that more clients now test their advisers’ advice against tools like ChatGPT.  
Houston‑based adviser Tim Lootens warned that AI can misapply concepts such as the "wash sale rule," urging year‑end stock sales without assessing whether the loss is meaningful or how the sale affects a 30‑ to 40‑stock portfolio.  
“If you don’t stand up to some of this misapplication of information,” he told CNBC, “you’ll find out people will harm themselves.” 
Even AI developers add caveats.  
CNBC notes that James Burnham, a legal and government affairs official at Elon Musk’s xAI, wrote that the firm’s Grok platform “is not tax advice so always confirm yourself too.”  
Financial advisers quoted by The New York Times recommend users avoid entering sensitive details like income, full expense breakdowns, bank statements or Social Security numbers, and instead provide only general ranges.  
Wealth Professional reports on a Section study of 5,000 knowledge workers in Canada, the United States and Britain showing a sharp disconnect between executive ambition and frontline reality in AI use. 
Senior leaders at Canadian wealth management firms have drafted policies, bought enterprise licences and rolled out training on responsible AI and prompt basics, but most workers are not using AI to change how advice is researched, documented or delivered.  
According to Wealth Professional’s summary of the study, about 70 percent of workers are “AI experimenters” who use tools for elementary tasks such as condensing meeting notes or adjusting email tone, while 28 percent are “AI novices” who avoid AI or abandoned it after brief trials.  
Only 2.7 percent qualify as “AI practitioners” who have truly integrated AI into workflows and report substantial productivity gains, and just 0.08 percent meet the bar for “AI experts.”  
Altogether, the research concludes that 97 percent of workers either use AI ineffectively or not at all, and that fewer than one‑third save four or more hours per week through AI — well below the ten‑plus hours per employee the study says organisations should target.  
