
Panel: Wealth Shops Need Ethical AI Guardrails

Wealth management firms are rapidly implementing artificial intelligence tools for both advisors and clients. But experts warn that these tech-forward firms can run afoul of fiduciary standards if guardrails are not in place.
According to panelists at Future Proof Citywide in Miami on Sunday, fiduciary wealth managers, large and small, need intentional guidelines to ensure the fast-changing technology benefits clients, both ethically and legally.
“We have all these generative AI systems, and there are a lot of capabilities, but there is still an implementation gap,” Azish Filabi, managing director of the Center for Ethics in Financial Services at The American College of Financial Services, told the audience of advisors.
Filabi noted a Microsoft study that showed 40% of AI outputs did not match the user’s intended goal, and that while 79% of leaders said they need to use AI to stay competitive, only 25% have plans to do so.
Wealth managers can face challenges when using AI to prepare client materials or conduct investment research, she said. But it’s the more complex client-facing technology that poses the “biggest fiduciary risk.”
“The more automated [chatbots] become, the more you’re hopeful that they’re giving correct answers and they’re consistent with your fiduciary practices,” she said. “From a financial services chatbot perspective, [it’s about] understanding what a chatbot is optimized for, and ensuring that it’s optimized to produce client best interest, not necessarily the best interest of others in the stakeholder community.”
Wealth managers and regulators have been grappling with AI standards even as the tech continues to evolve and new offerings emerge weekly.
In June, the CFP Board announced a new AI Working Group, including executives from firms such as LPL, Orion, Fidelity and Edward Jones, to study how artificial intelligence is impacting the financial planning profession. In FINRA’s latest annual report, the regulator included a new section on generative AI, stressing that while FINRA’s rules are “technology neutral,” they apply to AI just as they do to any other tool, including those that help with supervision, communications, recordkeeping and fair dealing.
In a separate panel at the Miami conference, RIA practitioners championed the use of AI, but put forward a framework for doing so safely.
Morgan Bell, managing director at RIA-backer Constellation Wealth Capital, laid out three areas to address when setting up AI systems.
First, she recommended that firms put AI-specific policies in place, vetted by compliance officers and business leaders, with the understanding that the internal guidelines will need to be updated as the tech evolves.
Second, firms should create a formal AI committee to both champion and scrutinize its use. Finally, she said firms should identify strategic objectives for what they hope to achieve with the technology.
AI use is “only going to be as good as the firm and the individuals using it within the organization on a day-to-day basis,” Bell said. “You have to start with the foundations, but then we encourage our partner firms to really define what those use cases would be within the organization.”
Rend Fetyan, who guides AI implementation at the $235-billion Cresset, said the firm has an attestation policy for its advisors who use AI, ensuring they understand the rules. Those guidelines, however, change as the technology advances.
“It is a living, breathing thing,” she said. “With the technology changing, it is going to be a continuous process of amending the policy.”
Fetyan also recommended that firms establish “AI Champions” embedded across the company, both to drive adoption and to provide feedback on its effectiveness.
A significant challenge for wealth management firms is that compliance can be a moving target as regulators update rules on an ad-hoc basis, Filabi said on the sidelines of the conference.
In June, the Securities and Exchange Commission withdrew its “predictive data analytics” rule, introduced in 2023 under the Biden administration and intended to rein in firms’ conflicts of interest when using AI. The financial industry pushed back on the rule, partly because it was considered too broad and nebulous.
Filabi said regulators will view AI through the lens of a client’s best interest and the fiduciary practices already in place. What is harder to gauge is whether they will issue rules specific to AI or instead use enforcement actions to shape how firms implement the technology.
She said clients—and their attorneys—may be the first to challenge firms’ use of AI if they see cause for concern or ethical lapses.
“Not through the SEC or FINRA, but through a client’s civil lawsuit,” she said.
Filabi said the American College of Financial Services’ “ethics by design” framework is intended to help firms develop their own policies for using the technology while remaining flexible enough to adapt to future changes.
“This is a really challenging business,” she said. “Even if you followed all the case law and the regulations, you’re consistently dealing with new situations and new fact patterns.”
