NORTH AMERICAN SECURITIES ADMINISTRATORS ASSOCIATION™

IA Compliance:

Using AI: Risks and Compliance Considerations

The following information reflects the views of NASAA’s Investment Adviser Section Resources and Publications Project Group.  It does not necessarily represent the views of NASAA, and it is not intended as legal advice. Any questions should be directed to the appropriate state regulators.

Artificial intelligence (“AI”) refers to computer systems designed to perform tasks that typically require human intelligence, such as learning, problem-solving, and decision-making. AI uses algorithms and vast amounts of data to recognize patterns, make predictions, and automate complex processes. Common uses of AI in financial services include marketing, client management, and administrative support.  

While AI is increasingly being used in the financial services industry, its benefits come with regulatory and operational hurdles that advisers must carefully navigate to ensure compliance.

This article discusses the risks and challenges associated with AI, as well as common compliance considerations.

Risks and Challenges of AI Use

“Fake” AI

Using “fake AI” typically means marketing something as AI when it is not. Regulators have taken action against advisers for claiming that their platform, tools, or services are “powered by artificial intelligence” when, in reality, the service is based on simple algorithms, pre-set rules, or even manual processes that are not true AI. This misrepresentation could violate federal or state law.
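To make the distinction concrete, here is a minimal, purely hypothetical sketch (in Python) of the kind of pre-set rule that regulators have distinguished from true AI. Nothing in it is drawn from any actual enforcement action; the function name and the rule itself are illustrative only.

```python
# Hypothetical illustration: a "recommendation engine" that is nothing more
# than a fixed rule of thumb. It learns nothing from data and adapts to
# nothing, so marketing it as "powered by artificial intelligence" would
# misrepresent what it actually does.
def recommend_allocation(age: int) -> dict:
    """Pre-set rule: percent in equities = 100 minus the client's age."""
    equities = max(0, min(100, 100 - age))
    return {"equities_pct": equities, "bonds_pct": 100 - equities}

print(recommend_allocation(40))  # {'equities_pct': 60, 'bonds_pct': 40}
```

A genuine AI system would, at a minimum, fit its behavior to data rather than apply a hard-coded formula.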

Reliability of AI

An adviser choosing to use AI tools should be cautious and should ensure the program is capable of performing its intended use. Some programs claim to use AI but do not perform meaningful analysis. Others may produce biased or incomplete outputs. An adviser that bases client advice or marketing claims on fake or faulty AI outputs may be exposed to compliance risks, regulatory penalties, and lawsuits.
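As one hedged illustration of this kind of due diligence, an adviser could test a tool against a small set of questions whose answers a human has already verified before relying on its output. In the sketch below, `ask_tool` is a stand-in stub invented for the example; a firm would substitute whatever interface its vendor actually provides.

```python
# Hypothetical spot-check harness: score a tool against known-answer questions
# before trusting it. `ask_tool` is a stub so the sketch runs end to end.
def ask_tool(question: str) -> str:
    canned = {"capital of France": "The capital of France is Paris."}
    return next((a for k, a in canned.items() if k in question), "I am not sure.")

# Question/answer pairs verified in advance by a human reviewer.
KNOWN_CASES = [
    ("What is the capital of France?", "Paris"),
    ("Who won the 1999 World Series?", "Yankees"),
]

def spot_check(cases) -> float:
    """Return the fraction of known-answer questions the tool gets right."""
    hits = sum(1 for question, expected in cases if expected in ask_tool(question))
    return hits / len(cases)

print(f"accuracy: {spot_check(KNOWN_CASES):.0%}")  # accuracy: 50%
```

A tool that fails such a spot check on verified questions should not be relied on for client-facing output.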

Data Privacy

Free and publicly available AI tools vary: some, like Meta’s Llama models, are open source, while others, like ChatGPT, are proprietary. Many of these systems may store or process user data, so privacy depends on the provider and the usage settings. Advisers must ensure that client records and personally identifiable information are safeguarded. If an adviser inputs sensitive client data into an AI program without conducting adequate due diligence, client harm and regulatory penalties could result. Even programs that take steps to protect data do not guarantee complete confidentiality, especially in open or publicly shared systems.
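The sketch below shows one way a firm might screen text for obvious personally identifiable information before it is sent to any outside AI service. The patterns and placeholder labels are assumptions made for illustration; a real control would be far broader (entity detection, allow-lists, logging) and would sit alongside, not replace, vendor due diligence.

```python
import re

# Hypothetical redact-before-send filter. These patterns are deliberately
# simple illustrations and will not catch all PII.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ACCOUNT": re.compile(r"\b\d{8,12}\b"),  # naive guess at account numbers
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a PII pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

note = "SSN 123-45-6789, acct 0012345678, jane@example.com asked about IRAs."
print(redact(note))
# SSN [SSN REDACTED], acct [ACCOUNT REDACTED], [EMAIL REDACTED] asked about IRAs.
```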

Marketing

AI-generated advice or marketing material may unintentionally mislead clients. When using AI-generated content for marketing, advisers should consider the following:

  • Testimonials and endorsements: Some states may restrict the use of testimonials or require additional disclosures.
  • Performance advertising: Claims of performance must be factual, substantiated, and cannot be misleading. Hypothetical performance reports, which AI tools often generate, must include proper disclosures.
  • Supervision and Approval: Advisers using AI to generate social media posts, blogs, or advertisements must ensure that all content is accurate, contains adequate disclosures, and is reviewed by a designated compliance officer.

Advisers should review the applicable marketing rules in their jurisdiction.

Compliance Considerations When Using AI

Advisers using AI should review and update their compliance programs to address AI-related risks. At a minimum, policies and procedures should specifically cover the following:

  • Use and supervision of AI-generated content. All AI-driven output, such as marketing materials, client communications, and investment recommendations, should be reviewed by a compliance officer for accuracy before it is disseminated.
  • Employee training. Advisers should train staff not only in how to use AI tools but also to understand the limitations and risks involved, especially around data protection. Training should emphasize the adviser’s fiduciary duties, marketing compliance standards, and cybersecurity best practices.
  • Vendor due diligence. Before using any third-party AI tool, advisers should investigate the vendor’s data security protocols, regulatory compliance support, transparency around its AI algorithms, and customer support and updates. Additionally, advisers must ensure that contracts with vendors include indemnification clauses, service-level agreements, and clear terms addressing cybersecurity incidents.
  • Documentation of marketing review processes. Advisers must maintain documentation of the review of any AI-generated content for examination purposes; a minimal recordkeeping sketch follows this list.
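The field names, CSV format, and `append_review` helper below are assumptions made for illustration; actual books-and-records requirements vary by jurisdiction, and a firm’s own recordkeeping policy governs what must be retained.

```python
import csv
import dataclasses
import datetime

# Hypothetical books-and-records entry for the review of AI-generated content.
@dataclasses.dataclass
class AIContentReview:
    content_id: str    # internal identifier for the AI-generated item
    content_type: str  # e.g., "social media post", "blog", "advertisement"
    ai_tool: str       # tool that produced the draft
    reviewer: str      # designated compliance officer
    approved: bool
    notes: str
    reviewed_at: str = dataclasses.field(
        default_factory=lambda: datetime.datetime.now(datetime.timezone.utc).isoformat()
    )

def append_review(path: str, entry: AIContentReview) -> None:
    """Append one review record to a CSV log kept for examination purposes."""
    row = dataclasses.asdict(entry)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if f.tell() == 0:  # write a header only when starting a new log file
            writer.writeheader()
        writer.writerow(row)

append_review("ai_review_log.csv", AIContentReview(
    content_id="2024-0042",
    content_type="social media post",
    ai_tool="(vendor tool)",
    reviewer="J. Doe, CCO",
    approved=True,
    notes="Performance claim removed; disclosures added before posting.",
))
```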

Conclusion

AI’s perceived advantages come with real risks, especially around compliance with state advertising rules, client data privacy, and supervision obligations. Advisers that leverage AI’s capabilities must adopt a balanced approach: maintain strict human oversight, keep compliance programs current, and ensure full alignment with evolving state and federal regulatory requirements.

Advisers with questions about state regulatory requirements may contact their jurisdiction’s securities regulator.
