Episode 4: Putting the Intelligence in AI
In this episode of Asset Management Corner, Andrew Dean and Chris Mulligan are joined by Weil’s own Olivia Greer to discuss how advisers should be thinking about artificial intelligence. They take on all things AI, including the intersection of federal, state, and foreign laws and regulations; policies and procedures; recordkeeping; SEC Enforcement and Examinations interest; and the connection to cybersecurity. Andrew and Chris also discuss two SEC enforcement matters involving large retail advisers, plus the first Marketing Rule case under this Commission.
Transcript
Andrew Dean: Hello and welcome back to Asset Management Corner. We are your hosts: I'm Andrew Dean, he's Chris Mulligan. We are partners at the law firm Weil. This is the podcast where we bring our experiences as former senior SEC officials in the Division of Examinations and the Division of Enforcement and talk all things SEC and asset management. On today's podcast we will touch on some enforcement actions that dropped on the Friday before Labor Day. But first, we went into the mailbag for your thoughts on today's topics. And Chris, do you know what the number one issue demanded by our listeners is?
Chris Mulligan: Could you give me a hint? How many letters are in it?
Andrew Dean: I think it's abbreviated as two letters.
Chris Mulligan: Oh, is it? Could it be AI?
Andrew Dean: That's right. That's right. I know, Chris, you were probably thinking cross trading rules, but those came in a distant second. Today's topic: artificial intelligence.
Chris Mulligan: That's right. I feel like many of our listeners have contacted us and said, that was a great episode, that was fantastic, when is the episode on AI coming out? Because I really want to listen to that one. So this is sort of by universal demand from our listeners. And to help us today, we're joined by the most in-demand partner at Weil, and she took time out of her very busy schedule, and it is very busy: Olivia Greer. She is a partner in our Technology and IP Transactions practice, she is head of our Privacy and Cybersecurity group, and she is co-head of the firm's AI task force. No wonder she is in demand, and we are thrilled to have her here as a guest today. She represents our clients in connection with privacy and data security matters, helping clients across all market sectors navigate the business implications of new and evolving technology, including, of course, AI. Delighted to have you, Olivia. Thanks for taking a few minutes to talk with us.
Olivia Greer: Thank you for having me. I've never had such a warm welcome. I'm like, this is it, I've arrived.
Chris Mulligan: So why don't you set the stage for us? We hear about AI, everyone wants to talk about it, and it impacts so many different parts of the asset management industry. But it's not just one thing, right? It crosses so many different areas: trading activities, books and records, privacy. There are just so many areas that AI is affecting. We're going to try to stay focused on the areas that are sort of red hot right now, but why don't you set the stage? Because it's not just securities laws, right? There are federal laws, there are state laws, there are obligations coming from everywhere right now. Why don't you set the stage about what is happening in the AI space on the technology side and on the regulatory side across different regulators, and how all of those things are sort of colliding?
Olivia Greer: Yeah, happy to. So I think the cleanest way to do it, and then we can obviously get deeper into all the issues that you want to talk about, is to think about where we were three and a half years ago. We were really just starting to talk about this big thing that was coming, and folks in my world were chuckling a little bit because we've been talking about AI for a decade or more; we just called it machine learning. So when we look at the landscape over the last three and a half or four years, that obviously has changed dramatically in terms of what AI can do. But coming back to basics, it's really a lot about: what's the data, the information, that's being used to train the model or the tools, what's put into it to allow it to do its job, what's coming out of it, and what's the impact of that. So I think that's the framework that we think about all AI within, and that's why, coming at it as a privacy professional, I have a unique vantage point, because I'm looking at data all the time. But it also really sets the stage for the regulatory landscape. Before I get to that, I'll just say quickly that the framework is the same whether you're talking about machine learning, generative artificial intelligence, or agentic artificial intelligence. We could parse all those terms, but fundamentally you're really talking about data and how it's going into these tools and how it's coming out of these tools. So that now lives in this constantly evolving landscape, both domestically and internationally, for businesses across multiple sectors, and particularly so in the asset management and wealth management space, just because the information you're handling is quite sensitive and you have a very specific regulatory sandbox that you play in. There are layers of regulation that you have to be thinking about. Obviously you're thinking about the SEC, and we talk about this all the time and we can get into the nitty-gritty of that. But because you're talking about so much data, you are also talking about privacy laws, right? We now have upwards of 15 privacy laws in different states across the US, and most of those have parameters around how you use automated decision-making technology and what kind of notification you owe individuals if you are using data to train models or using data as inputs. So there's that layer; you've got this sort of collision of federal agency regulations against state laws. And then if you're operating internationally, you have an added layer if you're subject to the GDPR in Europe and/or the UK, or other international privacy laws, as well as the big one, which is the EU AI Act. And I think we'll talk about this more, but if you are engaging in any AI use that might be classified as high risk, you have entered an entire world of different types of regulatory obligations. So there's a lot, and I'll pause there, but I think that's laying the groundwork for what we're looking at when we talk about AI.
Chris Mulligan: All right, so basic question: do you think you should have AI policies if you're using AI, Olivia?
Olivia Greer: Chris, I think you have to have AI policies. That was a good setup; I just knocked that one out of the park.
Chris Mulligan: Oh, wow. OK, well, then I think that's the end of the conversation. OK.
Olivia Greer: Done. We're done. But the reason I would say that is it looks really different for different businesses, and I think that's what's really critical. Yes, if you are touching AI in any way, and even honestly if it's machine learning, if it's sort of what you would think of as basic, you've got to have some kind of policy. And I would say even if you're just starting to think about it, your first step should be considering internally what that policy is. For businesses that are truly building their own models, or building tools that they're pushing out publicly, you really have to have an AI governance framework in place, and that is going to involve multiple policies and multiple teams. But it's not a one-size-fits-all model. For most companies, when I say you need an AI policy, I'm talking about a fairly basic handful of pages that set out the high-level dos and don'ts and let employees know what they can do, what they can't do, when they need to go talk to someone, and who to go and talk to. So I wouldn't say it's super complicated, but we've got to have something on paper.
Chris Mulligan: And I think we are seeing very deep interest from SEC examiners about AI and about AI policies and procedures. One thing that we talk about all the time is how you are using AI, because the more important that function gets from an Advisers Act perspective, you have a fiduciary duty that attaches to your investment decisions and to the services that you provide to your clients, and those just can't be outsourced to AI. If you're using AI for trading or investment decisions, you have to have a process to make sure that it's doing what you think it's doing. You have to have oversight. You have to have testing. You obviously have to have robust policies and procedures that incorporate all of that. The more important the function, as you mentioned earlier, under the Advisers Act, if you're a registered investment adviser, it becomes incredibly important to make sure that you have your hands around exactly how you're using AI and that you have proper oversight. And I think you mentioned that the more important the function, it's not only an Advisers Act issue; there are also issues with states and maybe other jurisdictions.
Olivia Greer: Yeah, I think that's exactly right. And I think your point is the first point, which is that you really have to understand how the organization is currently using it, how it wants to be using it, and be planning for that. Because it's too easy to sort of let people loose and then find out later that somebody was putting confidential materials into the public version of ChatGPT, which sounds crazy but is really not crazy, and is something that I think is happening less frequently than it was maybe a year or so ago. But people get so excited: they think that there's all this efficiency, and there is, right? There are great opportunities and there are really good tools out there, but you really have to be thoughtful and be aware of where you're going to bump up against regulatory obligations or regulatory restrictions. I think you've raised some of the critical ones. Needing there to be human oversight is probably the most important piece to consider. And the second one I would say is explainability, particularly for investment advisers: you've got to be able to explain the decisions that are getting made, and most AI tools are in a black box. So, for example, one of the use cases that is coming up more and more is utilizing AI to make trading decisions. Well, if that tool is making a decision and something goes horribly awry, the organization has to be able to explain that decision. And if you don't understand how the AI came to that decision, and there was no human as a checkpoint before the decision got actually finalized and executed on, then you've got a real problem.
Andrew Dean: Just jumping in there, I think there are actually some comparisons that could be drawn to the algorithmic and high-frequency trading firms and their need for policies. We've seen enforcement actions showing that sometimes, even when you program it to do a particular thing, it doesn't always do the thing you want it to do, or there's a lack of oversight or supervision about who has access to the code and who has the ability to change it. So we see that as an issue as well sometimes.
Olivia Greer: Yeah, and I think it's maybe worth noting that what we've found with clients is that you've got legal, you've got compliance, you've got your deal professionals, you've got these different buckets of employees inside an organization, and it can be challenging at the beginning to figure out how to get everybody on the same page. But what we've found is that organizations that are able to get policies in place and get the right people to the table, to make sure that everyone's aligned on what the policy is, actually find that they're better able to onboard more AI tools more effectively, because everyone's starting from the same page. So I often come in as the person saying, you've got to do this thing. But I think that internally, legal and compliance don't have to be the office of no. I think they're really the office of, let's figure out how to take advantage of these opportunities in a way that is going to be beneficial to the organization and not subject us to a whole bunch of risk unnecessarily. There's risk associated with AI, full stop. But if we come at it from a vantage point of really getting all the stakeholders on board together early, then there's a lot of opportunity there.
Chris Mulligan: And it's not just your use of it, either; it's also your vendors' use of AI. And so diligence on those contracts and those agreements is something that we're seeing come up, because now all your vendors are using AI. Well, how are they using it, and what's your oversight, particularly the more important the functions are?
Olivia Greer: Yeah. And look, you and I have been talking about this, and there's an important deadline coming up, which is the updates to Reg S-P around cybersecurity. You can't really separate that from the AI conversation, because you've got all these cybersecurity obligations, one of which has to do with vendor diligence and what you're getting your vendors to agree to in terms of their protection of your data. So if we come back to basics, AI really is data, and the quality of the AI is dependent on the quality of the data and the security of the data. So how you look at your compliance with Reg S-P is also related to how you're positioned to be taking advantage of AI opportunities.
Chris Mulligan: So I want to shift a little bit and talk about two specific areas. Obviously, the use of AI raises a ton of questions that you need to work through, with well-thought-through policies and procedures that you can implement, that do what you want them to do, and that allow you to use AI in the way you want to use it, but in a way that has proper oversight and testing and everything else. So that's sort of the high level. There are two really specific issues that keep coming up over and over and over again. One, we have seen a pretty aggressive exam and enforcement focus on marketing, and I'll talk about that in a second. But a really hyper-technical issue that we haven't seen a lot of focus on from the SEC currently is AI and recordkeeping. And part of that, and Olivia, can you speak to this, is just the volume of records that can be generated by using AI: quickly, just a ton of documents that can be created through extensive use of AI, in a way that maybe firms are not accustomed to.
Olivia Greer: Yes. I think one of the biggest uses of generative AI tools is text generation, in various contexts. Tools that are pretty uniformly adopted would include call transcription or call summaries, summarizing emails, summarizing other types of documents. And every time you do that, you are creating another document on top of the documents that already exist. So you might have a confidential information memorandum and you want a summary, which makes sense because that's efficient and it helps you get through a bunch of things in your morning, but you've now created a document on top of the original. Similarly with the email summarization tools: you've already got the email, and you now have a document summarizing the email. What's tricky about that is you have a limited amount of control over what that tool spits out. So if it misstates something, or if it frames something in a way that is counter to the organization's policy or how you would approach it, it exists in writing. You're sort of in a space now where you're generating a ton more written materials that you may not be vetting fully and that may not be totally aligned with policy. And the same goes for call transcriptions. I think those tools in particular, among some of the other tools out there, are really still evolving. We test them internally quite a bit, and you get a lot of gibberish and you get a lot of hallucination. And so if you're not incredibly careful about vetting those, then you end up with records that may not reflect reality.
Chris Mulligan: And I think this is made more complicated by the fact that we are coming off of a prior administration that was very aggressive in books and records cases involving another technology: text messaging. We are not seeing those very aggressive enforcement actions in that space now, and I frankly don't really expect to see a focus from the Division of Examinations, or probably the Division of Enforcement, on this issue for the next few years. But I think the problem is that we don't have clear guidance. And look, there are some legal theories about why these may or may not be books and records, and that's something you can work through with your counsel in terms of developing a strategy for how you think about them. But the bottom line is there's not necessarily a perfectly clear answer for a lot of these types of books and records. And until we get that from the SEC, we're all sort of relying on the fact that we don't expect there to be a lot of aggressive exam or enforcement activity in this space. That has so far been true, but of course that can change in the future, and the books and records of today are what's potentially going to be examined by a future SEC. So I think this is a space where the entire industry would love some guidance in terms of what they're supposed to do with all of these new records that are just being spewed out every day by these new tools.
Andrew Dean: Where I don't think they need guidance, Chris, and you teased this, is on the marketing side. I'll say, when I was at the SEC in early 2024, we brought what turned out to be the first AI washing cases, as the SEC went on to explain. And you knew that it had, to use Olivia's term, arrived at the SEC when both Chair Gensler and the Director of Enforcement, Gurbir Grewal, issued, on the same day, video press releases about the actions, just to show how important this topic was. The general thrust of those cases was that the advisers were saying that they were using AI in investment decisions in ways that they were not actually using it, right? So a very easy, kind of low-hanging-fruit type of case. And frankly, when we were looking at those cases, it's like a typical advisory violation: when you say you're doing a thing and you're not doing it, that's a pretty easy breach of fiduciary duty case. But again, low-hanging fruit. We see the Trump administration has talked about AI being important and about enforcement against firms and companies, not just advisers but publicly held companies, that are doing similar things. So I do think the marketing side, and the way that firms are advertising, is where people can get into a lot of trouble. Olivia, how are you advising clients on that topic, or how much are you running into that?
Olivia Greer: Well, we look at it with clients for sure, especially when they're pulling together filings: what are they going to say about it? And to put it in a broader context, one of the things that I spend a lot of my time doing is diligencing companies that private equity and other clients are looking to acquire or invest in. And I can't tell you how many diligence materials I see, whether it's a CIM or internal documents, that say, we're an AI-forward company, we're AI-powered. Then you start to get under the hood a little bit and ask questions, and on the diligence call they say, well, it's aspirational, right? And if you're trying to sell your company, that feels like a no-brainer: do that all day long, we're going to call it AI-powered, because we could be AI-powered, and we're thinking about AI, and we want to be doing that. But as you said, that really gets you boxed in, and if you're making representations that aren't true, that's a problem. So what we talk to clients a lot about is: what are we actually doing? That's one bucket. What do we anticipate doing this year or in the next three to five years? And what's totally aspirational, where we're looking into it and at some point we'll do it? We've got to be really clear about what that is. And then, back to maybe a recurring theme, we also have to understand where those potential use cases fall in terms of risk and in terms of value to the business. Because if we're talking about creating some sort of internal tool that does basic analytics, that's one thing. If we're talking about creating a chatbot that interfaces with customers on an e-commerce platform, that's a very different kind of conversation. So what we mean when we say AI can mean very different things in very different contexts.
Chris Mulligan: Look, AI policies: it's incredibly important to have them. We're certainly seeing examiners look at them very closely, to some degree perhaps to learn about how advisers are dealing with AI, but they're very important, especially as the function gets more important. Books and records: you definitely need a strategy. How are you handling this, and what's the level of risk that you're willing to take? It's not necessarily a today issue that we're seeing live on exams, but it's something you should think about. When it comes to marketing, though, that is a today problem. There is no hesitation, no holding back at all; this is something examiners are very comfortable looking at, and something that enforcement obviously has been very interested in. And so of all the topics, that is the hottest of the hot topics that you absolutely have to get right. You need policies and procedures, you need to have a strategy on books and records, but you absolutely have to be able to substantiate all your statements about AI in any sort of marketing materials, because if you don't, that's something that could get you in trouble pretty fast. Finally, I just want to talk about what you mentioned briefly: the connection to cybersecurity. Look, we've had clients all over the map: some are prepared for the December 3rd deadline for the Reg S-P amendments, others have been waiting, hoping for a delay. We are now after Labor Day. I think if you haven't gotten started on Reg S-P policies for December, you probably need to; time is sort of running out, and we haven't gotten that delay yet. We still may get it, and many are hopeful that we will, but I think we probably need to be focused on that. Olivia, what sort of timeline are we talking about? I know we've been doing a ton of these, and it takes some time to think through these issues, right?
Olivia Greer: Yeah, it does. And I think it's not the worst lift we've seen in terms of having to comply with an update, but it is a lift, and it requires some thought internally. As Chris said, we're doing a ton of them for clients, and it depends a little bit on the current state of affairs of your own internal policies. But there are a bunch of updates. Mostly, you've got to make sure you're saying the right things in the policy; you probably already are doing a lot of the stuff that now needs to be more specifically documented, but it's having that conversation internally to make sure that you're aligned. Because, back to Chris's theme, if you're saying something that you're not actually doing, that's a problem on the cybersecurity side as well. So it takes some digesting of what the new obligations are, and there's some drafting involved. We've been doing a lot of work just to update clients' policies for them and have those conversations internally. There's more that needs to be done with respect to the vendor diligence we talked about earlier. And there are some requirements, just funky ones, that you need to pay attention to; they're not hard, but, for instance, you have to have a draft breach notification letter in your policy that's just ready to go. So it's not that hard; you just have to really do it. And I think if I've learned anything from Chris and Andrew, it's that examiners really want to be able to check their boxes, so you've got to give them the language that they're looking for.
Chris Mulligan: That is true. Well, Olivia, I'm sure you have 8 million other things to do, so we will let you go. The next time you have 30 minutes to spare, we're going to have you back on the show. I don't know when that's going to be, but.
Olivia Greer: I would be delighted.
Chris Mulligan: Thanks. Thanks so much for coming.
Olivia Greer: Thanks for having me.
Andrew Dean: Thanks. All right. Before we go, we wanted to talk about two other major retail adviser cases that came out on the Friday before Labor Day. In one of the matters, the SEC found that the adviser failed to disclose conflicts relating to how financial advisors were compensated. In particular, the comp system for the FAs included incentives to enroll and retain clients in a particular fee-based advisory service, and even though one of the documents, the brochure, disclosed that the FAs were eligible for this bonus and that the incentive existed, there were other documents, the Form CRS and the supplement to the brochure, that contained contradictory disclosures, namely that there was no additional compensation. In addition, the order found that the marketing material on the website regarding conflicts of interest stated the FAs received no financial incentives, when the order found that they did. With this course of conduct, the SEC found violations of Section 206(2), which is a negligent breach of fiduciary duty, and Rule 206(4)-7, which is failing to have and implement reasonably designed policies and procedures. The penalty was $19.5 million, with no disgorgement. In a case like this, we assume the compensation was paid to the FAs, and so there was nothing to disgorge from the adviser itself. The size of the penalty, though, does suggest that a substantial amount of money may have gone to the FAs for that financial incentive. In the second matter, the SEC made similar findings involving a dually registered investment adviser and broker-dealer. On the investment adviser side, this is kind of a classic SEC case: the SEC found that statements along the lines that an adviser, quote unquote, "may" get bonus compensation are not sufficiently full and fair disclosure where the party actually is, not just may be, receiving the incentive compensation. So what the SEC is looking for in these cases is a very clear statement, full and fair disclosure, that there is a conflict and that there's an incentive that may be against the client's best interest. On the broker-dealer side, we had similar violations of the corresponding provisions of Reg BI, in addition to a recordkeeping violation for failing to document the relationship with the customer. That case had $4 million in disgorgement and a $750,000 penalty; it appears the SEC was in that matter able to trace the benefit received by the firm in connection with the failure to disclose the conflict. And what's remarkable about these cases is that there's not a lot remarkable about these cases. These are the types of retail adviser cases we predicted the SEC would continue bringing. As is typical of these types of cases, the orders describe in careful detail the disclosures that the SEC viewed as violative, and that's intended to give guidance to the industry about how disclosures should read and what the SEC staff views as being problematic. We see a policies and procedures charge; some predicted the end of those cases, but that charge still exists and has been brought by this Commission in this case. Still notable, though: no Rule 206(4)-8 case, which prohibits false or misleading statements by advisers, again not needed here because there was a 206(2) charge. We're still waiting for the first 206(4)-8 out of this Commission. Big takeaway for me: there are lots of places in advisers' and broker-dealers' writings, including their websites and marketing material, where representations are made about compensation received by the broker-dealer or the financial advisor, and these need to just be combed through to ensure that there's nothing that may be contradictory to what is actually happening. Chris, any thoughts from you on these cases, or any impact you see on the compliance side?
Chris Mulligan: No, I mean, understanding your conflicts of interest and disclosing them robustly, fully and fairly, which means not saying "may" when it's "will": these are eternal lessons, but they're really important. And I think it can sometimes be a challenge to understand all your conflicts; they're in places you may not think about. So just a lesson that continues: make sure you understand where all your conflicts of interest are, and that you disclose them in a very fulsome way.
Andrew Dean: Chris, any final thoughts for today?
Chris Mulligan: Thanks, Andrew. There's one more issue I want to discuss. There was an enforcement action that just came out about our favorite topic, the Advisers Act marketing rule. This is a very important enforcement action, because it certainly shows where the new Commission is, and where it is on the marketing rule appears to be no different than where the old Commission was. In this enforcement action, there was a statement on the adviser's website about conflicts of interest, about how it refused conflicts of interest. The enforcement action indicated that it started with an examination out of the Boston Regional Office and then was referred to the Asset Management Unit, your old unit, Andrew, in the Division of Enforcement. They said that this statement could not be substantiated by the investment adviser. In addition, there was information in its Form ADV that actually contradicted this statement. And so, this is something we've been talking about quite a bit on the podcast: this idea that when you say something in an advertisement, the substantiation requirement, which requires investment advisers to be able to substantiate statements of material fact in their advertisements upon demand by the staff of the Commission, comes into play. It is something that we are seeing come up all the time during examinations, and it's something that we know the Division of Enforcement has been interested in and has brought actions on in the past. So this is yet another example: anytime any statement is in your marketing materials, you have to be sure that you can substantiate it. It takes a lot of care, a lot of effort, but it's really, really important. And this is just a further example of the fact that, on this particular issue, it does not appear this Commission is any different than the previous Commission.
Andrew Dean: So that's it for today. Thanks for joining us on Asset Management Corner and we'll catch you next time.