NextWealth Insights: Episode 6, Season 5 – Advisory AI
By Sham Latif | 03 December 2024 | 31 minute read
Join Heather Hopkins and Alasdair Walker at Handford Aitkenhead & Walker as they explore the transformative impact of AI on financial advisory firms. Discover how industry leaders are integrating AI into their practices and preparing for the future.
In Season 5, Episode 6, Heather and Alasdair sit down with Alan Gurung, CEO and Co-Founder of AdvisoryAI. Alan shares the story behind AdvisoryAI, its mission, and how it’s transforming the financial services industry. He also dives into practical applications of AI and provides a 10-minute product demo to showcase its capabilities.
Transcript —
Disclaimer: This transcript was produced with the assistance of AI, so it may contain errors or inaccuracies.
Heather
Hey, welcome back to NextWealth Insights. My name is Heather Hopkins, Founder and Managing Director of NextWealth. I’m joined by my co-host, Alasdair Walker, Managing Director of HA&W and a Chartered Financial Planner. Hello, Alasdair.
Alasdair
Hi Heather. Awesome to be back for episode six of the companion podcast to our AI Lab membership group. Really excited to be speaking with today’s guest, who you’re about to introduce. But before we dive in, as always, I’d like to thank our sponsors for making this possible. So thank you to Salesforce, SS&C, Aviva and Fidelity. We’ve got a great guest today, and, for those of you listening on audio, a slightly different episode planned. We’d really like to showcase some uses of AI in financial services and some best practices, so this podcast is going to include a 10-minute or so product demo, the link to which is in the show notes.
Heather
It was one of the things we wanted to do with AI Lab: give people intros to some of the exciting tech innovation and the tech providers that are out there. But the AI Lab is really about helping practitioners showcase what they’re doing, so the podcast is a really great format. We’re delighted to have Alan Gurung here from AdvisoryAI. He’s the founder and CEO. We’ve already heard about AdvisoryAI from Johnny Stubbs, who presented at our last event in September and was a guest on the podcast. Johnny, of course, is from LIFT, and they’ve had some fantastic results from using AdvisoryAI for meeting notes and suitability reports. So really, really looking forward to this. Alan was a financial planner at Clearwater Wealth Management before he sold the firm to SJP, and he founded AdvisoryAI with Roshan Tamil Selvin, an engineer and serial entrepreneur, in 2023. Welcome, Alan.
Alan
Thanks for having me. I’m really excited to do this.
Alasdair
Excellent. So we’re going to kick off, as I said, with the product demo of AdvisoryAI. We talk to you about AdvisoryAI as if everybody who listens already knows who you are, because Johnny has talked about his work at LIFT with you. But I think it’d be really helpful, as you kick us off with the demonstration, to talk a little bit about AdvisoryAI: what’s the reason for its existence, what’s the vision, what pain points are you trying to solve? And then take us through how things actually work.
Alan
Sure, yeah, absolutely. A bit of background: what we’ve created is essentially an end-to-end productivity tool for both advisors and support staff. We have names for the two models we’ve created: one is a support-staff model and the other is an advisor model. The support-staff model is called Emma. A bit of backstory to this: when I ran my financial planning practice, the first paraplanner I hired was named Emma, so it’s named after her. The advisor model is named Evie, not named after any particular person; I just thought the name sounded nice. So what does the process look like? It’s an end-to-end process. What I mean by that is, once an advisor has a meeting, that meeting is recorded and transcribed, and Evie can then create the meeting notes, handover documents and fact finds, and also populate CRM systems. For example, it can populate the fact-find fields within IO. It then gets passed on to the support staff through Emma, and through Emma you can create suitability letters and review letters, as well as summarize LOA packs. Now, I’d love to show you this, if that’s okay?
Heather
Hey listeners, at this point Alan shares a demo of AdvisoryAI, giving an example of how Emma, the support-staff model, and Evie, the advisor model, would be used. It’s a fantastic demo. We’ve got a link to the video in the show notes, but it doesn’t really translate to audio, so we’re going to skip to after the demo. Now, one of the questions I want to ask: I was at a Paraplanners Assembly event, their Big Day Out, recently, and it was really interesting, because there were a few paraplanners there who are using AdvisoryAI and others who are experimenting with other tools. One of the things they were talking about was that they each have a particular way of writing, so individual paraplanners within a firm will have a slightly different approach, and they found it really difficult to give up that tone of voice. I thought that was really interesting, because I thought: shouldn’t the suitability letter be in the tone of the firm, agreed at a firm level? But the paraplanners were saying, you know, I’ve done this for so long, and I have my way of doing it, and it’s just a different tone, a slightly different structure. They were saying, yes, it saves me a lot of time, but then I sometimes feel like I have to put it back into the way I would have written it. I was just wondering what your view is on that, because it struck me as a really odd thing to say, but it was actually really consistent across the different people. They almost said it was hard for them to read it with the rigor they wanted, to really digest it and understand it, if it was written in a different way than they would typically have written it. So is it a different skill set? What do you think?
Alan
You know, it’s so funny you say this, because we come across it all the time, especially with larger firms who have 30 or 40 paraplanners. You’re talking about some people who’ve done it for decades and others who have only done it for one or two years; they’ll have a very different way of doing things, and the output tends to be very different as well. I agree with you, Heather: I believe the suitability letter should be in the tone of voice of the firm, as opposed to a single individual. That being said, the tone of voice should be whatever produces the best report format you can come out with, right? So this is the case we try to put forward with any firm we work with: let’s have a round table with the leaders of the paraplanning department and discuss what the report should look like in the best-case scenario. Typically, those individuals who are nuancing their reports in a certain fashion are doing that because they believe it’s a better way to go about the report. When speaking with firms, the intention is: let’s understand what a perfect report, or at least a great report, looks like, and let’s make sure everyone can move towards that. There should be some form of homogeneous report format for a specific firm. Now, there’s an even larger conversation here. There are about 5,000 firms in the UK, and each firm will have a slightly nuanced suitability letter, a slightly different way of presenting it to their client. Now, 4,999 firms can’t have the best suitability letter, so there’s also a conversation as to what is perhaps the one best suitability letter you could have across the whole country. But to answer your question: yes, we get it all the time. We believe the answer is just having a round-table discussion saying, look, we’re all aiming for the same thing here. How can you present something to a client that’s going to be extremely readable, and also something that’s going to be efficient for the firm to produce going forward? It’s about finding the right balance.
Heather
I love that answer, because it’s about how you change people’s behaviors, how you engage them in the process, to bring them on the journey with you. That’s one of the things that Johnny talked about, and one of the things that Kapil from Salesforce talked about: the importance of bringing people on that journey, because you’re asking people to make a pretty significant change, and the way to make sure that change is successful is to really invest in change management, which is what you’re talking about, right? Now for the real grilling from Alasdair.
Alasdair
Yes. So I tend to focus on challenges and risks, because I am a massive geek and I’m hugely positive on AI and technology, but I try to control my momentum by thinking about what could go horribly wrong. And there’s so much potential for things to go horribly wrong when we start to introduce AI models into suitability. I would really like to hear about where those issues tend to come up when you’re onboarding clients. Where are things going wrong, and what can people learn from those mistakes about implementing these sorts of AI assistants into their productivity, even if they’re not using your software?
Alan
Yeah, great question. One word: expectation. When we have conversations with firms, they almost go, great, your tool can write a suitability letter; once the suitability letter is created, amazing, I’ll just send that to the client. So the expectation has got to be set from the get-go, and there’s a lot of misalignment at the moment with advisory firms, because largely, when you use SaaS tools, it’s typically a case of: if you do A, then B will come out the other side. It’s just not the same with generative AI. If you upload one bit of information (for example, we showcased the email summary there), and we uploaded that same transcript and created the email summary again, the output would be slightly different from the last time, right? It may be that instead of saying hi Alasdair, it now says dear Alasdair, for example. There are always going to be nuances and differences; generative AI, as it learns over time, can change its outputs. What that means is there always needs to be reviewing throughout the entire process, and that’s such an important facet of it. So the first part is setting the expectation that it’s not going to do the entire job: it’s not a replacement tool, it’s a productivity tool. And the second part is that, in order to utilize it the way you should, you should always be reviewing it, and there should be stages to that reviewing process: for example, reviewing the information you’re uploading, reviewing the snapshot we’ve created from our side, and then reviewing the end report that’s been created.
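To make Alan’s point about non-determinism concrete for readers: the same prompt sent to a generative model twice will usually come back worded slightly differently, which is exactly why a human review stage stays in the loop. Here is a minimal, purely illustrative sketch (not AdvisoryAI’s code); the model name and prompt are placeholder assumptions.

```python
# Illustrative sketch of generative-AI non-determinism (not AdvisoryAI's implementation).
# Assumes the `openai` Python package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()
prompt = "Summarise this client meeting transcript in a short email: ..."  # placeholder transcript

summaries = []
for _ in range(2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        temperature=0.7,       # sampling randomness; higher means more variation between runs
        messages=[{"role": "user", "content": prompt}],
    )
    summaries.append(response.choices[0].message.content)

# The two drafts will usually differ in small ways ("Hi ..." vs "Dear ..."),
# which is why every generated document still needs a human review step.
print(summaries[0] == summaries[1])  # typically False
```

Lowering the temperature reduces, but does not eliminate, this variation, so the staged reviews Alan describes (input, snapshot, final report) remain the safety net.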
Alasdair
Yeah, yeah, absolutely. That review bit is so important. And I am concerned that we’ll behave like we do whenever we get a new piece of technology: we’ll be very, very focused on it for a while, until we go, oh well, the last one didn’t need much editing, so we’ll send this one out without editing, and then that one’s got a massive howler in it. That comes back to the change management, the train-the-people-around-the-tech thing. One of the other things: Professor Adrian Hopgood, at our last AI Lab event, made a really good point, which was not just to talk about accuracy rates but to talk about type one and type two errors and what impact they have on the outcome, so false positives and false negatives. He was talking about it in a medical capacity, and he was saying false negatives can have an outsized impact: if the AI model says that person probably doesn’t have cancer, say, but it turns out they do and the AI just got it wrong, there are obviously big health impacts. But I’m conscious that we’re in another heavily regulated space. If the AI model says, to take a very technical example, that there are no guaranteed annuity rates or guaranteed minimum pension benefits on the plan we’re looking at, and it turns out there are, that could be a problem. It could be picked up in the check, but my worry would then be that whoever did the picking up, that member of the team, is now never going to trust the software again, because it’s nearly right but that one bit’s wrong. So I wonder if you could comment on that at all, about how you deal with false positives and false negatives as a business, as a piece of software?
Alan
Yeah, it’s a great question. To make it specific to what we’re doing: the question we’re constantly trying to answer for firms is, how can we make it as easy as possible for them to review the output that Emma is creating? What we’ve recently come out with is citations. For example, if I quickly share my screen again, what you’ll be able to see here is a whole host of citations throughout. And you mentioned guarantees, so this fits nicely: we can also summarize LOA packs here, and it can give you the answers on GMP and GAR and cite exactly where in the document that information came from. So the intention is to make the review process as simple as possible. But you’re spot on: Emma will sometimes be wrong. There’s always that question of accuracy rate, and there’s a one, two, three per cent chance it can be wrong. The other day I had a conversation in which someone mentioned a person named Jillian, but when I looked at the transcript it came out as Julian instead of Jillian, for example. Those are all small nuances that can continue to crop up, which is why that reviewing process is important. But to your point, the reviewing process has to be really easy, consistent, and something people can go about doing almost nonchalantly, very, very simply.
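For readers wondering what a citation like this looks like under the hood: a common pattern is to return each extracted answer together with a pointer back to the source passage, so the reviewer can jump straight to the evidence. The sketch below is a hypothetical illustration of that pattern, not AdvisoryAI’s actual schema; every name and figure in it is made up.

```python
# Hypothetical sketch of a "cited answer" structure; not AdvisoryAI's data model.
from dataclasses import dataclass

@dataclass
class CitedAnswer:
    field: str       # e.g. "guaranteed_annuity_rate"
    value: str       # the extracted answer
    source_doc: str  # which uploaded document the answer came from
    page: int        # page number within that document
    quote: str       # the exact passage, so a reviewer can verify it quickly

answers = [
    CitedAnswer(
        field="guaranteed_annuity_rate",
        value="None found",
        source_doc="provider_LOA_response.pdf",
        page=4,
        quote="This plan does not carry a guaranteed annuity rate.",
    ),
    CitedAnswer(
        field="guaranteed_minimum_pension",
        value="GMP payable from age 65",
        source_doc="provider_LOA_response.pdf",
        page=6,
        quote="A guaranteed minimum pension is payable from age 65.",
    ),
]

# A reviewer checks each value against its quoted source rather than re-reading the whole pack.
for a in answers:
    print(f"{a.field}: {a.value}  [{a.source_doc}, p.{a.page}]")
```

The value of this shape is that the review step becomes a quick check of quote against value, rather than a re-read of the whole LOA pack.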
Alasdair
Yeah, definitely. That’s really helpful. And I would like there to be some more transparency from providers on false positives and false negatives, and I think that will only happen if we keep asking people for them, so I’m going to keep on asking that question. We’re talking quite a bit in the Lab about due diligence and due-diligence questions and answers. I accept that you’re going to tell me all of the questions that you’ll be able to answer the best (I say that tongue firmly in cheek), but if there were one or two really key questions you think advisors and planners should be asking firms they’re looking to partner with, or whose software they’re looking to use, what are the top one or two? Anything that comes to mind straight away?
Alan
Yeah: open-source model or closed-source model? That’s one. What do I mean by that? An open-source model is a model that anyone can take and build their own model on top of, or fine-tune specifically for their own company. A closed-source model is one that you can connect to, but you’re only ever going to be utilizing that provider’s resources: for example, there are the likes of ChatGPT and Claude out there, and you can connect to them with an API. The question you need to be asking is: is the information you’re uploading to that platform going to the likes of ChatGPT or Claude or Gemini, or is it going to a specific, fine-tuned model that the firm runs itself? It’s an important question, because the more third parties the information is passed to, the higher the risk of your information being out there in the atmosphere. That’s one really important one. Another is penetration testing around cybersecurity. That holds the firm accountable, ensuring they are constantly being checked and earmarked to confirm they are doing the cybersecurity checks they should be doing. Those are probably the two; they’re both extremely strenuous processes for firms, and we’ve been through them. So yeah, I’d definitely say those two points.
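To illustrate the first due-diligence question: the practical difference is where your client data goes at inference time. With a closed, hosted model, every request travels to a third party’s API; with an open-weights model, the same request pattern can be served from infrastructure the vendor or the firm controls. This is a rough sketch under those assumptions; the self-hosted endpoint and model names are hypothetical, and only the hosted example points at a real public API.

```python
# Illustrative contrast between a hosted (closed) model call and a self-hosted (open-weights) one.
import os
import requests

prompt = "Summarise the guarantees mentioned in this LOA response: ..."  # placeholder

def ask_hosted_model(prompt: str) -> str:
    """Closed route: the prompt (and any client data in it) leaves your estate for the vendor's API."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "gpt-4o-mini", "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    return resp.json()["choices"][0]["message"]["content"]

def ask_self_hosted_model(prompt: str) -> str:
    """Open-weights route: the same request shape, served from infrastructure you or the vendor control."""
    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",  # hypothetical self-hosted, OpenAI-compatible endpoint
        json={"model": "my-finetuned-model", "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    return resp.json()["choices"][0]["message"]["content"]
```

Either way, the due-diligence answer to look for is a clear statement of which route the vendor takes, which sub-processors see the data, and how often penetration testing is carried out.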
Heather
Yeah, really, really good points. So before we get into our rapid-fire closing questions, I just want to ask about the results you’re seeing from firms, the impact. You mentioned that the focus is on driving efficiency within businesses. Are you able to share? I know Johnny shared some stats on the podcast that we did about what they’ve experienced at LIFT, but could you share some use-case examples or numbers for what you’ve been able to help firms achieve?
Alan
That’s very nice of Johnny to have done that. So LIFT is a great case. We’ve been working with LIFT for coming up to a year now, and they have about 20 advisors across the company. Within that year, they have doubled the client capacity their advisors can take on, from 100 to 200 clients per advisor, which is astonishing. Essentially, the whole reason we do this is that we want to increase the time capacity per advisor across the country. For the past 10 to 15 years, LIFT have had a very hard stop at 100 clients per advisor. So for the first time they’ve gone, actually, we’ve got so much more time now, due to the time saved on meeting notes, annual review letters and ad hoc review letters, and they can go about increasing their capacity by that amount. In terms of data, what we tend to find is that firms who come to our platform typically take about three to four hours on average to create a suitability letter, with annual review letters slightly less than that, around one to two hours on average. After using Emma and Evie, it takes around 30 minutes for an annual review letter and about 45 minutes for a suitability letter. So we’re starting to see a significant reduction in the time it takes to write a suitability letter or an annual review letter: typically around 80 to 90% time savings with almost every firm we work with. Sorry, I just want to add: within the first 30 days of working with us, you should already be saving about 50% of your time on report writing.
Heather
Fantastic. I think one of the things that’s so powerful about the LIFT case study is that they’ve done time tracking for yonks, so they know how much they’re saving. I mean, I run a professional services business, I’m not a financial planner, but if you’re not tracking the time, then you’re not going to know how much you’re saving, and it’s really tough being honest about what you’re doing. So those are great case studies. I know in our Financial Advice Business Benchmarks report (our last podcast was a ridiculous NotebookLM summary of that report) we found the average time to onboard a new client, from the initial advice meeting until delivering the advice, was four to five weeks, 33 days, and that hasn’t improved at all in the four years we’ve been tracking it. The number one thing slowing firms down is getting data from clients and getting data from providers. That’s why I asked about the proverbial shoebox, because it is a challenge, right? It’s not going to solve it all, but fantastic. Let’s move on to the final section: rapid fire. We like to end our podcast with a few rapid-fire questions. First one: how can firms get started? And we have a rule: no meeting-note summaries.
Alan
A variety of different ways. For example, I use something (which is also going to be my answer to the second question you sent me, Heather) called Superhuman. Superhuman is a way to reply to emails, but they’ve added an AI twist to it, which is really interesting: they’ve created shortcuts, essentially, for replying to emails. What’s awesome about how they use AI is that Superhuman can read through all the emails you’ve sent over the past number of years. So if you’re thinking back, going, what did I end up sending to Heather a couple of years back? Have I ever sent her that document? You can just ask Superhuman’s AI bot, and it will read through all your emails and come back to you with the answer. That’s a great way that firms can start to utilize AI, not only for meeting notes.
Heather
Really interesting. I was just going to say that I tried one of those, but it was probably about a year ago, so it was maybe a little bit early. I was asking a question and it couldn’t figure out who I meant, because the person I was trying to find had changed companies; I knew I’d talked to them before, but it couldn’t make the connection. But I’ve tried Gemini, I’ve tried a bunch of these things for doing a draft of an email. That sounds fantastic; you’ve sold me on Superhuman.
Alan
Oh, so, for example, recently we’ve been getting a ton of feedback from clients, which is amazing. All I’ve asked Superhuman to do is, great, list all the feedback specifically around this section, and it just breaks it all down for me. Then I go, great, let’s just go ahead and work on that now. So I don’t need to go through 30 or 40 emails and read each one to figure out who has sent me feedback and who hasn’t. Massive time saver.
Alasdair
And that’s also, I suppose, the promise of the Copilot integration if you’re deep in the 365 ecosystem, which I know so many of our listeners will be, because sadly it doesn’t deliver on that promise very effectively most of the time. I’m going to make you think of a different application that’s made a big difference to you day to day, then, given that you’ve given your answer to the first question. So is there anything else that you use, big or small, where just a little bit of an AI twist has helped you?
Alan
I’ll give a massive AI answer: Claude and ChatGPT. I still use these every single day across my functions, everything from meeting notes all the way down to how I want certain outputs to be, or even looking at the possibilities of what can be done with this sort of technology. I almost communicate with Claude and ChatGPT as if they’re my own best friend; throughout the entire day I’ll have constant conversations with them, probably to the point where, if you ever read my history of those conversations, you’d go, all right, this is very sad. But I use it as a simulation database, and I think it’s incredible.
Heather
I’m looking forward to my Spotify Wrapped. I was thinking that a really good thing for ChatGPT to do would be a chat Wrapped: a highlight of all the different topics you’ve asked ChatGPT, or Claude, about in the last year. So maybe next year they’ll do it for us.
Alan
Well, you know what’s really interesting about that, Heather? You can now do that with ChatGPT. You can start a new chat and go, based on all the conversations we’ve had, how would you describe me as a person? And it will go through everything you’ve ever spoken about with ChatGPT. I did it the other day, and it said, you talk a lot about B2B SaaS businesses. And I went, yeah, that’s right.
Heather
So funny, so funny. All right: podcast or book recommendation? You obviously know a lot more about AI than most people who will be listening, I think. So what podcast or book would you recommend?
Alan
Sure. A great podcast I always end up listening to is 20VC, if you’ve ever come across it before. 20VC is great, and he’s also based in the UK, which is amazing. There’s also another podcast called the Y Combinator podcast. Y Combinator is an accelerator based over in the US, and they have some amazing companies that come out of the YC batches, and off the back of this they also have some incredible people who talk about everything from AI to what the future of AI looks like; they just tend to have quite a fun conversation. Another one I’d throw in is the All-In podcast. It’s a really interesting podcast; I wouldn’t necessarily say it’s my favorite, but they do have a lot of insights that I think are interesting.
Alasdair
Brilliant, some to add to the list. So this might be my favorite question, because we’ve had some great examples. What would you say has been your biggest AI failure? It might be personal, it might be for your company; it could be absolutely anything. Where has it gone so horribly wrong that you’ve got a story to tell about it?
Alan
Sure, sure. Where do I start? I’m still incredibly embarrassed about this one. What we do on a yearly basis is meet up with all of my cousins, because two of my cousins have their birthday on the same date, and we all come together; there’s a good 20 of us. So last year, in 2023, we all went to an ice rink and went ice skating, which was really nice. What we do is give them a whole bunch of gifts and presents just before we get on the ice. That day I was providing the presents, and I’d bought a card, as I normally do. But who’s got time to write out an entire list of all the reasons why you’re so grateful, right? That’s what I was thinking. So the prompt I put into ChatGPT was: write a really nice happy birthday message to my cousin, as if it’s your own cousin. Then I just copied and pasted what it gave me. It had got everything right, and I’d gone, great, this is all really good. But because I’d put down, as if it’s your own cousin, it had signed off with love, ChatGPT. And I’d copied and pasted the whole thing without even thinking, because I’d read it from dear Alicia all the way down and just assumed the ending would be fine. So that’s probably my biggest AI fail, and it’s also a very embarrassing story about my terrible attention to detail.
Alasdair
Somebody needs to invent an AI proofreading bot; that’s what I’ve learned from today. You want to put your suitability reports through it, you want to put your birthday messages through it. The person who solves that is clearly going to be the next billion-dollar company.
Alan
Spot on, spot on. When there’s a gold rush, be selling pickaxes, right?
Heather
Amazing. Thank you so much, Alan. Really enjoyed that. Alasdair and I are going to stay on for a bit and swap notes on what we took away from that conversation. But really enjoyed that. Thank you.
Alan
My pleasure. Thanks for having me on.
Alasdair
Well, what a great discussion, and a bit of a demo of the AdvisoryAI product. I think it’s fair to say they’re already the most established and most talked-about piece of software of this kind, certainly in our circles, so it’s great to get Alan on to talk through some of the detail. What were your top-of-mind thoughts from that, then, Heather, in terms of reflections from our discussion?
Heather
I think Alan’s background as a financial planner makes a huge difference to the focus on understanding what a planner does, and it’s just such a leap forward compared to some of the other tech (not other AI solutions, just other tech) where people think, oh, this will really solve the advice gap, but they’re not thinking about what the problems in an advice business actually are. Alan’s experience in an advice business turns that on its head a bit. I thought his numbers around moving from 100 clients to 200 clients were really interesting, because that will definitely be a motivator for some, but actually what we hear is that there’s a healthy proportion who just want to make the job a bit more pleasant: take away some of the aggro, take away some of the admin, for their staff and for their clients. That might mean they deal with more clients, but it might also mean they drive job satisfaction, or that they can lower fees, or that they can work four days a week, or whatever that is. So those are impressive statistics, but I would just nuance them. I think he’s solving a problem that actually exists, and that’s fantastic, and we’ve seen a real-life application of it, and it’s working, which is fantastic, because there’s so much moaning in the market that tech isn’t moving forward. It is, and it’s so great to see a problem actually being solved by tech.
Alasdair
Yeah, absolutely. And I always worry with demos, because, if people haven’t noticed by now, I’m always the dissenting voice in these conversations, and I worry that we get a very perfected version of what things might look like. So it’s good to know more: I’ve spoken to Johnny Stubbs at LIFT privately on a couple of occasions about their implementation and how it’s working, and I think he said at one point at the AI Lab that something like 95% of their advisors are using the meeting notes. So they might not have got everything through for everybody, but they have got that one bit absolutely nailed down from a change-management perspective in the business, even if it’s just that. On a much smaller scale, we’ve done that with off-the-shelf software for meeting notes, and it is just enhancing the client experience and saving time. That’s the other bit I think we really need to be thinking about: it isn’t just time saving, it’s not just about serving more clients, it’s enhancing the client experience as well. Email summaries of meetings are something we just never thought to do, which I’m quite embarrassed to say, looking back on it, because we were writing reports and we were seeing clients. The feedback is that the email summary is very often actually more important to the clients than the report afterwards, because it’s top of mind, it’s short, it’s punchy, and it gets the message across. So that enhanced client experience, I think, is a really good selling point for the concept of embracing this stuff.
Heather
And I think it’s interesting how paraplanner attitudes have moved on a lot. I’ve mentioned this at our AI Lab: speaking with paraplanners from LIFT and other places at the Paraplanners Assembly, a year ago the view at that event was that AI was a real threat to their jobs, and now they see it as great, actually, because it works really, really well for some of the stuff they don’t really need to be doing, and for the really complex cases they still need to get involved, because those do require a lot of thinking and a lot of work, so they really get to use their brains. So yeah, I think it’s fantastic to see it in action. Well done to Alan and the team, because they’ve really listened, they’ve really helped to solve a problem that exists, and I look forward to seeing how it evolves.
Alasdair
Yeah, absolutely. So next time, which will be in two weeks’ time, we’re going to be speaking to Gary Abel from RE LOA, who recently rebranded from Coke Code. It’s quite a different approach to AI software, because they’ve taken one part of the process (not a small problem, but one part of it) and absolutely focused on nailing that one bit, as opposed to AdvisoryAI, which has taken a much wider view. So it should be an interesting contrast, I think.
Heather
Yeah, absolutely. So thank you to our sponsors, SS&C, Aviva, Fidelity and Salesforce. Thank you to Alasdair for joining me on the podcast, and thanks to our producer, Artemis Irvin. Great conversation, as always. And if you’re interested in joining AI Lab, I think we’re down to four firms that we can take now, so do get in touch. The next meeting is in January.
Alasdair
Great. It’s good to see you, Heather.