
NextWealth Insights: Episode 4, Season 5 – AI Lab

By NextWealth | 31 October 2024 | 25-minute read

Join Heather Hopkins and Alasdair Walker at Handford Aitkenhead & Walker as they explore the transformative impact of AI on financial advisory firms. Discover how industry leaders are integrating AI into their practices and preparing for the future.

In Episode 4, Heather and Alasdair are joined by Adrian Hopgood, an emeritus professor of intelligent systems, to discuss AI applications. Hopgood explains the evolution of AI, emphasising the shift from specialist, declarative AI to generative AI. He highlights the importance of AI in healthcare and its application across various industries, while also addressing the environmental impact of AI’s carbon footprint.

Watch via:

Subscribe to NextWealth Insights | Subscribe to NextWealth Insights on Apple Podcasts

 

Transcript

Disclaimer: This transcript was produced with the assistance of AI, so it may contain errors or inaccuracies.

 

Heather

Welcome back to NextWealth Insights. My name is Heather Hopkins, Founder and Managing Director of NextWealth. I’m joined by my co-host, Alasdair Walker, Managing Director of HA&W and a Chartered Financial Planner. Hello again, Alasdair.

 

Alasdair

Hi Heather. Great to be back again. Can’t believe we’re already recording episode four. This series in particular runs as a companion podcast to our AI Lab membership group and events. First of all, I need to thank our sponsors for making both this season of the podcast and the whole membership group possible. So, thank you very much to Salesforce, Aviva, SS&C and Fidelity; they’re the reason we can speak to you today. And Heather, we’ve got a great guest today.

 

Heather

We do. Adrian Hopgood is our guest today. He was a speaker at our AI Lab event in September, and it was a fantastic session that got really, really good reviews from people in the room, so we’re in for a treat. Adrian is a consultant on AI and emeritus professor of intelligent systems at the University of Portsmouth. Welcome, Adrian.

 

Adrian

Thank you. Pleasure to be here.

 

Heather

Do you want to introduce yourself for listeners? We had the pleasure of seeing you a wee while ago, but not everyone will have been there.

 

Adrian

Sure. My name is Adrian Hopgood, and I’ve worked in the field of AI for about 40 years now, mostly in universities, but I’ve had a couple of spells working for technology companies. All of my AI projects have had a practical application. But because the real world is never straightforward, there’s usually been the need for an innovation of some form, which is how I’ve built my academic record of research. As you mentioned, I’m emeritus professor, which means that I’ve retired from the University of Portsmouth, but I remain involved in some projects and supervising PhDs. I actually describe myself as semi-retired because of those involvements with ongoing projects and because I offer my services as a self-employed consultant in AI.

 

Alasdair

Yeah. In fact, every time I’ve spoken to you, Adrian, you’ve always sounded incredibly busy for somebody who claims to be retired, and to our benefit as well, of course, so we’re very happy about that. We’d really like to expand a little on some of the concepts and issues you spoke about at the event. The one that has lived rent-free in my head since September is that you introduced a new way of thinking about and placing AI and its applications, certainly for me, and, speaking to other people in the room, for them too. We usually ask our guests to introduce AI from the top, but just to test my understanding of what you said, and I’m sure you’ll expand on this: people are aware of the current crop of generalist generative AI, so ChatGPT and Claude and Gemini and all these models. And your point, I think, was that there’s actually another end of the spectrum, which is where AI has been for years before the most recent wave, which is specialist and predictive, and you could split the board into four quadrants. I’m not sure I’ve seen much example of anything crossing over, which I’m sure you might have some views on. But is that a fair layman’s description? And how would you describe AI to somebody if they were asking you?

 

Adrian

Yes, thank you. That does sort of encapsulate some of the points I was making, and I suppose it reflects the fact that AI has a long history. You can really trace it back to 1950 with the original thoughts of Alan Turing. It means, from my perspective, that a lot of the AI tools we’ve had along the way are not completely redundant; they still have a role. To go back to the beginning, as it were, the classic definition of AI is the use of computer systems that mimic human mental faculties or capabilities. That definition is a little problematic, because AI rarely works like a human, and in many cases it can exceed human capability, certainly for speed, if nothing else. So I think a more pragmatic definition of AI is that it’s the use of computer systems to solve problems or perform tasks that would otherwise need a human.

Historically, there are two main approaches: symbolic AI and data-driven AI. Symbolic AI is the older approach, which was very much about capturing human expertise within a specialist domain and putting it into a flexible computer system. That sort of thing was all the rage in the 1980s and 90s, with systems known as expert systems, which would encapsulate everything that was known about a particular narrow field of medicine or the legal profession. But today’s AI is dominated by the second category, data-driven AI, of which machine learning is the main tool. That’s all about learning patterns from data sets, so that when a trained algorithm is presented with something it’s not seen before, it can give a sensible response to the new case, as long as it resembles in some way the data it’s seen in the past.

Coming to your point about the quadrants: until very recently, nearly all AI was what I would term classic AI, in the sense that it was specialist and declarative. Specialist in that it only dealt with one particular narrow domain, and declarative in the sense that it was for making a decision, which could be a recommendation, a forecast or a classification, but a single decision. And as you correctly say, Alasdair, there’s been a shift in recent times towards generative AI. It’s not just giving you a decision or an outcome, it’s actually creating something new, typically text, but it could also be music or works of art, and the subject base is much broader. With these large language models like ChatGPT, you can ask them anything, so they’re not restricted to a narrow domain; they’ve become much more generalist. But I do wish to emphasise that classic AI is still relevant and very important, and I think it’s fair to say that a great deal of the practical use of AI at the moment remains in that classic domain.
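
[Editor’s note: to make the specialist/declarative idea concrete, here is a minimal, hypothetical sketch of the "classic" data-driven AI Adrian describes: a model that learns patterns from a labelled data set and returns a single decision for each unseen case. The data set and model choice are illustrative assumptions, not anything discussed in the episode.]

```python
# A minimal sketch of specialist, declarative AI: learn patterns from labelled
# data, then return a single decision (a classification) for each unseen case.
# Data set and model choice are illustrative, not from the episode.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)  # a public, narrow-domain data set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)  # the "learning patterns from data" step

# Declarative output: one decision per new case, not generated content.
print(model.predict(X_test[:5]))
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```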

 

Alasdair

In fact, that was brought out in a different way by our guest on last week’s podcast, Kapil from Salesforce, because Salesforce have just had a big release of their agents model. Their concept with agents is that you would deploy a number of different agents to do particular jobs, leaning on generative AI and using that sort of interface, but actually focused on your customer service model, or your data centre model, or whatever that’s going to be. So the specialist bits seem to be hiding behind a generalist front at the moment.

 

Adrian

Yes, absolutely. And actually, I was going to make the point that one of my interests over many years has been specialised AI agents that work as a team, so that each of them works on its narrow area of expertise, and together they work towards solving a broader problem.

 

Heather

That’s really interesting. There was an Economist special report recently, which I’m sure you read, Adrian, about AI in healthcare, and they talked a little bit about that, particularly in radiology departments. Really, really interesting. One of the reasons I thought of that example is that at our AI Lab you shared some actual applications of AI from outside financial services, and I thought it was really helpful at that event to broaden people’s view a little, to think outside our narrow industry and the applications within it. We can’t go through all those examples here, but could you share one that you think would be relatable for people working in financial advice or wealth management businesses?

 

Adrian

Yes, indeed. I’ve been privileged to work with professionals in a wide range of areas, such as engineering, communications, manufacturing, logistics and so forth, but most recently my projects have mostly been in the medical domain, so I’m going to pick an example from that area. Specifically, we’ve had some work published on predicting patient outcomes following bowel cancer surgery. This project was interesting in a number of ways. First of all, it was using data that already existed: data that had been gathered over a period of 10 years or so at the Portsmouth Hospitals University Trust, covering about four and a half thousand patients, so not an enormous data set, and anonymised data, of course, but with about 47 variables for each patient. Using that data set and machine learning, we were able to build a predictor for length of stay in hospital after surgery, likelihood of readmission, and the mortality rate or survival timeline, if you like. With that sort of tool, the clinicians are able to have a better consultation with the patients, design better care plans and better plan the hospital resources. And of course, the patients can be better informed as to what to expect and, indeed, what sort of treatment they actually wish to select.
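
[Editor’s note: as a rough illustration of what such a predictor might look like in code, here is a hedged sketch with one tabular model per outcome, trained on anonymised historical records. The file name, column names and model choices are hypothetical assumptions, not details from the published work.]

```python
# A hedged sketch of a multi-outcome predictor like the one described:
# separate tabular models for length of stay, readmission and mortality.
# The CSV layout, column names and model choices are hypothetical.
import pandas as pd
from sklearn.ensemble import (HistGradientBoostingClassifier,
                              HistGradientBoostingRegressor)
from sklearn.model_selection import train_test_split

# Assumed layout: ~4,500 anonymised patients, ~47 variables each.
df = pd.read_csv("bowel_surgery_outcomes.csv")
targets = ["length_of_stay_days", "readmitted_30d", "died_within_1y"]
X = df.drop(columns=targets)

X_train, X_test, y_train, y_test = train_test_split(X, df[targets], random_state=0)

# One model per outcome: a regressor for length of stay,
# classifiers for readmission and mortality risk.
stay = HistGradientBoostingRegressor().fit(X_train, y_train["length_of_stay_days"])
readmit = HistGradientBoostingClassifier().fit(X_train, y_train["readmitted_30d"])
mortality = HistGradientBoostingClassifier().fit(X_train, y_train["died_within_1y"])

# Each output is a single "declarative" decision for one new patient,
# which a clinician can use to inform consultations and care planning.
patient = X_test.iloc[[0]]
print("Expected stay (days):", stay.predict(patient)[0])
print("Readmission risk:", readmit.predict_proba(patient)[0, 1])
print("Mortality risk:", mortality.predict_proba(patient)[0, 1])
```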

 

Heather

I love that example, because better-informed patients leading to better outcomes is so directly applicable to financial planning. One of the challenges we face as an industry is that people aren’t well informed when they walk through the door of their financial planner, and that can create barriers to getting people engaged and lengthens the process. Better-informed patients and better outcomes form, to me, a fantastic symbiotic circle.

 

Adrian

And if I may, one of the likely follow-ons from this is the introduction of symbolic AI to capture the clinician’s expertise to design those care plans. So you’ve got your machine learning, which will make a prediction, and which, incidentally, I would say is classic AI, because it is producing a decision, actually a set of decisions: length of stay, readmission and mortality timeline, in the narrow domain of bowel cancer surgery. But then that can be embellished, if you like, with encapsulated clinical expertise as to the best care plan for that individual.

 

Alasdair

Combining the two makes for a really interesting thought process for me from a financial planner’s point of view: not just better-informed patients, but also better-informed financial planners, in our case about their clients rather than patients. I remember a platform once saying they had a predictive model that could say with 95% certainty that an account was going to be closed within a certain amount of time. My cynical view when I heard that was, well, yeah, I suppose if everything’s been moved to cash, that’s probably a pretty good predictor, or something like that, right? So hopefully it was a bit more involved. But there’s this idea that we could, as planners, get data insights that allow us to act. The classic one, I think, is that if investment markets have a really terrible year, human beings tend to be great at making bad decisions with that data, and we know that intuitively. But if you have a client who’s checking their investment values 10 times a day when things are going badly, that kind of data being surfaced, and predicting that it might be worth intervening and making a call and that sort of thing, sounds like it might be directly applicable.
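
[Editor’s note: here is a deliberately simple, hypothetical sketch of the behavioural signal Alasdair describes: flagging clients whose portfolio-checking frequency spikes during a market downturn so an adviser can consider intervening. The data structure and spike threshold are illustrative assumptions.]

```python
# A hypothetical sketch of the behavioural trigger described: flag clients
# whose daily login count spikes in a falling market so an adviser can call.
# The data structure and spike threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ClientActivity:
    client_id: str
    logins_today: int
    avg_daily_logins: float  # the client's trailing baseline

def clients_to_call(activity: list[ClientActivity],
                    market_down: bool,
                    spike_factor: float = 3.0) -> list[str]:
    """Return clients whose checking frequency has spiked in a downturn."""
    if not market_down:
        return []
    return [a.client_id for a in activity
            if a.logins_today >= spike_factor * max(a.avg_daily_logins, 1.0)]

print(clients_to_call(
    [ClientActivity("C001", 10, 1.2), ClientActivity("C002", 2, 1.8)],
    market_down=True,
))  # -> ['C001']
```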

 

Adrian

Yeah, and you can see that in the medical domain: this kind of tool that I’ve described can really be thought of as an adviser to the adviser, or, you know, a trusted friend that can advise the specialist on the best course of action.

 

Alasdair

And the medical domain is an interesting one. We talk a lot about the risks and the challenges associated with AI implementation, and I have a friend who’s a doctor, and we often talk about work. I quite often preface anything I’m saying with, well, in our case, generally nobody’s died when these things go wrong. But we are heavily regulated, and there are quite serious issues when things go wrong, and people are rightfully concerned about that, but I guess even more so in the medical domain.

 

Adrian

Yes, that’s absolutely right. So I suppose the kind of example I’ve just quoted is a way of having better information available to the patient and the clinician. There’s still a weight of responsibility on the clinician to do the best they can by the patient, and this is a tool to enable them to do a better job, if you like. But as soon as you start to allocate responsibility to the AI for those decisions, that’s when it starts to get really quite challenging, and you really need trust in the AI system. So, for now, for these kinds of life-and-death decisions, the human in the loop remains absolutely crucial.

 

Alasdair

That’s music to my ears, because I keep rabbiting on about seat belts and about this idea that even if you keep the human in the loop, if you build structures around that human, they get lazy and complacent: the AI model predicted or wrote the right thing nine times in a row, so I might just not bother checking the tenth one. The risk compensation factor, and how we deal with it, is an unanswered question for me. I’ve not got an answer other than saying it’s still your fault if it goes wrong, but I’m not sure that’s substantial enough.

 

Adrian

I suppose what we will see progressively is that we give AI agents or systems more and more autonomy, autonomy based on trust, but that trust has to be earned and built up over a period of time. So, sticking with the medical domain, we are quite happy now to have machines that will monitor a patient’s heart rate, breathing, blood pressure, fluids and so forth, and keep those at an acceptable level, or sound an alert if something’s wrong. We’re already happy to entrust those low-level actions, and I suppose as trust builds up we will give more and more autonomy. But you can’t just suddenly say, right, the AI is working, let’s put it into practice and give it sole responsibility.

 

Heather

One of the other challenges or issues you raised at the AI Lab that I thought was really, really interesting was the energy consumption of some of the data centres that are used. One of the things that a number of financial planning firms, and a lot of firms in the financial services industry generally, try to do is measure their carbon footprint and commit to net zero. I think the stat you quoted at the event was that two and a half percent of UK electricity consumption is from data centres; it’s equivalent to a small country. So it would be great to hear you talk a bit about that, but also about how firms can measure the impact of their use of AI on their carbon footprint.

 

Adrian

Yeah, the stat you quoted is correct, and it’s one that I’ve borrowed from Loïc Lannelongue of the University of Cambridge, and that two and a half percent of electricity consumption in the UK is growing. Another stat from the same source is that the amount of carbon dioxide equivalents globally from data centres is equivalent to that of American commercial aviation, so that shows us the scale of it. What companies and organisations can do is think very carefully about how they’re using data, particularly if we’re talking about large models. With large machine learning algorithms, the compute-intensive part is the training of them, less so the deployment. So the whole principle of recycling applies here, in the same way that it does to material things and, indeed, to software: it’s routinely the case now that software systems are built by patching together existing libraries rather than coding everything from scratch. The equivalent can apply in the world of machine learning. You don’t have to train these large models from scratch; you can use pre-trained models and then just tune them to the specific task at hand, and that’s one way of moderating the carbon footprint of the machine learning. And indeed, if you want to measure the impact of a project you’re thinking of embarking upon, the Green Algorithms calculator at greenalgorithms.org is worth having a look at.
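
[Editor’s note: to illustrate the "recycling" idea Adrian describes, here is a minimal sketch, assuming PyTorch and torchvision, of fine-tuning a pre-trained model rather than training from scratch: the expensive backbone is frozen and only a small new head is trained, which is where most of the compute (and carbon) saving comes from. The model and task are illustrative choices, not from the episode.]

```python
# A minimal sketch of the "recycling" idea: reuse a pre-trained model and
# fine-tune only a small new head, instead of training from scratch.
# Model and task are illustrative; the saving comes from skipping full training.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pre-trained

for param in model.parameters():  # freeze the expensive, already-trained layers
    param.requires_grad = False

num_classes = 3  # hypothetical downstream task
model.fc = nn.Linear(model.fc.in_features, num_classes)  # small trainable head

# Only the head's parameters are optimised: far less compute than full training.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```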

 

Alasdair

Yeah, I can just imagine a world in which you get your green AI badge, like the organic food badge you might see in the supermarket. At the same time, the pessimist in me thinks that every tech company’s dream seems to be to have the most functional AI, so I can’t imagine a world where they’ll happily open-source it and share their data sets. It just feels a little bit too idealistic in 2024.

 

Adrian

Yeah, that’s probably true, although there are enormous resources of data in all sorts of domains available on the internet at the moment. Sites like Kaggle have fantastic resources of openly available data, which is really useful. I’m sure it applies in the finance domain, but in the medical domain, where there are quite a lot of hoops to go through, quite rightly, in terms of ethical access to data, it’s quite useful to have some of this publicly available data at hand.

 

Heather

Fantastic. I think that brings us to our rapid-fire section. Adrian, we like to end our podcasts with a few rapid-fire questions, so we’ve just got five for you, and I’ll pose the first one: how can firms get started? We usually have a rule here, though I don’t think it applies to you, because you think a little more broadly. It’s not that our podcast guests haven’t been big thinkers, but a lot of people default to meeting notes, so that answer isn’t allowed. How can firms get started?

 

Adrian

Well, I think office functions like meeting notes are an obvious target, but I must admit, I think it won’t be long before we’re all adopting those kinds of routine forms of AI and treating them as though they’re standard IT. So it’s a decision for each individual or organisation as to whether they want to be early adopters or not. I actually think it might be more interesting to focus on a topic that’s important to your own sector or your own specific organisation. Consider what would be useful to your business: are you looking to do new things, or to do what you currently do with greater efficiency? It’s worth looking at your organisation’s own data assets to see if you’re already sitting on something that could be mined with machine learning algorithms to give you a competitive advantage. The only caveat is the care that needs to be taken over personal data, as I’ve just alluded to in the medical domain.

 

Alasdair

There are going to be a few specialist AI-for-financial-services companies very happy to hear you say that. So, my next question: what’s one application of AI, perhaps a more recent one for you, that’s made a big difference to your day-to-day?

 

Adrian

I’m going to give a slightly left-field answer to this, because I can’t give you an impressive answer like having generated a business proposal or report. But I would argue that AI is all around us, and it’s become so ubiquitous that we hardly think of it as AI. Like most people, I’m often doing Google searches, and the search itself is no doubt AI-assisted, but I notice that Google is now producing an AI-generated summary. For instance, the letters that arrived on my doormat this morning were sorted by automated reading and interpretation of the addresses. Until very recently, that would have been regarded as AI, but now it’s so routine that we tend to think of AI only as the more advanced challenges that lie ahead. I think it’s fair to say, from my perspective, that AI is all around us.

 

Heather

Right: a podcast or book recommendation for listeners. Adrian, what do you suggest?

 

Adrian

Well, obviously there’s my own book, Intelligent Systems for Engineers and Scientists, fourth edition, subtitled A Practical Guide to AI, though I do recognise that its target audience is different from the audience for this podcast. And obviously there’s this excellent series of NextWealth Insights. That aside, I’d recommend looking at the BCS, formerly known as the British Computer Society, which is the Chartered Institute for IT. They have a specialist group on AI, abbreviated to SGAI, that organises events and webinars, many of which are free. For the in-person events there tends to be a fee, but most of the online events are free, and some really useful, interesting speakers present at those events.

 

Alasdair

Oh, that’s a really great one. And I’ve got to say, your book was purchased and read by a financial planner who absolutely loved it and bought a copy for each of her team members, so perhaps we’ll get some gems out of that as well. This is my favourite question of our rapid fires, because we’ve always got so many examples; certainly Heather and I have some great examples ourselves. What is your biggest AI fail?

 

Adrian

One AI failure that I saw highlighted in the media recently, so I thought I’d try it myself this morning, is that if you ask Google’s AI agent the difference between a sauce and a dressing, it’s quite amusing. The answer that comes back is: the main difference between a sauce and a dressing is their use; sauces are added to food or served with it, while dressings are applied to wounds. So it’s another example of lack of context, and this is a real issue for current generations of AI.

 

Heather

That’s fantastic. I love that one. Last question: one thing people should do tomorrow to get started?

 

Adrian

Well, I heard one of your previous guests suggesting buying several OpenAI licences for members of the organisation, and I won’t disagree with that approach, though I would point out that other AI services are available. But I would say that after some initial playing with the tools, if you’ve not already done that, what’s really useful is a proper analysis of what would be useful to the organisation. Otherwise, you could end up having a lot of fun without actually delivering anything useful.

 

Alasdair

I think both Heather and I will probably attest to being guilty of that every now and then as well. Thank you so much for your time today, Adrian; really, really interesting, as always. Before we let you go, where can people find you if they want to find out more?

 

Adrian

Well, I maintain a website at adrianhopgood.com and you can find all my details there.

 

Alasdair

That’s brilliant. We’ll get that in the show notes, no doubt. So yeah, thank you so much for coming on and speaking to us. Really looking forward to catching up with you soon.

 

Adrian

Thank you very much. It’s been a great pleasure.

 

Heather

I really enjoyed that. Thank you, Alasdair, for introducing us to Adrian. I thought he was really good at setting the scene at a much higher level than we often think about AI. We’re often talking about meeting notes and suitability reports, but he thought more widely about what AI is and its applications. His discipline around thinking about the business benefit has, I think, been a common thread through our podcasts. The thing I really hooked on to was what he was saying about the use of AI in healthcare and predicting outcomes in bowel cancer. I mentioned that special report in The Economist; we’ll include a link to the Babbage podcast with the author of that special report. One of the things in that Babbage podcast that made me think of Adrian was that when we talk about the risks of AI, there are also risks in not using it. One of the stories in that podcast was about a seventy-year-old woman who went for her last routine mammogram; she’d never had any sign of breast cancer, no family history, nothing, and because of her age it would have been the last one she’d have been called for, as I think that’s the policy in the UK with the NHS. Her scan had been cleared by all of the radiologists, but they were using this AI tool as an experiment, and it identified what I think they said was a five-millimetre area to take a look at. It turned out she did have very early-stage breast cancer. So I think it’s really important to think about the risks of using AI and how to use it carefully, with the right guardrails, the seat belts, as you rightly point out, but also not to get so wrapped up in the risks that we forget there’s a risk of not using it as well.

 

Alasdair

Yeah, and we didn’t even get onto it in the conversation today, but Adrian made a big point at the event about considering not just the accuracy rate, but also the impact of false positives and false negatives, and how that might affect the outcome. And that applies in reverse as well: what’s the impact of not getting that 80% accuracy? What’s the human accuracy rate in these kinds of discussions? The example that always comes up is self-driving cars. When a single self-driving car has an accident or a collision, everybody hears about it, but of course, and I don’t know the exact numbers, there are hundreds and hundreds of collisions every day with humans driving. So yeah, we’ve absolutely got to bear that in mind.

 

Heather

Really interesting conversation. There are loads of links in our show notes. We’ll put a link out to our AI Lab, of course, which we hope listeners are members of or will think about joining; the Babbage podcast I mentioned; the Green Algorithms calculator; Adrian’s book, of course; his fantastic suggestion of the BCS, the Chartered Institute for IT, which I can’t wait to look up; and Adrian’s website as well, adrianhopgood.com. Thank you, Alasdair. Really great chatting, as always.

 

Alasdair

Thank you very much as always, Heather. It’s been another great chat this week.

 

Heather

Fantastic. And thanks to SS&C, Aviva, Fidelity and Salesforce for your support for the podcast and the AI Lab series. And thanks to our fantastic producer, Artemis Irvin. Great to be working with you, as always.
