Speaker 1: So Samina is the Managing Director of Artificial Intelligence Research in Digital and Platform Services. She and her team work across the firm to create AI technologies for business transformation and growth, and she is here to talk to us about how AI is powering the future of financial services. She's going to present, and then we're going to have a conversation, so it's going to be a rich discussion. I hope you'll all join me in welcoming Samina. The stage is yours; I'll be back for some questions. Thank you.
Speaker 2: Good afternoon, everyone. It's a pleasure to be here.
Speaker 2: Hasn't this been such a great conference so far? I feel so validated, because the focus of AI is now shifting from celebrating pure algorithmic success to how it's applied in industry. I feel like my job just got hotter. That's what I do: I do AI research and apply it in business-transformative settings, with real business imperatives and constraints. One of the things I always talk about is that when you take a research prototype and apply it in a business setting, you may end up with very different models depending on whether you developed the algorithm as a pure POC or incorporated those constraints right from the beginning. Often the pure POC would not even make it to an end production system if those business constraints were not incorporated from the start. Another myth I like to address is the idea that if you incorporate all of these business nuances and constraints from the beginning, you may lose the scientific, scholarly advancements that come from pure research. That has not been the case in my personal journey or my team's. The team and I have been building AI research for business settings for more than 15 years, and we have had several best paper awards and scientific accolades, along with transformative business-function changes. It's funny, outside I was speaking with somebody and mentioned that I'm an AI scientist and this is what I do, and the person had the best look on their face; they looked at me and said, "But you look so normal." I took that as a compliment. With that, I'll talk a little bit about how I joined J.P. Morgan two and a half years back, how the bank has been leveraging AI, and how we built a number of different algorithms that have powered some of its key business functions.
The first one I want to talk about is something some of you may be very familiar with: the GameStop and retail focus on certain meme stocks that was unfolding in early 2021. Some of our traders had noticed that a lot of retail activity was having a disproportionate impact on certain stocks in the market, and they wanted a structured way to monitor those social conversations and to predict, for example, a potential short squeeze, which would be a dangerous position for them to be in. They wanted to mitigate that risk by proactively putting risk mitigation practices in place before it happened. They came to us and asked whether we could come up with an AI system that would allow them to do that. From a scientific perspective, one would think: yes, I can create a sentiment algorithm. I can read all these Reddit forums, I can read Twitter data, I can figure out what's positive, negative, neutral, compute some scores and some volume around it, and present that. But then, given this gap between industry and scientific innovation that many of the speakers have already touched upon, we paused for a second and looked at the business problem again: the key word there is leading indicator. You could have a 100% accurate algorithm for determining sentiment, but if it works only in retrospect and is not leading enough, you can't actually put risk mitigation practices in place, and the purpose is lost. So we started thinking about the word leading and what we would need to do to build a leading predictor, and the problem morphed. You realize that not all voices are equal.
Then you ask: who are the people with disproportionate virality, the people who will cause those viral movements that could predict a short squeeze and everything else that could follow? The problem changes and becomes one of finding influencers, and to find influencers, your models actually need to change. You need a lot more history. You need to understand influence and how things become viral, and the signals you feed into your algorithm are very different. So that's what we did, and in the end we came up with a leading indicator that helped our traders identify the top stocks that could be at risk, so they could put risk mitigation methods in place. Eventually JP Morgan won a very prestigious risk award, and this work was one of the reasons. I'll move on to the second story, which is a story about growth. People often miss the idea that AI can be a big help and a big enabler of growth for companies. For JP Morgan specifically, there is a segment that bankers traditionally didn't cover: early-stage startups that are looking to raise capital but are not yet big enough for a large M&A or similar banking team, because given the cost constraints it was not feasible to provide coverage. If there was going to be a solution, AI had to be a big part of it, and the solution had to be completely digital. So this is Capital Connect, a new digital platform that JP Morgan is launching, with AI front and center. I'll talk about three algorithms that we built for Capital Connect. If you think about the task, it is: how do you teach a machine to perceive, decide, and act just like a banker would, except in a different market segment? What we did first was teach a machine to find good prospects.
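The shift from raw sentiment to an influence-weighted leading signal can be sketched minimally. This is a hypothetical illustration, not the actual system described in the talk: the influence measure here is just a normalized amplification count, and all function names, users, and tickers are made up.

```python
# Minimal illustrative sketch (not the production system): weight each
# author's sentiment by a simple influence score so that highly amplified
# voices dominate the per-ticker signal. All names and data are hypothetical.

def influence_scores(interactions):
    """interactions: (amplifier, original_poster) pairs, e.g. reposts.
    Influence here is normalized amplification count; a real system would
    use far richer history and graph signals to model virality."""
    counts = {}
    for _amplifier, poster in interactions:
        counts[poster] = counts.get(poster, 0) + 1
    total = sum(counts.values())
    return {user: c / total for user, c in counts.items()}

def leading_indicator(posts, scores):
    """posts: (author, ticker, sentiment in [-1, 1]) triples.
    Sum each ticker's sentiment weighted by the author's influence."""
    signal = {}
    for author, ticker, sentiment in posts:
        signal[ticker] = signal.get(ticker, 0.0) + scores.get(author, 0.0) * sentiment
    return signal

interactions = [("u2", "u1"), ("u3", "u1"), ("u4", "u2")]
posts = [("u1", "GME", 1.0), ("u4", "XYZ", 1.0)]
scores = influence_scores(interactions)
signal = leading_indicator(posts, posts and scores)
```

With identical sentiment, the post from the heavily amplified author ("u1") moves the signal far more than the one from a peripheral author ("u4"), which is exactly why the problem becomes one of finding influencers rather than averaging all voices equally.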
How do you find good prospects in a domain that is fundamentally opaque and has a lot of variability? We created an AI machine to continually scan the internet, firms' websites, investor websites, the regulatory documents that firms have to file, and third-party data, and triage all of that information into standardized representations of a startup and an investor. Once we had that representation, we were able to figure out things like eligibility, because not all firms would be considered eligible under different criteria. Once we figured out eligibility, which you can do using various methods, the next question is: these firms are eligible, but who is likely to invest in them? Who is the specific partner at a specific firm that the machine thinks would be looking to invest in such a startup in the next few months? We did all of that, and the results turned out to be quite good. Initially, when we started showing the results to bankers, they were skeptical, which brings us to another topic that has been talked about here: building trust and explainability. We ended up building an explainable algorithm that showed bankers that the machine is recommending this particular match for these reasons: because this particular partner or firm has invested in such startups before, or their competitor has, and they may now be looking to add such a firm to their portfolio. When we started providing such explanations, the lift was very significant. Many bankers told us that they had not known about a particular context or data point, and that's the power of machines: there are so many different data points they can triage and so much information they can find and propose that it is often not humanly possible to have all of that context at scale.
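The recommendation-with-reasons pattern can be sketched as follows. This is a hedged illustration of the general idea only: the fields, rules, and scores are assumptions for the example, not the bank's actual matching algorithm, which the talk says was built from much richer standardized representations.

```python
# Illustrative sketch (hypothetical fields and rules): every rule that fires
# contributes both to the match score and to a human-readable explanation
# that can be shown to the banker alongside the recommendation.

def match_with_reasons(investor, startup):
    score, reasons = 0, []
    if startup["sector"] in investor["past_sectors"]:
        score += 2
        reasons.append(f"has invested in {startup['sector']} startups before")
    if startup["sector"] in investor["competitor_sectors"]:
        score += 1
        reasons.append(f"a competitor holds a {startup['sector']} company")
    if investor["min_round"] <= startup["round_size"] <= investor["max_round"]:
        score += 1
        reasons.append("typical cheque size fits this round")
    return score, reasons

investor = {"past_sectors": {"fintech"}, "competitor_sectors": {"healthtech"},
            "min_round": 1e6, "max_round": 1e7}
startup = {"sector": "fintech", "round_size": 5e6}
score, reasons = match_with_reasons(investor, startup)
```

The design point is that the explanation is generated from the same evidence that produced the score, so the banker sees why a match was proposed rather than a bare ranking; the talk attributes the significant lift in adoption to exactly this.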
So this, I believe, would not have been possible had AI not been able to achieve this level of success. The project is going well: several firms have now been onboarded in different beta versions, and it's well underway. The key point here is that, yes, AI can enable growth, and you should look at AI as a growth enabler for your firms. The third use case I want to talk about, and I'm very happy to talk about this one because it breaks a lot of myths: when we think about financial firms, we think it's mostly about numeric data, but from an AI perspective it's actually mostly about textual data. And a lot of data. The other thing that I think goes underappreciated is that it's often about discovery of data. For many things, there are no clean pipelines serving you go-forward data that you can just leverage off the shelf. So what happens is that humans engage in a tedious discovery process of finding different documents for the different data points they need to triage to do a task effectively. Let me give an example: KYC, which is a large function in many financial firms. To perform KYC, one needs to look at SEC filings, articles of incorporation, tens of different types of documents. So humans go to different regulatory bodies, and they leverage the best possible tool that exists for this, which is Google Search. But Google Search, as we all know, is very good yet not task-aware. So there are armies of people who take those search results and transform them into the answer to the eventual task that needs to be solved. In some cases people are very good at Google Search and will get to the right document or the right data point in a couple of minutes. Others, not so much; it could even take them two days to get to that answer. And such variability is never good.
So how could you create machines to take away that variability and make the search process much more task-aware? The other macro phenomenon going on in the world is that we are moving away from passive search results, where a person thinks and then goes and does something, toward always-on, autoresponsive, proactive models. Temperature adjusts automatically; things are happening automatically around us. That's what we want for our systems: always on, always monitoring, alerting us when our attention is needed. This is a representative pipeline that we have built, and it also illustrates that in practice many things are not one algorithm but a pipeline of algorithms and components that sit on top of each other. Structuring pipelines and components this way allows us to address a far greater number of use cases than is possible with point solutions. For example, in a case like KYC, the questions that exist have very high variability. Some are factoid questions, some are Boolean, some are narrative, and some require getting snippets from different documents, combining them, and having a reasoning layer over them. So there is not just one path through these components; you can have multiple paths for reaching one particular use case. The work is not done. It's obviously complex, and it requires new components to be built as we discover new use cases. But we have been very successful with public data discovery and with solving many, many different use cases because of these existing underlying components and this underlying structuring of information, finding insights at different levels that then feed other use cases.
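The multiple-paths idea, different question types flowing through different component chains, can be sketched with a toy router. This is purely illustrative: the categories mirror the ones named in the talk (factoid, Boolean, narrative), but the keyword rules and handler names are assumptions; a production router would be a trained classifier sitting over real retrieval and reasoning components.

```python
# Hypothetical sketch of task-aware routing: classify each question into a
# type and dispatch it down a different path of pipeline components.
# Keyword rules and handler names are illustrative only.

def classify_question(question):
    first = question.strip().lower().split()[0]
    if first in {"is", "are", "does", "do", "has", "have", "can"}:
        return "boolean"       # yes/no answer from a single snippet
    if first in {"who", "what", "when", "where", "which", "how"}:
        return "factoid"       # short entity or value extracted from a document
    return "narrative"         # multi-snippet answer needing a reasoning layer

def route(question, handlers):
    """Dispatch the question to the pipeline path for its type."""
    return handlers[classify_question(question)](question)

# Stand-in handlers; in a real pipeline each would be a chain of components.
handlers = {
    "boolean":   lambda q: ("boolean", q),
    "factoid":   lambda q: ("factoid", q),
    "narrative": lambda q: ("narrative", q),
}
kind, _ = route("Is the entity incorporated in Delaware?", handlers)
```

Because routing is separated from the components themselves, adding a new question type or a new path means registering another handler chain rather than building a point solution, which is the reuse argument the talk makes.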
So with that, I want to conclude with one thought. For industry, what I've always found useful is that we don't fall in love with specific solutions; we fall in love with problems. And with that, thank you, and I'll take any questions there may be.
Speaker 1: Okay, so let's dig in a little bit on this. I have many thoughts. You shared some really wonderful use cases in your presentation. Talk to us about some of the other areas where AI is being applied in the financial services industry.
Speaker 2: That's a great question, and I'm glad you asked, because the kinds of use cases are so varied. From an AI research perspective, we at some point laid down a framework of the kinds of problems we were seeing in the bank, at least, and we saw that directionally there are seven different themes. One is creating safe networks: problems that arise in fraud, sanctions, anti-money laundering. These are really rich, deep problems that involve fighting financial crime. A second area is around data. Andrew Ng also talked about this: data is a key part of AI, but data is also a customer of AI. Think of homomorphic encryption, and all the problems around data itself. In many use cases we actually don't have data: how do you come up with proxy data or synthetic data? Or you may have data but not be allowed to use it for certain specific use cases. All of those problems around data are another big, rich area. A third one is the markets area: trading, multi-agent simulations, things of that nature. A fourth area is the client side: how do you perfect the client experience, how do you market, how do you find clients and figure out their intent? Another area is empowering employees, all the work around augmenting human knowledge workers to be better at their jobs. It's funny, people always talk about AI taking away jobs, but in some of the work I did, nobody ever told me, "Hey, don't take away Google Search from me; I'm very passionate about it and always want to do it myself." These are really tedious things that humans do, so we empower them. And then there is the area of policy and regulation.
A lot of the law is encoded in language, in different legalese. How do you turn that into executable functions that can be applied and constantly monitored, and build applications around that? And finally, underneath all of this AI is, of course, fairness, bias, ethics, sustainability, and all of those components.
Speaker 1: I want to come back to fairness and bias in a minute. But first I want to ask: you didn't say anything about AI fund management, and I know there's not a lot of that going on. Earlier this month, Horizons, out of Canada, I think? Yes. They announced that they were going to shut down their so-called active AI global equity fund because there wasn't enough investor interest. That sounds like a really good thing for the marketing team to say, but I wonder if it's also about results. Do you think there will be a day when, in answer to that question I asked about what else AI is doing in financial services, you'll also talk about an algorithm running mutual funds?
Speaker 2: It's a great question. Robo-advisors are very common, and I think that trend is definitely on the upside. With regard to AI, and especially the actioning part for active funds, there could be many reasons why they shut it down. It could be performance; I don't think they disclosed what the performance was, but it was stated mostly as a lack of investor interest. It could also be trust, or a question of how much uplift AI can actually give you. And people just penalize AI a lot more, so unless the uplift is really substantial, people don't adopt it.
Speaker 1: Is it something you're working on? Active funds?
Speaker 2: I personally am not.
Speaker 1: Okay.