David Arturi: Today I have the pleasure of speaking with Dr. Tiffany Perkins-Munn, Managing Director and Head of Data and Analytics for JPMorganChase Marketing. You know, sometimes people think AI, they think Skynet or whatever, something that's going to take over the world, right? They're afraid to use it. There's this big adoption hurdle internally, and they think it's going to replace their jobs. But the reality is, it's not going to replace your job; it's going to make your job easier. It's going to make your life better. You're not going to sit there and bang your head against Excel all day. You're going to have AI do what it needs to do, and you can focus on doing things that you enjoy. So, I'm just curious about your thoughts on adoption when it comes to using AI in the workplace.
Tiffany Perkins-Munn: I think people are moving closer. Let me say this: I think generative AI has brought people closer to really understanding what's possible with AI. It has made it a hands-on, tangible thing that regular people, who go to their regular jobs every day, can connect with, right? It's not like, oh, those are those data gurus over there building models. It's: I am a regular person, I can connect with this tool and utilize it. That, to me, has made it much more accessible as an idea. Instead of talking about AI as this thing over there that's coming to take our jobs, people understand it as a tool that can be useful. And I think that opens the door for them to think critically: how can I use this for my job, for my role, to help me in my day-to-day life? As that becomes more incorporated into society, this idea that AI is here to take over will, I think, die down, because they see AI in action in front of them every day. Even when they do a Google search, the search gives you an AI-generated response, right? It just becomes a natural inclusion in the way that we think, the way that we do, the way that we search, and the way that we make selections on products and services. And as that becomes more prevalent, I think this idea that AI is around to take jobs will go away. Now, AI will replace some jobs, in my perspective. But those are jobs that are logic-based and functionally easy to execute. And that's fine, because it also opens the door for so many more jobs. Up until a few years ago, no one had even heard of a prompt engineer, right? And there is a myriad of jobs like that, opportunities that AI will bring for people who have never even thought about data to get engaged with data as a discipline, as a career.
And I think the upside of what AI and ML can offer is much greater than the downside. And as people engage with it more, and start to read about it more, and understand how it works in their lives, I think there will come a time when there will be less concern about AI becoming sentient and taking over the world. Right?
David Arturi: Absolutely. I think you're a thousand percent right. So, with AI playing a larger role in data and analytics within marketing, how do you approach data privacy, bias, and transparency in AI-driven marketing initiatives, particularly from the standpoint of a financial services organization? I know we kind of touched on that.
Tiffany Perkins-Munn: Yeah, we kind of talked about privacy. But maybe we could talk about bias mitigation. And what was the other one? Transparency. From a bias mitigation perspective, I think you want diverse data sets; you want to ensure that training data represents diverse customer segments so that you don't have skewed results. And you want to, as I've mentioned several times, regularly audit data sets for potential biases. For people who are executing AI, it's really important that they implement fairness constraints in AI models to prevent discrimination. Some people may know this, but there are techniques like adversarial de-biasing or equal-opportunity methods that you can use to create algorithmic fairness. That's another bias mitigation strategy. Regular bias audits, which I talked about: periodic audits of the outputs to detect and correct unintended biases, and using tools and frameworks that are designed to detect algorithmic biases. Maybe diverse development teams, you know, fostering diversity in AI development teams to bring varied perspectives and reduce unconscious bias. And then reducing bias in financial products by paying special attention to whether there are any biases in credit scoring or loan approvals or insurance offerings, really checking, understanding, paying attention to see if that's happening. Oh, and your next question was transparency, right? So, from a transparency perspective, it makes me think of explainable AI: using interpretable AI models where possible, especially for decision-making processes that affect customers. There are techniques like LIME or SHAP that explain AI decisions. And I think that's another one. As AI becomes more explainable, people will start to engage with it more, like when they really understand how it's happening and why it got to the decision, or what factors it used to come up with the decision.
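[Editor's note: the periodic bias audit Dr. Perkins-Munn describes can be illustrated with a minimal sketch. The data, group names, and the 0.8 flagging threshold below are hypothetical illustrations (the threshold follows the common "four-fifths" rule of thumb), not anything specific to JPMorganChase's practice.]

```python
# A minimal sketch of a periodic output audit: compare approval rates
# across customer segments and flag any group whose rate falls well
# below a reference group's. All data here is made up for illustration.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 decisions."""
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def disparate_impact_ratios(outcomes, reference_group):
    """Ratio of each group's approval rate to the reference group's.

    A common rule of thumb flags ratios below 0.8 for human review.
    """
    rates = selection_rates(outcomes)
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 = 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 = 37.5% approved
}
ratios = disparate_impact_ratios(decisions, reference_group="group_a")
flagged = [g for g, r in ratios.items() if r < 0.8]
# flagged -> ["group_b"], which an audit team would then investigate
```

A real audit would of course use production decision logs and statistically meaningful sample sizes; the point of the sketch is only that the check itself is simple enough to run on a schedule.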
And so, in that case, clear communication is key: informing customers when they're interacting with AI systems, like, you know, Google says this is an AI-generated response. Or providing clear, jargon-free explanations of how AI is used in marketing initiatives. It's all about communication with consumers to help them understand, being explainable, communicating, documenting. So, model documentation: detailed documentation of your models. What is their purpose? What's the data source? What are the potential limitations? I think sometimes people create what we like to call black boxes. And the problem with a black box is that it's not auditable. So, you want an AI system that's auditable; you want to ensure that the system is designed to be auditable by internal teams and external regulators, so that there's a level of credibility to the system that you're building. And then, obviously, everything comes back to customer control. You want customers, you want to have that partnership, you want them to have control of their data, you want them to understand how it's used in AI systems. You want to offer them options to view it, to correct it, to delete personal data, like: you can use this, but you can't use that in AI models. So, I think those are some ways to mitigate bias.
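[Editor's note: the model documentation Dr. Perkins-Munn describes (purpose, data sources, limitations, audit trail) is often captured as a "model card." The sketch below is an illustrative structure only; the field names and example values are hypothetical, not an internal standard.]

```python
# A minimal model-card record: documenting a model's purpose, data
# sources, and known limitations, with an append-only audit log so the
# system stays reviewable by internal teams and external regulators.

from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    name: str
    purpose: str
    data_sources: list
    limitations: list
    audit_log: list = field(default_factory=list)

    def record_audit(self, date, finding):
        """Append a dated audit finding to the model's history."""
        self.audit_log.append({"date": date, "finding": finding})

card = ModelCard(
    name="offer-propensity-v2",               # hypothetical model name
    purpose="Rank marketing offers by predicted relevance",
    data_sources=["CRM events", "web analytics"],
    limitations=["Under-represents new-to-bank customers"],
)
card.record_audit("2024-06-01", "No significant rate gap across segments")
# asdict(card) yields a plain dict, easy to export for reviewers
```

The design choice worth noting is that the documentation lives alongside the model as structured data rather than in a slide deck, so an auditor can query it the same way they query the model's outputs.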
David Arturi: I think the black box point is really interesting, because that's something we come across a lot. To your point, you're going to have auditors and regulators, and if you're trusting these models to do something, you want to be able to go back and know what they did, what they looked at. Most companies don't think about that, but I think that's critical, right? This is transparency as it relates to, to your point: what are they doing with this data? How did they get it? Where did it come from? Where is it going? That's huge, and it builds that comfort level. So, I think that's really interesting for a lot of firms.
Tiffany Perkins-Munn: And I know the black box creates a conundrum, right? Because that's everyone's secret sauce. That is the way that they differentiate themselves from competitors, the way that they offer a unique product. However, as we are building governance practices around how we will engage with AI going forward, I think there are some processes that have not yet been created. This process of how a company with a black-box solution can still create something that is auditable, and something that can be shared to some extent with customers so they understand, like, that process, I don't think, really exists now. But it is something that needs to be considered and incorporated as we move into a more AI-present environment.
David Arturi: Yeah, it's a good point, right? Because maybe years ago the secret sauce was great customer service, or writing handwritten letters, or something like that, and now you're automating all of this, right? And so, to your point, you still want to play that close to the chest; those are your cards. But you also want to have that trust with the consumer about where this is coming from, that it's coming from a good place, not a bad place. And I think you're right, I don't know if that process has been defined yet. It's funny, too, because the AI technology is very advanced, but we're just at the tip of where this is all going. I think a lot of people have this conception that technology is 10 years ahead of where it actually is and all this stuff is fleshed out. And the reality is, we're just starting.
Tiffany Perkins-Munn: We are learning as we go, which is why that concept you and I discussed, doing small pilots to really understand what's happening, to get into the details, into the minutiae, to understand the nuances, is very important.
David Arturi: Yeah, it's very important, because it's so early. And that's also where I think we talked about trust and transparency and things like that. I saw Oprah did some special the other night with Sam Altman and all these people, but the reality is, for most of the public, this is so new, right?
Tiffany Perkins-Munn: And they're only interested now because of Gen AI. Which is great, by the way, because, you know, we've been doing AI and ML for years, right? But that Gen AI has connected the larger society to what AI is and what the possibilities are, and that's magnificent. Because now it brings them into the conversation, in many ways into a conversation which I think many people didn't feel equipped to participate in. But as a user of a tool, you have all the authority to participate in that discussion, right? So, I think that's very, very important to the future of where we go with AI, how we engage with the consumer and the public, and even how we set up that partnership with the right compliance and regulatory practices in place.
David Arturi: Right. Sticking with the future, I'm curious to get your thoughts on AI agents. I mean, I see companies rebranding to agentic this, and agents are the future. But I'm curious: do you think that agents are going to be autonomous? Do you think it's a blend of users interacting with agents? I'm just curious, broadly, because it's the same sort of thing. I think companies like to pretend that they're a lot further along than they are when it comes to agents, and say, oh, we're here. Well, if we look under the hood, maybe you're not quite there. So, I'm just curious about your thoughts and insights on agents, and where you think that all goes.
Tiffany Perkins-Munn: I don't think agents will ever be autonomous, quite honestly. I think they will approximate that; they might even mimic that. But will it be real? That's the question, because there is always some human intervention that is required to even get the agent up and running, to get the agent to continuously improve, to get the agent to execute in the way that you need it to execute, to deliver the information you want, to make sure that information is right. To me, it just goes back to opportunities in jobs, job creation for people, right? I think this idea of autonomy will have to be redefined, because it won't be autonomous as long as there is human engagement, interaction, and decision-making that goes into the process, right? So, it might mimic autonomy, but it won't ever truly be autonomous. That's why bias is so important, because there's always going to be some kind of human engagement with the development, creation, execution, delivery, and use of the tool. And so, in that way, we want to make sure that we are thinking through all of these things we've discussed here, because while a tool may promote itself as autonomous, if you dig down, dig inside, you'll realize that there's maybe more autonomy in some places than in others, right? But a purely autonomous tool is, I think, a long way off.
David Arturi: I think it's a long way off, and we spoke about adoption. I talk to customers all the time, and I'm the same way. I wouldn't want an AI agent, or whatever technology, sending emails on my behalf or communicating in my name or anything like that, right? And even when we're working, automating processes, deploying AI, there's still always that human-in-the-loop element. It's interesting, because on one hand it's, hey, we're going to automate this whole process, but then it's, oh, wait a minute, don't, because I still want to see what's going on. So, I think it's the same kind of thought here: that's great, do the stuff that I don't want to do, but I'm not letting this out of my sight without getting eyes on it first.
Tiffany Perkins-Munn: Yeah, exactly. And I think that'll always be the case. You know, even when people were moving into the digital age, there were people who were like, I only want to engage digitally. I don't want to ever pick up the phone. I want problem resolution and everything digital. I want to be able to go online and solve the problem; I don't want to call, right? And then there are those people who are like, I am never logging on to any system, because the moment I put in my name and my password, I am at risk. And so, there's that dichotomy of who people are, right? And there's a broad spectrum all along that barbell, from those who absolutely won't to those who absolutely will. And I think this is the same thing, right? There's always going to be a need for human intervention, human engagement, human interaction with these systems, because not everyone is going to be open to a system taking so much autonomous control over their lives.
David Arturi: Right? I mean, I'm a millennial, and I hate chatbots. I hate anything like that; I'm picking up the phone, I'm calling you, a person. And then you also get into the generations, right? I know my parents: there's no way my father's using a chatbot, right? He's going to call, and he's going to talk to somebody. So, it's also the generational adoption with this type of stuff.
Tiffany Perkins-Munn: And I'm Gen X, but I am a texter for life. Like, you want to talk to me, text me. If I want something, I text. If I can't get through by text, I'm annoyed. If I must call, I'm super annoyed, right?
David Arturi: I don’t want to be on your bad side.
Tiffany Perkins-Munn: Well, you better get that text working.
David Arturi: Then I've got to keep that phone on at all times. This has been awesome. We've run through all the questions, and I'm looking at nine minutes left. Is there anything I didn't ask that you want to touch on, just to wrap up here?
Tiffany Perkins-Munn: I don’t think so. It’s been great. It was a great discussion. I’ve enjoyed it. Thank you for inviting me. I appreciate it.
David Arturi: This was fun. Before you hopped on, I told Denise this was my first time doing this, so you made it very easy. Yeah, I'm getting my feet wet. You were a good one to start with, so this was nice and easy.
Tiffany Perkins-Munn: Thank you so much. Yeah, absolutely my pleasure.
David Arturi: Thank you, Dr. Perkins-Munn, for joining me today and sharing your very valuable insights. I look forward to catching up with you soon; hopefully, we talk in the near future. And thank you to everyone who listened to our discussion today. To learn more about Lydonia, please visit Lydonia.ai. And for more interviews like this one, please visit CDOMagazine.tech. Thank you all so much, and we'll see you soon.