
Art, Algorithms, and the Future of AI in Banking

At Q2’s Dev Days 2025, CTO Adam Blue challenged attendees to see AI as art, not automation. In this episode, he joins The Purposeful Banker to explore creativity, orchestration, and how financial institutions, developers, and communities can harness AI to build the future together.


Related Links

[Blog] Smarter Support Starts With AI

[Blog] Going Beyond Chat to Bring a Human Touch to Digital Banking

[Blog] What If You Could Code Without Coding?

[Blog] Turning Partnerships Into a Living AI Ecosystem

[Website] AI for Everyone | Q2

[LinkedIn] Adam Blue

[Survey] q2.com/podsurvey

Transcript

Cheryl Brown

Hello and welcome to The Purposeful Banker brought to you by Q2, where we discuss the big topics on the minds of today's best bankers. I'm Cheryl Brown. Welcome to the show. 

Today we're joined by someone who's made many appearances on The Purposeful Banker, Adam Blue, our chief technology officer at Q2. And we're going to get to talk about one of Adam's favorite topics, AI. 

So at Q2's recent Dev Days 2025, Adam gave a keynote that was very on brand for him, of course, and it set the tone for how Q2 looks at what's next in technology and innovation. He's talked about AI as art, about what happens when creativity and technology meet, and about how we can use AI not to replace people, but to amplify what makes us human. And we're calling that AI for Everyone. 

We'll dig into some of those topics and the tangible ways they're coming to life here at Q2. So welcome back, Adam. 

Adam Blue

Hey, thanks, Cheryl. Happy to be back. Happy to be back. And like I always say, if you can't be good, at least be on brand. 

Cheryl Brown

Exactly. So at Dev Days, you gave a keynote. And just to level set with the audience, at Dev Days, you know, we bring developers together and we talk about developery and nerdy type things, right? And this year we had a heavy focus on AI, of course, because everyone's talking about AI. But you opened up your keynote by saying we're in a post-taking-AI-seriously world. So tell me what you mean by that. 

Adam Blue

Yeah, I think if you think about the way people talk about AI, you have a pretty broad spectrum. You have the people that say, “We're going to end up in a scenario where everybody's on universal basic income and capital owners will drive all of the, you know, societal benefits.” And you have people that believe in the Roko's basilisk theory, which I won't get into, but you can Google it. You have other people that say AI is just another iteration of technology that makes people more of what they already are. 

It's unusual, in my opinion, that you have a technology change that inspires such radical separation and such radical diversity in the thought and opinion around it. I think where a lot of people are landing, a lot of people who don't have a vested interest in everyone on Earth believing that AI is the most important thing to ever happen in human consciousness and human history, is that it will be very important and it will be very impactful. But, you know, collectively as humanity, we're going to have to come to grips with what do we want it to mean? How do we want this to happen? 

So when I kind of joked we were in a post-taking-AI-seriously world, it's because I'm starting to see more and more conversations about what the impact is going to be. How do we think about realistic guardrails? What will the societal impact of AI be? What is it good for and what is it not good for? And a lot less discussion about, you know, when Sam Altman says every year that we're almost at AGI, what does that really mean? It's like, does it matter what it means? Probably what matters now is what can we do with this set of technologies that is interesting and important? And rather than focusing on the disruption of what happens or the excitement or the kind of histamine reaction that we're having, it's a lot more interesting to ask the question, “How do we get more out of our organizations? How do we get more out of our people? How do we get more out of ourselves?” By using AI to do more of the things that we wanted to do, but being intentional about it and grabbing it by the handle, as opposed to this kind of very fanciful sort of thinking about it. So that was kind of my thought to open the talk that way. 

Cheryl Brown

Well, in that keynote, you went back to some topics that you've talked about on this podcast. And that's where AI, it's as much of an art form as it is a science. So unpack that just a little bit, you know, what that means for us in the banking industry. 

Adam Blue

Yeah, I think I'm going to get vaguely philosophical here, but I'll bring everybody along. So in the beginning we had analog technology. Right. And analog technology, it's not discrete. It's not digital. You know, we had AM radio and FM radio and we had analog guitar amplifiers. And the guitar amplifier to me is a great example; it's a fantastic technology. It behaves in moderately unpredictable but artful ways. 

If you've ever seen or listened to someone who's really good at playing the guitar, and I guarantee you everyone has, they are, in a sense, playing the amplifier as much as they are playing the guitar. Right? The impacts of distortion, the sound of the amplifier. People use pedals to modify the signal. All of this happened for years and years and years in the analog domain, and it's fantastically satisfying in the analog domain. 

And we've modeled all the amplifiers and pedals and all that stuff digitally. So now I can play Rocksmith on my PC, and I can sound like Angus Young of AC/DC without, you know, 12 by 12-inch Dumble amplifiers. That's fine. But the point is, we used to engineer things around tolerances, and we had negative feedback loops, and we thought about the way they worked. 

There was a time when your car was predominantly mechanical and analog, and you would get behavior you wanted out of the car by moving a screw a little bit, or changing a setting, or turning a valve and changing the mix. Now all of that is digitally controlled, and that's neither worse nor better. It's simply different.

Generative AI in particular, and to some extent machine learning, feels a little analog to me versus digital. And the reason I bring that up is that when we do software engineering, we're accustomed to 100% repeatability, right? I have an algorithm. I apply the data. I get the same result every time. If you've ever worked with an LLM, or struggled with an LLM, especially on a task that you break up into chunks and repeat many times, it will do the task a little bit differently each time you pass in a chunk. You have to remind it that no, no, no, do it like this, like you did it the first time. And part of the reason for that is that embedded in the LLM is a certain amount of randomness, a certain amount of intentional chaos, because the models themselves are probabilistic and not fixed. 
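That "intentional chaos" can be sketched in a few lines. This is a toy illustration only: the three-token vocabulary and the logit values are invented, and a real LLM samples from a vastly larger distribution, but the temperature-controlled sampling mechanism is the same idea.

```python
import math
import random

# Toy next-token distribution. The tokens and logit values are invented
# purely for illustration; a real model has tens of thousands of tokens.
vocab = ["approve", "review", "escalate"]
logits = [2.0, 1.0, 0.5]

def sample_token(temperature=1.0):
    """Softmax sampling: the model outputs probabilities, and a nonzero
    temperature keeps a deliberate dose of randomness in every response."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    weights = [e / total for e in exps]
    return random.choices(vocab, weights=weights, k=1)[0]

# The same "prompt" (same logits) can yield different tokens run to run...
varied = {sample_token(temperature=1.5) for _ in range(200)}
# ...while a temperature near zero collapses toward the single likeliest token.
greedy = sample_token(temperature=0.01)
```

Run repeatedly at a high temperature, the same input produces different outputs, which is exactly the repeat-the-chunk behavior described above; turning temperature down trades that variety for determinism.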

Well, when I learned computer programming, I learned garbage in, garbage out, and, you know, specificity and functional programming, which is this mathematical, almost digital purity, right? If you've ever worked with electronics, you know that the worst thing you can do is design an electronic circuit that relies on a really, really narrow input voltage, because the wall current that comes out of your wall is garbage. It's full of noise. It varies up and down. If you've ever measured your wall current with a voltmeter—I don't recommend this; it's a great way to get electrocuted—especially if you're fighting with an automation system in your house that turns the lights on and off, you'll find out that from day to day, minute to minute, the quality of your wall current varies. Yet we have all these analog technologies that work beautifully. 

Ever try and run a dimmer on an LED bulb and it just flickers on and off? That's the intersection of analog power and digital LED, right? That mismatch is awkward. It doesn't work well. So if you want to do well in AI, you have to think a little more like an artist or a brilliant guitar player. And you have to understand that the technology has this analog-like component. It's a different kind of engineering. And thinking about that engineering, thinking about the downsides, thinking about protecting yourself from problems, thinking about taking advantage of that little bit of randomness in the engineering is a very, very different approach, right? And because that approach is different, a lot of the ways that we do traditional software engineering are not as applicable anymore. 

And so, you know, there's a continuum in everything you do. Like, you make a grilled cheese sandwich. There's a lot of different ways to make a grilled cheese sandwich. I've made many grilled cheese sandwiches in my life. I feel like I can make a pretty good grilled cheese sandwich. And I have my procedure. But it varies depending on which pan I pick up, which burner I use on the stove, which kind of cheese comes out. And my reptilian hindbrain kind of knows how to deal with using the Horizon organic all-milk cheese slices versus the really good way to make a grilled cheese, which is your good old Kraft plastic cheese, right? That's the best one. But you need a little different technique to do those kinds of things. Different kinds of bread, cold bread, warm bread. All of that is embedded right in my neural network that lives in this, you know, kind of fat melon of neurons between my ears. There's a certain randomness to that that becomes artful about the way you produce your grilled cheese. It's not a fixed algorithm. 

Using the same algorithm on different inputs all the time creates chaos. It creates outcomes that are not ideal. And so as engineers, as product managers, as support people, as implementation engineers at Q2, we have to shift away from this very digital thinking. We have to embrace a little bit of the chaos. We have to embrace some of the randomness. And the word I use for that is art. Like, I don't know if you've ever done any painting or watercolors or drawing, you know, those kinds of creative fine arts. Unless you are very, very good, the paint does not always go where you want it to go. 

Cheryl Brown

Exactly. 

Adam Blue

Yeah. And that's OK, right? You make all these thousands of microdecisions. Cory Doctorow wrote a great article about why he doesn't like AI-generated art. I don't really like it either. And his reasoning is that art is about communicating this big, numinous feeling that I have to you, and I make a lot of microdecisions when I'm doing whatever the art is. Even if I'm working digitally as a human, I still make these microdecisions in brushstrokes and color choices and mistakes and what have you. And what I leave in is as important as what I take out. A slip of the chisel on the marble might be as important as my intentional stroke against it when I'm doing sculpture. All that's lost when AI does the work. And so I think for the most part, I don't know if people can always definitively identify AI art versus not-AI art, because it's getting very good in a simulacrum sense. 

Cheryl Brown

But I can always identify when I don't feel very much about it, right? I can see when the human part is missing somehow. 

Adam Blue

That's right. I'll take a heartfelt crayon drawing on the back of a Stuckey’s menu over the best piece of digitally rendered slop that you've ever produced. And so that's where the art comes in. We have this new tool. It's a fantastic tool, but it's got this analog component to it. And the techniques we use for producing art, the techniques we use for managing and harnessing uncertainty to produce outcomes with high value, those techniques we learned in art and engineering a long time ago. And that's what I say when I talk about bringing art to AI. And that's what I mean about the artfulness of it. There is no art in the AI. It's in the human connection to it. It's in the human use of the tool. It's the way you swing the hammer, right? That is the part that's really valuable. 

Cheryl Brown

Well, and you know, that might just lead right into this next question. You compared AI's evolution to the birth of hip-hop and remix culture, and that may be what you're talking about here, where we have the mix of AI and humanity to come together to create something that's really great. How is that shaping the way Q2 is thinking about innovation? 

Adam Blue

Yeah, that's a great call out. So I remember when I discovered hip-hop on a cassette tape when I was like 11 years old. I came to it pretty late because of where I lived. I did not, you know, I didn't live in Queens, and so I didn't get to go to block parties that people ran off of, you know, power stolen from streetlights. That would have been pretty awesome. 

And I heard this music, and my dad, who's a great guy, heard the music as well. And I learned a lot about music from him. And he played music all the time in our house. The Beatles, Led Zeppelin, Rolling Stones, you name it, right? And he said, you know, music has three things: harmony and melody and rhythm. And I'm really only hearing rhythm from this new thing that you like called music. And I'm not sure it's really music. And I knew he was wrong, but I'm like 12 years old at the time, and so I didn't, like, have a really cogent argument other than like, “Dad, you're really old and that's not cool, and you don't like cool things.” And it turns out hip-hop is now arguably the predominant form of popular music. It is everywhere. The hip-hop backbeat is now in country music regularly. There's a lot of hip-hop and country crossover. I wouldn't have predicted that. I also would not have predicted that I would like it. But if you can listen to “Tipsy” by Shaboozey and not feel something, you are a cold and dead human being in my eyes. 

So hip-hop arises from this fundamental inequality, right? If you want to learn how to play the cello, you’ve got to have time to play the cello. You’ve got to have a cello; that's an expensive instrument. You’ve got to get instruction on how to play the cello. The first thousand hours of sounds you make with any stringed instrument will be objectively terrible, unpleasant. But you can take a piece of vinyl with a cello recording on it and a turntable and a mixer, and you can make music with someone else's cello. That does not invalidate the music that you've constructed any more than a conductor is not a musician because he holds only a baton and not a bow in his hand. 

And so, you know, in that presentation I likened the notion of Questlove digging through crates to find sounds that he wants to use as a hip-hop producer to express the ideas that he has. And he is recontextualizing music other people have already produced. Is this theft? I mean, arguably, yes. But does it matter? Because with that kind of theft—first off, we've legally worked out what's OK and what's not OK and how you pay people for samples and sampling, but forget about that part—if you bring something new to it by recontextualizing the art, right? If you are saying something that's different by using these pieces that are already there, how is that different than using a chord progression someone else used? How is that different than listening to a beautiful piece by Beethoven, picking out a melody line, transcribing that to electric guitar, and making it a snarling riff that makes everybody move their butts when they hear it on the radio? What's the difference? And the difference is the newer the approach is and the more different it is from what you were taught, the less it feels like the right way to do it. 

So in the AI space, we now have a tool, a technology that's capable of rapidly recontextualizing the primary way that human beings express themselves durably, which is writing things down. It's good at taking things people wrote down, mapping them into a model of that information and data, and then applying new inputs to that model to produce outputs that feel like talking to a human being. That's dangerous and fascinating and exhilarating all at the same time. It is very much akin, in my view, to having the technology of performing sampling, right, or being a conductor and conducting the orchestra. 

But here's the takeaway: If your job is to make beautiful cello sounds with a bow and a $2,000, $3,000, $4,000 instrument and you spent 10,000 hours of your life learning how to do that, and somebody comes along and takes a recording of you playing the cello and makes new music, I get why you'd be salty about that. I totally get it. I get why people that are good at coding and pride themselves on writing great code are salty about code generation and vibe coding. I get why people that do graphic design are irritable about what they see on LinkedIn all day, which is a never-ending river of AI slop. 

But the other thing I would say is it doesn't invalidate your work. People still want to learn how to play cellos. People are still going to learn how to do beautiful art. But lowering the bar for people to be able to create those kinds of things effectively and rapidly can lead to more interesting and effective innovation. And the thing about great hip-hop music is not just that you can produce it quickly, or that you don't have to assemble all of the people that were in the orchestra to make the music. It's that you get this massive set of choices, right? You know? Childish Gambino flips an Adele sample in a couple of his songs, and then layers it over the top of, like, very classic 808 beats. How else are you going to get those two things together? You're not going to call up Adele and have her come down and sing for you, and then get somebody to play a perfectly tuned drum kit. That's just not an option. And so the key is to think about lifting yourself up, right? Becoming the composer, the orchestrator, the hip-hop producer in this model. 

Think about being able to work with agents as they become more mature and they become viable. Think about working with the LLM and coordinating it. If your job today is to do tasks in a repeatable way, and your only job is performing those tasks and then going home at the end of the day, your job is probably at risk of being automated away by an LLM. If you can bring something to that job—taste, discretion, oversight, orchestration, all of those kinds of things—that's what you should be focusing on. Those are the durable skills. That's what we're not going to get in the short term from AI. Maybe we get them from AGI, and Sam Altman is right, and we're just 27 minutes away from having artificial general intelligence and the collapse of the world. I don't know. I don't think that happens. But if it does, we probably won't be worrying about whether it's OK to do vibe coding or not anymore. That eventuality is so extreme, I don't even know if it's terribly interesting to entertain. 

For the rest of us that are going to work through the world in the way that it progresses, in this accelerating culture, it's about lifting yourself up and investing in those skills around coordination: every individual contributor thinking more like a manager, every manager thinking like a director, every director thinking like a vice president. That will be how you continue to stay relevant, because it's your taste in assembling those sounds that counts. 

There's a big difference between the sampling in the Fugees catalog and the sampling in “Ice Ice Baby.” Now, you might like “Ice Ice Baby.” I like that song too. It's a great song, but it is not a sophisticated use of the sample, right? It's not flipped. It's not interesting. It's pretty much the bassline from “Under Pressure” by Queen, and it goes pretty much the way it does in the song. And it just gets revolved, and he even says it: “Check out the hook while my DJ revolves it.” It's a one-sample hook. He raps over it. There's some clever dancing; it’s a catchy song, but it's not a great piece of art, right? 

So if you have the discretion and the taste and the understanding where you can do more with that, where you can reassemble those components in an interesting way, where you can orchestrate the agents and you can get volume and scale but bring your personality to it, bring your sense of taste and what's right and what's wrong and what people want, that I think is going to be very powerful in the same way that like, we didn't get rid of writing when we got typewriters. We didn't get rid of graphic designers, you know? 

Cheryl Brown

Exactly. I mean, you know, as a writer, someone who has spent the past 35 years of my career honing my voice, my tone, my writing style, first as a journalist, then as a, you know, communicator and then a marketer. Writers have been having these conversations for a while. I think programmers and developers are just now getting to where, you know, their art is being, quote unquote, overtaken by AI. Ours was the first one up, right? But, you know, my argument is always AI is a tool, right? AI doesn't replace me as a writer because AI cannot write the way I write. AI is built on good writing that we have all done as human beings, and it's fed into, you know, the language model. 

My daughter's a linguist, and I tell her, I'm like, we wouldn't have AI without linguists. We need somebody who understands language. 

Adam Blue

We wouldn't have compilers without linguists. 

Cheryl Brown

Yeah, right. Exactly. I mean, so it's a tool. We as humans need to use the tool to make ourselves better and to make ourselves more efficient, frankly, to be able to do more, faster. But the beauty is still ours to own. You know, the part that that is uniquely human. We have to still inject that because it's not going to come out of the large language model. I don't think it ever will. But we'll see. Like you said, let's see, maybe some people are right. 

But so for us at Q2, you know, we're talking about AI for Everyone. What does that mean? What are we talking about when we say AI for everyone? 

Adam Blue

Every time there's a technology shift in humankind, there tends to be an implicit sorting. And when the sorting happens, a lot of times technologies pay off in a more significant way for people on the upper end of the income curve; that's just the reality right now. Over time, thanks to human ingenuity and sometimes just human cussedness, we're often able to return some of the rents of the value of technology back to people at the other end of the economic spectrum. And so there's probably been less worldwide hunger and poverty in the last five years than at any other time in human history. You know, I don't love late-stage capitalism, but I do not have a proposal for anything better. And so when these technology events come along, the first people that can benefit the most are the people that are well positioned. And they tend to be affluent and they tend to be highly educated and they take advantage. 

What I'm cautious about, and have been—and this is true of mobile, it's true of computing generally, it's true of the Industrial Revolution—is making sure that AI is accessible, especially in financial journeys. So when we say it's for everyone, in a practical sense, we mean it's for employees at Q2. It's for employees of financial institutions. It's for customers of both. It's for partners of Q2. We can use AI across the spectrum of what we do. In a more philosophical sense, what I mean when I say AI is for everyone is that I don't want some segment of the market left behind in our shift to AI and our shift to using the technology to change the way financial journeys can happen. We need to make sure that we are using AI to lift up people that maybe don't have a lot of money to invest. We need to make sure we're using AI to lift up people that maybe don't have large running daily balances. We need to use AI to increase education and inclusion in the financial system. 

You know, if we can figure out a way to use AI to reduce the cost of delivering digital, and that lets us make better inroads on addressing the needs of people that are unbanked or underbanked, that's very, very valuable for everyone. A world in which a whole lot of people drive Ubers and make food and run DoorDash for relatively compressed wages, and a relatively small number of people do all the information work and make extraordinary amounts of money is not a very interesting world to live in, even if, in my opinion, you're one of the people that works in the information worker space and makes a lot of money. It's not a place I want to live. It's just not interesting. 

And so AI is for everyone means we're going to broadly apply the technology across a wide set of use cases at Q2. But AI is for everyone also means let's not leave anybody behind in the way that the financial system and the financial journeys that people take evolve, because lifting everyone up literally lifts everyone up. So being able to capture some of the benefit of AI and making sure that that is evenly felt, I think is really important. I don't know how much impact at Q2 we can have on that directly because we're a company that has a mission to strengthen community financial institutions, to build strong communities on a day-to-day basis. We will partner with our financial institution partners to make sure that accessibility of financial products and education and everything else is really paramount. And then they, in turn, will empower the people they work with: individual consumers, small businesses, large corporate entities. So it takes a partnership from everybody to do that. 

But I think we can bring visibility to it, and we can think about delivering AI in a way that it is maximally accessible up and down income levels, up and down use cases, up and down all the different ways that people interact with the financial system. 

Cheryl Brown

Speaking of use cases, at Dev Days, we demoed four use cases for AI. We had one that was an AI assistant for account holders and customer service reps. We had one that enabled better self-service tech support for financial institutions. We had one that was vibe coding, you know, making it easier to code within the SDK. And then we had one that kind of demonstrated an agentic AI ecosystem. What was your favorite among all of those use cases? Which one resonates most for you? 

Adam Blue

That's a good question. I think they're all fantastic. I'll talk to you about the one that I'm kind of most engaged in, because it's the one that's the most diffuse, which is, I think, that we're on a curve, and it's a steep curve, to get to an agentic ecosystem. And vibe coding is fantastic and it's important. Copilots for the front office and back office are very important. Those are products where I think we have a pretty good view of what they might look like and where they might go. I'm really engaged in the agentic ecosystem piece for a lot of the reasons that I already talked about, which is I would like to be able to lift financial institution employees and financial institution customers out of the need to repetitively perform tasks, right? And the way we get there is by doing things that are agentic in nature. 

And there's a lot of argument over what agentic AI means. But I'll just go back to the word agentic, which means someone or something that operates independently. That's what I want from agentic AI. I want to be able to give a piece of code that lives somewhere in an algorithm an instruction that is durable over time. And I want to outsource some fraction of the many, many things I feel like I have to think about in a given day to that thing and have it remind me: Take care of the thing, only bother me on exceptions, you know, whatever that is. I want to be able to say if the electric bill is under $250, just pay it. If it's over $250, let me know. And then I'll find out that in the summer, my electric bill is over $250 every time. And then I'll say, hey, let's revise that for the summer months. If it's over $400, let me know. And for the winter months, if it's over $250, let me know. And other than that, I don't want to know about the electric bill. Just pay the electric bill safely. That's an agent that I want. That's an agent I can deal with, right? 
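An instruction like that is simple enough to sketch as code. This is a minimal illustration of the pattern, with hypothetical seasonal limits and pay/notify callbacks; none of it is a real Q2 or banking API.

```python
# Illustrative seasonal limits matching the example: pay autonomously
# under the limit, escalate to the human above it.
SEASONAL_LIMITS = {"summer": 400, "winter": 250}

def handle_electric_bill(amount, season, pay, notify):
    """The durable instruction: act independently on routine bills,
    only bother the human on exceptions."""
    limit = SEASONAL_LIMITS.get(season, 250)
    if amount <= limit:
        pay(amount)          # routine case: the agent just handles it
        return "paid"
    notify(f"Electric bill of ${amount} is over the ${limit} {season} limit")
    return "escalated"       # exception case: let the human know

# Hypothetical callbacks standing in for a real payment rail and inbox.
paid, alerts = [], []
routine = handle_electric_bill(180, "winter", paid.append, alerts.append)
exception = handle_electric_bill(420, "summer", paid.append, alerts.append)
```

The interesting part is not the threshold check; it's that the rule persists and runs without the human in the loop until an exception fires, which is the "operates independently" property the word agentic implies.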

I want the ability for someone to say, “Hey, wander through the secure messages and stack rank them by level of customer urgency, and then work them in that order for the day.” Or better yet, “Walk through the set of secure messages, stack them by urgency, identify the ones that you, agent, can fulfill without really needing much input from me, and then I want you to show me the responses you're going to send for my approval. And then anything that you think is more complex than you can comfortably handle, I want you to drop those into my inbox and let me deal with them. And then I want you to save this prompt, and I want you to run it every morning at 8:15.” And when I get back from grabbing my first cup of coffee, I'm going to spend 15 minutes doing this each day and be done with it. And I'll never have to think about it again. That's the agentic promise. 

And then eventually we'll get to the point where maybe you have two or three agents that operate in concert. And I say to my tax agent, “I need you every week to walk through all of my purchases and flag things that you might be charitably donating. And then I need you to make sure that every month on all of my accounts, you collect my statements and put them in this folder” so that when my tax accountant asks me for them at the end of each year, I don't have nine hours of administrative work to do where I feel like I work for my accountant, which is not happy fun times for me. And then I can have another agent financially where I say, “I want you to go and allocate this budget that I've set aside for charitable contributions and tell the tax agent that I made these, and they're definitely charitable contributions and hand that one the receipts” so that this all happens in the background. 

So the ecosystem to do that is complex. And I think that we're going to have lots of people with fascinating ideas about how to do that. And I don't think at Q2 we are going to have all the great ideas, and I don't think we have all the domain expertise we need to execute all those ideas. So first you need, at the baseline level, a plane of secure access to a huge amount of data, because that's what it takes to train these models effectively. And not just generative AI models, but traditional ML models around AI, maybe even baseline heuristic models around AI. 

Second, I need a way to bring all my partners to the table. I need to learn more about customers and the environment from my partners, and if I can learn more from my partners and I can bind that to the data set, then with every partnership we turn up, they get more data from the network, and we get more data from the network. And then that data becomes more enriched in a geometric way. Not exponential, the word everyone uses; geometric is still very powerful. 

And then the third piece is I need to expose all of those API surfaces across the Q2 fabric and then across the partner fabric so they're knitted together into this MCP-facing API that an agent can safely interact with. So I know what the agent is doing. I know why they did it. I can get the agent to stop doing it. I can apply oversight from another agent or a person. All of that framework has to be built up. 
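That oversight framework can be sketched at the level of the idea: every agent call goes through one gateway that records what the agent did and why, holds sensitive actions for human approval, and can be stopped outright. The class name, tool names, and approval policy below are invented for illustration; this is not Q2's actual design or the MCP wire format.

```python
import datetime

class SupervisedAgentGateway:
    """Hypothetical sketch of an MCP-facing access layer: attributable,
    auditable, stoppable, with a human-approval gate for sensitive tools."""

    def __init__(self, tools, needs_approval=()):
        self.tools = tools                    # tool name -> callable
        self.needs_approval = set(needs_approval)
        self.audit_log = []                   # what happened, and why
        self.stopped = False                  # kill switch for all agents

    def call(self, agent_id, tool, reason, **kwargs):
        entry = {"when": datetime.datetime.now(datetime.timezone.utc),
                 "agent": agent_id, "tool": tool, "reason": reason}
        if self.stopped:
            entry["outcome"] = "blocked: gateway stopped"
        elif tool in self.needs_approval:
            entry["outcome"] = "pending human approval"   # oversight gate
        else:
            entry["outcome"] = self.tools[tool](**kwargs)
        self.audit_log.append(entry)          # I know what the agent did
        return entry["outcome"]

# Invented tools: reads are safe, moving money waits for a person.
gateway = SupervisedAgentGateway(
    tools={"read_balance": lambda account: 1200,
           "move_money": lambda **kwargs: "transferred"},
    needs_approval={"move_money"},
)
balance = gateway.call("bill-pay-agent", "read_balance",
                       reason="check funds before paying the bill",
                       account="chk-001")
transfer = gateway.call("bill-pay-agent", "move_money",
                        reason="pay the electric bill")
```

The audit log is the point: each entry answers what the agent did and why it did it, and the approval set plus the stop flag are the levers a person, or a supervising agent, uses to intervene.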

We're just now figuring out as an industry how that's going to work. So on one side we have the data, we have partner relationships, we have first-party value we can add, but we have to build up the API surfaces, the MCP layers on top of the API surfaces. We have to build up the safe access to permissioned data. All that infrastructure has to come to pass. And then we can say to people, here is a safe place where you can build, operate, and manage an agent that can perform these tasks on your behalf. 

So the beginning of this, right, is these copilots, which are agentic in nature. They're semi-independent, but they're not really autonomous the way I think of agentic. They probably don't interact with each other very much, and that's OK. They're the first step on that chain. But the thing I'm really excited about is this multiyear horizon to get to an agentic ecosystem where anyone—a customer, a bank customer, a partner, someone in Q2—could envision and build an agent that would have value and then, you know, hire out that agent for people to perform tasks, and let people get to the higher-order business of living life and creating value and doing the artful things that we can't rely on the technology for. That, I think, is the promise.

Cheryl Brown

That one, to me, seems like the most pie in the sky, but it's definitely the most exciting. It kind of answers the question of where do we go from here. Which brings up one of our mantras around AI, and you also said this in your keynote: experiment boldly, deliver responsibly. How do we ensure that AI remains ethical and human-centered at Q2? 

Adam Blue

Yeah, I think it's a tall order. We do have good experience from the last 20 years of thinking about delivering technology through the frame of the mission, and so when all else fails, we can come back to the mission, and the mission will give us a great tiebreaker on should we do this or should we not do this. And so we've built models before that are pretty good at identifying things like mobile remote deposit capture fraud. And then you dig under the hood of the model and find out that what the model is really tagging is people who shop at certain retailers. And then you look at that list of retailers and you find out that list of retailers skews towards, you know, low-cost goods. And you think, is this model really identifying that people at the lower end of the income spectrum are more likely to get socially engineered into accidentally committing fraud? And then you ask, well, if that's endogenously embedded in the model, is it ethical to roll this model out? The model might be accurate, but it doesn't seem equitable. 

And so we've had a lot of those conversations. And we've had a lot of conversations about the way we build features and the way we roll things out and the way we use data. So I think we have a good muscle memory for how to do that effectively. One of the challenges with AI is it allows you to move so rapidly, and it's so powerful in some ways that you can build things that maybe outstrip your capacity to manage them very, very quickly. So it can feel a little bit like, you know, one of those chemistry kits from the ‘50s that had actual radioactive isotopes in them. I don't know if you've seen these. They were in the market for about six months. They're terrifying. Like, I don't know if anyone got really seriously ill, but they certainly could have. And it can feel like accidentally ending up in somebody's fissionable materials drawer when you work with some of these things. 

So we'll apply the same principles we've always applied, which is to be focused on the problem, not the solution. To center the end user in the conversation around design and governance, and then to be respectful and thoughtful about the way that compliance and security frameworks work. You know, I bristle sometimes, to be fair, at some of the compliance and regulatory structures that we labor under because I feel like they're not always accepting of innovation. But as we are able to accelerate our use of technology and move faster and faster, I think we will be happier and happier that there are guardrails and there is governance and there is risk management thought wrapped around these things. So just really centering on the problem and the customer, and then being respectful of the fact that there are reasons why we have compliance requirements, even though some of them aren't always as up to date with the technology as they need to be. 

But there's a lot you can draw from. What was the objective behind the way in which this was structured? What is the outcome we really want? What is the real job to be done here, and can we make sure that our solutioning doesn't outstrip the nature of the solution to the problem and become something that's bigger than that? So I'm hopeful that we can intersect being aggressive with AI technology and being thoughtful about the way we impact people's lives and the way we distribute the returns from implementing that AI technology. But it is a challenge, and it's something we talk a lot about and think a lot about. 

Cheryl Brown

Well, Adam, as always, it's been a pleasure to have you on the pod. I think if we can take one thing from today's discussion, it's that AI doesn't make banking less human. Right? It gives us the space to be more human, to listen better and build faster and serve more people in more meaningful ways. I mean, does that pretty much sum it up for you? 

Adam Blue

Yeah, I think it does. I think the danger is to use AI to do more of the things we're already doing at a higher rate of speed. And I think the opportunity is to use AI to change the way we think about the nature of the problem and the nature of the solutions to the problem, and to ensure that we are using AI to automate away the tasks. And we leave the orchestration and the taste and the preferences in the hands of people. And we continue to drive for making banking relationship-driven, and we really honor the space that our financial institutions hold in their community. And if we hold true to that, then our use of AI, I think, will be very aligned with the mission and what we want to execute against. And that's the way forward for us. 

Cheryl Brown

Well, if you'd like to see the different innovations that we showcased at Dev Days, they're out on our blog at q2.com. I'll also put a direct link in the show notes, but you can see each of those demos that we shared. 

And that's it for another episode of The Purposeful Banker, a reminder to share your feedback on our podcast content at q2.com/podsurvey, and there's a link in the show notes on that one, too. You can subscribe to the show wherever you listen to podcasts, including YouTube, Apple, and Spotify, and you can see our archive of podcasts at hub.q2.com/podcasts. Until next time, this is Cheryl Brown and you've been listening to The Purposeful Banker.