Fintech Layer Cake
Welcome to Fintech Layer Cake, a podcast where we slice big financial technology topics into bite-sized pieces for everybody to easily digest. Our goal is to make fintech a piece of cake for everyone. Fintech Layer Cake is powered by Lithic, the fastest and most flexible way to launch a card program.
AI Adoption Survey and Fintech Trends with Jas Randhawa
In this episode of Fintech Layer Cake, host Reggie Young welcomes back Jas Randhawa, Founder and Managing Partner at StrategyBRIX, for a candid look at how fintechs are really using AI in compliance.
Drawing insights from a new survey of nearly 100 fintech companies, Jas shares what’s hype versus reality, why most teams are still in the ‘talking’ phase, and where budgets and talent investments are falling short. He also explores how AI will reshape compliance roles, the importance of training over tooling, and why regulation might never catch up.
Finally, Jas explains why he’s bullish on crypto and stablecoins—and how traditional banks are quietly following fintechs’ lead. For anyone building, leading, or scaling in fintech compliance, this episode delivers clear-eyed perspective and practical takeaways.
Reggie Young:
Welcome back to Fintech Layer Cake, where we uncover secret recipes and practical insights from fintech leaders and experts. I'm your host, Reggie Young, Chief of Staff at Lithic. On today's episode, I'm excited to have a repeat guest, Jas Randhawa, the founder and managing partner of the risk and compliance consultancy, StrategyBRIX.
Before founding the firm, he was VP and global head of financial crimes at Airwallex, head of financial crimes at Stripe, and built and led PwC's financial crimes fintech practice on the West Coast.
Jas and StrategyBRIX recently sent out a survey to almost 100 companies in the fintech space of all sizes focused on how fintechs are adopting and using AI. So a lot of interesting insights from the industry. Folks should check out that survey, but Jas and I go into some of the top takeaways from it.
Fintech Layer Cake is powered by the card-issuing platform, Lithic. We provide financial infrastructure that enables teams to build better payments products for consumers and businesses. Nothing in this podcast should be construed as legal or financial advice.
Jas, welcome back to the podcast. Great to have you back on. You've been on before, and folks should go check out that episode. You and StrategyBRIX are at the forefront of all things compliance in fintech. Normally, people hit the snooze button on compliance, but they shouldn't for this episode, because Jas is one of the few folks who makes compliance super interesting, relevant, and practical.
I wanted to get you on because StrategyBRIX has just wrapped up, or is in the process of wrapping up, a large AI survey that you did with a bunch of fintechs. You work with fintechs of all different sizes, start-ups to large, established public companies, a lot of names that listeners would know. You have a good vantage point of the ecosystem. You talked to a ton of fintechs about what AI they're using for compliance, risk, and other functions. What were some of the takeaways from that big survey you did?
Jas Randhawa:
Thanks, Reggie, for having me again. For your audience, whether they go back to the previous episode or not, the one change they're going to see is my beard graying a lot.
Reggie Young:
Fintech will do that.
Jas Randhawa:
Yeah, it does. And I'm so glad that I'm not in-house. I don't know what would have happened to this, but I've been looking a lot like you then. Super happy to be here.
On the AI front, we spent the last almost five months talking to a little under 100 fintechs. Most of them were not early-stage, and a lot of them were outside of the US, too. It's a very well-crafted set of questions: hey, where do you stand in your compliance AI journey? We ran down that path predominantly because earlier in the year, we had a lot of CCOs and CROs swing by saying, hey, we're getting a lot of pressure from leadership to pull together our AI strategy. What are you seeing? What are we talking about?
I don't think anybody was ready then for us to go back and start building strategies and op models and designing what agentic compliance AI and bots are going to look like. So it's very high level. We wanted to make sure that when we go back with advice to our customers, we know exactly where the industry stands and have a good pulse on it.
So we took this survey out, and it was quite startling, even for me. After having been in the space for a while, you start feeling like, yeah, I know what I'm about to run into. Some of these things were mind-blowing for me. They are definitely allowing us to strategize and make our offerings a lot more precise and a lot more targeted. And I'll come to what these aha moments have been in a second.
So that's what we did. We had the first release. We had promised that only the respondents would have access. We're getting some feedback, and some of it is a little scathing on what we could have done differently, which is fine; there's always room for improvement. The team now has a plan to release the same report in bits and bobs. By the time this episode airs, I'm sure, if folks have questions or need access to the report or the bits and bobs, they can reach out; I'd be more than happy to engage.
So, some of the interesting findings. The TL;DR finding, the big one: there's way too much noise, no question. And the gap between implementation and action versus conversation is massive right now.
Reggie Young:
Everybody's talking about it, but few people are doing it.
Jas Randhawa:
That was the punchline of the first section of the report. Something that I realized was every CCO feels like they don't know, they need to know, and they know this is going to impact them. And the biggest thing is they feel like somebody else knows or their peers know. Everybody's exactly in the same bubble, maybe barring one or two people who have generally spent a lot of time or might have a more technical background.
From a budgeting perspective, more than 50% of the respondents still say today that AI accounts for less than 5% of their overall compliance budget, more than 50% of the respondents. But when you ask them how good they feel about this, they're like, we feel phenomenal. This is it. And you're thinking, your budget was a couple of million dollars, but you only allocated 5% to AI, which is a little surprising.
For everybody else, even the ones who feel they're doing better or are ahead of the curve, that budget is less than 25%. So if you think about it, on average, a huge chunk of your actual investment, even today, is going to regular transaction monitoring, onboarding, sanctions, people, training, the same stuff you used to do. But all of a sudden, from an OKRs perspective, a goals perspective, AI is number one on everybody's list. If you talk to a CCO, they're going to tell you this is exactly what they're doing.
So there is a little bit of a misalignment, because I think the narrative is that this is important because there's also a lot of pressure from up top to be able to go back and find savings and find efficiencies and gains. But for them to be able to make that happen and realize those changes, you need to be able to make the investment.
So that brings me to maybe a second trend that we've been seeing, which is outside of the report but more practical, on the ground: there is almost an immediate reaction by the industry to go and look for solution providers, vendors. There's an early-mover advantage for everybody in the field today if you can go back and say, hey, I can do what took you 50 people or 500 hours in a fraction of the time by using AI.
There is a lot of truth to that reality. So everybody's gone on a shopping spree. A lot of these deals have not been signed, but when you walk up to people and ask them, so what are you doing about AI?, instead of the response being, hey, we've hired an engineer on the compliance team, or we now have AI infrastructure rolled out to our people where we can start testing things and we have a sandbox, the immediate response is, we're talking to this one, we're talking to that one, and we're talking to all of these people. You're like, okay, that's great.
So props to the folks who are going to market and building things fast, but at the same time, from a practical standpoint, my gut feeling and my commercial sense tell me that the compliance industry is going to run at a little bit of a lag, like we always do, and we'll catch up to the cool tech in a while, once we have a good sense and a good handle on infrastructure and training.
Reggie Young:
Yeah, it's great. It's like that compliance fintech phenomenon, too, with all the AI headlines from the vendors out there. You talk to half of them, and nobody's figured this stuff out. It's so early. That's just how the headlines show up on the back end to me. You can see all the TechCrunch news and updates around AI and fintech, and it's still such early innings.
Yeah, if you're just going off fintech news vibes, it seems like the AI ship has sailed and everybody's already on it. And it's like, that boat hasn't even boarded yet. It's early days.
Jas Randhawa:
That's very interesting. Another fact came to my mind. The companies that said they're doing really well, the ones in that top 80% bracket, you go three levels below and ask them, you're doing phenomenally well, you're killing it, what percentage of your processes have been AI'd? It's less than 10%. So the gap is still significant. And if anybody in compliance today feels like, oh, we understand what AI can do, it's operational efficiency, reviewing alerts, and that's the holy grail, I don't think you've scratched the surface. This whole ecosystem is about to get upended one piece at a time.
There are thinkers and there are companies who are going to do it the right way and they're going to do it well. If they're able to do it well, my sense is that you could be a multi-billion-dollar payments company or a bank, but your compliance teams are going to look very different in a couple of years from now from what they do right now.
Reggie Young:
Yeah, it's exciting. I feel like compliance can suffer from the same fate of legal, which is, oh, this is just a cost center. And I think you start looking at these AI tools and like, yes, you have to spend on the tooling or invest in the engineers to build them, but it helps bend the curve more towards operational efficiencies and compliance functions. It's an exciting time.
Jas Randhawa:
I was at the ACAMS Conference the day before. There was an audience question, and I think there were a lot of compliance analysts and compliance officers in the audience. One of the questions that's always front and center for this whole crowd is, are we going to lose our jobs? I do genuinely believe that a lot of industry leaders are not doing that question justice. A mic gets handed to you, and you go back and say, no, don't worry about it. You're going to be okay. In reality, you're not going to be okay. If this market is going to get upended, if the rinse-and-repeat jobs can be automated and done with efficiency and scale, then you need to decide what role you have to play.
To your comment about the legal industry- and we had one of the lawyers, smart, sharp guy from Davis Polk, alongside. And I think the one point that resonated with both of us, which we had to take back to the same crowd, was if anybody is going to lose their job before you do, that's going to be consultants and lawyers. A lot of our work is so rinse and repeat that before you start making internal cuts, you're looking at making external cuts.
Now you've got two choices, and I'm bullish on this, and that's what I'm telling the crowd: you need to pick your battles, and you need to be on the right side of the industry and the line. When this happens, don't be sitting somewhere in a back room thinking, I'm going to slow growth, slow change, this is not going to happen, we're always going to have a human in the middle, and all of that stuff. You can always learn and be on the other end. And if you are, you're going to be fine.
This other very interesting thing that came up was somebody mentioned that as risk and compliance professionals, oversight is important, governance is very important, so it's your responsibility to slow things down. That's where my commercial brain lights up. You genuinely have to go back and tell people that if you think about AI in compliance, and you look at the founders and the investors and the folks behind these big companies and banks, at some level, risk detection and mitigation are important, but cost reduction is very, very important.
AI makes companies and businesses a lot of money by saving money. I can't imagine a world in which compliance professionals sit in the back and say, let's just pump the brakes on making money, we're going to be fine. It's not going to work. The industry is going to move 150 miles an hour. The only way you can bring your compliance, risk, oversight, and governance muscle to bear is if you understand what you're trying to govern and what you're trying to provide oversight to. The sooner you get on the bandwagon, the sooner you understand how LLMs work, the sooner you understand what machine learning is all about, the better off you're going to be. That's my free two cents of advice.
Reggie Young:
Yeah, the easy answer is, okay, existing compliance folks need to learn AI tooling and understand the tech. But I think there are some interesting questions. There's a line of thought: yes, the Model T put horse drivers out of business, but it also created jobs for gas station attendants and road builders, right? Every technology also creates jobs in some ways. So to me, the interesting question is that second-order effect: where are we going to see the additional new types of jobs pop up for folks who are in compliance now? Yeah, it's going to be an interesting five years.
Jas Randhawa:
Yeah. I think a practical gap- I love the example, but I'm going to steal it.
Reggie Young:
Not my example. I don't deserve any credit for that.
Jas Randhawa:
Earlier this year, we had an offsite with our team. Most founders have been on this tech and AI bandwagon, and you're like, you've got to do this differently, right? From our standpoint, the idea is not to reduce the size of the business, but to give the big guys a credible challenge while still being a consulting shop, going head-to-head with all the big names out there. We need to be able to do things at scale. We need to do them with precision. We need to leverage these models to our benefit, to carry the load that we would generally have looked to associates and senior associates to do for us.
So I'm amping up the pressure, going back to the team and saying, hey, this is going to be part of our performance evaluation process. You can't go back and deliver engagements without bringing value and quality to the customers. And you have to be able to tell them how much of their spend we were able to reduce because we used these models, so that they understand that we understand this and we can get a better book of work.
Now keep in mind, most of the people on our side are very technical and they understand tools, tech systems really, really well. So this offsite, I'm sitting down with people one-on-one and I'm asking them like, hey, how are we going to do this? What are you working on? What about this new project and that one that we'd launched? One of the smartest people who we have on the team, she asked me this question. She's like, I'm a little confused. I actually don't even know where to begin.
That, for me, was the big aha moment, where I felt like I had failed and gone wrong, just like, I feel now, every other leader in the industry has. You understand because you have the vision; that's why you started the business and you're doing everything else. This is very cool. You see something and you're like, this is going to be a game changer. You spend a lot of time understanding what that game changer might be, you have some ideas, and now you go and delegate. You're like, make this happen.
The one big thing that we completely dropped the ball on, and had no idea how to do, was that you need to train people. You need to give them the tools. You need to tell them exactly how to parse one from the other. They need to be equipped when you think about, hey, how is this data going to move? Can I put our proprietary data in? Can we put our client's data on this? When do you rely on the advice? How do you figure out that this is consistent? If I do an exercise for you this year using a model, and then you make me do the same exercise in six months as a refresh, and I'm giving you different answers, that's a problem. If I don't know how these systems technically work, if I don't know how to hone and fine-tune these systems and make them work for me, that's a big struggle.
So we paused everything we were doing apart from this one solution that we had built. All we're trying to focus on now is training. That's the other thing I keep coming back to in conversations with a lot of CCOs: don't ask me what's out there. Don't ask me to help build a controls inventory and then give you a stack rank of where you should put in AI. Those ideas need to come from your teams. Make sure that your next 6- to 12-month budget is predominantly about training people, not just on the higher-level nuances of AI and LLMs, but going very deep technically. Run hackathons, make them build stuff, give them tools, and give them guidance in terms of what they can use. I think that would be great.
Reggie Young:
Yeah, I love that. We just had our company retreat a couple of weeks ago. We did a forced AI hackathon, organized around operational processes, plus some technical folks for each group. People are so busy on a day-to-day basis that if you don't give them time and space for that sort of tinkering and education- we've also been trying to do regular company-wide show-and-tell sessions.
I have a friend who works at a drone company, and they signed up with one of the large AI platforms. And they came in and talked to the company. And they basically had no guidance for how to use the product because they were like, we don't know how your business and your industry is going to use this. So instead of being a tutorial from the AI platform, it was like, keep us posted, let us know what you end up doing because we'd love to find out. That's just the state of the technology, right? It’s, we're all figuring it out as we go.
Jas Randhawa:
That's interesting. I could be completely wrong in my assessment of regulations in this space. But I think going back to that comment that I'd heard about, let's try to slow this down, let's wait for regulations to come, I think it's probably an overarching policy question at a federal level. Are you going to regulate- again, could be wrong, but my understanding is in the United States, we have no intention to heavily regulate AI, predominantly because one of the biggest and the most major use cases is going to be surveillance, it's going to be a lot of warfare, it's going to be a lot of geopolitical stuff that you're going to use this for.
The one thing that I think we as a country definitely understand is the innovation that comes from Silicon Valley, comes from New York. And for that innovation to happen, you just can't put wild restrictions on how some of these tools and technologies are getting tested and experimented with. So for that to continuously keep happening, I think there's more innovation, less regulation for us to be able to stay competitive with countries like China and Russia and everybody else out there.
So anybody holding their breath for hardcore regulations to come out, I wouldn't do it. But at the same time, you could fundamentally apply a lot of conventional models, maybe not in their purest form. If you are a veteran with years of risk, compliance, governance, and oversight experience, and you understand now how these systems work, then sure, this is not a rule-based system, but you should still be able to put four or five good measures in place that allow you to understand whether these models are being used correctly or incorrectly, whether they're doing what you intend them to do, and whether you're testing them aggressively enough.
Because that's the innovation that I feel in our industry is going to be far more relevant, and just basic agentic AI trying to solve compliance problems is going to be more of, hey, you got an agent, you got a bot. Great, congratulations. I have an agent that can actually do very comprehensive testing on how it works. I can do a lot of explainability around these solutions. And that, I feel, is an industry that no one's scratching the surface on.
Reggie Young:
Yeah, agent observability. I don't know if you saw, but former Acting Comptroller of the Currency Michael Hsu published a white paper last week, super interesting. Folks should go check it out. One of the things in financial services and fintech is that a lot of the AI is internal operational stuff. But there are some core functions, like using AI for underwriting, that are really thorny questions. The existing set of regulatory postures doesn't fit well with AI. I just loved his piece.
The problem for a lot of underwriting or decision-making is, usually, financially regulated entities need to be able to explain the decision, tell a person why. And Hsu’s approach turns it on its head a little bit. This is very broad strokes, folks should go check out the white paper, but he approaches it more from this angle of, can we fix the outcome if it's wrong? That is almost more of the question to ask for AI.
It's like this acknowledgement that a lot of early engineering, early architectural engineering was just trial and error. A lot of the first bridges were like, this thing held up, we can't explain the physics of why. There's this sort of fundamental that a good chunk of progress is you can't necessarily explain the mechanism.
I'm not convinced the US is going to get any time soon to a place where the regulatory posture is actually able to accept that. It's just a good example of the regulatory innovation that I think is needed to put more innovation into the hard parts of fintech, not just the operational workflows in the background, which is a huge market, a huge problem, with massive progress for problem-solving there. But then there are the fun, thorny, underwriting-type use cases that, who knows, we may never be able to get there on. We'll see.
Jas Randhawa:
No, that's fair. My thought on that paper was predominantly that that theory works when you don't have a regulatory and oversight apparatus as strong as we have. If you're building your first bridge, and bridges are a brand-new thing, and a bunch of them fall and a couple of people lose their lives, you're like, oh, oops, sorry, we'll do better out here. Building a bridge today, you're looking at like 20 regulators-
Reggie Young:
A bridge in San Francisco? Good luck getting that built within 20 years.
Jas Randhawa:
So you've got code, and you've got so much going on. You could potentially go back and say, hey, the technology has changed so much, we could just have a levitating bridge, and it doesn't need to do blah, blah, blah, whatever. But that code was written for something, and that code has to change. It's not like you can go and build another thing that looks like a bridge, which is very efficient and very cost-effective, and it works 95% of the time, and the other 5%, it's just a catastrophe. No one's going to let you do it. They're going to take that bridge and stick it back into the same code, and you're going to be running circles trying to solve that problem. Yeah, that needs to change.
I think my other belief, and this might sound a little crazy on our part, is that we've been trying to take a very different approach. We've got two teams right now. One has been actively engaged with folks who worked at Tesla on full self-driving cars, a team that does QA for the robotaxi division. So you're just having very dumb and naive conversations. One question we heard was, you only have a decision whether the car goes off a cliff or runs over a kid or a family, whatever. Rock and a hard place. You listen to that question and you're like, I need that answer. That's the answer my regulators are looking for. You just have to contextualize it differently, and then you have to go back and explain.
And that's a lot like AI. No one there is saying that you put a car out on the road and then we don't know what happens next, that some days it goes south, some days it takes a different route. That's not true. There is a lot of deterministic behavior that the car is undergoing. It's just a lot more real-time. But it's still happening based on certain facts and features being fed to that model in that situation.
The second one, we haven't had a lot of success with yet, but it's going to be mind-blowing because it's a more conventional industry: somebody got us connected with somebody at Boeing. Think of all the autopilot modes that Zoox and the self-driving industry are working on. Aviation has been flying planes with them for decades. That's the technology they work on, and there is explainability. Every time a catastrophe happens, it takes months, but they're able to trace everything back to somebody who did something at some point, a fracture, some little thing that happened that caused the catastrophe, or how the decisions were made.
And these are both heavily regulated industries. All you're trying to understand is, how do you draft your test cases, and how do you explain? Once you get a little bit better at understanding the mechanisms, my personal goal would be to go and find my counterpart, who would have probably seen the rulemaking process and the policy change, and just get a general sense of whether the regulation came first, the governance came first, or they decided to shoot first and think later.
I think that's going to be the answer for this industry and this whole larger problem. Of course, if you don't solve it, somebody else is definitely thinking about it and they're going to do it. But this could be a good use case example for you guys to do your show-and-tell on, too.
Reggie Young:
Yeah, I love that creative problem-solving so far outside of fintech. Autopilot has come up multiple times in conversations around AI use in the past two weeks. It's like, we've been doing this for the high stakes of flying planes for a while, so there are frameworks. There aren't many folks who would think to look to self-driving cars and Boeing to figure out compliance and regulatory AI problems.
Jas Randhawa:
My team's going to be happy. They're about to get a shout-out, the three people who have been testing validation standards for those autopilot modes. Right now, it's a standard page document. We're definitely going to use AI to read that for us.
Reggie Young:
Yep. I love it. Well, awesome. Anything else you want to cover on the AI survey you did? Otherwise, I'd love to jam quickly on any trends or hard questions your clients are bringing to you nowadays.
Jas Randhawa:
Yeah, there's a lot to unpack in that survey. I think the top two or three things are: where are people making the investment? Where they intend to make investments is in the operational cost-reduction areas. If you stack rank, that's 90% of everybody. You ask them, hey, what do you think is going to be the first use case for you? It's alert reduction and investigations.
The bottom bracket, which is very, very small, is proactive risk detection and mitigation. So right now, no one's saying that the first few use cases are going to be, hey, I'm going to go and find more risk and what really matters. I think that has to change, and it will change, because realistically, what you're solving for right now, a lot of alerts, is a byproduct of bad systems.
What AI is going to do is give you better detection, a better system, which means your alert volumes are going to be very low. Today, if I had to go and build a compliance AI regtech solution, I would focus on detection and mitigation more than investigation, because that problem is going to solve itself in a couple of years, is what I feel.
The other couple of things that jumped out: the number one problem they mentioned that's holding them back from doubling down on AI is data quality and infrastructure readiness. I tried to understand a little bit more-
Reggie Young:
Yeah, I've seen a preview of the report, and that was one of the things that I wasn't expecting to see as much focus on. So yeah, I would love to click into that a bit.
Jas Randhawa:
It was surprising for me, too, going back and thinking about why. There's a lot of enrichment that can happen. I think the idea and the intention behind that response is that for the data that's going to get fed into these models, the confidence level is very low in terms of both the completeness and the accuracy of that information. What I mean by completeness and accuracy: for almost all the financial services institutions we deal with, including fintechs, crypto businesses, and larger banks, one of the big problems is, if you ask them, hey, on a scale of 1 to 10, do you feel all your transactions are actually going to your transaction monitoring system, the answer is maybe a 4 out of 10.
So you don't know. Your transaction monitoring system is supposed to catch all the transactions. Onboarding systems and screening systems are supposed to catch all the names. And you ask the compliance officer, do you feel like all of this is going through, and they're like, we don't know. One of the realities is that they don't own the seed data. Product teams do. Even if they find stuff and go back, things never get fixed.
The second is the quality of the data. That's less of a problem with our fintech customers, more with our traditional banking customers, because banks have undergone a bunch of M&A transactions. They've got a bunch of core systems. Infrastructure is bad. And when the data is coming in, it's either garbled, mixed up, or inconsistent. With fintechs, you're working off of one or two data lakes, so that's less of a problem. So those are the two issues.
But I also feel like these problems will eventually be solved by AI, in terms of figuring out the completeness and the accuracy of the data, and being able to create real-time indicators, where if your transaction monitoring data falls more than 5% outside your tolerance, the system's not going to run. It's going to keep alerting you until somebody fixes the problem, and you'll get there.
On the infrastructure side, it's the same thing, related to that aha moment I had with my team. People are saying, we don't know, should we use Perplexity or Claude? Are we going to get the enterprise solution? Can we have an enterprise solution? Can we put personal stuff there versus somebody else's? And this is still front-end UI stuff we're talking about.
You guys did the hackathon you mentioned. I'm assuming this is more of Gumloop: integrate with this, integrate with that, make a few things work and build a workflow. The teams are not even there yet. They know that they can potentially do all of this, but there is no clarity in terms of, this is the 101. Start with this. Once you're able to do this, you're going to move on to the next one. I think that clarity is probably missing.
Again, it's something we're getting feedback on when we talk to CCOs. They're asking us to put some sort of a preamble together. Again, we don't do this as a service or for a fee. This is more of a PSA type of post that I'm being asked to put out, the moment I find time to blink.
Reggie Young:
Yeah. I love it. Cool. Maybe in the last few minutes, I would love to hear if there are any particularly interesting trends you all are seeing at StrategyBRIX. Again, you sit at an awesome vantage point in the ecosystem. You get to see, I think, a lot of cool stuff happening. Is there anything top of mind that you're excited about outside of AI? Because all of our listeners are going to go get your report once it's out. They'll know all that stuff. But are there any other trends you're seeing in the fintech space that are particularly interesting?
Jas Randhawa:
I was thinking about it when you were asking me the question. I'm very excited about- it's like we put this seed in the ground when we started the business. We said we were going to do risk and compliance consulting for fintechs, for crypto businesses, and sponsor banks only. I remember that the first version of our website had a period in red. And I was getting a little upset with my team, saying, hey, what happens tomorrow when a JPMorgan Chase wants to come and work with us? And the response was, we don't care.
All of our advisors, Reggie, told us back then, don't do it. The reason is fintechs don't have the bank-level funds, the budgets to invest in consultants. Fintechs are way smarter than banks, and they are as smart as consultants, if not smarter. And they move as fast as consultants, or faster. So all of this is stacked against us. Whereas when we go to a larger bank, it's like, oh, I can work with you, I'm going to be faster than you, I'm going to be the smartest guy in the room, and you have a lot of money to spend on us. So that becomes an obvious choice.
But I think our seed was, when we looked around, we saw this to be a completely greenfield space, and green because it's hard and it's difficult. It's like eating rocks for breakfast every morning. But the one thing that I definitely knew was that for banks to exist in their current form, it's just not possible. They will have to become fintechs. And JPMorgan's tried this. Goldman Sachs has tried. All of these banks have tried becoming fintechs, but they've- I won't say failed. Fintechs are still eating their lunch right under their noses.
So the idea was that there are two or three things that a bank like JPMorgan would typically need to give a Stripe a run for its money. One is the product and engineering firepower that a Stripe has: very first-principles thinking, very hacky, very quick, very swift. The second is a battery of solid product lawyers. And the third, and I feel most important, is compliance: people who can make your programs effective, lean, and still functional.
None of these three components- if I were a Jamie Dimon and had to go and transform the bank, I would not use any of those three functions from the bank. I would go and build these three functions and then become a fintech. So keeping that in mind, we knew that when that happens, if that happens, we, and any compliance officer who's been in this space doing fintechs, are going to become valuable very quickly, because the customers you didn't target then will come looking for you.
The big trend that we are experiencing: we work with almost all the top crypto exchanges. In the last 12 months or so, our phone's been ringing off the hook. We get a call from a bank once a week, if not every 10 days, mid-size to large banks: stablecoins, crypto, how do we do it? My question largely has been, how did you find us? Who recommended us? Who referred us? And some of the channels they're finding us through are phenomenally organic. We don't spend anything on outbound or marketing or any of that stuff. So that gives me a lot of faith and hope in everything that this fintech industry is doing.
There is a shift where bank leadership is going back to their compliance teams and saying, hey, this is about to happen. And this has become less of a partisan issue, even at the government level, because if you get a sense of the midterms, the one thing we are seeing is that even folks on the Democratic side who were very opposed to stablecoins and crypto and all of that stuff are now easing up. So I do feel that crypto and stablecoins are going to continue to go mainstream. It's going to become an important aspect of any bank's portfolio.
And then lastly, the biggest inbound we are seeing is from the larger Tier 1 banks, the private banking lines of business. You've got exceptionally high-net-worth customers walking in with cold wallets holding $250 million worth of Bitcoin. You can't say no to those customers. The bank has been very anti-crypto, and now it's time to go and do your diligence and understand how you're going to keep those customers.
For me, I'm very bullish on the crypto trend and, more importantly, on the fact that this paves a very natural, organic road for us to go back into the larger banking space and do meaningful work that we take a lot of pride in: taking what we learned on the fintech side and bringing the fintech industry to mainstream banking. That's what I'm very, very excited about.
Reggie Young:
I love it. That's awesome. If listeners want to check out StrategyBRIX, it's strategybrix.com, B-R-I-X dot com.
Jas, awesome conversation today. Thanks so much for coming on the podcast.
Jas Randhawa:
Thanks, Reggie. It's always a pleasure.