Fintech Layer Cake
Welcome to Fintech Layer Cake, a podcast where we slice big financial technology topics into bite-sized pieces for everybody to easily digest. Our goal is to make fintech a piece of cake for everyone. Fintech Layer Cake is powered by Lithic, the fastest and most flexible way to launch a card program.
Reducing Oversight Costs for Fintechs and Banks with Narrative CEO Gokul Dhingra
In this episode of Fintech Layer Cake, Reggie Young sits down with Gokul Dhingra, Co-Founder and CEO of Narrative, to unpack how AI is reshaping compliance, oversight, and growth in the fintech ecosystem. Narrative is tackling one of the industry's biggest pain points, the rising cost of compliance and risk management, by turning oversight into a source of efficiency and even competitive advantage.
Gokul shares how banks and fintechs can align around shared objectives, why complaints data is an overlooked goldmine for growth, and how Narrative uses AI to reduce operational thrash while keeping humans in the loop. The conversation digs into the misconceptions about AI “hallucinations,” the subtle ways bias shows up in both humans and algorithms, and why empathy is the ultimate differentiator in building trust with customers and regulators alike.
If you’ve wondered what it takes to scale innovation in a highly regulated environment without sacrificing trust—or how AI can be deployed responsibly in financial services—this episode offers a roadmap from someone building it in real time.
Reggie Young:
Welcome back to Fintech Layer Cake, where we uncover secret recipes and practical insights from fintech leaders and experts. I'm your host, Reggie Young, Chief of Staff at Lithic. On today's episode, I chat with Gokul Dhingra, the co-founder and CEO of Narrative.
You may recognize the name because Alex Johnson wrote about it recently in Fintech Takes. In case you didn't catch that edition, Narrative is leveraging AI to build a next-gen platform for compliance, risk, and fraud teams. They help reduce manual and operational costs for fintechs and banks by streamlining complaints management, collateral review, continuous oversight and compliance, QA and testing, and much more.
Gokul and the team building Narrative are sharp and getting a lot of traction, so I wanted to get him on. I think Narrative is the exact sort of company that's going to lower the very high barrier to entry for innovative fintechs, so I've been excited to see them build and grow. You can find out more about them at thenarrative.dev.
Fintech Layer Cake is powered by the card-issuing platform Lithic. We provide financial infrastructure that enables teams to build better payments products for consumers and businesses. Nothing in this podcast should be construed as legal or financial advice.
Gokul, welcome to Fintech Layer Cake. Excited for our conversation today. I know you and I have been catching up for a while on various fintech things, and I know you're very up-to-date on some of the cutting-edge stuff happening in fintech and with AI. A lot to dig into in this episode, so I'm excited for our conversation.
Maybe the best place to start is with Narrative. What is Narrative, why did you start it, and where'd the idea come from?
Gokul Dhingra:
Yeah, absolutely. Well, first of all, thank you, Reggie. I'm excited to be on. I'm a frequent listener, and I'm excited to have this conversation.
It's interesting. Jeff, my co-founder, and I actually have quite a bit of experience working at growing start-ups in regulated industries. Jeff, on the compliance side, in many cases established the workflows, processes, and practices that we've actually built into the product today. So that gave us a pretty deep appreciation for the demands, desires, and needs of start-ups, specifically fintechs. We lived with them ourselves. We experienced the unique dynamics of the fintech-partner bank relationship, or even more broadly, the non-bank/bank relationship. And truly, we just set out to make that relationship more productive and more conducive to both sides' objectives. As we dug in a bit more and started speaking to users at banks and fintechs and digested that information, it was interesting, we actually started to realize there was a lot more philosophical alignment than either side gave the other credit for. The objectives were, in many cases, shared. It wasn't as though they were diverging or opposing.
So what we're aiming to do, really, is build a platform that ensures those shared objectives are the focus of the relationship, and that the points of friction, the operational overhead, the back and forth, are abstracted away with technology. And in practical terms, if we're successful, and based on what we've seen from customers using our product, it's a viable path to scaling and growing rapidly while improving risk oversight at lower cost, without the operational thrash, the back and forth, the friction that's in some ways treated as an almost universal truth today. We've learned it doesn't have to be. The end product of that is banks feel more comfortable doubling down on innovation, on risk, on growing rapidly, and doing so without compromising oversight. On the other side, fintechs can do that while operating at start-up speed and achieving the scale that they want to.
Reggie Young:
Yeah, I love that. This has been one of my hobby horses over the past two years. If you look at the headlines about banks exiting the fintech space, they pretty consistently cite compliance operating costs as the reason they're getting out. It's unfortunate because it means it's an expensive space to operate in. The barriers are very high and getting higher. If compliance is the main driver of cost, the way to lower the barrier to entry for the innovation we want to encourage is streamlining the tug of war that can happen between fintechs and banks.
Gokul Dhingra:
Yeah, absolutely.
Reggie Young:
So Gokul, maybe to give listeners a better, kind of concrete understanding of what the platform enables, what are some of the use cases and functions that you help banks and fintechs with?
Gokul Dhingra:
Yeah, absolutely. There are a couple, and they're growing by the month, but one of the core ones is complaints oversight, reporting, and management for banks. If you think about some banks, their partners have thousands of complaints a month. Some of those complaints are, in fact, just inquiries. Some are true complaints. Some are indicative of fraud, some indicative of UDAP or Reg E or Reg Z issues, and that's substantial risk for the bank. So overseeing, monitoring, and ensuring that appropriate action was taken on the part of the fintech is something one can certainly do, and probably do without technology, if you have 100 complaints a month. But if you're getting into the thousands because you're running a scaled program, generally they're doing random samples, and those random samples aren't always random and may, in fact, be missing something very important.
And so we help banks, particularly at scale, manage, oversee, and report on complaints activity of partners at any scale, hundreds of thousands, millions of complaints a month, a year, et cetera. And more than just checking the box and ensuring that they are, in fact, handled in the right way, we, again, orient this around growth. At a minimum, we need to be doing that, ensuring that the appropriate action was taken by the partners and there's not undue risk.
But we also view complaints as fundamentally a customer speaking to an entity, letting them know something did or did not happen. And in that is a lot of opportunity for growth and proactive risk mitigation as well. We'll notice that some partners might have UDAP issues that actually originate upper-funnel, not necessarily always, and the end result is a complaint. That speaks nicely to our other products, including asset review, which is that proactive approach to overseeing collateral or auditing call center transcripts and conversations, videos, policies, procedures, and adverse action notices. And by having a multi-product platform where the products speak to each other, you uncover interesting insights, and then our product can actually act on them. So if there is a UDAP violation, we can generally feed rules that appropriately weight that when reviewing a partner's assets.
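(To make the triage idea concrete, here is a minimal sketch of classifying the full population of complaints rather than a random sample; the label set and the `classify` function are hypothetical stand-ins, not Narrative's actual pipeline.)

```python
from collections import Counter
from typing import Callable

# Hypothetical label set; real complaint taxonomies vary by program.
CATEGORIES = {"inquiry", "service_complaint", "fraud", "udap", "reg_e", "reg_z"}
HIGH_RISK = {"fraud", "udap", "reg_e", "reg_z"}

def triage(complaints: list[str], classify: Callable[[str], str]) -> dict:
    """Classify every complaint (no sampling) and roll up by category.

    `classify` maps complaint text to one label in CATEGORIES; it could be
    a model call, a rules engine, or both.
    """
    counts = Counter(classify(text) for text in complaints)
    # High-risk categories get escalated for oversight review; recurring
    # findings can feed back as weighted rules in asset review.
    escalations = {label: n for label, n in counts.items() if label in HIGH_RISK}
    return {"counts": dict(counts), "escalations": escalations}
```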
Again, there's certainly an element of control, as in a control function, but it is very much in pursuit of growth. I don't know many companies who love having complaints or dealing with complaints. It is, in fact, an impediment to growth. I don't know any banks that love complaints. It's not something they're excited about.
And going back to the shared alignment, you can actually take an oversight process and turn it into a growth-oriented process and streamline it, because now the fintech has the insights, they have the knowledge, and they have our models working on behalf of the banks and themselves. It takes this activity that sometimes feels like "we need to do this" and turns it into "oh, we can learn a lot from this. We can actually get a lot better from this."
And then the final product that we're seeing a lot of traction with is what we call continuous oversight or continuous monitoring. When we think about some of the larger fintechs we work with and banks work with, they have, in many cases, compliance and oversight teams that rival the size of the bank they work with. That doesn't mean the bank doesn't need to be involved. Quite the opposite, they still do. But it does mean there are probably even lower-friction ways to operate that relationship on a go-forward basis, if the fintech does, in fact, earn the trust of their bank partner. Maybe you have some kind of threshold where certain activities move from pre-launch review to post-launch review. The continuous monitoring product enables fintechs and banks who are open to it not just to move to some post-launch reviews, but to do so in a more comprehensive way. So you're reducing the friction of having to upload assets and get them approved pre-campaign while never giving up the oversight or mitigation piece of moving to a model where some of that activity is post-launch.
And really what we're doing there, very tactically, is absorbing every touch point with a customer across the web, truly every touch point, and ensuring that what is being said, displayed, and presented to the customer is, in fact, compliant and appropriate, and moreover, that if something is not being said in the right way, it's dealt with pretty immediately through issue management and tracking.
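(As a rough illustration of such a post-launch monitoring loop, the sketch below re-scans a touchpoint, skips unchanged content, and routes findings into issue tracking; `fetch`, `check_compliance`, and `open_issue` are hypothetical stand-ins, not Narrative's API.)

```python
import hashlib
from typing import Callable

def scan_touchpoint(
    url: str,
    fetch: Callable[[str], str],                   # returns page copy, ad text, etc.
    check_compliance: Callable[[str], list[str]],  # model/rules review of the content
    open_issue: Callable[[str, str], None],        # issue-management hook
    seen: dict[str, str],                          # url -> digest of last-reviewed content
) -> None:
    """Re-scan one customer touchpoint post-launch."""
    content = fetch(url)
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    if seen.get(url) == digest:
        return  # unchanged since the last scan; nothing to re-review
    seen[url] = digest
    for finding in check_compliance(content):
        open_issue(url, finding)  # tracked until remediated
```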
Reggie Young:
Yeah, I love that. The complaints-as-product-research point is a really good one, too. I know the team we have on the Privacy.com product keeps a very close eye on complaints. To your point, nobody wants to deal with them. They suck up a lot of time. But you can also just learn a lot. In a perfect world, you're not getting any complaints because you've seen around all the corners, but they're good intel for product building and growth.
Gokul Dhingra:
Yeah, exactly. We've seen banks who are really focused on the partner experience, but obviously cannot and do not want to compromise oversight, be able to streamline processes and then add value to those partners as well.
Reggie Young:
What's the typical customer profile and use case that you all are seeing?
Gokul Dhingra:
Absolutely. The typical customer we work with, and I'll speak about it in terms of characteristics and almost personality attributes because I think those are more telling than some of the other pieces, is typically growth-oriented. That certainly means a growth mindset, but in this case, more specifically, they're looking to scale and grow, whether that's existing programs or new programs and deeper market penetration in fintech. How do we enable ourselves to scale rapidly to new heights while also ensuring profitability and not compromising on oversight?
And so folks who have larger programs and a larger presence often see tremendous value initially, but that's not to say we're not interested in and excited about partnering with anyone and everyone in the space. Today, we've really focused on some of the larger banks and fintechs and supporting them. As we've started to productionize a lot of the components of each of these workstreams and workflows, it has actually allowed us to scale beyond them in a really efficient and robust fashion.
Reggie Young:
Yeah, I love it. How does AI play into what you've built? I know it's a key aspect of what's enabling you to reduce the friction points between the banks and fintechs. It's kind of interesting. You were just talking about how you're effectively selling to both sides, because both sides have an interest in this. Fintechs are interested because it can reduce their own headaches, and banks are interested because it can increase the quality of their oversight, all at reduced cost, like we were chatting about. I think AI plays a big part in that, so I'd be curious to hear how you're leveraging AI and how it plays into the platform.
Gokul Dhingra:
Absolutely. I would say our ambition is really to be the most empathetic, thoughtful partner to the industry. What that means from a practical standpoint is we truly believe in leveraging technology as an enabler of individuals rather than a replacement for them. AI is certainly core to the platform. It enables us to provide value to our customers in ways that we couldn't without it. But by no means is it a substitute for people, nor, for that matter, the only thing that we aim to do.
I would say AI helps power processes like data extraction at scale, automating pieces of operational overhead like sending out messages, and retrieving information, processing it, analyzing it, and uncovering patterns that humans candidly don't have the time to find. And this isn't specific to our customers, but in some cases, we're just not wired to remember everything we've ever done in our entire lives. AI has the unique ability, in many cases, to do that and then extract patterns and surface insights that are new. And so for those purposes, we leverage AI extensively.
But there is a significant part of the product that is not AI-focused, and that is the human piece of it and, candidly, adoption. I think that also goes to one of the things that's not discussed as much as it could be, which is how do your products make people feel, and how do people actually interact with them? For that, AI is not what we focus on. That's, I think, a highly iterative and collaborative process with a partner. No individual is exactly like another. From a design standpoint, from a user experience standpoint, AI doesn't inform all of that work. We try to layer on those natural patterns, those ways of working, underpinned by AI, to, again, take away a lot of the operational thrash but ensure that the judgment, the nuance, and the comfort levels of our customers are always accounted for.
Reggie Young:
Yeah, I love it. I love the frequent theme of empathy for customers in that answer. People talk a lot about customer centricity, but I think empathy is the level-two version of that: do you actually understand? It's one thing to be customer centric, but to actually have empathy is the next level of that.
I'd be curious to get into a little bit of the nitty-gritty of building with AI, because I know you and I have had some conversations about what it looks like tactically. There may be engineers who are used to using Claude Code and experiencing AI like that, or they may be using ChatGPT for basic prompts, mocking up designs, whatever, but I think you've been in the trenches. How do you actually deploy these models in a very specific, regulated situation?
So I'd be curious to hear, what are some of the thorny, or not necessarily thorny, just unexpected obstacles on the tactical side? Because I think about this as, let me go to my Claude prompt and look something up, which is very different from, I think, the AI building that you all are doing. So I'd be curious for that sort of tactical lens.
Gokul Dhingra:
Yeah, it's a great question, and I think it also highlights one of the misconceptions sometimes, which is that you can just plug things into ChatGPT and get outputs for this space, when, in fact, so much of the work in deploying this at institutions and companies where trust and reliability are so paramount is really nitty-gritty things that one wouldn't think of. An example being the point of someone uploading a document. That document, or what we call an asset, could be a video, an image, a PDF, a different form of document, a spreadsheet, whatever it might be, and we can generally support it.
But then, when you think about what a PDF is: a PDF could be text. It could be a handwritten document. It could be a PDF of an image. It could be a PDF of an image of an image. Thinking through all those edge cases, in many cases experiencing them, and having the fallbacks to ensure that, first of all, we can use whatever asset is actually uploaded and extract the information we need is one of those things, I wouldn't even call them edge cases, that you don't think about when you think about adopting and leveraging AI in enterprises. The context and the quality of the inputs really informs the outputs. A lot of the conversation seems to be about the outputs and is very model-centric, but so much of the value truly is ensuring that the context and framing of the information coming in is clean, usable, and accurate. And so we spend a lot of time thinking about that.
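(To make that concrete, here is a minimal sketch of a text-first extraction path with an OCR fallback, assuming the pypdf, pdf2image, and pytesseract libraries; pdf2image also needs poppler installed. It's an illustration of the fallback idea, not Narrative's actual pipeline.)

```python
from pypdf import PdfReader               # embedded text-layer extraction
from pdf2image import convert_from_path   # rasterize pages (requires poppler)
import pytesseract                        # OCR engine binding

def extract_pdf_text(path: str, min_chars: int = 50) -> str:
    """Try the PDF's embedded text layer first; fall back to OCR for
    scanned PDFs, PDFs of images, or images of images."""
    reader = PdfReader(path)
    text = "\n".join((page.extract_text() or "") for page in reader.pages)
    if len(text.strip()) >= min_chars:
        return text  # native text layer was usable
    # Little or no embedded text: rasterize each page and OCR it instead.
    pages = convert_from_path(path)
    return "\n".join(pytesseract.image_to_string(img) for img in pages)
```

The same pattern, detect what the asset actually is and degrade gracefully, applies to videos, spreadsheets, and images before any model ever sees the content.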
It also speaks to another point, which is building evals, building guardrails, and knowing when to escalate things to humans, because we take a human-in-the-loop mindset towards every workflow. Some things need to have a human interject or intercept earlier, and some things don't. At the very least, a human is approving, auditing, or overseeing the decision. But then we also have to handle this at scale. If we want to deploy a new model or test a new model or workflow, how does that impact the outputs? You can't just deploy it at a bank and hope to see what works and what doesn't. You really do need to be thoughtful about doing it in a controlled fashion, with the right analytics, monitoring, and evaluations to ensure, again, that you don't lose the trust and confidence of your partner.
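(A minimal sketch of that kind of human-in-the-loop gate, assuming the model reports a confidence score and citations for each finding; the `Finding` shape, labels, and threshold here are hypothetical, not Narrative's implementation.)

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    label: str                 # e.g. "udap_risk" (hypothetical label)
    confidence: float          # model-reported confidence in [0, 1]
    citations: list[str] = field(default_factory=list)  # supporting source spans

def route(finding: Finding, threshold: float = 0.85) -> str:
    """Decide whether a human intercepts early or reviews downstream."""
    if not finding.citations:
        return "human_review"  # never surface an unsourced finding
    if finding.confidence < threshold:
        return "human_review"  # low confidence -> earlier human interception
    return "auto_queue"        # still approved/audited by a human downstream
```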
Reggie Young:
Yeah. This dovetails nicely into the next set of questions I wanted to ask, which is about trust in AI. I think you rightly called out the data-in problem, where AI is the new version of "is your data clean and in a well-sorted data warehouse?" The new problem is, okay, can your AI sift through the data you're feeding it effectively and accurately?
But overall, through that trust lens, two groups of questions that get at the same thing: first, what does it take to be able to trust what AI spits out? And then, somewhat related, because you mentioned banks, how do banks and other larger players view Narrative when you chat with them? What does it take to get them comfortable with the AI aspects of the platform?
Gokul Dhingra:
Yeah, it's a great question. I would say one area, going back to the empathy and customer centricity, that I do think feeds into this trust, and is genuine to how the company operates, is a high degree of inquisitiveness and desire to learn from our customers, which builds trust. A really practical example: if our platform is overseeing an ad campaign, particularly now, where you have states with potentially more restrictive or cumbersome oversight and regulation than at the federal level, and you're not asking your partner what states this is running in, that's probably a bad sign, or a sign that your AI might not be fully built out or robust enough to accommodate the nuance. It's practically very important. Some of this, again, is very much oriented around customer centricity and knowing what questions to ask. And the fact that we ask those questions, want to learn, know what to ask, and then build around the answers gives folks trust that we know what we're doing.
That very much starts with my co-founder, Jeff, who has deep domain expertise, but it also extends to the team, even at the engineering level: a deep desire to get to the core and ensure that everything is taken into account. It also shows up when you're building out workflows. We're currently working with a bank whose partners have largely commercial end customers and dollar amounts that are significantly larger than others'. If you're building an escalation path and you're not asking what the norm is from one to the other, then you're probably not thinking about everything robustly. And so the fact that we are highly inquisitive and do think about the customer in those edge cases, I think, rightfully gives banks comfort that we're thinking about every dimension.
And then there are, I wouldn't say basics, but incredibly important pieces of the platform, like keeping a human in the loop, actually generating and reviewing our outputs prior to going live, and having people on teams look at the outputs, evaluate them, and iterate. They can see the impact of the changes we make and give further feedback. We have citations and sourcing for everything. We show the reasoning and chain of thought. So you can oversee the work, the work is cited, your feedback as a customer is directly acted on, and you can see the impact of that and how the product gets better. And then we truly do care about the details, which I think builds trust as well.
Reggie Young:
Yeah, I love it. You're talking about building trust with AI. I'd be curious, what are some of the common misconceptions that you encounter in these conversations? Are there one or two typical questions you get asked where you're like, oh, no, that's not how it works at all, you don't have to worry about that?
Gokul Dhingra:
Yeah, absolutely. I would say this often falls into a couple of buckets. One being, "I've used ChatGPT, it hallucinates, how do I trust the AI?" Or maybe an even stronger reaction would be, "It's not appropriate or able to function in our space." I don't want to discount those individuals' feelings, because they're understandable. But the number of ways you can influence an AI's output is quite significant. The guardrails and practices you can build to ensure that outputs are accurate and usable are growing by the day and already quite numerous. The oversight and monitoring of the outputs is, again, something we strongly invest in and care about. We wouldn't go live with an answer that is hallucinated. We build humans into the loop at every step of the way. We show our work. We cite our work.
And so I think one of the misconceptions, going back to the question, is that a customer's experience with AI today leads them to believe it isn't appropriate or able to be used in enterprises, when, in fact, I would completely agree that ChatGPT off the shelf is probably not the best solution. Using multiple models, fine-tuning, and setting guardrails, context, and content in a way that makes them useful is, like I said, something you certainly have to focus on as a company. It's something we do, and it allows us to combat some of those misconceptions.
Reggie Young:
Yeah. The point I like to make is you can analogize that hallucination risk to the fears over self-driving cars. You're worrying about a thing that is a mostly controllable risk, and your baseline is not perfection, your baseline is human. It's like human compliance: people at banks can also hallucinate. We don't call them hallucinations, we call them mistakes and errors. But your baseline isn't perfection. It's humans, who have a non-zero rate of hallucination themselves.
Gokul Dhingra:
Yeah. It's actually one of the value propositions, I think, that we learned over time but probably underestimated initially: the standardized application of best practices across banks. In retrospect, it's obviously very intuitive. We're all subject to different environmental factors. I don't know whether we're hungry because it's before lunch, or something else happened in our lives that makes us a bit less tolerant of things. It's going to have an impact on work, even if one tries to control for it. And the nice thing about technology is it allows that bias or that impact to be significantly reduced. And then you can apply those best practices at scale without the human piece of it, in the negative sense, introducing itself.
Reggie Young:
The bias comment is really interesting. There's a lot of fear over AI having bias, but think about how long it takes a human to undo bias, defined as loosely as you want. Some people never get over it. It takes years for people, whereas with AI, it's just, okay, get better data to train on, and it's fixable, and you can update it very fast, which I find to be a fascinating consideration.
Gokul Dhingra:
Yeah, absolutely. It speaks to, again, incorporating the nuance and judgment of people at their best, and then allowing you to reduce the noise or variance when someone is acting maybe not at their best, which is, like I said, a very natural human thing. You don't want a judge ruling on your case right before lunch, or something of that sort. The same can be applied elsewhere.
Reggie Young:
Yeah. If listeners aren't familiar with the lunch reference, this is a well-studied, documented phenomenon: judges hand down harsher rulings before lunch than after. So yeah, there's a lot of human hallucination that can make its way into our decisions.
I wanted to chat about why what you're building matters in the fintech space, but I actually think we covered a lot of it. So I don't know if there's anything you want to add there.
Gokul Dhingra:
Yeah, absolutely. The one thing I would add is we truly are believers in innovation and growth, and we've obviously felt very personally the challenges, challenges might be too strong a word, but what fintechs go through. We also recognize that for banks to feel comfortable investing in that, it's paramount that they have sufficient oversight and risk management. And if those two needs can be met, realizing those shared objectives, innovation, growth, and ultimately the impact on the end consumer, can be profound and exciting. I think we're really driven and excited about a future where financial products are meeting the needs of more and more people in the best way possible.
Reggie Young:
Yeah, I love it. For my typical wrap-up question, what's something you've been thinking about a lot lately but you think people in fintech aren't talking about enough?
Gokul Dhingra:
I truly think it's the human interaction piece, the emotional piece, of technology. Someone can build world-class software, and actually, maybe I wouldn't even call it world-class if it's not adopted and used. It might be best in class from an architectural standpoint, a feature standpoint, but if it's not used because the user doesn't understand it, or because the user is threatened by what it might mean for their job and their future, then it's just not going to be adopted. I wouldn't call it world-class.
And so I think really speaking with our customers, the end users of the product, and understanding their needs, their objectives, the problems they want to solve, and the areas of their workflows they want to preserve, is time-intensive but incredibly important to getting adoption, growth, and usage of a product.
I think many times the industry tends to look at really regimented processes, ones you could probably Google, like how to conduct an XYZ review. In some cases, that is very documented and step-by-step. But nowhere on Reddit does it tell you how your compliance officer with 23 years of experience views this very nuanced workflow. That is knowledge that lives in their mind, knowledge that is incredibly important to both maintain and build around. And that can't be done, I think, without the human piece of it, without inviting those people to those conversations, without them feeling comfortable sharing, and without them knowing that sharing is not going to mean they lose their job, but instead that they can amplify their impact.
So I'd say that piece, how you use technology to actually capture nuanced judgment and scale it, is often missed. So often we look at very defined workflows where, again, you could go to an examiner's guidebook, extract rules, and say, this is exactly what I need to do. Those are the ones that are often talked about. But the ones that I would say are more valuable over time, in our opinion, are the ones that are often not spoken about, maybe in part because the work is as much emotional as technological, not just the technology.
Reggie Young:
Yeah. Cool. I love it. Awesome. Gokul, if listeners want to go find out more about Narrative, where should they go?
Gokul Dhingra:
They should go to thenarrative.dev. They can reach out via LinkedIn as well, and obviously, you can email me at gokul@thenarrative.dev.
Reggie Young:
Awesome. Thanks so much for coming on the podcast. It's been a great conversation.
Gokul Dhingra:
Likewise. Thank you so much.