
Fintech Layer Cake
Welcome to Fintech Layer Cake. A podcast where we slice big Financial Technology topics into bite-sized pieces for everybody to easily digest. Our goal is to make fintech a piece of cake for everyone. Fintech Layer Cake is powered by Lithic — the fastest and most flexible way to launch a card program.
Bank Data Crash Course with Lee Easton of iDENTIFY
In this episode of Fintech Layer Cake, host Reggie Young sits down with Lee Easton, founder and president of iDENTIFY, to unpack one of fintech’s least understood layers: bank transaction data. Lee breaks down why raw bank data is so unreliable out of the box, and what it actually takes to clean, enrich, and classify it so that it's usable for real-time fraud detection, underwriting, and customer engagement.
From manual tagging nightmares to building classification models across 1.2 billion transactions, this conversation is a practical deep-dive into what “decision-ready” data really means.
Reggie Young:
Welcome back to Fintech Layer Cake, where we uncover secret recipes and practical insights from fintech leaders and experts. I'm your host, Reggie Young, Chief of Staff at Lithic, and I'm fortunate to have my colleague, Alli Nilsen, the Head of Bank Partnerships, as my co-host on this episode. On today's episode, Alli and I chat with Lee Easton, the President of iDENTIFY.
We cover it more in the episode, but iDENTIFY helps banks wrangle and unify their data in the cloud. Lee gives a great primer on why data hygiene matters for banks, including fintech sponsor banks. He covers the problems they face, what solutions look like, and trends iDENTIFY has seen in the market.
Fintech Layer Cake is powered by the card-issuing platform Lithic. We provide financial infrastructure that enables teams to build better payments products for consumers and businesses. Nothing in this podcast should be construed as legal or financial advice.
Welcome to the podcast. Really excited for today's episode. It's long overdue. I know you have a wealth of insights to share. There's a lot we're going to dig into today, and some of it covers pretty gnarly topics that folks may not have a mental model or framework for. I have a feeling we're going to spend a lot of this episode demystifying and picking apart how data flows at banks, plus many other fun rabbit holes.
Maybe a good place to start is with exactly what iDENTIFY does. You previously made the comment that the long-term vision is no more files. What does no more files mean? How does that tie to what you're trying to do with iDENTIFY?
Lee Easton:
Great question. iDENTIFY was born out of a consultancy that I had started years ago. We had a few clients that were banks, and we were helping these clients audit data at the time. We didn't realize that these clients were partaking in what is known as embedded finance or Banking-as-a-Service. It was just new to us. All of it was. It was a lot of file movement. So we would take files in. We'd skim through the files. We would look for discrepancies in these files, and then we would visualize this through a dashboard to make it easier for the leadership team to consume.
That's where the name came from, too: we were trying to identify discrepancies. We were trying to identify gaps, or just bad data, bad actors. I'm not saying this was a fraud tool or a compliance tool. We felt like it was just a data quality thing that we were working on. That led to an understanding of something all community banks still struggle with: a lot of disparate systems. A lot of those systems depend on batch processes. That's a lot of SFTP, a lot of just classic file movement that really moves money around the world. So we're on a mission to try and make banks cloud-enabled and get them in a spot where they can innovate and connect and partner and work with great companies like Lithic and hopefully, eventually, cut those files out.
Reggie Young:
To zoom out a bit, why does that data hygiene, I guess for lack of a better phrase, why does that matter? What does that unlock for banks, for fintechs, for end consumers, and businesses?
Lee Easton:
Data hygiene is a great term, actually. First and foremost, I think if we're going to hit it in the compliance and fraud space, that's an important part of this. Everybody's heard the term garbage in, garbage out. You wouldn't want to build a lot of trust in data that is dirty or unclean or hasn't been cataloged, because then you're feeding a transaction monitoring or fraud detection system that might not have accurate information. For us, I think that's a primary driver that opens a door to working with a bank: maybe they're struggling to get data into one system from another system.
We typically have to say, hold on, yes, we agree with the problem statement, but have you actually looked at the data first? Have you looked at what you're going to put in that system before you put it in there? Oftentimes the answer is no. And they know there's some margin of error, some percentage of that data that's not usable, and we really try to identify that and help them out.
Alli Nilsen:
Nice. Just before we dive in a little bit deeper, obviously, there's a range of different listeners. Some are technical, some are operational, some are other. I suspect we're going to use the words data lake, system, cloud. Other than my brain immediately going to elementary school science and the flow of water between all those different things, could you help listeners understand what a data lake is, and to add to the messy vernacular, how the different systems engage with it, just so we can have that basis for the dialogue?
Lee Easton:
For the non-tech folks, I think everybody has heard the term data lake, but nobody really has a great, accurate definition of it. Data warehouse, I think, is a better term for just somewhere to store something. I like to take the Amazon warehouse model. That's my favorite analogy.
I think data lakes are great because then you can say it's like a data swamp or there's a data retention pond. There are definite analogies you can build on the data lake thing. But the thing I think that a lot of people relate to is look at how you get packages from Amazon. Look at the whole process of getting something from Amazon and what they've built around the world of distribution. That is very similar to how we see data work in not just banking, but every industry. I think oil and gas is the same. Healthcare is the same.
What we try to provide is the robotics within the warehouse of the Amazon distribution system. A lot of people have heard that Amazon has these massive distribution centers. You order a package, and there's a good chance that package is already in that distribution center close to you or local to you. That's how they get this fast shipping. But they now have these robots inside these warehouses that have everything barcoded and scanned and organized on a specific shelf. That's essentially what we're doing, is we're taking what would be just a ton of boxes that could have just been thrown in a warehouse. We're taking those boxes and putting them on the right shelf in the right place so that when somebody clicks or needs that package, it's right where it should be, and it's organized in a way that they can access it pretty quickly.
I guess the data lake analogy is similar. You can have this massive piece of water with a ton of fish, but you have to build out that infrastructure to go fishing. Whether it's a dock, a boat, all the equipment to go fishing. Certain parts of the lake might have better access to a certain type of fish. But fish are organic; you can't organize them the same way you organize packages in a warehouse. That's the analogy I stick with.
Alli Nilsen:
I love that. I'm slightly scared to know what a data swamp would be. Maybe that's the furthest away from hygiene on this analogy. That seems concerning. But no, that makes more sense between the warehouse and the lake, because ultimately, if you can't readily get to what you want... My family goes fishing. I understand you go to certain bodies of water to find what you need, and it's a different process for each. But going to a warehouse seems like a lot more efficient a process. Maybe not as exciting if you're an actual fisherman, but ultimately, for data, you don't want exciting. You want accessible.
Lee Easton:
Yeah. The swamp analogy is a good one. That was actually coined by a couple of our clients early in the engagement. They were like, we have a data swamp. And we were like, okay, tell us what you mean. They're like, we've just thrown everything in there and we know it's not clean. Okay, that makes a lot of sense. Their goal was to get to a glacier lake. They wanted crystal clear, something you would jump in and swim in and interact with. And so I thought the swamp was a good analogy. It's like, let's get you to a cleaned-up lake.
Reggie Young:
I love it. When you're looking at, say, a data swamp, what are some of the problems- to make it concrete, when you have to standardize things, what do you go through and look for as the problems to clear out, the typical top two or three issues you have to deal with on a project?
Lee Easton:
I'm going to get a bit technical, hopefully not too technical. A lot of the legacy community banks that have attempted to make amends with their data have used legacy tools to do so. Commonly, we see Cognos BI as the primary reporting tool that any Jack Henry, Fiserv, or FIS bank had in the 1990s and 2000s. Some banks with even larger data sets used tools like SSIS or SSRS to organize data alongside Cognos BI.
I'd say that the swampiest of the swamps we get into is when a bank has built out maybe hundreds, if not thousands, of those. We came across one, I think the most we've seen, with 2,000 SSIS jobs running into Cognos BI. Just the technical debt and the cost alone. Because the second their on-prem infrastructure goes away and they move to cloud, a lot of those jobs are going to have to somehow transition into a newer, modern tool or product, to the point that it's almost so daunting, so costly, that they'd rather keep swimming in the swamp.
Alli Nilsen:
And because they have so many of their operations leaning on all those different swamp creatures, it's going to impact multiple departments in scary ways. I'm just imagining swamp creatures crawling out at this point in order to deal with it. So it's not just the swamp, it's the creatures that engage with it, too.
Lee Easton:
It is, because if we're not good about what we're putting into the pond or the lake, it can create a swamp. And that's part of it, too, the technical debt thing. People are pulling data out and using their own Excel workbook and manipulating things within their side of the organization. And oftentimes, it's all manual. And then when somebody else comes in later and takes over that job, they don't know what that person was thinking when they built it, so they might start their own. They'll go back to the data lake to get data and pull it into their own report. We've seen cases where this has happened over the course of decades, where there are thousands of reports, and nobody knows who the original creator of a report is, nor do we know the accuracy of that report. It's almost easier just to say, let's start from scratch. Let's go back and have a clean slate here.
Alli Nilsen:
I'm just sitting here thinking through all the potential scary situations you've seen, which you probably can't share, with banks trying to keep their data clean, or not having done anything and realizing that at the end of the day, it leads back to people's money, whether consumer or commercial or other. Ultimately, money is going to come back to numbers and data and tracking and ledgering and all of those different needs. So the fact that it's murky is a little bit scary. If you could share stories, great. Maybe people need to hunt you down at a fintech conference, but you probably can't is my guess.
Lee Easton:
Well, I definitely don't want to throw anybody under the bus.
Alli Nilsen:
I'm not looking for names by any means, because I also want to respect all of our partners out there trying to do a better job.
Lee Easton:
What I will say is I think all the banks have good intentionality. They all have great use cases and ideas for the data. It's tough because every single one of them is likely banking with a large core provider that has just had slow innovation over time. That alone has created this technical debt.
To be fair, I think today we're in a much better place than we were, let's say, 5 or 10 years ago. Power BI, Tableau, some of these newer reporting tools were not commonly trusted or used within community banks even five years ago. So why would I use this old legacy system if I know that I can build a better report in Power BI? It's kind of a result of, we don't have anything else, so we're just going to do the best we can with what we have. And that really has led to it. The intentions are great. It's just the challenges of legacy tech that led to these results.
Reggie Young:
Past few years, banks have been dealing with increasing regulatory scrutiny. Obviously, we see this with some of the fintech sponsor banks, but I think banking more generally, some of the exams have generally tightened up a bit. What's it been like working with banks during that process of increased scrutiny when you're working with these banks and their data? What are they doing to make regulators happy with how they handle data nowadays?
Lee Easton:
There's finally some clarity around what the regulators really want and what they're looking for. I think the last few years, there's a lot of just, I don't know, I'd say blind requests and just show me how you did that. But now I think that we're seeing more clarity.
We published a few blogs that we hope are helpful to banks. We've had the pleasure of working with a number of them during some of their exams and audits, specifically on the data warehouse or the data lake. Oftentimes it points to: you need to show that there's a clear plan around managing this, whether you've hired internal data analysts, database engineers, or external resources to fill that function. So there needs to be that part of this: how are you supporting it? How are you managing it? With a system like that, you need to have some sort of health monitoring capability. What's the uptime or the runtime of the system? These are, I'd say, very easy things to show. You just maybe need to be aware that you need to show them, or be ready to show them.
I think the one that's more common on the fintech side of things, for sponsor banks specifically, is that they do have the system and that they are doing a validation of everything going into the system. We're seeing that a lot more commonly, especially working with Gen 2, Gen 3 payment processors. There are now new files coming into the bank that maybe weren't there in the last five years. And so, how are we validating that we're actually receiving the right information within those files? Can the system show us that? That's part of what we're starting to see more with exams as well.
Reggie Young:
You hinted at five years ago what the regulators or banks would accept. It's changed a lot in the past five years, which is definitely my experience. I'm curious about the cloud-specific aspect of that. I know a decade ago, financial infra built on the cloud was like, oh, that's not really allowed, but it's here, it's going to come. We don't know when it's going to come. It feels like that's much more accepted. But I'm curious, when you're working with those banks that you know are going through some of that regulatory scrutiny, are they poking much at a cloud-based solution? Are there things you have to prove to get them comfortable with the cloud solution?
Lee Easton:
Yeah. This is shocking, but I don't think there was really much trust in the industry until a year ago, maybe two years ago. We started working with our first bank in 2019, and they were a very innovative, progressive-type bank. And when I started talking to others that would be potential prospects, I realized, oh no, they're not all like that. They're not trusting the cloud in any way, shape, or form. They want their data in some sort of data center, either local at their branch or in some geographical region in their city. That was as far as I had really seen it go. There have been multiple businesses that have grown similar to mine that were very successful in the 2000s, helping banks get a data center built, which is essentially the same as a cloud. If you really think about it, it's not in your office, it's somewhere else. But seeing it physically, I guess, was a big thing.
What's wild is I went to Acquire or Be Acquired in January of 2024, so a year ago. The banks and the executives I was talking to then were just now starting to poke at the idea of using Microsoft's cloud products, or Google's cloud products, or AWS. I think a lot of it was because Jack Henry and Fiserv were saying, we're moving to the cloud. And so they're listening, they're paying attention, and then, who's going to be the one to take the first big step?
Reggie Young:
Interesting. Love it. You've said to me before that if you're talking to a banker, the speed of revenue for them is just as important as the speed of recon. I would love to dig into that. Tell me about that a bit. What does that mean to you? Why does that speed matter so much, the speed of recon?
Alli Nilsen:
Can I add some color to this, just for some listeners? Yes, I know I'm interjecting a bit. One of the reasons why I think this topic, and what Lee's team and banks have to do, is so interesting is this: obviously, Lithic is a backend processor, and I come from another backend processor. Processors aren't perfectly apples-to-apples. None of the players in the space have the same naming conventions for everything, even if, when you swipe your card, the end result is hopefully being approved.
And so those banks, when they work with multiple tech platforms, figuring out how everything connects at the end of the day so they can surface an overarching picture that's uniform is highly complicated. If I have to get on the phone with the bank and talk about mapping, I'm not just talking translation. I'm talking infrastructure differences, too. So that's where speed to recon is a lot more complex than we might be alluding to at the beginning of this conversation.
So, to go back to Reggie's question, please answer it. Now that I've rambled, I'll just say: speed of revenue is just as important as speed of recon. But now we have some premise of how much recon and how many elements are going into this situation.
Lee Easton:
Yeah, great context. I'll try to make this response less technical. I think the reason I like working with banks so much, the ones we work with at least, community banks, is that at the end of the day, they're small businesses, and I run a small business. And so there's this entrepreneurial spirit and camaraderie that I have with the CEOs and the CFOs and the CIOs at banks. In some weird way, we are doing similar things. So I understand they don't have a ton of profit off their revenue. Banks run on very thin margins. So for anything they invest in, they've got to get that dollar back quickly. That's how I feel. What is this going to cost me? Okay, I need to be making money on this real quick.
And so anything like standing up infrastructure with us, or launching programs with fintechs, or even just shifting a card product, or launching a digital bank, moving to BANA, or whatever it might be, they're making pretty significant investments considering their revenue and their profit as a business. So they need to see that return pretty quickly. Otherwise, they have to go capitalize and raise a little bit more money, and then they can draw that out. But oftentimes, they've got to hit something quick.
The recon side of things is like a finished state. It's like the final step to say we've launched something new. And data is a big part of that, but there's five steps along the way to get to successful recon. And a lot of it points to, to Alli's point, you have to create a model of data that tells you what's real and what's not real in this file that you're receiving. Some of that is new to the bank, and some of that is a lot of engineering. I'll just put it that way without getting too technical.
So chargebacks and recon. If you're launching a new card, or even launching, say, a new digital branch or digital bank, the goal is to grow the business. You're going to increase card count, increase interchange, or increase customer activity through your app, whatever it might be. At the end of the day, that activity is data, and that activity has to flow into your system, and you have to prove to examiners that that activity is real. If you can't prove that activity is real, that's bad. So speed to revenue is very important. But if you're ramping up all your efforts to grow revenue and you can't show that it's actually real, then that's pretty bad.
Alli Nilsen:
Yeah. We probably want to avoid that. I think most people like to view what's happening on their books of business as real at the end of the day. With that premise, let's just say we didn't have all these historic systems that we have to basically take a pool skimmer to and clean out all the different things. What would you recommend as the gold standard if a bank was setting something up from scratch? How would they go about it? Maybe any tooling or tips and tricks? What would be the perfect data cocktail for a bank?
Lee Easton:
A perfect data cocktail. I am a huge advocate of Snowflake and Databricks and these newer, modern tools. They're not necessarily cloud storage providers; we call them orchestration tools. They're tools to manipulate data in large volumes. Snowflake and Databricks, now even AWS and Azure, all have features for connecting your data. This is beyond an API. I think banks are still moving to the cloud. They know what APIs are, and they're chasing and pursuing them, asking, should we look into APIs? And it's like, well, by the time you get there, they're going to be outdated.
What I'm promoting is let's skip being a generation behind. Let's try to get with what's coming, and that's data sharing. The perfect recipe for payments business for a bank is to have a secure data share with all their partners and a secure data share, even if they're, let's say, leveraging Alkami or Q2 for digital banking, that's still customer account activity that needs to come into their system. And so that should be data share. All it really is, is we're both on Snowflake or we're both on X, so therefore we can share a server. So my server is your server, your server is my server. I query that data whenever I want as long as you put it there for me.
I've talked to a few organizations about this. To Alli's point, what I would have said maybe a year ago or so is a standard data model. I think a standard data model would be a pipe dream for every bank working on this: if Visa's network had a standard data file where every bank used the same exact naming convention. But I think we can move past that if we look at a data share, because at that point, the bank can standardize their model, make that their model, and then the people they work with just throw raw data into a data share, and it hits the standard data model the bank needs.
There are really two approaches there. I don't know if one's right or wrong. The standardized data model means everybody in the industry would have to work together, and it can be tough to make everybody agree on something. The second somebody says, no, I'm not going to do that, it kills the rest of the recipe. Whereas with a data share, we could at least have a standard data model that applies to the banks that want to use it. It doesn't have to apply to the people shipping data.
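To make the data-share approach concrete, here is a minimal sketch of how raw partner records might be normalized into a bank's own standard model. Everything here is hypothetical for illustration: the partner names, field mappings, and canonical schema are invented, not taken from iDENTIFY's actual implementation or any processor's real file layout.

```python
# Sketch: normalizing raw partner records into a bank's standard data model.
# Partner names, field names, and the canonical schema are all hypothetical.

# Each partner ships the same facts under different naming conventions.
PARTNER_FIELD_MAPS = {
    "processor_a": {"txn_id": "id", "amt_cents": "amount", "card_last4": "pan_last_four"},
    "processor_b": {"transactionId": "id", "amountMinor": "amount", "last4": "pan_last_four"},
}

# The bank's canonical model: every record must end up with these fields.
STANDARD_FIELDS = ["id", "amount", "pan_last_four"]

def to_standard_model(partner: str, raw: dict) -> dict:
    """Map one raw partner record onto the bank's canonical field names."""
    field_map = PARTNER_FIELD_MAPS[partner]
    record = {canonical: raw[src] for src, canonical in field_map.items()}
    missing = [f for f in STANDARD_FIELDS if f not in record]
    if missing:
        raise ValueError(f"{partner} record missing canonical fields: {missing}")
    return record

# Two partners, two naming conventions, one standard model out the other side.
rec_a = to_standard_model("processor_a", {"txn_id": "T1", "amt_cents": 1999, "card_last4": "4242"})
rec_b = to_standard_model("processor_b", {"transactionId": "T2", "amountMinor": 550, "last4": "0005"})
```

The design point matches what Lee describes: only the bank has to maintain the mapping, so partners can keep shipping raw data in their own formats without the whole industry agreeing on one file layout.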
Alli Nilsen:
I can barely get all my kids to agree on what we're eating for dinner, so I've just stopped asking. So I can see where probably the data share makes more sense. Are there any challenges with the data share model that banks should be aware of? Or are those things that you think are passing as people are doing adoption right now and on to the next thing?
Lee Easton:
I think the challenge is just that it's still a bit ahead of its time. So I'm optimistic. We know there are banks out there doing it. We know it's slowly being adopted. There are really no blind spots to it yet, beyond just understanding what it is and how it works. Technical knowledge, I think, is the key investment for banks to make.
Reggie Young:
What other trends have you been noticing in the bank data market or just the bank partners that you're working with generally?
Lee Easton:
That's a good question. On the traditional side of the bank, a lot of them are looking to understand Customer 360. The biggest theme, I would say, is data lineage. We're hearing data lineage more and more. What it really means is, if a customer updates a record over here in a system that hasn't been used in five years, will it actually trickle out and update all the records of that same customer? And which one is actually the source of truth?
Let's say I bank with a small community bank here in Tulsa, and I have three of their products. Let's say I have a checking account, a savings account, and their digital banking app. I just updated my information in the digital banking app. Is my information going to go update in their core? So when they have to maybe send me something via mail or via email as a notice, is it going to be the right lead or the right account? That seems to be a big one.
I'd describe that as a what-if problem, but I think the opportunity is knowing what customers you're not serving. Family members, for example. They should know I'm married. They should know that I share a house with my spouse, who's my wife, and that she's an opportunity for them. That kind of stuff, they just have very little insight into because the data is so siloed. The common trend banks want right now is Customer 360: better marketing and awareness of customer behavior.
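The update-propagation problem behind data lineage can be sketched in a few lines of code. This is a toy illustration, not how any real core or digital banking platform works: the system names, the customer record shape, and the single-source-of-truth rule are all assumptions made up for the example.

```python
# Toy sketch of data lineage / update propagation across bank systems.
# System names and record shapes are hypothetical illustrations.

# Three systems each hold their own copy of the same customer record.
systems = {
    "core":        {"cust_42": {"email": "old@example.com"}},
    "digital_app": {"cust_42": {"email": "old@example.com"}},
    "crm":         {"cust_42": {"email": "old@example.com"}},
}

SOURCE_OF_TRUTH = "core"  # assumption: the core is canonical

def update_customer(origin: str, customer_id: str, changes: dict) -> None:
    """Accept a change made in any one system and fan it out everywhere.

    Without this fan-out step, the change made in `origin` would strand
    the other systems with stale copies, which is the lineage problem.
    """
    # Write to the source of truth first, then propagate to every system
    # that holds a copy of this customer.
    systems[SOURCE_OF_TRUTH][customer_id].update(changes)
    for name, store in systems.items():
        if customer_id in store:
            store[customer_id].update(changes)

# A customer updates their email in the digital banking app...
update_customer("digital_app", "cust_42", {"email": "new@example.com"})
# ...and the core and CRM copies are updated too, so a mailed notice
# would go to the right address.
```

The question Lee raises, "which one is actually the source of truth?", is exactly the `SOURCE_OF_TRUTH` decision here: until a bank designates one, every system's copy is equally plausible and equally suspect.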
Alli Nilsen:
You've spoken a lot about how you guys have done consulting and helped banks with organizing data and creating those warehouses. You've cleaned up the swamp and the lake and the warehouse. Data covers everything across the lifespan of any business. What kind of products have you built, or helped the banks build? Obviously, I know some of these answers and I'm just prodding you a little further for the listeners, but I know your team can do ACH parsing, and you can compare processor to processor to help bank partners out. I would love to hear from you what, quote/unquote, products are the most popular within your team, or ones you're excited about that your team is working on to respond to the trends in the market.
Lee Easton:
Yeah, totally. Thank you for that. So ACH parsing. Among the fintech sponsor banks, ACH parsing is something we've gotten really good at. And so we've automated that, and it's part of our toolkit. We'll call it that.
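For listeners who haven't seen an ACH file, here is a rough illustration of what parsing one involves. This is a simplified sketch, not iDENTIFY's tooling: NACHA files are fixed-width, 94 characters per line, with the record type in the first character, so an entry-detail record can be sliced by position. The sample record below is fabricated.

```python
# Simplified parser for a NACHA entry-detail record (record type '6').
# Field positions follow the NACHA fixed-width layout. This sketch only
# handles entry-detail records and skips the validation a real parser
# needs: routing check digits, batch/file control totals, addenda, etc.

def parse_entry_detail(line: str) -> dict:
    assert len(line) == 94 and line[0] == "6", "not an entry-detail record"
    return {
        "transaction_code": line[1:3],        # e.g. 22 = credit to checking
        "receiving_dfi": line[3:11],          # first 8 digits of routing number
        "check_digit": line[11],
        "account_number": line[12:29].strip(),
        "amount_cents": int(line[29:39]),     # amount, in cents
        "individual_id": line[39:54].strip(),
        "individual_name": line[54:76].strip(),
        "addenda_indicator": line[78],
        "trace_number": line[79:94],
    }

# A fabricated sample record: a $125.50 credit to a checking account.
sample = ("6" + "22" + "07310084" + "6" + "12345678".ljust(17)
          + "0000012550" + "CUST001".ljust(15) + "DOE JOHN".ljust(22)
          + "  " + "0" + "073100840000001")
entry = parse_entry_detail(sample)
```

Even this toy version hints at why the real thing is hard: every field is positional, amounts are implied-decimal, and a single off-by-one slice silently corrupts every record downstream, which is exactly the kind of garbage-in problem the validation work below guards against.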
The other one that I think could be considered a standalone product is file validation. I mentioned that earlier, I think, Reggie: garbage in, garbage out. You get a system, and you're going to start bringing all these files into it. Before you push those files into any other system for use, validate. Validate that you're receiving what you expect. We're basically scanning through those files, looking for deltas. When we see a variance or a change, even a file size change, that's a delta, and we have to call those deltas out. At certain thresholds, completely up to the bank, there will be alerts or triggers set on those deltas. So file validation is probably the other big part of our toolkit.
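In its simplest form, the delta-checking Lee describes might look something like this. The metric names and the 25% threshold are made up for illustration; a real system would track many more metrics and, as Lee says, leave the thresholds entirely up to the bank.

```python
# Sketch of incoming-file validation: compare today's file metrics
# against the last known-good file and flag deltas past a threshold.
# Metric names and the default 25% threshold are illustrative only.

def find_deltas(previous: dict, current: dict, threshold: float = 0.25) -> list:
    """Return an alert tuple for any metric that moved more than `threshold`."""
    alerts = []
    for metric, prev_value in previous.items():
        curr_value = current.get(metric, 0)
        if prev_value == 0:
            continue  # avoid dividing by zero; a real system would flag this too
        change = abs(curr_value - prev_value) / prev_value
        if change > threshold:
            alerts.append((metric, prev_value, curr_value, round(change, 2)))
    return alerts

# Yesterday's file vs. today's: row count and size barely moved,
# but the total dollar amount jumped far past the threshold.
yesterday = {"row_count": 10_000, "file_bytes": 2_400_000, "total_amount_cents": 55_000_000}
today     = {"row_count": 10_150, "file_bytes": 2_410_000, "total_amount_cents": 91_000_000}

alerts = find_deltas(yesterday, today)
```

The point of the sketch is the shape of the check, not the numbers: the bank defines what "normal" looks like per file and per partner, and anything outside that band triggers an alert before the file is pushed into downstream systems.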
And then I mentioned the whole Customer 360 thing. That's something we're working on. We've got two or three dashboards built out as templates in Power BI. And so customers that are pursuing lineage, data lineage, Customer 360, that's probably the other big part of our toolkit right now.
Alli Nilsen:
Awesome.
Reggie Young:
I'm going to throw out my favorite wrap-up question and put you on the spot, which is: what's something in fintech or bank-adjacent spaces that you've been thinking about a lot that you think people aren't talking about enough?
Lee Easton:
I have a great one. It's on my to-do list to actually talk to some people about this. You guys might know this; I just haven't come across it yet. But in the last two weeks, we've had to fill out somewhere around eight different vendor diligence questionnaires.
Alli Nilsen:
Oh, I've no idea what you're talking about. Those aren't miserable documents in the slightest.
Lee Easton:
Usually, the process starts, depending on the complexity of the bank and how tech-savvy they are, with them sending you a link, or they might say, hey, send me a zipped folder. There's just a variety of ways that they request this. Well, out of however many, when we actually went in to fill stuff out, I think three were through a company called Ncontracts, and two were with Venminder. And so my thought was, why do I have to fill it out three times with Ncontracts? Ncontracts should just have my information. Couldn't I just set up an account with them and be like Carfax? I'm going to give you all my crap, and then everybody else should be able to look at that crap. I shouldn't have to go fill out the same crap with another bank that's also using Ncontracts or with Venminder. I've done it once with this bank; another bank that's using Venminder should have access to that.
That's something I feel like nobody's really talking about. And I know the guy that founded Ncontracts, so it's on my list to text him and talk to him about this this week. Because I built the fintech map of this ecosystem of connections, and my thought was, a big part of the fintech map could be this: what if we just go around and collect everybody's BDDs and create this platform for vetting who you work with?
Alli Nilsen:
I'm almost wondering if you could do that in the data share with Snowflake. Hey, we update it every single year with our different policies and procedures; how do you put it in a place where they can also just go access it? But I think the trickier part, back to an earlier part of the conversation, is standardization, just because it's needed.
We use a tool in-house to fill out due diligence forms and everything. And we always do a human edit at the end, where we take past questions and answers, run the new due diligence document through the tool, and it populates as much as it possibly can. But it has to be a living and breathing update because things are constantly changing. We're getting more tech-savvy, better certifications, all the fun stuff.
So yeah, I could totally see a space in the market for that, and my brain immediately goes to, how do we own it and protect it and feel safe? Because I know my legal team would have panic behind their eyes when they listen to this podcast. We're talking about another vendor having our very private information. So how could we do it in a way that puts it into a space where we have control over who sees what answers, based on what they're asking for? It's an open-and-closed-tabs kind of thing.
Lee Easton:
There's definitely something there. For our SOC 2, we use this company called Drata. They're not the ones that do the audit, but they're a platform for SOC 2 readiness. You put all your policies and your docs into Drata. Drata will connect to your Slack, connect to your email, connect to all the tools you use, to check whether those tools are in compliance, via an agent that runs on our machines. It handles a lot of the technical, machine-level IT and hardware review, and then the policy stuff is somewhat manual. That's why you have to bring in a third party to audit that.
Drata then creates this trust center. And I can add anybody I want to view my trust center based on their domain, or I can make it public, which I wouldn't want to do, but you can do it by domain. And that was my thought: why is there not a standardized trust center for all these vendors in the space that banks can go to, where you just approve them by their domains? I'll just use names. It's like, Sutton Bank requested to access your trust center. Sure, approve. Just give me the option to approve that. And then they can see everything I've already put in there. And then it reminds me every year to go in and refresh it.
Reggie Young:
Awesome, Lee. This has been a great conversation. I appreciate you coming on to give us a primer on all things bank data. If folks want to get in touch or learn more about iDENTIFY, where should they go?
Lee Easton:
Come to our website first. It's just goidentify.com. At the top, explore our resources. We love publishing deep insights on challenges that we're seeing, case studies that we've had. So definitely check us out at goidentify.com and look through our resources there.
Reggie Young:
Awesome. Thanks so much for coming on the podcast.
Lee Easton:
Thank you guys for having me. Appreciate it.
Alli Nilsen:
Thanks, Lee.