Fintech Layer Cake

The Future of Agentic Financial Crime Work with Greenlite Cofounder Will Lawrence

Lithic

In this episode of Fintech Layer Cake, host Reggie Young sits down with Will Lawrence, Co-Founder and CEO of Greenlite AI, to explore how AI agents are transforming financial crime work.

Will shares how his team is building a platform that automates AML, KYC, and risk workflows, from high-risk customer reviews to end-to-end transaction monitoring investigations. Along the way, they discuss the challenges of scaling AI in high-stakes, regulated industries, the balance between automation and human oversight, and what it takes to turn raw data into trusted decisions.

Whether you’re in fintech, compliance, or just curious about the future of agentic AI, this conversation delivers practical insights you can use today.

The Future of Agentic Financial Crime Work with Greenlite Cofounder Will Lawrence

Reggie Young:
Welcome back to Fintech Layer Cake, where we uncover secret recipes and practical insights from fintech leaders and experts. I'm your host, Reggie Young, Chief of Staff at Lithic. On today's episode, I chat with Will Lawrence, the CEO and co-founder of Greenlite AI.

There's a lot of overhyped stuff in the AI space nowadays. Greenlite does not fall in that overhyped bucket. They offer a platform for automating financial crime workflows, helping companies efficiently scale AML, KYC, and risk work. I've seen a few demos of the platform, and the speed and quality of their automation have always impressed me. Greenlite's used by Mercury, Gusto, Ramp, Betterment, Coastal Bank, and many others, including OCC- and SEC-regulated institutions.

Fintech Layer Cake is powered by the card-issuing platform, Lithic. We provide financial infrastructure that enables teams to build better payments products for consumers and businesses. Nothing in this podcast should be construed as legal or financial advice.

Will, welcome to Fintech Layer Cake. Really excited to have you on today. I know you and I have been talking about this conversation for a while. In case any listeners aren't familiar, what should they know about Greenlite and what you and your team are building?

Will Lawrence:
Yeah. Hey, I'm so excited to be here. A long-time fan, first-time caller. Is that what they say? I'm so excited about it.

Greenlite is the leading AI agent platform for financial crime. Teams, including regulated banks and large fintech platforms, use us to automate work that would otherwise be tossed to large teams of human analysts. Our goal is to help you scale compliance with speed and accuracy without just throwing more bodies at the problem. That's Greenlite in a nutshell.

For me, personally, I've been building in the risk and compliance space for a long time, previously doing this at Facebook Financial, followed by doing this work at Paxos. We're so excited to work with teams regulated by organizations like the OCC, FDIC, Federal Reserve, and SEC. It's great to see this adoption of trusted AI agents in this really regulated space.

Reggie Young:
Yeah, definitely. Maybe a brief side quest: what is financial crime? I've noticed people tend to correctly pair risk and compliance, and I think financial crime sits within that broader bucket. Maybe for listeners, in case they're not familiar, let's double-click on financial crime. What does that all entail?

Will Lawrence:
Definitely. Financial crime is one of the big areas of risk and compliance for any institution that touches money. I mentioned Facebook Financial. We were a payments platform, and people would make donations or buy ads on it. Even we had financial crime obligations. So it's not limited to just banks or insurance companies or SEC-regulated broker-dealers. Financial crime is pretty broad, but it is in the category of risk and compliance. What it's trying to do is prevent criminal activities that involve money. So it's not really the domain of fraud, nor is it the domain of broader risk management. It's really stopping criminal activities.

There are broadly two big buckets of things that fall into the world of financial crime. The first one is sanctions compliance, and the second one is anti-money laundering. Sanctions is exactly what it sounds like: individuals, businesses, and geographies that you're restricted from doing business with in certain markets. For example, in the US, OFAC, the Office of Foreign Assets Control, decides who you can and cannot do business with from a sanctions perspective. So you need to make sure you're compliant there. That's one category of financial crime.
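The screening step Will describes can be sketched, very loosely, in Python. The list entries, the similarity threshold, and the `screen_name` helper are all invented for illustration; real screening runs against OFAC's published SDN list with far more sophisticated matching.

```python
import difflib

# Tiny, made-up denied-party list standing in for the real SDN list.
SDN_LIST = ["Ivan Petrov", "Acme Trading LLC"]

def screen_name(name: str, threshold: float = 0.85) -> list[str]:
    """Return list entries whose fuzzy similarity to `name` crosses the threshold."""
    hits = []
    for entry in SDN_LIST:
        score = difflib.SequenceMatcher(None, name.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append(entry)
    return hits

print(screen_name("Ivan Petrov"))  # ['Ivan Petrov']
```

In practice the hard part is exactly what a toy like this ignores: transliterations, aliases, partial names, and keeping false positives low enough that analysts can clear the queue.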

The second and larger one is anti-money laundering. Anti-money laundering is a very large spectrum. It's everything from the categories of Know Your Customer. When you create that account with Robinhood, Reggie, and you take a selfie of yourself, that's a KYC measure. But anti-money laundering also involves transaction monitoring, looking at transaction patterns that take place on the platform to make sure they don't fall into a common AML typology. An example here: we know some folks at a very large peer-to-peer payment network. They've identified certain typologies that might be relevant for their customer population, something like, hey, if I see small transactions between midnight and 4 a.m. between people who are normally unconnected, that points me to something like a drug typology, an illegal activity that might be taking place on the platform.
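As a rough illustration (not any platform's actual rule), the late-night typology Will describes could be encoded as a simple rule. The thresholds, field names, and `flags_typology` helper here are all made up:

```python
from datetime import datetime

# Invented threshold: what counts as a "small" transfer.
SMALL_AMOUNT = 100.00

def flags_typology(txn: dict, known_pairs: set[tuple[str, str]]) -> bool:
    """Flag small transfers between midnight and 4 a.m. among previously
    unconnected parties, per the hypothetical typology described above."""
    ts = datetime.fromisoformat(txn["timestamp"])
    late_night = 0 <= ts.hour < 4
    small = txn["amount"] <= SMALL_AMOUNT
    pair = tuple(sorted((txn["sender"], txn["receiver"])))
    unconnected = pair not in known_pairs
    return late_night and small and unconnected

# A $40 transfer at 2:15 a.m. between strangers trips the rule.
txn = {"timestamp": "2024-05-01T02:15:00", "amount": 40.0,
       "sender": "acct_a", "receiver": "acct_b"}
print(flags_typology(txn, known_pairs=set()))  # True
```

Note this is exactly the kind of single-shot rule the rest of the conversation contrasts with agents: it fires an alert, but the multi-step investigation of what the alert means is where the iterative work begins.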

That's the world of financial crime. It's honestly a fascinating world. People in this space really care about what they're doing. They're fighting criminals. They're stopping the bad guys. And they're doing that with hopefully good tools like Greenlite.

Reggie Young:
I love it. If this whole founder career doesn't work out, you should look into running annual compliance trainings, because you do a way better job explaining this stuff than any compliance training I've taken. It's a great overview for folks who may not be familiar.

I have a lot of questions on financial crime tooling that we'll turn to. But you had a LinkedIn post recently that I want to ask about. You posted about how most AI agents are just rules engines with LLMs bolted on, not actually agents. Why not? What's the difference, to you, between just a rules engine and an actual agent?

Will Lawrence:
Yeah. Oh, my goodness, so much to unpack here. We started Greenlite in the summer of 2023 inside of Y Combinator. We were one of the first batches that was 30% or 40% AI. Now it's 90% AI. So we were one of the first ones to see what was really working there.

At the time, my CTO did not let me say the word agents because it wasn't true, it wasn't real. It was essentially just a single prompt. You send a blob of information over the ChatGPT API, you get some information back, and some people were marketing that as agents at the time. The world has shifted a lot. That was the state of the art back in 2023. In 2025, the world has shifted and you can actually do really complicated things.

However, I find that in market, we're still in that world of ChatGPT calls: get a rule-based summary or description of what came out on the other side. And that doesn't really give you the full value of AI. It almost diminishes it. The reason I think these kinds of rule-based decisioning criteria are not agents is because, while they're still useful, they don't let you solve the iterative jobs that are very common in our slice of the world.

I just mentioned that example, like drug typologies. That is something where you need to look for a couple of different patterns here, a couple of different patterns here, review the documentation over here, go analyze transactions, piece those things together, and then ultimately share responses. It's much more complicated. It's multi-step. It's iterative. You might identify something in a transaction pattern, like a counterparty. You might want to dig into that counterparty.

Can't do that in a rule-based world. You need to find information in one step and use it for another. All that to say is I just think the agentic architectures allow you to do things that you couldn't do a year ago or two years ago. And that's really the state of the art. However, we're still in a world where I guess what is marketed a lot is not the state of the art. It's just what's now coming off the shelf of some of the platforms that are marketing these types of services.

I should be clear, though, not everything requires an agent. In fact, a lot of things should not use an agent. I'll give you a really good example here. If you're doing an alert- so in our world, alerts are pretty common. Let's say you're doing a PEP alert, a politically exposed person alert. You don't want this advanced agent to go out and find 50 different branching patterns of how you might close out that alert. You want to use the tried and true method that is being used at scale by your current team.

Again, all that to say, don't use agents for everything. They don't solve all problems, but they allow you to solve a set of problems that were previously just unsolvable, which I think is so cool. Inside of Greenlite, we let you do rule-based workflows that use AI. It's helpful. It's really good to do those. We also let you build agents that actually tackle complicated jobs.

Reggie Young:
I love it. On the topic of AI, you talked a little bit about this in that answer, but what are some of the top compelling use cases for solving financial crime functions and problems? I'd also sidenote this by saying I'm really excited about the compliance and financial crime potential for AI. One of the themes in the past few years is that the barrier to entry in financial services has gotten higher and higher for innovators and founders trying to launch fintech programs. The money you have to have raised to be able to find a bank partner, all of that has just gotten higher. A lot of that cost is driven by compliance operations and all the oversight that those banks have to take on. I'm excited about the agentic and AI oversight potential because I think it can help bring down some of those costs. I'd be curious, what are some of the other compelling financial crime functions and use cases you see where AI can make a big difference?

Will Lawrence:
I share the excitement, Reggie. I'm super excited about it. Our mission at Greenlite is to enable everyone in the world to access the financial products they deserve. Ultimately, this goes back to that theme we hear a lot in fintech of underserved markets and the unbanked population. We're on the same page here. A big reason for it is that the cost of services is too high. And we're so excited about the ability for agents to help streamline those operations.

I could just share my mental thought of how AI and agents will sit into the world of risk and compliance broadly. I like to think the last 15 years of risk and compliance technology have been broadly around detection systems, so detecting potential transaction monitoring violations, or detecting potential fraudulent documents, or detecting potential sanction exposure. Those are great. That's really important. That's the world of supervised machine learning, a lot of data crunching to identify this thing that you're seeing here is fraudulent or potential violation.

The next 15 years are going to be about what comes next. So you found the risk. Now what? The actioning, if you will. If you find that potential transaction monitoring violation, you normally send that over to a human team, and a human team could be 3,000 or 4,000 people doing those reviews. We're excited about automation adding more investigative capacity and allowing those humans to focus on higher-leverage work. That's the broad spectrum.

So then the question for us, our framework, is: where is there a lot of volume of work, and where is there high complexity of work? High volume plus high complexity is where we spend a lot of our time, and what I think is the most compelling use case in financial crime today: high-risk customer reviews. At Lithic or any platform you work with, most of your risk and compliance resources go to your top 5% or 10% of your customer population. So how do you help with the hairiest, most complicated cases? We have some case studies on our website of banks that started using our enhanced due diligence product and cut the time to do those reviews by 70%. That's awesome. That's a ton of capacity unlocked. High-risk customers are an area that I think is super compelling.

The second side I would say is anti-money laundering transaction monitoring. Our world of financial crime is very similar to call centers, where you might have L1 and L2 analysts handling different parts of the job. L1 might be the initial triage; maybe they handle 80% of your customer calls, but if a call is challenging, you escalate to a manager, if you will, to handle the more complicated cases.

In the financial crime world, we spent a lot of time at Greenlite perfecting the L1 side. How do you do that triage? That's great. But now we're actually able to help with L2 as well. In the transaction monitoring world, that might be: you've identified there's a risk, and now you want to file the SAR. That's the second part of the equation. The end-to-end journey is what I'm pretty excited about. How can you do those in a really streamlined way?

Maybe the underpinning of all of this is, ultimately, I've never met a compliance officer or a head of financial crime or a BSA officer who says, I have all the capacity I need, no worries, I'm good. That's never the case. All the consent orders in our space echo the same thing: no one has the capacity they need to have a successful BSA program. So we're super excited about going to those spots where there's a lot of time invested, or in some situations wasted, and that are also pretty complex. We go for the complexity vertical because that's where agents are going to be especially helpful, not the truly simple rule-based things. That's where we spend a lot of our time. So enhanced due diligence and end-to-end anti-money laundering investigations are the areas pulling us in most. That's really what we're excited about.

Reggie Young:
I love it. I love the high-risk customer example, too. It's a good illustration of how success begets more work, different work, and different problems. Having high-risk customers often isn't an indication that you are a high-risk product. It's an indication that you have product-market fit, and all of a sudden you're getting a lot more customers. Statistically, you're going to have more higher-risk customers in that bucket. But when the bucket gets bigger, there's something I don't think a lot of folks think about, which is the scale. As we call it, it's a champagne problem: a good problem because it probably means you're getting good traction. But now you have to figure out, oh, we have to do a lot more of these highly resource-intensive risk reviews for these higher-risk customers. Having an agentic tool, I'm sure, saves a ton of resources there.

Will Lawrence:
Yeah. That broadly touches the world of onboarding as well. KYC and KYB have been top of mind for the last 15 years. The state of the art, if you go out today into the KYC, KYB world, is automated document processing, like OCR-style reviews, and selfie checks on the identity verification side. And if you're doing KYB, it's checking the secretary of state. That's broadly the best in class as it relates to KYB, KYC.

But then you do anything slightly more complicated, like onboarding a hedge fund. Imagine onboarding a hedge fund to a bank. You can't just do the secretary of state check anymore. Those are the types of use cases we're also being deployed to help with a lot. They share a lot of commonalities with the high-risk customer reviews: a lot of documents, a lot of financial analysis, a lot of research on the customer, a lot of UBOs. That's another category we're super excited about.

Again, why I highlight the high-risk customer review is that it's a complex use case. That's where I think there's going to be tons of value. It's not the five-minute checks, it's the six-hour reviews, that are where the most value gets created.

Reggie Young:
Yeah, the hedge fund example is a really good one, definitely not another straightforward KYB situation.

Banks are just notorious for adopting new innovative technologies right off the bat. They're super fast. I'd be curious, as you're talking to banks, and you mentioned a lot of the institutions you work with are pretty regulated by some heavy hitters, like OCC, SEC, so they have these heavy-hitter regulators in mind as they're thinking about new technologies and everything. When you're talking to banks, how are they viewing or responding to agentic financial crime tools? Are they too spooked by risks of hallucination or anything like that? How are they approaching it?

Will Lawrence:
Maybe not to get too philosophical, but what is agentic? It's essentially just a new set of models to help you do a certain job in a financial institution. I frame it like that because there have always been new models coming out to do incremental things. Rewind 15 or 20 years: a rule base like, is this Reggie Young the Reggie Young on a list, would have been considered AI back then. The state of the art just moved, and we've gained regulatory comfort.

In our world of banks specifically, this is where worlds like model validation and model risk management come in. There are a lot of guidelines in place on how to manage the risks associated with using technology for decisioning or for actioning a common use case. I like to start there because sometimes in AI, we just say, yeah, it's a completely different thing. There are actually a lot of frameworks already in place to manage technology risk. So we always start there.

The three we pay the most attention to, or the most relevant to our industry, are OCC guidelines, Federal Reserve guidelines, and DFS, which actually has really good guidelines here. That's OCC 2011-12, SR 11-7, and DFS Part 504. These are really useful guidelines for us to pay attention to. That's where most of our banks start their evaluation of Greenlite, with those guidelines in place.

So what do those guidelines require? They require everything from strong model governance. Do you have good eyes across all parts of your system? Do you have the right checks and balances in place? Do you have the right controls? I think that's a really critical part. Then you have risk management. Not every model is going to be perfect, and that's understandable. That's the nature of these things. Neither is every human. So how do you handle the risks generated by those models? That's another one.

Transparency and auditability are so critical. How can we explain what goes into this and what comes out of it? If I put the same input in 10 times, I get the same output 10 times. Those are the types of things we spend a lot of our time on. We recently announced our Series A, and as part of it, we announced our vision for strong model validation and model risk management. We call that our trust infrastructure, which embeds those types of guidelines and checks into the foundation of every agent.
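The "same input, same output" property is straightforward to check mechanically. A minimal sketch, with a hypothetical stand-in model (the `run_model` logic is invented so the example is self-contained):

```python
import hashlib
import json

# Hypothetical stand-in for any decisioning model; deterministic on purpose.
def run_model(case: dict) -> dict:
    risk = "high" if case["country"] in {"IR", "KP"} else "low"
    return {"risk": risk}

def is_reproducible(case: dict, runs: int = 10) -> bool:
    """Run the model repeatedly and confirm every output is byte-identical."""
    digests = set()
    for _ in range(runs):
        output = json.dumps(run_model(case), sort_keys=True)
        digests.add(hashlib.sha256(output.encode()).hexdigest())
    return len(digests) == 1

print(is_reproducible({"country": "US"}))  # True
```

A bank-grade system would also log every input/output pair for the audit trail, but the core reproducibility check is just this: hash the output, run it again, compare.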

All that to say, banks are using what they know, as they should, to evaluate agentic solutions in the same way they might evaluate a new transaction monitoring system. And that's the right way to be thinking about it. If you ask a team, hey, how are you model validating, and they look at you confused, that's a sign they might not be a good fit for a bank quite yet. I like to say it's never been easier to make an AI demo, but it's never been harder to make a bank-grade production system, because there are so many variables floating around. That's how we approach it, and that's ultimately what helps us gain the confidence of banks of all shapes and sizes.

Reggie Young:
I love it. Your last comment, about it being easy to make the AI demo but the actual substantive thing being a bit different, feeds nicely into my next question. I've heard folks say that a lot of agentic workflows are easy to build in-house and fintechs don't need to go to a vendor, that there's stuff fintechs and platforms are going to be able to build themselves pretty easily. What's your response to a take like that?

Will Lawrence:
I don't know if it's controversial as a vendor, but I generally agree: you should be building a lot of this in-house. The value is one thing, but teaching your team how to fish, if you will, is really important. I'm a pretty big advocate for building as much as you can in-house. That said, if we divide the world into where you can use AI, or dare I say copilot-style experiences that help inform your decisions or help you do something a little bit faster, I think a lot of those should be built in-house. That might be a little spicy of a take, because I think that's what most AI products in market are today. They're things that help you summarize alerts. Okay, that's helpful, but I can toss that into ChatGPT and do the same thing. I think you should build those things yourself.

Now, it changes when the architecture is different, when it's agentic architectures. The main ideas behind agentic architectures are iterative loops, shared memory across context, and tool use, using all the different tools available inside of a base LLM, but also your internal tools. That orchestration gets quite complicated quite quickly. I know because we've been seeing the evolution over time, and we've spent so much time just getting our orchestration correct. It's like a conditional statement: when you start to layer on steps, the margin of error goes up so much because every step could be wrong. So having them correctly validated is really important.
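The iterative-loop, shared-memory, tool-use pattern can be caricatured in a few lines. Everything here is hypothetical: the stubbed tools, the fixed step logic standing in for an LLM planner, and the investigation flow itself.

```python
def fetch_transactions(account: str) -> list[dict]:
    """Stubbed tool: pull recent activity for an account."""
    return [{"counterparty": "shell_co", "amount": 9500}]

def lookup_counterparty(name: str) -> dict:
    """Stubbed tool: research a counterparty surfaced mid-investigation."""
    return {"name": name, "flagged": name == "shell_co"}

def run_investigation(account: str, max_steps: int = 5) -> dict:
    memory = {"account": account, "findings": []}  # shared memory across steps
    step = "fetch_transactions"
    for _ in range(max_steps):                     # iterative loop
        if step == "fetch_transactions":
            memory["txns"] = fetch_transactions(account)
            # Something found in one step decides what the next step is.
            step = "lookup_counterparty" if memory["txns"] else "done"
        elif step == "lookup_counterparty":
            for txn in memory["txns"]:
                result = lookup_counterparty(txn["counterparty"])
                if result["flagged"]:
                    memory["findings"].append(result)
            step = "done"
        else:
            break
    return memory
```

The point of the sketch is the shape, not the logic: the output of one tool call feeds the choice and input of the next, which is exactly what a single-prompt, rule-based pipeline cannot do.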

So, all that to say, I would try to build a lot of those baseline cases internally. I think that's awesome. But when you get to the complicated end-to-end jobs, the more complex the use case, that's where an agentic framework becomes helpful, and that's where the validation, the testing, the trustworthiness really help. And that's why we're so excited about the trust infrastructure, because it allows any use case you build on top of it to adhere to the guidelines set out by banking regulators. That's really hard to do.

There's a last key component here that I'll mention. I was talking to the head of AI at a very large private fintech. They only have so many AI resources, a handful of engineers who know how to use these things. Everyone I keep seeing wants to use those engineers for the very differentiated things that drive business growth, and potentially use partners to help with other parts, like operational efficiency. So that's another key part: how do you manage your own resources internally? If you had infinite resources, build everything, totally get it. If you don't, which is the reality, work and partner with someone who's best in breed to help you scale there, while you focus your resources on the most strategically important things for you.

Reggie Young:
Yeah, I love that last point. Talking with friends who work at fintech companies, it's very easy right now for everybody to ask, how do we throw more resources at AI and use it more? But you still have the core business to build. You've got to do that. If AI complements and accelerates it, that's great, but you also can't invest so much in hiring pure AI-oriented engineers that you lose sight of the core business you're building. I think there's some funny tension there right now.

Will Lawrence:
At Lithic, I'm curious, presumably, you have the same conundrum everyone's having, right? You're a fintech. You're thinking about where you invest your resources. How do you think about it?

Reggie Young:
It's actually a good question. This is very timely. We're gearing up for our annual company off-site, and part of that is going to be an AI hackathon of non-technical teams. What are the routine, tedious tasks that you wish you could automate? Let's put you in a pod with some engineers that are AI oriented, build a quick MVP using Claude Code, using other tools, stuff that doesn't take a lot of time, adds a lot of value, quality-of-life improvements, like resource efficiency. On the non-technical side, I'm a huge fan of seeing is believing. AI is so new that it's hard to know what can be done with it.

We talk a lot about internal education. We just had an internal AI session of, here's how you can use some of the basics. We're not using it for core work. I guess the stuff I'm talking about now is the operational efficiencies: summarizing meetings, preparing version 1s of decks, that sort of stuff. We just find a lot of education helps.

But yeah, I'm curious to see how this hackathon goes, because I think a big part of it will just be connecting the technical and non-technical teams and building more connections there, so that the flow from, hey, here's something we could automate, to, okay, now it's implemented and up and running, hopefully increases. We'll see how it goes. It's definitely a new space to learn. I think all of us would love to have a whole roster of AI engineers.

But it's also moving so fast that within six months, you might need half the AI-oriented engineering headcount you need now. It's like why no one wants to buy an EV right now: they're waiting for the batteries to continually get better. And so we're just in this, oh, it's going to be better, it's going to be more efficient and easier to ship stuff for non-technical folks.

Will Lawrence:
Again, I agree there's so much opportunity for people to build internally. But I imagine, even in this hackathon you're all doing, you probably use tools like, I don't know, Zapier or Retool to build the UI. What do those products do that's really great? It's not built-for-you. It's build with Retool or build with Zapier. That's where the best AI tools will be as well, and that's how we see Greenlite. Our objective is to help you build the things that are going to be most useful to your compliance program as efficiently as possible. I'll give you an example.

I remember talking to a team. A lot of financial crime is source of funds and source of wealth: where did their money come from? In theory, you could spend six months trying to figure out how to connect all the right data sources, how to analyze transactions on platform, how to look at bank statements. Or you can just use our off-the-shelf source of funds module. We let you build that inside of our agent builder, using any configuration of source of funds really easily, available via an API.

This goes back to our conversation of where there will be AI agent companies. I broadly think they'll be in these areas of a lot of depth and complexity. The regulatory environment is always hard to manage. What if you can abstract a lot of that away and just use it as a tool inside of the many different workflows you have? That's what we're seeing teams use Greenlite for. It's cool. They come in to do three or four use cases and end up building 15 or 16, deployed at different parts of their program. That's awesome, and that's what we want to do. So build with, versus build versus buy.

Reggie Young:
Yeah, I love it. On this thread of some things being better to work on with a specialized vendor versus building in-house, I'd love to double-click on a phrase you used with me when we were prepping for this episode: agentic icebergs. I think it's a really potent phrase. What is the agentic iceberg?

Will Lawrence:
Can I share my screen? No, I can't. Can I? Is that okay? That's okay. I can talk about it. I'm totally okay to talk about it.

Broadly, when you use a platform, well, I'll just say for Greenlite specifically: when you use Greenlite, you see this cool AI stuff, which is awesome. It's great to see value being generated. But what happens behind the scenes is all the work to make sure that information is relevant, accurate, auditable, and high quality. Every time you make a call, let's say you ask Greenlite, hey, Greenlite, find source of funds for Lithic, Greenlite does a lot of work in the background to complete that job. Then we have evaluations, evals as they're called in the AI world, to make sure the quality of what's being retrieved is high. That means: Is the information relevant? Is it from credible sources? Is there enough depth of information to be useful, or is it just a small data point? Who's speaking against it? Is there any bias you might detect? Is it referring to the same Lithic, or are there 15 different companies called Lithic?
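The kind of retrieval evals described here could be sketched as a set of simple checks. The check names, scoring, and `eval_retrieval` helper are illustrative, not Greenlite's actual pipeline:

```python
# Made-up allowlist standing in for a real source-credibility model.
CREDIBLE_DOMAINS = {"sec.gov", "reuters.com"}

def eval_retrieval(doc: dict, target_entity: str) -> dict:
    """Score one retrieved document on a few quality dimensions."""
    checks = {
        "credible_source": doc["domain"] in CREDIBLE_DOMAINS,
        "entity_match": doc["entity"] == target_entity,  # the right "Lithic"?
        "has_depth": len(doc["text"]) > 200,             # more than a snippet
    }
    checks["passed"] = all(checks.values())
    return checks
```

A production eval suite would add relevance scoring, bias detection, and entity disambiguation models, but the structure is the same: every retrieval passes through named, logged checks before a user ever sees it, which is also what makes the result auditable later.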

The user doesn't see those checks we have in place to make sure what we surface is great. They just see the end output, not all the things that take place behind it. Another thing they don't see: auditability matters a ton in our space. Sure, the answer matters, but in the compliance world, the answer is 5% of the work. It's 95% showing your work. How did you do that? How did you make that calculation? How did you decide something was or wasn't relevant?

In regulated spaces, you can't just go to ChatGPT and toss information out, because you don't have good data provenance or an understanding of how that information was calculated. In Greenlite, we make it very explicit. One of our mottoes from a product development perspective is no black boxes: you should always know how everything happened on Greenlite, and we log it.

This is what we're really excited about: the depth you can bring to AI in these regulated spaces by building in those vertical-specific needs. In our world, that's validation, auditability, traceability, data provenance. That's why people choose to build with Greenlite, even though they could theoretically build their own AI platform for risk and compliance themselves: it's time to value. You get started much more efficiently, you know it's been used by other platforms at scale, and this type of auditability is just baked into the platform.

Reggie Young:
I love it. I think there's an agentic iceberg and agentic ice cube parallel here. I can go make ice cubes in my freezer overnight pretty easily. I can't go make an iceberg. That takes many millennia to accomplish.

Will Lawrence:
Ice cubes. I love it.

Reggie Young:
I've chatted with folks that think compliance and regulatory tech or fin crime tech, this general area is this sort of backwater in fintech that's not going to be a big market. There's not a lot of potential there. What are they missing?

Will Lawrence:
I think they were correct until about two years ago. It feels like we spent a lot of time on faster horses in compliance: a marginally better matching algorithm for a name-screening solution. That's fine, but there isn't a lot of upside there. That's why there are very few massive companies in the compliance space, barring the ones created 20 or 25 years ago.

What's really changed is that language models help you work with unstructured data, and compliance at any bank is the home of unstructured data. I like to joke that in financial services, the front office got completely transformed because there's so much access to data. Take a trading floor. You can now do a lot of the trading floor work algorithmically because it's numbers being crunched: events and momentum and changes. A lot of numbers there.

In compliance, "is Lithic risky?" is a very hard question to answer. There's no data score for that. It requires crunching information that's in PDFs or in unstructured formats or broad web research. There are a lot of pieces that need to be put together. All that to say, language models now allow you to make sense of and automate unstructured work, which I think is crazy.

So then if we talk about why that's a big transformation: compliance is one of the few areas in a financial institution that's still extremely labor heavy. It's mostly throwing bodies at the problem. It's why, if you're a fast-growing fintech, you're probably hiring hundreds and hundreds of people just to do repetitive jobs. There's an opportunity to use automation to solve that. I think LexisNexis actually had a report where something like 33% of risk and compliance budgets goes to technology and 66% goes to labor. Of that labor, a vast majority is operational labor, people doing repetitive things. I think that's where the TAM, to use investor speak, effectively doubled or maybe even tripled, because you can tackle a lot of that operational labor. That's why it got really exciting.

Again, that's the business side. And then we go back to what can I actually do for the world? If we can automate a lot of that manual operational work, more people get access to financial services. More people get access to decisions they critically need, like a student loan being approved or a mortgage being approved. More people around the world can get their first bank account or their first financial platform. I'm so excited about that.

One of our first customers at Greenlite was a company called Sling. They do stablecoin peer-to-peer payments. They're serving dozens of markets at scale with a tiny team. They're able to do that because they use automation platforms like Greenlite to scale really efficiently, and that's so exciting. You couldn't do that before. It was impossible three years ago. That's what gets us out of bed.

Reggie Young:
Yeah, I love it. I love seeing the trend of being able to build larger companies with fewer folks, obviously with platforms like Greenlite and a lot of other great tools that similarly supercharge teams now. So definitely an exciting trend.

Will Lawrence:
If I may, Reggie, on that one, it always reminds me of one interesting story: ATMs and the bank teller. Everyone thought ATMs were going to be the end of bank tellers, that there'd be no branch staff, just the machine. Fast-forward 20-something years, and there are more bank tellers today than there were 20-something years ago. They're just doing higher-value services, like advisory, and helping the people who really need help. Exactly the same thing is going to happen to compliance, because you're going to have more financial institutions and more financial products. That's going to create more need for risk management, which is going to be very different in nature than it is today.

Reggie Young:
Yeah, definitely. You've mentioned to me before that the UX of AI is changing. I've heard this from one other person who is super sharp on all the cutting-edge AI stuff that's happening. So I find this to be a really interesting thread. What does the UX of AI changing mean?

Will Lawrence:
If you were to ask someone on the street, in San Francisco or, I'm thinking, Wisconsin right now, I don't think people have a perspective on what AI feels like today. But the dominant way AI feels is like a chatbot, a chat UX, something like that. I think it's a great starting point. That's awesome. But then you think about it: if AI is trying to do the same work as a human, how do you interact with a human team? You might have Slack messages with them to tell them details, but you probably give them documentation. Say I just onboarded a new sales team member today; I give them documentation on who to go target, how we think about it, etc. You give them information. That's another part.

You regularly check in with them. You maybe even have one-on-ones with them where you share verbal feedback about how they should improve their operations or how they might change. Of course, then you're performance managing and correcting them over time. I think that's what AI agents should feel like. They should feel like an employee. Just imagine for a moment something that lives alongside you in Slack, doing repetitive work and escalating things: "Hey, Reggie, I got a little stuck on this thing. What do you think about this?" But other than those checks for areas they're blocked on, they're just doing autonomous work in the background. That's great. That's where I think the interaction model should be.

In terms of what that looks like, we have a strong belief here that agents should just live inside of your existing tools rather than be a separate tool. It'd be a failure state if you have to go check in on every single task that your agent is doing, and this goes back to why I don't think a lot of those products are agents. Agents should just work inside your existing tools. If it's a sales team operating in Salesforce, your agent should be doing things inside of Salesforce.

Ideally, the UX is pretty silent in the background, and you only interact with it when you want to coach or approve or shape or structure. That's what gets really exciting. I don't know what that looks like. No one knows what the UI of this will look like. But I think it'll feel very similar to how you work with a coworker today. It's just that we're not quite there yet.

Reggie Young:
Yeah, I agree with this. It's hard to call exactly what it's going to look like. I was chatting with an engineer recently who's expecting that a lot of his work is going to be done in the Claude interface, not just having chats with Claude, but Claude out doing these things, coding and working in other programs, tying together all of these various platforms he'd otherwise have to go into, in one central interface for him. It's not a hundred percent there yet, but I do imagine you're either going to have a homepage of your AI tool that is agentic and can connect with all these other things, or, to your point, this sort of invisible agent in all the separate tools that you work with. It's going to be exciting to see how it all plays out.

Will Lawrence:
Yeah, totally. Actually, Claude Code and Cursor and the developer tools here are really interesting because I don't even think those are the final state of where we're going. What's the word, skeuomorphic? They're going towards the thing that we already know and are familiar with. It's like the first car, the horseless carriage. I think that's still where we're at right now.

In the future, ideally, maybe you just assign it a PRD in Linear, and it just does the work. It works side-by-side with you in more natural language, which is what I'm super excited about. And this goes back to our idea here: if that's the case, then it won't just be developers who could benefit from Claude Code or something similar. Anyone who can write a great document could benefit from it. That's why I'm super excited about the tech.

Reggie Young:
That's what I'm excited for. When even I, a lawyer, can go and code, it'll be great.

Will Lawrence:
That's a scary world. I don't know if I'm ready for that world. Lawyer-made software, oh man, oh man.

Reggie Young:
Massive disclosures on all that software.

Will Lawrence:
Yeah, exactly. 90% of the UI is disclosures. I love it.

Reggie Young:
Yep. Awesome. Well, thanks so much for coming on the podcast. If folks want to find out more about Greenlite or get in touch, where should they go?

Will Lawrence:
Yeah, for sure. Super excited to chat with anyone who's interested in these topics. Greenlite.ai is the best way. It's Green L-I-T-E dot AI. Or find me on LinkedIn. I always love to chat with people who are in this space.

Reggie Young:
Awesome. This has been a great conversation. Thanks for coming on.

Will Lawrence:
Thanks, Reggie. I appreciate it.