Fintech Layer Cake

Taktile Cofounder Max Eber on LLMs and Risk Infrastructure

Lithic Season 3 Episode 10

In this episode of Fintech Layer Cake, Reggie Young sits down with Max Eber, Co-Founder of Taktile, to explore how modern fintechs are rethinking risk infrastructure. Max shares insights from building a flexible decision engine used by leading lenders and fintechs around the world. They dive into what it really takes to customize underwriting, how Taktile is approaching AI and LLMs in production, and why success often hinges on enabling teams, not just engineers, to iterate faster.

Whether you're navigating credit models, fraud risk, or just curious about what LLMs can actually do in a fintech setting, this episode delivers a practical, high-level crash course on decisioning infrastructure at scale.

Reggie Young:
Welcome back to Fintech Layer Cake, where we uncover secret recipes and practical insights from fintech leaders and experts. I'm your host, Reggie Young, Chief of Staff at Lithic. On today's episode, I chat with Max Eber, the co-founder and chief product and technology officer at Taktile. This is part two of a two-part series with both Taktile co-founders, and I recommend tuning into the first episode with CEO Maik Wehmeyer, which covers why legacy risk decisioning systems are such an important bottleneck in financial services and much more.

In case you're not familiar, Taktile is a next-generation risk decisioning platform. Their tooling helps fintechs and enterprises build, monitor, and optimize automated risk decisions across the entire customer life cycle, from onboarding and credit underwriting to fraud and compliance transaction monitoring. Taktile is used by fintech industry leaders like Mercury and Zilch, as well as enterprise institutions like Allianz and Rakuten.
Max and I get into the technical weeds of what the platform does, including where the best opportunities to leverage LLMs in fintech are, how enterprise adoption compares to fintech adoption, and much more.

Fintech Layer Cake is powered by the card issuing platform, Lithic. We provide financial infrastructure that enables teams to build better payments products for consumers and businesses. Nothing in this podcast should be construed as legal or financial advice.

Max, welcome to Fintech Layer Cake. Really excited for today's episode. Taktile has been taking the fintech space by storm, and I know you are deep in the weeds in some of the AI/LLM stuff, which I definitely want to get into in today's episode. Maybe a good place for us to start is what makes Taktile’s approach to decisioning infrastructure different from traditional solutions that banks and fintechs were using before the platform existed.

Max Eber:
Yeah, great to be here, Reggie. Taktile is a very new solution to a very old problem. And the old problem is that, as a company in financial services, actually any company, you make a ton of automated decisions. And the more digital we go, the more things will be instant. Consumers also expect them to be instant. So you'll have lots and lots of decisions. Who do you approve? Who's on a sanctions list? Who's a money launderer? Who's committing fraud? Who should get which loan? At what price? For which limit? An endless number of decisions you have to make as a business, and then the question is, how do you do that?

Historically, you had basically two choices. One was you just build everything in-house. So you have developers write backend code, encapsulate the logic, and then you run that. Or you buy one of these heavy, old-school rules engines, the decisioning platforms that, if you speak to modern fintechs, they don't even know exist, but they exist. At a large bank, you will see them all around.

Neither of these approaches really solves the problem. The problem is that the domain experts are the ones who really know how to make the decision. They know which customers they want to onboard. They know where the good risks are and where the bad risks are, or they know where the attractive customers are and so forth. But these folks are not the ones who actually implement the logic. They're not the ones who run the logic. They're not the ones who see the logic, who can debug the logic. They're very far removed from the actual decisioning that takes place.

And then you have this nasty translation layer, which is full of delays and miscommunication and misunderstandings. When you talk to people who build these things in-house, usually what works well is the engine itself. Latency is great, it's cheap, and so forth, because it's well architected. But it's very far from the domain and the users. That just leads to bad decisions, because you end up iterating very slowly. Every change takes a long time. Things get lost in translation and so forth. So that's the in-house systems, which you see a lot in fintech too, because fintechs have good engineers and, especially early on, a lot of capacity. At some point, as they scale, they realize they should spend that time more on customer-facing work. So you see a lot of in-house-built risk platforms in fintechs.

And then on the other end of the spectrum, if you go to a large global bank, they will probably have multiple vendors. The funny thing is the IBM of our category is still IBM, actually. IBM still sells a decisioning platform. You can imagine what that's like. It's very heavy, extremely expensive. It's the opposite of modern software. So what we set out to do is give people a place to build, run, and optimize critical customer decisions, and build it in such a way that engineers will also love using it, like a modern SaaS product with all the things you expect from a modern SaaS product. That's what Taktile does. It's simple, actually.

Reggie Young:
I love it. I love the modern solution to an old problem. And yeah, you're right, there's so much nuance in these decisions. If you're running a fintech program across borders, you've got a whole different set of compliance regs. You've got risk ratings. The rules you apply to someone that's high risk are different than for someone that's low risk, and all that fun stuff.

Max Eber:
Maybe one thing to add here is, and I think it's quite unique to Taktile’s approach, is we're not a point solution. So if you go to, for example, the risk domain, which is, I would say, 90% of where we play, maybe 80%, 90%, you can buy an onboarding solution. Great. It won't work for credit. Then you can buy a credit decision engine. Great. But it won't work for fraud. Then you buy a fraud solution. Great. But it won't work for AI/ML. And then you have the AI/ML solution, great, but these things will not work for collections.

We think of ourselves much more as a platform for any critical, high-stakes, high-volume customer decisions. So we want to unlock all these use cases and give you the Lego blocks. Because in the end, the Lego blocks are very similar: integrations, a great rules engine, machine learning, AI, case management. And then you can put them together in different ways and unlock all the use cases, which is the magic.

Reggie Young:
If fintechs aren't careful, they can end up spending on a lot of vendors that effectively do the same thing, just for a different use case. So it makes sense.

How has the product vision of Taktile evolved since you co-founded the company back in 2020?

Max Eber:
When we started the company, we were much more machine learning centric. Both Maik and I started our careers during the heyday of the first AI machine learning boom, sort of 2014, '15, '16, when data was the new gold. That was the slogan that everybody had. We were in the thick of it, building models, building data products for big enterprises, insurance companies, banks, and so forth. We were very much embedded in that machine learning space, and it was before the cloud providers built the machine learning platforms. So we started out with this idea of let's build an operating system for deploying machine learning models into critical businesses for critical decisions.

And then, over time, we realized actually the model deployment is one thing, but we got to be more end-to-end. We've got to be broader. So then we added on the rules engine. We added on the data integrations. We added on the case management. And suddenly, we could actually really provide a solution rather than a little puzzle piece. Because the issue with our initial product was it was great, beautiful, but it could only solve a small piece, and then you would still need all the other vendors, and it was a mess. Once we went end-to-end, the business really took off.

Reggie Young:
Yeah, I love that. There's this arc of great point solution, and then you learn to package everything and bring customers everything they want to where they are, which makes sense.

I have to go down a side rabbit hole, a side quest for a second. How does the current AI boom, the past 18 months of it, feel compared to what you experienced when you and Maik were first digging into AI/ML? Are there big differences you see? Or is it just an acceleration of the stuff you saw back then?

Max Eber:
Yes and no, I would say. The machine learning boom, the first iteration of this, was more incremental, because it went from, say, I don't know, a linear model to a boosted tree. For some use cases, that was actually very significant in terms of uplift. But it was still, you're talking, okay, your AUC goes up by x and so forth. It wasn't unlocking completely new ways of doing things. It was always, you could have done this with a traditional method, and now you have better methods that are faster, stronger, cheaper, more accurate, and so forth. So it was more a matter of degree, I would say.

I think in the AI world now, you can finally work with unstructured data, which we couldn't really do back then, though there were some ways of doing it. Now we can finally drop in a PDF with 200 pages and say, reason about this PDF, which is incredible. So now we're completely unlocking use cases we couldn't see before at all.

We're working with a large global insurer on automating the claims journey. And there, these things were not possible, even a year ago, probably. And they're still not perfect, right? The models are still not perfectly accurate. But you can really see every three months, the generations go up, every three months, these things become better. That is a revolution.

Reggie Young:
It's fun to see. Yeah, the big PDF problem is a fun one. Hot tip for listeners: the card networks have extremely long PDFs of all their rules. AI is great for parsing them and getting answers to the questions you need instead of reading 700 pages of rules. This is a great segue to talking about AI implementation.

Maybe a good place actually to start before we dig into the potential applications that Taktile is using LLMs for is the phrase decision workbenches. I think it's a pretty core concept for understanding not just Taktile, but any good risk or compliance decision flows. Maybe for our listeners who aren't familiar, what is a decision workbench, and how does Taktile leverage LLMs with them?

Max Eber:
Yeah, great question. The way we think about the decision workbench is it's a place where you build, you run, and you optimize, and there's a loop to do that. That can mean you have a credit policy, you build the credit policy, but not only are you building the credit policy and then dropping it somewhere, but you actually can run this in production. So you see your credit applications run through the platform.

And then the magic thing, which the vast majority of engines out there, also in-house systems, never do, is help you optimize. So you can look at the historical traffic and say, what if I change this parameter? What if I retrain this model? What if I tweak and tune this? It lets you really tinker with this and optimize, and that is magical, because now you're not selling, oh, I speed you up, or I save you half an FTE and all that, which is, yeah, nice to have. You're selling better decisions, right? That has a huge impact on a scaled business. That's what we think of as a decision workbench.

And then once you inject AI into these things, there are subtle changes, but in the end, it's very similar. You have some inputs and outputs. You have some expectations around them. And you think about how you evaluate that, or how you define quality. And then you tweak and tune. There's a lot of tinkering in this decisioning world, which is an important part of it.
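To make that what-if loop concrete, here is a minimal sketch in Python of backtesting a single policy parameter against historical traffic. It is purely illustrative, with toy data and a one-threshold credit policy, not Taktile's actual API:

```python
# Minimal sketch of the build-run-optimize loop: replay historical
# applications through a candidate policy and compare outcomes.
# Illustrative only -- toy data, not Taktile's API.
from dataclasses import dataclass

@dataclass
class Application:
    credit_score: int
    defaulted: bool  # observed outcome from historical traffic

HISTORY = [
    Application(720, False),
    Application(640, True),
    Application(580, True),
    Application(700, False),
    Application(660, False),
]

def decide(app: Application, min_score: int) -> bool:
    """The 'policy': approve if the score clears the threshold."""
    return app.credit_score >= min_score

def backtest(min_score: int) -> dict:
    approved = [a for a in HISTORY if decide(a, min_score)]
    defaults = [a for a in approved if a.defaulted]
    return {
        "min_score": min_score,
        "approval_rate": len(approved) / len(HISTORY),
        "default_rate": len(defaults) / len(approved) if approved else 0.0,
    }

# "What if I change this parameter?" -- sweep candidate thresholds.
for threshold in (600, 650, 700):
    print(backtest(threshold))
```

The point is the loop: the same decision logic that runs in production gets replayed over history with different parameters before a change ships.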

Reggie Young:
Yeah, there is. And it's also actively evolving stuff. The other side of AI is all the fraud threats that are now coming in through AI, and so you need to be nimble and fast and update your rules accordingly. It's no longer like, here's a broad set of rules that'll keep you safe most of the time. Brave new world.

Max Eber:
I think one thing that helps structure the discussion around AI and decisioning is to think about where the AI comes in. The most obvious one is, okay, you supercharge the decision itself. So you use AI to decide whether this thing is fraud, or you use AI to decide whether you should give this or that limit. There's a second layer, which is you supercharge the analyst. For example, you could say, help me optimize this decision, or I want to raise my approval rate, what are some options, or here's a PDF credit policy, implement this for me so I don't have to do it, things like this, so more of a copilot type flavor.

And then the third thing is supercharging the human agents, the operational people, because if you look at a large bank, they might have 5,000 or even 10,000 people doing risk ops, re-evaluating KYC, looking at documents, checking flags that some system throws up, and so forth. There are a lot of people doing operational work, and helping them do their jobs better and faster, concentrate on the high-impact cases, and get through the easy reviews quickly, that's another layer where we think AI is useful. So those are the three buckets, supercharging the decision itself, supercharging the analyst, and supercharging the human agent in the risk ops teams. That's the way we think about value from LLMs in decisioning.

Reggie Young:
Yeah. I like that framework a lot.

Max Eber:
We can go into each of these in more detail, but I think that's a useful deep dive to do.

Reggie Young:
Yeah. That's definitely a helpful framework. I'm going to have to start using that.

This kind of segues a bit into the next question, which is that there's a lot of hype about AI and LLMs, but I think Taktile is actually leveraging AI quite extensively and quite well. What are some of the highest-impact applications of AI that you're seeing in the product today? Maybe tie that to your three-bucket framework if you're seeing it in certain areas more than others.

Max Eber:
On the hype thing, we've tried to take an anti-hype positioning on this. I don't know if it actually shows. We publish things like, oh, AI without the hype and so forth, because we feel like, especially in risk, people are very averse to the hype. Risk folks, their job is to see through bullshit, right? They have to detect who's lying, who's a fraudster, and so forth. So they really don't love the hypiness of it. We also don't love it. So we always try to be much more down to earth and really talk about the stuff that's actually working.

Where we do see customers using it successfully, I think one of the killer use cases is intelligent document processing, call it OCR plus, using documents in a better way. B2B underwriting, for example, is a space where, if you can read all the documents, it's a massive win: either you can look at more information than a human would, or you do it faster, which is good because you get back to the customer more quickly, which is good for the experience, good for conversion, and so forth.
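As a rough illustration of what that can look like in a flow, here is a hypothetical Python sketch. The `call_llm` function is a placeholder to be wired to whichever model provider you use; the important part is forcing the model to return structured, checkable fields rather than free text:

```python
# Minimal sketch of intelligent document processing ("OCR plus") in an
# underwriting flow. Hypothetical, not Taktile's API; `call_llm` is a stub.
import json

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Wire this to your LLM provider's SDK.")

EXTRACTION_PROMPT = """Extract the following fields from the financial
statement below and answer with JSON only:
{{"company_name": str, "annual_revenue": float, "net_income": float}}

Document:
{document_text}
"""

REQUIRED_FIELDS = {"company_name", "annual_revenue", "net_income"}

def extract_financials(document_text: str) -> dict:
    raw = call_llm(EXTRACTION_PROMPT.format(document_text=document_text))
    data = json.loads(raw)
    # Verify the model's output before it touches any decision logic.
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"LLM output missing fields: {missing}")
    return data
```

The verification step is the point: fields the model failed to produce never reach the decision.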

We've seen customers adopt web search, like agentic web search, to get more risk signals. So very basic, you go to Perplexity, you search for a company and say, hey, give me a summary of all the worrying comments that you find about this company's product. You can do this by hand, of course, but you can also do it by API and embed it in a decision flow and then throw out a structured piece of data that you can run through the scorecard and then bubble up for manual review when you find sketchy information.

One thing that came up in a recent case: all the recent information tends not to be reflected in the accounts or in the credit report. What's a good example? Deel recently got sued by Rippling. Okay, potentially a big problem for the company. Do you want to give them a massive loan right now? For sure you want to know that, but Dun & Bradstreet won't know, the bureaus won't know. Web search will uncover that really quickly and really easily. So that's a super powerful use of AI. In general, I think all the use cases where you can verify what the AI does are good. So this is one of those cases where you just pull in more information and then verify.
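Here is a toy sketch of how a signal like that can be embedded in a decision flow, with the LLM-plus-web-search step stubbed out and all names hypothetical. The model's only job is to emit a structured signal; everything downstream is deterministic, auditable logic:

```python
# Toy sketch: turn agentic web search into a structured risk signal that a
# deterministic scorecard can consume. Hypothetical names, not Taktile's API.

def search_adverse_media(company: str) -> dict:
    # In a real flow this would call a search-capable model and force a
    # structured answer; stubbed here with a canned example.
    return {"recent_lawsuit": True, "negative_press_count": 3}

def score(signal: dict) -> int:
    points = 40 if signal["recent_lawsuit"] else 0
    points += 10 * min(signal["negative_press_count"], 5)
    return points

def route(company: str) -> str:
    # Sketchy information bubbles up for manual review; clean cases pass.
    risk = score(search_adverse_media(company))
    return "manual_review" if risk >= 50 else "auto_continue"

print(route("Example Corp"))  # -> "manual_review"
```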

I think the same goes for all the supercharged agent cases, where we see limited full automation. We see a lot of copilot auto-complete suggestions and so forth. The AI does all the legwork. It proposes what's going on. It summarizes cases. I think summarization is an amazing use case. You just make people faster, and then they can focus on the more important stuff.

Reggie Young:
Yeah. I love it. I think about all the time that I am now saving every week on those sorts of small tasks where it's, oh, this isn't a sea change, it's not like I'm cutting out two days' worth of work. But if I add up all those small tasks that used to take me 45 minutes and now take me 5, it actually does add up to a day and a half easily.

Max Eber:
For sure. And a lot of these things are boring, actually. They are boring cases, but they help. If I can take 10% off of all these things, it's still 10%, which is amazing.

Reggie Young:
Yep. It compounds. How about forward-looking, what are some of the untapped applications that you see for LLMs in financial services in the next few years, either for Taktile, stuff you're thinking about that you're willing to share publicly, or fintech more broadly? What do you see as some of the untapped potential?

Max Eber:
I think the framing here is interesting because, by and large, the adoption of LLMs in financial services is close to zero. If you go outside of forward-thinking, early-stage tech companies, there's very little adoption. You walk into any global bank today and ask how much AI they have in production. Maybe there's some nice innovation use case somewhere, but in the core of what they do, the vast majority of these banks have zero or close to zero. In terms of untapped potential, I think everything. There is this overhang of actually implementing what's working already.

One thing is the same old view of, oh, my God, programming will be solved in 2025 and things like this, which may or may not be true. But even if progress stopped today, we'd have a lot of useful applications to just implement now. So I think there's endless opportunity. It will just take time, because adoption is slower when you have a combination of a lot of legacy, very large scale, and the criticality of banking in general or risk in general. All these things point in the wrong direction, but the value is also pretty big. So we're bullish on adoption.

One thing I find very interesting is all the interface questions. Now everybody puts a little copilot in the bottom right of their SaaS app. That's cute and it helps a little bit, but what's the right interface for AI, and also for a customer? An app on your smartphone, is that still the right way of doing things? And voice in particular could be interesting. We have a customer that works with our product and also some other AI products to call customers, basically. Instead of a human calling them, they call them with an AI agent. And 80% of the calls they do are now automated by AI, for collections and operational stuff. It's quite crazy, but it works.

You see these calls and they're actually pretty good. It's not creepy or anything. They actually work. I think voice is one thing, but there could also be very different ways of engaging with an app in the future. You can imagine, for example, generative interfaces where, depending on the context and what you're trying to do, the AI actually generates the UI on the fly. That's pretty exciting, but a bit more out there, I would say. It's not what a lot of people are working on right now.

Reggie Young:
Yeah, I know there's some interesting crossover between AI applications and fintech, and voice specifically. It's a very nascent market that, based on conversations I've had with founders, a lot of the prospective buyers just don't understand. They haven't wrapped their heads around the value and the time savings yet. It'll definitely come, though.

Max Eber:
I think one space that's really interesting, which we've done some work in, is the intersection of voice AI and decisioning. How do you teach the agent to actually make pretty rigid decisions? Whether somebody is KYC'd or not, you don't actually need to take a risk on that, because we know how to do KYC, we know what the rules are, and so forth. So you need to give the agent the tool to do KYC.

And then we've worked with a voice AI company called Parloa that's active in both the US and Europe, where we collaborate with them on tool calling. Basically, they use our product to build really structured, rigorous tools. And then you can use the agent for more of the fluidity of language. So they talk to people, and you try to make the agent do as little as possible of the actual decisioning. And that gives the businesses the control they need to say, before you even talk to this customer, you have to do KYC on this customer. Or if you want to steer this customer to a provider that we work with, here's the steering logic. And suddenly you have norms.

The idea that, oh, there's an agent, I give it 20-page instructions, and the agent just magically follows the instructions, maybe in a year that'll work. But as of today, the accuracy you get from these naive agentic applications is not where it needs to be for critical applications. So I think that's a really interesting idea: you use the LLMs for the fluidity, but all the critical decisions you make in a much more structured form, through a tool call to a decisioning platform. And then you have backtesting, you have analytics, you see what's going on. You can use all the good things we know from decisioning.
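A minimal sketch of that split, with hypothetical names rather than Parloa's or Taktile's actual integration: the LLM decides when to call a tool and how to phrase the answer, but the decision itself comes from rigid, deterministic logic.

```python
# Minimal sketch of the voice-agent pattern: the LLM handles conversation,
# while KYC is a rigid, deterministic tool call. Hypothetical names only.

SANCTIONS_LIST = {"ACME EVIL CO"}

def kyc_check(name: str, has_verified_id: bool) -> dict:
    """Deterministic decisioning tool -- the agent cannot improvise here."""
    if name.upper() in SANCTIONS_LIST:
        return {"status": "rejected", "reason": "sanctions_hit"}
    if not has_verified_id:
        return {"status": "pending", "reason": "id_verification_required"}
    return {"status": "approved", "reason": None}

# The only decisions the agent may make are exposed as named tools.
TOOLS = {"kyc_check": kyc_check}

def agent_tool_call(tool_name: str, **kwargs) -> dict:
    # The LLM chooses when to invoke the tool and how to phrase the result;
    # the tool determines what the result is.
    if tool_name not in TOOLS:
        raise PermissionError(f"Agent requested unknown tool: {tool_name}")
    return TOOLS[tool_name](**kwargs)

print(agent_tool_call("kyc_check", name="Jane Doe", has_verified_id=True))
```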

Reggie Young:
Yeah. That kind of gets to something I was thinking of, a separate rabbit hole to go down: how do you get folks in financial services comfortable with a risk decisioning platform that uses LLMs? I'm thinking about banks that were hesitant to use cloud-based infrastructure in the not-so-distant past, and then you throw AI and LLMs on top of that. You already hinted at the anti-hype positioning, which I think is probably a smart path, like, this is a thing that is part of our platform, but it doesn't define us, we're not chasing the hype. But are there other conversations or tools? You were just talking about the controls that you put in place, right? Are there things like that that you've found effective in getting folks comfortable?

Max Eber:
I think one small detail here is people don't use AI to actually make the decision. I've never seen this anywhere, where it's, oh, hey, AI, here's Reggie. This is Reggie's address, and here's my credit policy. Now go figure out whether Reggie is eligible for the credit card. There's no single company that does that. And it would also be kind of insane to do, for many reasons. But that's the stereotype, oh, banks use AI and the AI terminator will do all these terrible things, which nobody does. And you don't need to do it to get value. You shouldn't do it.

Instead, what you should be doing, which is some of what we've built or are building, is, okay, use AI to take the credit policy and create something structured: either you create code in your backend, or you create a decision flow on Taktile, and then you audit the code. It just makes it faster. You audit it once, and then you run it forever. You've taken the whole non-deterministic nature of the AI out of the equation, but you still get all the value, because you've done it very quickly, and when you want to change it, you can change it quickly too. It's better for compute. It's better for latency. It's better for auditability. You just audit it once. That's generally the idea: you find ways of avoiding the problem of mistakes by having the right audit at the right time. And I think that's the way people get comfortable with it.
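One way to picture the audit-it-once idea, as a hypothetical sketch rather than how Taktile implements it, is to pin the AI-generated artifact by hash so production only ever executes the exact version a human signed off on:

```python
# Sketch of "audit once, run forever": an LLM drafts decision code from a
# credit policy at build time, a human reviews it and pins its hash, and
# production refuses anything else. Hypothetical, not Taktile's mechanism.
import hashlib

# Build time: code drafted (e.g., by an LLM) from the PDF policy, reviewed.
GENERATED_RULE = "def eligible(income, debt): return debt / income < 0.4"

# The auditor records the hash of exactly what they approved.
AUDITED_HASH = hashlib.sha256(GENERATED_RULE.encode()).hexdigest()

def run_decision(rule_source: str, income: float, debt: float) -> bool:
    # Run time: deterministic, no LLM in the loop. Any drift from the
    # audited artifact is refused outright.
    if hashlib.sha256(rule_source.encode()).hexdigest() != AUDITED_HASH:
        raise RuntimeError("Rule does not match the audited version.")
    namespace = {}
    exec(rule_source, namespace)  # fine for a sketch; sandbox in real life
    return namespace["eligible"](income, debt)

print(run_decision(GENERATED_RULE, income=80_000, debt=20_000))  # True
```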

One other way to think about this is that it's also risky not to adopt, because imagine all the additional checks we could make. In a world where an LLM has a million, 2 million, some think soon 10 million tokens in its context window, you can look at a lot of information in a second, which humans are pretty bad at. So not giving them at least a copilot, that's a problem too, right? I think that's one way to get risk people excited: say, just keep doing what you do today, do this on top, and you're going to find more stuff. Or you say, keep doing what you do today, but you and I both know that 40% of the things you're doing are actually not very useful and pretty simple. So let's take them out and use that capacity on the things we both agree are really valuable and that nobody has time or budget for. I think that's also a way of framing it that makes a big difference.

Reggie Young:
Yeah, on a long enough time horizon, it becomes expected. I can imagine, in a decade or two, bank regulators being like, oh, you're not using LLM copilots to help do your transaction monitoring?

Max Eber:
Exactly. 100%. In general, the culture, of course, is a bit conservative there. We've also seen innovations that did go wrong, but then saying, oh, you shouldn't use a computer, that would also be crazy. So we need to move on and do it in a responsible way.

Reggie Young:
Yeah. I'd love to spend the last chunk of the podcast talking about enterprise adoption, because Taktile has had a wonderful problem of not just getting picked up by high-growth company customers, but also enterprises like Allianz and Rakuten. How does working with those types of enterprises differ from working with fintechs?

Max Eber:
Yeah, great question. I think, in my head, there's basically new enterprise and old enterprise. We also work with very large fintechs, which has been amazing for us. They have some of the aspects of old enterprise, and they have some of the aspects of very early-stage companies.

I think one thing that's very different if you go to a large company is that they often look for a cross-cutting decisioning platform, which is in a way much more closely aligned with our vision. Because if you are a Series A company and you just launched your product, then, okay, you need to onboard customers, you need to have onboarding. Great. But you might not have 13 teams working on 45 use cases and needing to reuse some logic and so forth, which is what we have always built for, because we wanted to be this horizontal tool or horizontal platform where you get the building blocks for all sorts of decisions.

You talk to platform teams in enterprises that build tools for their business users, and they're very excited about this, because that's essentially their job and it's their problem. They have to enable all these different use cases, and they don't want to be dealing with 400 vendors to do that. In some ways, the larger the teams get, the more aligned it actually is with what we're trying to do and what our vision is for the product. So that's extremely exciting, I would say. And this is very aligned with their missions as well.

I think the other thing in enterprise is they care a lot about governance, about logging decisions, about tracking, about versioning, which, again, is something you get from a decisioning platform out of the box. If you have to build all of this in-house, it becomes quite painful, like change management, or now I need to think about how I diff my things and display that to a stakeholder, or giving people access to inspect particular decisions. It just becomes a lot of things you have to build, and nobody actually wants to build that in-house.
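For a flavor of what that looks like out of the box, here is a toy sketch, not Taktile's actual data model: every decision is logged against the exact policy version that produced it, which is what makes inspection and diffing possible later.

```python
# Toy sketch of decision governance: log every decision with the policy
# version that produced it, so it can be inspected, diffed, and replayed.
# Illustrative only -- not Taktile's actual data model.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    policy_version: str
    inputs: dict
    outcome: str
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

DECISION_LOG: list = []  # append-only in a real system

def decide_and_log(policy_version: str, inputs: dict) -> str:
    outcome = "approve" if inputs.get("score", 0) >= 650 else "decline"
    DECISION_LOG.append(DecisionRecord(policy_version, inputs, outcome))
    return outcome

decide_and_log("credit-policy-v12", {"score": 700})
decide_and_log("credit-policy-v12", {"score": 600})

# A stakeholder can now answer: which policy version declined whom, and when?
for rec in DECISION_LOG:
    print(rec.policy_version, rec.inputs, rec.outcome, rec.at.isoformat())
```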

I would say all that is on the beautiful, great, wonderful side of the house. Of course, everybody knows enterprise is also more work. The scale of it just makes it- I think scale, risk aversion, legacy, I think all these things do slow the implementations down, so time to go live does tend to be longer, which is okay, but the value is also higher because you're operating on 100x the scale, 100x the decisions. We're fine with that. We like working with enterprises just because, in the end, we care about having the most impact in the world. I think a proxy for that is how many decisions do people make with your platform? I think enterprise is important to make as many decisions as possible.

You want to have a good customer mix, right? If you had only enterprise, if you had done enterprise from day one, it'd be really hard to build a terrific product, just because the cycles are slow and you end up overfitting on this one enterprise because they are 90% of your revenues. It's very hard, I think, to be a product company if you do enterprise too early. And I think for that, working with start-ups is fantastic. You get so much feedback. I'm still really good friends with all our early customers. I think that's terrific. But I think you also want the scale of enterprise eventually.

Reggie Young:
Yeah. What does that implementation process look like when you're, I want to say, educating enterprise functions about this tool? Because at a start-up, I feel like the person buying the tool is probably the one that's going to be using it, whereas at an enterprise, the decision-maker can be very detached. I imagine you have to go and educate a lot of people. How does that process differ for a fintech start-up versus either an old or a new enterprise customer?

Max Eber:
I think it doesn't differ that much, except that in enterprise, you do it more often. With a fintech, we would do very short, very structured POCs where you say, okay, let's talk about your system. What do you want to achieve? What are your pain points now? What do you need to get out of Taktile? And then let us prove to you that you can do this. And we do two weeks, four weeks, really quick, focused processes, just to prove to you that you will get this value. And then we sign a contract, and then we onboard you and go. The timelines are very short in this, but we do train people.

We have a couple of structured training sessions to educate them on how it works: let's implement this flow. And then we do some co-development where we implement their flow together with them, in sort of a joint effort, for example. It depends a bit on the segment and how self-serve they are or want to be, but roughly, there's a bit of training. We're talking the first couple of weeks of an engagement.

And then in enterprise, what we find is there are just so many use cases that you might have to redo this every now and then, because every new team is effectively like a new customer. So to scale that, we have some demand for doing it in a more self-serve motion, where we have an academy, we give people access to the academy, and they can go through certifications. I think that's what enterprise software companies often end up doing.

If you look at companies like UiPath or even- Salesforce is different because they lean so much on consultants, but a lot of these enterprise software companies try to build internal champions, internal centers of excellence, and help you educate your people into becoming power users of the platform. As a product person or as a designer, I want the thing to be so intuitive that you don't have to do that, but there's a limit to how much you can do in product. So there will be a bit of training eventually, too.

Reggie Young:
Yeah, makes sense. My favorite wrap-up question is what have you been thinking about a lot lately that you think people in fintech aren't thinking about or talking about enough?

Max Eber:
We briefly touched on this before, but I think one of the most interesting questions is what the human-machine interface is in the AI age. I think that's one of the things that really keeps me up. What we have is obviously not the right one. I don't know what the answer is, but I know that what we have today is not the right thing, and figuring this out is so powerful. I think that's something that very few people actually think about.

There's all the hype around agents and so forth, and I think it's great, but there's still this question of how does this human get involved in that world, which I think is the really interesting bit for adoption, for getting real use cases into the hands of real people. I would say that's probably my number one big question that I think I care about a lot.

Reggie Young:
Yeah. It's a fun one. Is it going to be a button on our shirt that we carry around? There have been a few attempts at that. I haven't tried them.

Max Eber:
There was that guy with his little gadget. I do feel like voice-only doesn't cut it, because I think the visual element really helps to communicate stuff fast. I'm not the kind of person who learns by watching YouTube videos. It's just too slow. Reading is so much faster than listening. I think you do want to keep the speed and bandwidth of visual media, but somehow do more by having it generated dynamically depending on what's going on.

Reggie Young:
Yeah. It'll be fun to watch AI tools and interfaces develop over the next few years.

Max Eber:
Yeah. I think we're in this phase where we're still really figuring stuff out. It feels like we kind of understand it, but it's like saying we understood the internet in 2002. Of course, we understood it in some sense, even 10 or 20 years earlier, but not quite. We hadn't figured out all the patterns and the things that do end up working. I think it'll look very different in a couple of years.

Reggie Young:
Yeah. That's interesting. I was talking this weekend with a friend who works at a drone start-up that's doing decently well. They recently signed with one of the large AI providers. His comment was, I was a little underwhelmed, because they came in and I expected them to be like, here are some great use cases. But instead they were like, tell us what you figure out, because this stuff is so new. You can tell people, use this to summarize a long article, but the really interesting stuff is the stuff that even the platforms don't know is going to happen. So it's an exciting time.

Max Eber:
Yeah. I think they are very far away from the solutions actually, and they know that. If we talk to people at Anthropic or OpenAI, they know they're not going to be the ones to figure out how industry X in vertical Y will adopt AI. They need to know a little bit, but they will go through other vendors who embed their products to actually do that.

I think we saw the same in cloud, for example, like the cloud vendors, they always had this idea of, oh, we have a vertical industry team and so forth. But they also know that in the end, they need one layer in between that builds the applications, the ISVs that make the cloud useful for end customers, because there's just too much gap between very horizontal, basic blocks and what people end up needing day to day.

Reggie Young:
Yeah. My friend was very frustrated that he didn't get a crash course on here's the applications of AI. I similarly viewed it as like, that's a good thing. The platforms know they shouldn't be limiting your imagination.

Awesome, Max, thanks so much for coming on the podcast. This has been a wonderful conversation. If folks want to find out more about Taktile, it's taktile.com, T-A-K-T-I-L-E. They should go check it out if they're not familiar, which they probably are, given how much you all are taking over fintech.

Max Eber:
Thanks for having me, Reggie.