Fintech Layer Cake

Imprint CTO Will Larson on Fintech Engineering Strategy

Lithic Season 3 Episode 14


What does it take to run engineering at the heart of modern fintech? In this episode of Fintech Layer Cake, Imprint CTO Will Larson unpacks lessons from leading engineering teams at Stripe, Carta, Calm, Uber, and more. Will shares how fintech engineering differs from SaaS, the tradeoffs executives face when joining new companies, and why the art of leadership lies in balancing seemingly incompatible frameworks: moving fast while preserving reliability, innovating while managing legacy systems.

He also dives into the hype and reality of large language models (LLMs). Will explains why today’s “AI engineers” may simply be tomorrow’s product engineers, why prompt-crafting is more artisanal than strategic, and how agentic tools could redefine workflows. From the shifting expectations of tech executives over three decades to what really drives innovation inside scaling companies, this conversation is equal parts practical and forward-looking. For anyone navigating fintech engineering or curious about the future of AI in product development, Will offers a grounded perspective on what matters most.


Reggie Young:

Welcome back to Fintech Layer Cake, where we uncover secret recipes and practical insights from fintech leaders and experts. I'm your host, Reggie Young, Chief of Staff at Lithic. On today's episode, I chat with Will Larson, the CTO of the modern co-brand card platform, Imprint.


Will has extensive experience running engineering teams at fintech and SaaS companies like Carta, Calm, Stripe, and others. You may also recognize his name because he's published three acclaimed books on what top-notch software engineering and leadership looks like in practice, and there’s a fourth one coming soon.


I was excited to get Will on because Lithic works with Imprint, and we've seen firsthand the great builders and sharp team they have. Will and I cover differences in engineering at fintechs compared to more traditional SaaS companies, what's hype versus reality with LLMs, his thoughts on practical strategy, and much more.


Fintech Layer Cake is powered by the card-issuing platform, Lithic. We provide financial infrastructure that enables teams to build better payments products for consumers and businesses. Nothing in this podcast should be construed as legal or financial advice.


Will, welcome to the podcast. Really excited for this conversation today. I very infrequently get technical folks on this podcast. I think I've only had maybe one other CTO in the three years we've been doing this. Excited for our conversation.


A place I'd love to start is engineering in fintech. Is there anything critically different about running an engineering-oriented fintech company versus non-fintech, say, more general SaaS-type company?


Will Larson:

I'm fortunate where I've gotten to see a handful of different companies over the course of my career. I haven't just been only a fintech kind of specialist, although certainly Imprint, where I am, that's a fintech for sure. Carta, it's a fintech, it's a different sort of fintech, but Carta definitely is a fintech as well. And then Stripe obviously is the original 2010s like Valley fintech for sure. So I've gotten to see that. But I was also at Uber. I was at Digg.com, which is more of a social media company. I was at Calm, which is more of a consumer mobile app.



I think that the biggest difference to me is that in fintech, there are constant availability concerns in terms of being up 99.995% or five nines of availability or whatnot. And you're working with both really sophisticated modern partners who have great APIs, who have designed things in the last 5 to 10 years. You're also working with partners, maybe it's banks, maybe it's the technology behind the banks, that are dealing with tech stacks that are older than we are.


And so it's this really interesting spectrum of incredibly novel, thoughtful, sophisticated systems, and just these systems, these workhorses that shouldn't still exist but have been running the most important workloads in the world for 40 to 50 years and haven't really been successfully modernized. And so I think that when you come into something like Calm, it's basically a content distribution platform of really good content that helps people, and that's amazing. But there's just not that many partners you have to integrate with, and none of them are running a COBOL database. But in fintech, you don't actually have that privilege. You might be integrating with a COBOL database on any given day, and that's pretty different.


Reggie Young:

I love it. I usually get the answer, oh, there's more regulation. Every industry, even if you're in traditional SaaS, it's like you're still dealing with privacy regulations or whatever. It's very important to check those boxes, but I think you're right that availability and legacy infrastructure are definitely two defining traits.


This might be too nebulous of a question, so feel free to swat it down, but I'd be curious, how do you think about the shape of an engineering org at a company? You've been CTO at several companies. When you join, you get a lay of the land on the legal side that's, okay, here's a regulatory function, and scaling a regulatory function is different and depends on the business model compared to the more commercial, contracts-oriented function. I'm curious if you have a sort of tried-and-true mental model for getting a lay of the land of an engineering org.


Will Larson:

I think one of the biggest ways executives fail when they come to new roles is they're like, I got this, I know how it was done, and they just replicate their prior company. And often that's replicating what worked two companies ago. And particularly, when you're coming into Imprint, it's 150 folks right now approximately. And then if you take Meta or Google or something and try to apply it, it's just so many orders of magnitude apart that you can't possibly- 150 folks at Meta is a small team. It's not the company. And so applying playbooks can be pretty tricky.


I think the first thing to me is trying to understand the lay of the land within the given organization. But typically, there are three different skill sets that I think are meaningfully different. There's product engineering. There's data engineering, including the data science side of things. And then there's infrastructure. And so I think making sure you have a clear kind of person who is thinking through the breadth of each of those and then figuring out, what does the company actually want from you? I think in some cases, I could come into a company where reliability is having a bit of a moment and there's a real reliability kind of crisis. And then where they want you, as the new lead, to step in is really focusing on the infrastructure side.


But most of the time, what companies really care about, first and foremost, is doing a reasonable job on data, doing a reasonable job on infrastructure, and doing an extraordinary job on the product engineering side. That's really where you get differentiated outcomes for your company. You have to do a good job at the other two, but no matter how well you do at data and infrastructure, if your product isn't something special, you're toast. Almost always, it's getting leadership in place for the other two and getting as much of your attention as humanly possible on actually innovating on the product engineering side.


Typically, I think that's where you get pulled as an executive. I think that's the place where you usually can't bring in a leader to run it for you. That's actually your job. That's what people care about. That's how you're evaluated. And when escalations come to you, 95% of the time, it's coming from the product engineering side of things.


Reggie Young:

Interesting. That's a useful framework. I love your point about the bias towards action that new leaders have when joining a company. A while ago, I was reading some piece of advice for leaders stepping into new companies. There was a point along the lines of, sometimes the best action is no action, which is counter to all your instincts when you're coming in and joining. You're like, oh, we're going to fix this, we're going to upgrade everything. You need like a minimum six months just to understand what's happening and why.


Will Larson:

That's true. It's an interesting trap, though, because you think about your new executive. You come in and they hire you- all external executive hires, you never get hired into a perfect situation. They're always hiring you because there's a problem they need your help with. And so they need help. You come in. You're like, it's okay, Reggie, I got it, I'm just going to sit here and observe for six months. There's no tolerance for that. So it's this interesting juxtaposition of these books, which are like, hey, take three months and just listen carefully.


And then the expectation, which is they usually hire you because there's a pretty meaningful thing that they want you to come in and help with. Maybe it's a problem, maybe it's an opportunity, but it's never maintain the status quo, right? And so if you actually follow this advice, you get pushed out as an exec almost instantly because you're not actually living up to the expectation. So you have to take the advice and figure out what are some things you can do that are actually useful, despite the fact that you don't really understand the circumstances particularly well.


What I love about executive roles is operating on different layers. You have to show momentum. You have to show progress. You have to show impact. You also have to not screw over the team. You have to actually help the team as well. And so it's only by operating on three or four layers at once that you become an effective executive.


Reggie Young:

Yeah, I love that. That's very accurate pushback. I think the more time I spend with the Lithic leadership team, the clearer this concept becomes, something I see every week: you often have two competing sets of advice or frameworks for any given situation. And what you just did is a really good illustration of how you square some of those that seem inconsistent, especially in fintech. Oh, you need to move quickly but also not break things. They're all seemingly incompatible on their face. That, to me, is the art and science of good leadership, being able to square all of them.


Will Larson:

Yeah, exactly. The classic one is date, features, quality, pick two, or whatever, the triangle I heard in my first year or two in the industry. That's not true. There's no inherent trade-off between velocity and quality, but there is a trade-off between velocity and quality if you have a ton of technical debt. And so there's a lot of layers to these things. And if you just take them at face value, you make really boring decisions, but you can get all of it if you're really careful in the details. But if you're not in the details, it's a hard time.


Reggie Young:

That's a great segue to my next set of questions on the new era of tech leadership. You and I have previously chatted about the role of execs and leadership members at tech companies changing over the past few years. Let's dig into that. How have those roles changed and why?


Will Larson:

I've been in the industry across three different decades now. I entered in the 2000s. I was thinking about this a while ago where, when I worked at Yahoo!, I was an entry-level engineer- first a contractor, actually, hired to help, and then an entry-level engineer coming out of college. My manager, who I reported to, was a director, and we had two one-on-ones in two years. In the first one, he was basically mining me for criticisms about a colleague that he was trying to manage out. And in the second one, he was trying to understand why I'd given notice and why I was going somewhere else. Those were the two one-on-ones.


He was viewed as a really effective director. I have no actual evidence that he wasn't. I think he thought of his job as my job is to figure out what should the team work on, and then establish alignment within the company overall, like this is what they should be working on and this will be valuable work to complete. And so he was out there mining for something that we should work on, like a big project that would matter a lot, and then making sure we knew what it was and making sure that we were moving on it. But my career, my development, that's just not what he did.


Then we have this hard segue into the 2010s, where there's more competition for hiring engineers again. Retaining engineers was getting a little bit harder as compensation was going up across the industry at large, particularly in the Silicon Valley area. And all of a sudden, there is this meme, which is, great leadership shouldn't be in the technical details. Great leadership is supporting the team, the structure, the organization. And the number one piece of advice you got as a new manager is, hey, you need to stop writing code day one.


And so there's this really fascinating juxtaposition. Era one, for me: senior leadership finds opportunities and aligns the team to them. Era two: coaching the team, helping develop their careers, getting the team motivated and excited, aligning the details of the team, a ton of hiring. Then this next era: total shift. It's not the prior era, and it's not the era before that. Now it's getting into the details, running lean, developing mastery of the new definition of AI and how that applies to product development, and also to the execution of teams internally as well.


And so the biggest thing is these expectations just rapidly shift, and the expectations are going to keep shifting. And so, hey, right now we're in this really lean moment. But if the zero interest rate policy came back, where all of a sudden funding is a lot more available, I promise you this expectation around small teams is going to shift again. And so the challenge of being an executive or any senior person is you have to understand how to be successful in the current situation, but also prepare for the fact that the next one could be anything.


And so if we take the thought leaders out there who are telling us how the role works, if we take them too literally, we're going to get in a lot of trouble because the actual expectations just keep changing a lot over time. Really, what we have to do is stay true to our craft, not just as managers, but actually the core function we're managing, and try to adapt to the moment, but not lean too far such that we're not ready for whatever wild thing will come next.


And the last thing I'll say here is crypto is the best example of this, where crypto was huge, then it was nothing. And now stablecoins are really big again as they're proving out their use case as a real meaningful mechanism to actually do payments cross-border and for other things like that. And so if you just take the top of the mind, the top sound bite, it's always wrong. You're in the wrong position a hundred percent of the time. You really have to think about what makes sense to you, what is truly useful, and listen a little bit to what's exciting now, but only a little bit. You can't listen too hard.


Reggie Young:

Yeah, you listen to what's exciting now as an indication of almost the reality three months ago maybe, and then have to think for yourself about what's the reality in three to six months. I love that.


Let's say you're applying this now, companies operate more leanly. How does a company maybe adjust to that new reality? What does that mean for a leadership team? I guess the broader question here maybe is when those shifts happen, what are the levers that companies have to pull other than just team size?


Will Larson:

I think seeing what's happening early is really powerful. I think that's one. And also understanding where differentiated skills are and if they're actually truly differentiated or if they're just like, it's a transition. Like in economics, there's structural and temporary kind of unemployment, right? And it's understanding where the reality is. Is the industry truly changing, or just the skill set is shifting a little bit?


So I think writing software with LLMs, right now, in certain pockets of the industry, this feels extremely novel. But I think if you look at early-stage start-ups, this is just a new normal. And if you're an AI engineer today, it's extremely likely that in three years, this is just what it means to be a software engineer, and there is no AI engineer construct. That's maybe a little bit of an exaggeration. But if you think about the machine learning engineers of five years ago, a lot of them are now being asked to be LLM experts.


The actual expertise of using the more classical ML models has literally nothing to do with building products around LLMs. So we've done- the word's the same. I see teams trying to use the prior folks to do the new thing. But using LLMs well is mostly just product engineering. That's it. Every product engineer in three years that's doing really well at leading companies is just going to be really good at using LLMs to solve certain classes of problems. Every engineer at a reasonably modern company is going to be using LLM tools to write parts of their code. And it's just another tool, just like modern engineers use debuggers as part of writing their code, and modern engineers have logs that get aggregated into a Datadog or whatever it might be as part of writing their code. There's just a lot of modern affordances for writing good software. LLMs will be one of them.


As you think about that, that's the commodity. Then internally, with your organizations, hey, I need to train everyone on how to actually use these tools. Then I also need to make the tools accessible. I think one of the preconditions for using LLMs, et cetera, is making them accessible. How do you actually make it easy, so that with any kind of login, you can use ChatGPT, Claude, Gemini, or whatever that internal preference is, easily? I think we're increasingly getting there as an industry.


The next thing that I'm really excited about is, how do teams actually have good agentic runners internally? This isn't a solved product yet. Excited to see where the industry starts landing, but that's the next layer. I think that the ability for prompts to drive a massive amount of personal productivity for you, it's a little bit on the border. It's really useful for some cases, but in a lot of cases, what you really need is the agent: it's getting the input, it's calling a bunch of tools, and then it's deciding a variety of output steps or tools that create downstream impacts.
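A minimal sketch of that agent shape, input, tool calls, output, assuming a stubbed model call, a made-up JSON action format, and a toy tool registry; this is an illustration only, not Imprint's implementation or any particular vendor's API:

```python
import json
from typing import Callable, Dict, List

# Hypothetical tool registry: each tool takes a string argument and returns a string.
TOOLS: Dict[str, Callable[[str], str]] = {
    "lookup_customer": lambda arg: f"customer record for {arg}",
    "open_ticket": lambda arg: f"ticket opened: {arg}",
}

def call_llm(messages: List[dict]) -> str:
    """Stub for a model call. A real runner would call whichever provider the
    team has standardized on and return the raw completion text."""
    # Always "finish" so this sketch runs end to end without a real model.
    return json.dumps({"action": "finish", "argument": "done"})

def run_agent(task: str, max_steps: int = 5) -> str:
    """The loop described above: take input, let the model pick tools, feed
    tool results back in, and stop when the model says it is finished."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        step = json.loads(call_llm(messages))  # expect {"action": ..., "argument": ...}
        if step["action"] == "finish":
            return step["argument"]
        # Dispatch to the named tool and feed the result back to the model.
        result = TOOLS[step["action"]](step["argument"])
        messages.append({"role": "assistant", "content": json.dumps(step)})
        messages.append({"role": "user", "content": f"tool result: {result}"})
    return "stopped: step budget exhausted"

if __name__ == "__main__":
    print(run_agent("Summarize the open tickets for customer 42"))
```

The interesting engineering is everything around this loop: which tools get exposed, how results are validated, and what downstream actions the agent is allowed to take.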


Right now, I don't know of many companies that have a good framework internally. And that's one of the things we're playing with at Imprint, how do we actually get effective workflows so that folks can build agents? And there's a lot of tools that are almost there. I think n8n is a really strong tool, and I think that's probably the closest I see out there right now. But getting all the nuances right is just really hard with any of the off-the-shelf tools. And I think that will be the next thing that we see companies do. Right now, we have companies, including Imprint, hiring AI engineers. And really, that is a product engineer who's going to come in and help us find the ways to build these internal-facing products, to build great workflows for agents and tool usage to accomplish day-to-day tasks in ways that are not superficially helpful but genuinely helpful. And that's where the innovation is going to be for the next year or two.


But even that, three years from now, I expect some two or three products will win, or the core OpenAI, Claude, Gemini products will expand to having more agentic support. And then I expect this to be widespread as well. I don't think this is a persistent moat for us. Every company has to do it now to stay relevant. But in three years, I'd expect every single company to be operating this way. And if you're not, I think it's atypical rather than the norm.


Reggie Young:

Yeah, I love so much of all of that. I love your point about the blurry line between LLMs and machine learning. I have this mental model of, does the work require a PhD in machine learning? Because that is one bucket. And then there's, no, let's leverage the OpenAI APIs and those sorts of things to build with, which, to your point, is more like product engineering using AI-based tools.


We just did a hackathon at Lithic's retreat. If I reflect back on the six different projects, all of them are agentic problems, which is, to your point, where the real value unlock is, where we can save so much time and energy. Excited to see those tools take off a bit more. Yeah, they don't always catch the nuance, so that's the hard part. But if you can still get 80% of the way, you can save so much time now. And in six months, it's going to be a very different world.


Will Larson:

It is interesting, and I do think this is a place where there's two components. Once you have agents or you actually have great tools pulling data in and for taking actions, then getting the right model matters a lot. And so I think about GPT-4o versus GPT-4.1: 4.1 is a relatively rigid instruction follower, which for dynamic prompting purposes maybe isn't what you want. But for actually building an agent that follows a complex, very articulate prompt, it's exactly what you want.


And so I think picking the right one of those, that's where the expert model picker, someone who's like a master of wine selection or something, kind of knows the little nuances- and these nuances will go away over time. In principle, we shouldn't need to pick models for prompts to get the best outcome. But I think today, we do need to. In the future, hopefully, there'll be another layer of model selection that can look at your prompt and select the right model that's probably optimal for it. But I think, one, today, get the agents, then model selection matters a lot. Then two, today, prompt editing still matters a huge amount. And I think that a lot of this is very artisanal, like craftsmen going in and modifying the prompt. But my expectation is that in the future, this stuff gets commoditized as well, where it can do more and more.
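To make the model-picker idea concrete, here is a tiny routing sketch; the model names and thresholds are placeholders invented for illustration, not recommendations from the conversation:

```python
def pick_model(prompt: str, strict_instruction_following: bool, context_tokens: int) -> str:
    """Illustrative routing heuristic. A production router would also weigh
    cost, latency, and evaluation results for each candidate model."""
    if strict_instruction_following or context_tokens > 100_000:
        # Long, rigid, agent-style prompts want a literal instruction follower.
        return "literal-long-context-model"
    if len(prompt) < 200:
        # Short conversational prompts can go to a cheaper, faster model.
        return "small-cheap-model"
    return "general-purpose-model"

# Example: a long, very articulate agent prompt gets routed to the literal model.
print(pick_model("You are a dispute-resolution agent. Follow these steps...", True, 40_000))
```

The "layer of model selection" Will hopes for would effectively do this routing automatically from the prompt itself.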


A lot of the thinking models are effectively doing meta prompting on your initial prompt [inaudible] the quality of it. And I expect that to become more of a standard thing that just happens rather than everyone gets really good at prompting. I don't think that's super likely. It's more likely that the systems just make the prompts better despite our deficiencies as people.


Reggie Young:

Yeah, I love that. I have this grand hope that meta prompting will force most people to become more articulate because the lawyer in me really wants that. But you're probably right. I think it is probably the platform becomes the meta prompt creator for you.


Will Larson:

Yeah. It's interesting, right? Because I grew up right at the dawn of search engines. And so I was there as everyone learned how to use search engines. And the way that I use search engines is based on the simplest search engine kind of thing, term frequency-inverse document frequency, TF-IDF: what are some weird words that will be in the thing I'm searching for? That's basically a simple way of thinking of TF-IDF. And I've gotten really good at that. So there's this learning curve.
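For reference, a minimal TF-IDF scoring sketch, with a made-up three-document corpus purely for illustration; rare, "weird" query words get weighted more heavily, which is the intuition described above:

```python
import math
from collections import Counter

def tf_idf_scores(query_terms, documents):
    """Score each document by summing the tf-idf weight of the query terms."""
    tokenized = [doc.lower().split() for doc in documents]
    n_docs = len(tokenized)
    scores = []
    for tokens in tokenized:
        counts = Counter(tokens)
        score = 0.0
        for term in query_terms:
            tf = counts[term] / len(tokens)              # term frequency in this document
            df = sum(1 for d in tokenized if term in d)  # documents containing the term
            idf = math.log((n_docs + 1) / (df + 1)) + 1  # smoothed inverse document frequency
            score += tf * idf
        scores.append(score)
    return scores

docs = [
    "cobol mainframe batch settlement",
    "modern api rest json payments",
    "payments payments payments api",
]
# Prints one relevance score per document; the rare word "cobol" carries a
# larger per-occurrence weight than the common word "payments".
print(tf_idf_scores(["cobol", "payments"], docs))
```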


But I think most people never got that good at searching. You still see people typing sentences, and just the search engines got a lot better at turning bad queries into good queries. And I think the same thing is going to happen at prompting. The idea that we're going to teach the entire world's population to get good at prompting, I don't think that's true. But instead, I think we'll teach them the general concept, and then the system underneath will get better at turning those prompts into something actually good.


Reggie Young:

Yeah, I love it. Some co-workers and I have laughed about the sort of prompts that we give to an AI platform. I think it's true for search engines, too, but nothing is ever capitalized. There's no punctuation because that doesn't matter. But we've been laughing at some of the stuff we submit because it's stuff we'd never say to a person. It's the same in search engines. But you're right. There are going to be these subtle differences in the sort of strings that you put into an AI prompt that don't necessarily need to be full grammatical sentences either.


Will Larson:

A hundred percent. And then the interesting question, for people at the forefront of it, is that there's stuff that we knew or believed was true two years ago, like, try to be careful this time, actually think deeply, which I think increasingly doesn't do anything, but it seemed so important two years ago, and people just have these lingering ghosts in their prompts. It's a funny time to be doing this.


Reggie Young:

Yeah, definitely. Is there anything else that you think is too hyped about AI? We wanted to chat on what's hype versus reality with AI. I think we just covered a lot of good thoughts there, but I don't know if there's anything else you want to add on that hype-versus-reality question.


Will Larson:

If we look at the OpenAI GPT-5 rollout, it's a really interesting one because it captures a bunch of what we've described. First, the thesis there is that people just aren't that good at selecting models. So they've condensed it into one model, with a few different variations, that tries to pick the right model for you. And then power users are upset because they're used to a level of control. Again, they're people who are building true agent-based products and they're like, hey, GPT-4.1 really outperformed most others because it had a huge context window and it's very literal at instruction following. But that's the 0.1% of folks that actually know this. So I think we see the collapse.


And the other thing that I think we see is, as these models get more and more powerful, there's a real emphasis from OpenAI on reducing compute usage to the minimum amount they can tolerate and then allocating that compute to more valuable queries that are more complicated. And so I think there's this interesting alignment here between most users don't know the type of model to pick, and the providers really want to reduce the amount of compute utilization they have by driving the cheaper models. So I think there are some interesting things happening there. But I think it means to me that a lot of the moat today, model selection, being really good at meta prompting, is probably going away over the next N years. It's just that it's not gone yet, so it's still a moat today, but it will probably disappear over the next couple.


I just think a lot of this is going to come back to the best AI products of 2025 are good products that also have AI. And I don't think there's going to be a lot of amazing AI products that are amazing because of AI. There's going to be amazing products where AI is useful. And there's a bit of an era still today where, ultimately, the industry is driven by how the discounted future cash flow valuation model- and discounted future cash flow today includes different discounts depending on whether you say the word AI frequently enough in describing the product and company you're building. But that's going to disappear over the next couple of years.


I think that's exciting because, ultimately, the best products are going to be the ones that do solve the most important problems for their customers and for their users. And that's good for the industry. Same thing with crypto, right? I think where there was this huge multiple valuation for crypto a couple of years ago, that's really gone away. But now we're seeing these stablecoin businesses get valuable, not because they're crypto, but because they solve a really meaningful problem that genuinely helps someone.


I think it's already starting to balance out a little bit in the industry and the valuations as it all should be. I think people can be skeptical about hype. But ultimately, I think the hype brought a ton of investment into this new kind of nascent opportunity, got it funded really well, so it has the ability to actually explode and to see the shape of it. And now it's normalizing a little bit in terms of turning into just the focus on genuinely valuable products that solve real problems for people. So I think the hype in both the crypto and the AI cases, I think it's actually been really good for the industry and for consumers. It's just like the first couple of years are always a little bit of a mess.


Reggie Young:

Yeah, I love the idea of an AI sommelier. I have this analogy that I've been turning over my head. There's the kind of casual place that I might go get dinner at on Divisadero in San Francisco, where they have two wines and you don't really need a sommelier for that. But if you want to go to Fool's Errand on Divis, they have a bunch of different wines and you want to talk to somebody that knows all the nuances. There are certain settings where you want that sommelier versus not. I just hope I see some AI sommelier job listings before it gets totally relegated.


Cool. Looking at Imprint, or just the industry, are there secondary effects that you're seeing or that you expect this LLM advancement to have on tech companies? I think the obvious answer is you can scale, you can do more, especially once you get to those agentic tools being very good, then you can grow with fewer folks. But are there other impacts you're seeing on strategy operations, anything like that?


Will Larson:

I do think there are a lot of roles within companies that are going to benefit a lot from LLM-based products. And so, again, going back not to the hype, but to where LLMs are genuinely useful, I think it's places where there's high-quality text input, as we get better at designing data sources and better at creating tools. I think customer support is a place where we're going to see a lot of Tier 1 support handled this way. And for the Tier 1 use case, agents are actually really ideal: there's no phone line, you can get voice, you can get chat instantly, and that chat can speak whatever language you want it to. It can be fluent in the language of your choice. And then you still have your Tier 2, Tier 3, Tier 4, whatever tiers you're going to have.


I think that's a place where we're going to see a shift in just how customer support is done. I think human customer support folks become even more important, even more of a detailed technical specialty that you need specialists running. But it does mean that the first tier we can probably solve with a faster, better user experience using some of the agents. I think it's places like that where we're going to see efficiency.


Thinking about finance, for example, a lot of the efficiencies for finance are already captured in these tools like Zip and the hundreds of others that have tried to capture it. I think for Zip internally, there's probably some LLM opportunities. But for you as a finance team, in terms of can you make your finance team radically more effective with LLMs, I'm sure there's some cases, right? But I'm not sure what they all are.


I think with your background on the legal side, there are a lot of legal documents that are just very minutely different, where I think the right kind of LLM workflows could be really valuable. It's also a place where getting it wrong is very expensive. And so you have a human-in-the-loop style solution where you can get initial reads, because human lawyers make mistakes, too. They're just reviewing hundreds of pages of Google Docs with changes shown. It's just really hard to do effectively, even for the most detail-oriented person. So I think there are little opportunities for improvement there.


In general, I'm a believer that we're just going to see less very basic work and be able to spend our time more on the interesting problems of the world. I don't see anything that's going to stop that from happening. It's definitely going to take us a few years to get there. But again, SaaS has actually done a really good job of doing that already, but it hasn't solved everything. And so the interesting question is, why do we think LLMs will radically improve a lot of functions where SaaS hasn't been able to? SaaS in general has been able to do a huge amount.


I'm not sure there's a massive amount of opportunity everywhere. But I do think there are lots of places; the customer support one, where it's mostly a text and voice medium for communication, I think is a huge opportunity. But even there, if you came into a customer support role trying to increase deflection rate, one of the first things you actually want to do is go improve your documentation. And improving the quality of your documentation isn't just trivial with LLMs. You can use LLMs to make it easier. I think that's totally true. But again, there are lots of things that are actually the most impactful thing, like first steps or second steps, that don't just go away because of LLMs. You still need a level of mastery to do it. And it still takes time to get the data in to actually do the right prompt to get the right data out.


Reggie Young:

Yeah, I love it. The support example is so good. I see it a little bit on the leadership team. If you look at a CEO's job, they deal with the gnarliest escalations. It's like all the problems that weren't easy to solve get escalated up the chain. And it's almost like other functions are now going to have that same phenomenon much more frequently. The first tier of human customer support is going to have to deal with more thorny issues- which is ideal because you probably don't want them advising on how to reset passwords. You want them dealing with the nebulous and gray problems that are going to have the higher impact probably.


Will Larson:

When I worked at Uber, there was an employee I worked with, Woodrow, who I hired and had worked with at a prior job. Basically, what he did was he just responded to infrastructure engineering questions in Slack really quickly. Someone actually thought he was a bot, and this was 2014, because all he did was just reply very quickly to questions. That's not all he did. It's most of what he did. His motto was we need jobs for humans, because everyone just kept saying he was a bot, even though he was very much not a bot. Jobs for humans here, we're not getting rid of the humans. They are, to your point, exactly getting more important, not less.


Reggie Young:

Yeah, I love that. It's funny. I've had that phenomenon, too, where I'm like, wow, that support person responded so fast. I'm not sure that was actually human. It was. They just did their job so well.


Awesome. You have a new book coming out November 25th. In case folks aren't familiar, you've published a handful of engineering books, both on Stripe Press and, I think, O'Reilly, as well as self-published. I would love to jam on engineering strategy a little bit. It's such a nebulous rabbit hole that you can go down infinitely, with very diminishing returns to effort the farther you go down. You can read Good Strategy Bad Strategy and come out with a checklist and attack the problem. But every company is different. Every problem is different. Strategy is just such a hard thing to get your hands around. I'm talking from a general business strategy perspective. So I would love to chat about engineering strategy.


How does engineering strategy compare to strategy for the overall business? I'm thinking about, okay, what products do we want to build and kick the tires on, how much does it cost to build that, what's the ROI that you expect, those sorts of things. But what about as an engineering org? How does strategy work there compared to the more general business strategy?


Will Larson:

New book, Crafting Engineering Strategy. Two things I've tried to do that are a little bit more opinionated in that book. The first one is when you talk about strategy, something I just hear over and over is you talk to a disaffected member of your organization. They're always going to be like, we don't have an X strategy. Engineering is like, there's not even a product strategy at the company. And then you talk to product, there's no business strategy, and there's not even an engineering strategy. It’s like every function has this cemented view that no other function has a strategy. As functions get large enough, engineering will be like, there's not even an engineering strategy either. And so it's a fascinating thing where like, how can it be true that no one thinks they have a strategy, but all these companies are moving along somehow.


The first idea I wanted to put out there is I think all these companies, when we say, hey, they don't have strategies, what we mean is it's not written down and I don't know what it is, or we don't agree on what it is, but there's always a strategy. The idea that there is no strategy is actually really distracting, because there always is a strategy, but maybe it's a bad one. Maybe we don't agree on what it is, but there's always a strategy. That's rule one.


The second rule is that strategies are always secretive internally. And I think most of the time it's accidental, because I don't actually think strategies are secret magic things. Most of the time, you can go read a company's job postings, like Intuit's, and see their strategy. You know what they're hiring for. There are a bunch of stablecoin jobs they're hiring for. I wonder what they're doing next. You can just read the jobs page and figure it out.


But we never get to see them. You only get to see the ones internally. And then you just project onto what you can't see- I bet all these other companies have these magnificent documents, and we just don't have anything, like we're really bad. But I don't think that's true. And so the second idea in this book is I actually tried to pull together strategy documents from real companies, articulating the actual things we did, mostly drawn from my own experiences.


Just to give people examples of what were the strategy topics that we had at these different companies and how do we make the trade-offs, just trying to give people reference points, so we can move it from we don't have a strategy to, here are some strategies that were actually really effective and maybe they're not what you think they are. Then we can talk about how do you actually write one yourself, because another thing that comes up is, hey, I need to do an engineering strategy, but Reggie never wrote the business strategy. So I can't do an engineering strategy because there's no business strategy. And so there's all these ways people really trip themselves up.


And so the entire book is here's what strategies look like in practice. Here's a bunch of them. They're probably not like this mythical document you thought they would be, but these are all really effective. And then every reason you have why you can't start now is a lie. Can you do strategy if you're not an executive? You absolutely can. And so I have this idea of strategy altitude.


Some strategies are really high altitude and say, hey, we will not have new business units. We will only go after these opportunities. We will only go after domestic markets; we will not go international. There's all these things that are just super high level. But a lot of strategies are pretty local to the team, like how do we structure our code? Do we deploy on Fridays or not? As the CTO, I don't care if your team is deploying on Fridays or not. That's a decision that you can make many different ways and honestly isn't really that interesting to even think about.


I think figure out the altitude you're allowed to play on. And then, hey, I don't know what the product strategy is. Well then, your strategy is: we don't know what the product strategy is, so we need to be flexible to deal with the fact that we might go a couple of different directions. And that's the problem to be solved. So think through not taking these uncertainties as blockers to starting. If your goal is to find a reason why you can't start, or why it's inefficient to start, you can just never start. But if you actually have the goal of doing something impactful, you have to figure out ways to turn these uncertainties, these ambiguities, into enough structure that you can make progress anyway, because there's always ambiguity in any interesting problem you work on.


Reggie Young:

Yeah, I love that. I never thought about this, but I definitely envision that every company has an MBA case study document that outlines like, here are the three concise bullets that clearly articulate what our strategy options are, and we chose A because XYZ, and we feel certain and this will be a permanent decision. No, it's so messy. If it's explicit at all, you have to take it with a grain of salt because it can change in six months. So I love all that.


Where do you see engineering strategies fall short the most often?


Will Larson:

The biggest challenges I see for engineering strategies are, first, new execs coming in and just copying the prior one, because then it just misses all the nuances. It's really hard to do something effective. Second is that there's this idea of making a mandate but not actually seeing if it does anything. I think a lot of times there's this executive view, which is like, hey, my job is to write the strategy. And then I'm good, but no one's actually doing it. I can think of one company I worked at where we ran agile. There was a meeting talking about it. And then we never talked about it again. One of their accomplishments was rolling out agile that year, but no one did it.


I think there's this comfort level with just articulation as impact, but it's only the first step of impact. One of the big things that I've learned over the last five or six years is you have to get feedback on a strategy, get some buy-in, articulate it, and then go figure out in the details whether people are doing it. And then the next way people mess up is people aren't doing it. They're like, all right, I know what we're missing here. We need a clearer mandate driving compliance with the new strategy. I think the real thing is, man, people are rational actors, why aren't they doing it? And to figure out the why not and to actually get it to work.


I think there's just this worldview of how strategy works, mandate, drive compliance, make it part of their performance review if they aren't compliant, fire the particularly non-compliant folks, don't let them get promoted if they don't drive the compliance. Man, we can do it that way, but we really don't have to because people are just rational. They want to come in. They want to do good work. They want to do things to help the company, help the customers, also help themselves. How do we just align the details by making it easy to do the right things?


One of the strategies that I write about, from a prior job, is rolling out basically how do you justify access to customer data. At a lot of start-ups, customer data is not super restricted. Maybe you can see too much customer data too easily. Most start-ups have this moment where they go from that to having some way where you can prove you have a business rationale for each piece of customer data you access. But a lot of times the first version of that is like a text box where you're expected to type in an explanation each time you do it. At first, mission complete, but then you look a year later and all of the entries are just like CR, or they're just a letter, or maybe it's just the letter I or something. And then someone puts in a character minimum, so now it's nine Is instead of just one I or something like that. And the data is just terrible.


Again, this is where this trade-off comes in, right? The first reaction oftentimes is like, I'm a professional. We're going to fire the people who are writing I nine times, blah, blah, blah. But more interesting is, man, these are people trying to do their work, and this is really getting in their way. How do we make it such that we can check their Zendesk tickets? And if they have an open Zendesk ticket for that customer that is also assigned to them, we can automatically grant access and inject that ticket as the rationale for the access.
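As a rough sketch of that auto-justification idea, here is what the check might look like; the Ticket shape and lookup helper are hypothetical stand-ins, not a real Zendesk integration:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Ticket:
    id: str
    customer_id: str
    assignee: str
    status: str  # "open" or "closed"

def find_open_ticket(agent: str, customer_id: str, tickets: List[Ticket]) -> Optional[Ticket]:
    """Hypothetical lookup; in practice this would query the support system's API."""
    for t in tickets:
        if t.assignee == agent and t.customer_id == customer_id and t.status == "open":
            return t
    return None

def access_rationale(agent: str, customer_id: str, tickets: List[Ticket]) -> str:
    """If the agent has an open ticket for this customer, grant access and record
    the ticket as the rationale instead of asking for a free-text justification."""
    ticket = find_open_ticket(agent, customer_id, tickets)
    if ticket:
        return f"auto-granted: open ticket {ticket.id} assigned to {agent}"
    # Fall back to asking the human for an explicit business reason.
    return "manual justification required"

tickets = [Ticket(id="ZD-1042", customer_id="cust_42", assignee="alex", status="open")]
print(access_rationale("alex", "cust_42", tickets))  # auto-granted
print(access_rationale("alex", "cust_99", tickets))  # manual justification required
```

The point is the one Will makes: instead of policing bad free-text entries, the system captures a real rationale automatically when one already exists.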


And so it's just getting a little bit more curious about how the world really works and solving problems genuinely. From the perspective of the person, you need to change their behavior, not just top down. A lot of leadership beyond strategy is just expanding from one worldview, like I'm the security guy or I'm the product guy or whatever, to just thinking about the problem from five or six different lenses and how do you solve the entirety of it, not just the piece that you're held accountable to, which I think is a way to get promoted at a big enough company, but it's never an interesting way to solve meaningful problems.


Reggie Young:

Yeah, I love it. It's great. I've seen that problem of access to customer data in particular, so very resonant example.


Will Larson:

Every company you go to has literally dealt with that problem. It's a universal problem everywhere you go.


Reggie Young:

There are some universal thorny ones. There's that one; in fintech, there's also deleting data. You have certain data retention minimums, but you don't want to hold it too long past that. But when you're a two-year-old start-up, the deadlines are so far off that why should you invest in them? And then you build out your products, and suddenly you've got complicated data infrastructure.


Will Larson:

I have seen that one a lot, too. I think that one's exceptionally hard because it comes with hard deadlines. You usually can't negotiate, which makes it trickier.


Reggie Young:

Yeah, definitely. A few wrap-up questions that I'd love to hit quickly. Do you have any tips for non-technical folks on how to best work with engineers? Practically in my experience, I would talk to the product team and they interface with the engineers. But there are definitely lots of instances where the legal team needed to know the quirks of how a certain product was structured for whatever regulatory purposes and needed to interface with the engineers. Any general tips for non-technical folks on how to best engage with the engineering teams?


Will Larson:

Over the last year or two, I've thought about prompts a lot as proxies for people. Usually, when you write a bad prompt, you get a bad result, and people will get that. But then they write a bad prompt to a human, they get a bad result and they're like, that person sucks. And so the first one is just, I think for working with any cross-functional partner, it's like, how do you give enough data to get a response you want?


Often across any two functions, you'll see questions which are like, I need help with this partner, and then it's like, why is no one helping me? No one can help you. And so there's this one version of it, which is this four or five kind of back and forth, extracting the context to actually answer. What is the problem? Why does it matter? What have you tried? What do you think would be possible that you can't do? There's just giving enough information so that people can actually reply. I think this is the most important thing.


Usually, when I see people struggling to communicate cross-functionally, including with engineers, it's because they literally are missing context. And when you get really vague questions, people are going to misinterpret them. I think some people are good at understanding the likely misinterpretations of vocabulary cross-functionally, and some people aren't. And so you'll see cases where the asker and the answerer are actually having two different conversations. They don't realize they're having two different conversations. And some third party has to come and be like, okay, you're talking past each other. Actually, this means this. This is the same word, but it means that. Really, how do you translate? Giving enough detail that people are unlikely to mistranslate what you're saying, giving enough detail in the first question that they can actually get to the bottom of it. That's one.


The second, and this is one of the core leadership values for engineering at Imprint and also everywhere I've worked, is respond to conflict with curiosity. The first step is getting curious about the question. I think whenever the asker or the answerer moves to solving before they're a little bit curious, just to understand, that's when things get really hard. Leaving space for the counterparty to be curious, I think, is really important. If you try to push too quickly before you've had the understanding, then it gets really hard.


Obviously, when you've been working together with someone in particular, you know how to communicate really well with them, you don't have to leave so much space. But early on, particularly when you're working cross-functionally with people you've never worked with before, really, the only way to get successful communication is leaving a little bit of room for curiosity, making sure you actually agree on the mission statement before you get started. Surprisingly, it goes wrong more than you'd expect. Despite the fact that most of the people both of us work with are really smart, mature professional people, the basics of communication still go awry. Surprising.


Reggie Young:

Yeah. I love to tell folks, whenever I'm giving communications feedback, that you always have to assume that your colleagues are at capacity, probably didn't sleep well, are also probably over-caffeinated despite being tired, and are maybe distracted by constant Slack pings. There's a reality of operating- a phrase I picked up from Mark Svensson, a colleague at Lithic, is assume positive intent, which I love. If you see something, always try and find, what's the positive interpretation here? And then you can dig in and be more curious about the context of what's actually happening.


Especially if you're tired, if you're distracted, if you're in a meeting and trying to Slack at the same time, your brain will just naturally go to that sort of, oh, no, we can't do this, or this is a conflict. I've lost count of the number of times that I've seen heated debates where, ultimately, everybody was actually talking about the same thing and had the same view but was using different vocabulary. It's such a common problem, unfortunately. Even to your point, you work with a lot of sophisticated, smart folks, but again, when there's so many demands in a fast-moving, high-growth company, it's hard to keep your head level some days.


Will Larson:

Absolutely.


Reggie Young:

Awesome. Well, if folks want to go pre-order your upcoming book or find out more about Imprint, where should they go?


Will Larson:

If they want to order one of my books, just Amazon, Will Larson. This one in particular is Crafting Engineering Strategy, but there's a couple of others up there they could buy if they wanted to. Then for Imprint, imprint.co. Come take a look. We also are supporting a number of really great credit cards. If you want to get a new one, we have some pretty cool ones that we launched this year.


Reggie Young:

Awesome. Thanks so much for coming on. It's been a wonderful conversation.


Will Larson:

Likewise. This has been great. Thank you, Reggie.