Technology
December 10, 2025
Kate Hursh-Wogenstahl


Kernel is redefining what is possible at the intersection of AI and the open web. Built for the next generation of intelligent agents, Kernel gives developers instant access to cloud browsers that behave like real users: fast, secure, and ready to automate anything on the internet. Instead of wrestling with infrastructure, teams can launch browsers in milliseconds, run agents across any site, and build automation at a scale that was unimaginable even a year ago. Kernel removes the heavy lift so builders can stay focused on breakthroughs, not bottlenecks.
Behind the company is a founding team with a track record of building category-defining platforms. Co-founder and CTO Rafael Garcia, who previously built and exited Clever for $500M, brings deep technical and scaling experience, while co-founder and CEO Catherine Jue adds equally powerful leadership and product credibility from her time co-founding Sway Finance (YC S16) and building at Cash App. Together, they have planted Kernel across both San Francisco and Cincinnati, proving that world-class AI companies do not have to grow exclusively on the coasts. Backed by Y Combinator (S25) and some of the strongest venture partners in the country, Kernel is betting big on its dual-HQ model, its velocity, and its belief that the future of AI automation will be built by teams that can move fast, think boldly, and operate anywhere.
We picked the brains of Cincinnati-based CTO Rafael Garcia and founding engineer Mason Williams to explore the future of AI, what Kernel is building, and the lessons they have learned while scaling an ambitious dual-HQ startup.
Cintrifuse: When you imagine Kernel fully built, what capabilities do you see developers integrating into their daily workflows that are not possible today?
Kernel: LLMs arrived faster than our frameworks for understanding and harnessing them. The current experience of using them is like the early days of computing—powerful but awkward, requiring too much manual orchestration. I think the future is one where we've built out the right primitives: the fundamental building blocks that let you take this general-purpose intelligence and quickly apply it to real tasks reliably. Primitives like: here's a computer for your AI agent to use, here are tools for controlling it, here's how identity and authorization work, here's how you compose these into workflows. When those primitives exist and work seamlessly, integrating AI agents into daily workflows will feel as natural as calling an API does today.
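To make the idea of "primitives" concrete, here is a minimal, purely illustrative sketch of how a computer, tools for controlling it, scoped identity, and a workflow might compose. Every interface and function name below is hypothetical and does not come from Kernel's SDK.

```typescript
// Hypothetical agent primitives: a computer, tools to control it, a scoped
// identity, and a workflow that composes them. Names are illustrative only.

interface AgentComputer {
  navigate(url: string): Promise<void>;
  screenshot(): Promise<Uint8Array>;
  click(x: number, y: number): Promise<void>;
  typeText(text: string): Promise<void>;
}

interface AgentIdentity {
  userId: string;   // the human the agent acts for
  scopes: string[]; // explicit, limited permissions granted by that human
}

async function runWorkflow(
  computer: AgentComputer,
  identity: AgentIdentity,
  task: string
): Promise<void> {
  // A workflow is just a composition of primitives, guided by a model and
  // constrained by the identity's scopes.
  await computer.navigate("https://example.com");
  const frame = await computer.screenshot();
  console.log(
    `task=${task} scopes=${identity.scopes.join(",")} frameBytes=${frame.byteLength}`
  );
}
```

The point of the sketch is the separation of concerns: the computer, the controls, and the authorization are distinct building blocks that a developer composes, rather than one monolithic automation script.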
Cintrifuse: As AI agents mature, what part of software development do you think will look completely different or disappear entirely in the next five years?
Kernel: We're all-in on AI-assisted development at Kernel, so I've watched this transformation firsthand. There are whole categories of code that will simply never be written by hand again—tests, boilerplate APIs, architecture diagrams, database migrations, documentation. These used to be tedious keystroke-by-keystroke affairs; now they're prompted into existence. The same goes for technical specs and implementation plans. Most engineers at Kernel now start by collaborating with AI to sketch out an approach, refining it through conversation until it's right, then letting the AI write the first draft of the actual code. The core skill is shifting from typing speed to clarity of thought—your ability to precisely articulate the problem and guide the AI's work matters far more than your ability to crank out syntax.
Cintrifuse: What has been the hardest technical or architectural challenge you have faced so far, and how did solving it change the way your team builds?
Kernel: Browsers are not your typical cloud workload. When people talk about scaling things in the cloud, they usually mean websites or APIs—workloads that have been around for decades and are well-understood operationally. Browsers, specifically Chromium, are a different beast entirely: resource-hungry, stateful, and not designed with horizontal scaling in mind. We spent a lot of time figuring out how to run them efficiently at scale, and the solution we landed on was microVMs—small, lightweight slices of a computer that can be suspended when idle and resumed almost instantly. Think of it like having a laptop in the cloud that you can open and close on demand. This architecture fundamentally changed how we think about resource management, and it's now core to how we build everything at Kernel.
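To make the "laptop in the cloud" analogy concrete, here is a hypothetical client-side sketch of suspendable, microVM-backed browser sessions. The BrowserPool class and its methods are illustrative assumptions, not Kernel's published API.

```typescript
// Hypothetical sketch of suspend-and-resume browser sessions backed by
// microVMs: idle sessions are snapshotted and paused rather than torn down,
// then resumed almost instantly with their state intact.

type SessionId = string;

interface BrowserSession {
  id: SessionId;
  state: "running" | "suspended";
}

class BrowserPool {
  private sessions = new Map<SessionId, BrowserSession>();

  // Boot a new microVM-backed browser, or resume a suspended one.
  async acquire(id: SessionId): Promise<BrowserSession> {
    const existing = this.sessions.get(id);
    if (existing && existing.state === "suspended") {
      existing.state = "running"; // resume from snapshot: tabs, cookies, login state intact
      return existing;
    }
    const session: BrowserSession = { id, state: "running" };
    this.sessions.set(id, session);
    return session;
  }

  // Snapshot memory and pause the VM instead of destroying the browser.
  async suspend(id: SessionId): Promise<void> {
    const session = this.sessions.get(id);
    if (session) session.state = "suspended";
  }
}
```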
Cintrifuse: What is the next major technical unlock you are working toward that could significantly expand what customers can build with Kernel?
Kernel: One problem we're thinking a lot about is agent identity—giving AI agents the ability to log into services securely on behalf of humans, with explicit consent and scoped authorization. This is one of the key missing pieces in letting agents use the internet the way humans do. Today, agents can browse and read, but the moment they need to take action on your behalf—booking something, submitting a form, managing an account—everything breaks down. Solving identity unlocks an enormous surface area of useful agent applications. Beyond that, we're relentlessly focused on performance: making everything faster and more reliable across the board, because speed and reliability compound into trust.
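The shape of that problem resembles delegated authorization in the spirit of OAuth-style scopes. Below is a hypothetical sketch of what a scoped, consent-backed grant for an agent might contain; none of these names are Kernel's.

```typescript
// Hypothetical data model for scoped agent authorization: a human grants an
// agent narrowly scoped, time-limited permission to act on a specific service.

interface AgentGrant {
  agentId: string;    // which agent is acting
  onBehalfOf: string; // which human gave explicit consent
  service: string;    // e.g. "bookings.example.com"
  scopes: string[];   // e.g. ["bookings:create"], never blanket access
  expiresAt: Date;    // grants expire by default
}

function authorize(grant: AgentGrant, service: string, scope: string): boolean {
  // An action is allowed only if it matches the grant's service and scope
  // and the grant has not expired.
  return (
    grant.service === service &&
    grant.scopes.includes(scope) &&
    grant.expiresAt.getTime() > Date.now()
  );
}
```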
Cintrifuse: What is the biggest shift you have seen in developer or customer behavior around AI agents this year, and how has that influenced your roadmap?
Kernel: It's hard to believe that Claude Computer Use—the first LLM to treat computer control as a native capability in a mainstream, consumer-facing product—came out only a year ago. Since then, the models have gotten dramatically better at this, and we've had to constantly help our customers stay on top of the latest best practices for utilizing different models and their evolving capabilities. This shift toward visual computer control has directly shaped our roadmap. We've added more APIs in Kernel for full computer interaction—mouse clicks, screenshots, drag and drop, keyboard input. We plan to keep investing heavily here, because the rate of improvement in the underlying models shows no signs of slowing down.
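The visual computer-control loop described here generally follows a screenshot, plan, act cycle. The sketch below is a hypothetical illustration of that loop; the Computer interface and planNextAction function are stand-ins, not Kernel's actual SDK or any specific model API.

```typescript
// Hypothetical agent loop for visual computer control: capture the screen,
// ask a computer-use-capable model for the next action, then apply it.

interface Computer {
  screenshot(): Promise<Uint8Array>;
  click(x: number, y: number): Promise<void>;
  typeText(text: string): Promise<void>;
}

type Action =
  | { kind: "click"; x: number; y: number }
  | { kind: "type"; text: string }
  | { kind: "done" };

// Stand-in for a call to a model with computer-use capabilities.
declare function planNextAction(goal: string, screen: Uint8Array): Promise<Action>;

async function runAgent(computer: Computer, goal: string): Promise<void> {
  for (let step = 0; step < 50; step++) { // cap iterations to avoid runaway loops
    const screen = await computer.screenshot();
    const action = await planNextAction(goal, screen);
    if (action.kind === "done") return;
    if (action.kind === "click") await computer.click(action.x, action.y);
    else await computer.typeText(action.text);
  }
}
```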
Cintrifuse: What advantages has Kernel gained by building across both Cincinnati and San Francisco, and how do you maintain a unified pace and culture across two very different ecosystems?
Kernel: The ability to tap into two different talent pools has been huge for us. Cincinnati punches well above its weight on many fronts, including software engineering talent—there are excellent engineers here who aren't interested in the Bay Area grind. San Francisco, meanwhile, is where most of our customers are, so we've focused on hiring customer-facing roles there. This split has worked surprisingly well. Having people close to customers means faster feedback loops, while our engineering team in Cincinnati can focus deeply without the constant context-switching that comes from being in the epicenter of the hype cycle.
To learn more about what Kernel is building and book a demo, visit onkernel.com.
Kate Hursh-Wogenstahl is Director of Marketing & Communications at Cintrifuse in Cincinnati, a non-profit organization focused on accelerating startup growth in that city. Kate trained as a designer, building her career in non-profit marketing and engagement and eventually working as a Creative Director before joining Cintrifuse. She received her Master's from Purdue and her Bachelor's from the University of Cincinnati.