
Why I Closed My Restaurant After 8 Years to Build AI Software

I owned a Michelin-recommended restaurant for 8 years. Closing it in December 2025 wasn't the hard part. What came next was harder — and more interesting.

Tin Zulic · 9 min read

Closing a restaurant you've built for eight years is surreal.

The last service at our restaurant was in December 2025. The Michelin recommendation didn't save it — by that point I had known for a year that I was going to close it, and the reason wasn't financial. It was that I had spent the last two years paying more attention to the software automating parts of our operation than to the restaurant itself. The signal was hard to miss.

This is the story of how I got here, and what I learned about building software as someone with no technical background.

The Eight Years

Our restaurant was a 40-seat place in the kind of city that takes its food seriously. Not a large operation. We ran tight margins, cared too much about everything, and somehow landed a Michelin recommendation in 2023.

The Michelin recognition changed the restaurant in ways I didn't fully anticipate. It changed the booking patterns. It changed the customer expectations. It changed the team's sense of what they were doing. Some of that was wonderful. Some of it created pressure that eight-person teams aren't built to handle.

In 2024, I started trying to automate the parts of the operation that were eating my time but didn't require my judgment. Reservation management. Supplier communication. Inventory tracking. I wasn't looking for AI — I was looking for time.

I found AI agents.

The First Contact With AI Agents

My first attempt was with a tool I won't name — a "no-code AI assistant" that promised to handle inbox management. It was a chatbot wearing an "agent" label. When I asked it to monitor a specific email address and automatically respond to a category of supplier inquiry, it didn't know what I was talking about.

I spent a long time looking for the actual thing — something that could run autonomously, connect to real systems, and take actions rather than generate text for me to act on.

I found OpenClaw.

The promise was exactly what I wanted: a personal AI agent running on a server I owned, connected to my tools, capable of acting on a schedule. Private. Autonomous. Real.

The reality was that I spent the next four months intermittently trying to get it set up. Not because it's bad software — it's genuinely good software. But because it was written for people who understand servers, and I was not one of those people.

I hit the 70% wall so reliably that I started being able to predict where it would happen. Every time I thought I was close, something new required knowledge I didn't have. Firewall configuration. Service management. Debugging cryptic log errors. Skill conflicts that manifested as my agent responding to messages but then silently stopping for hours.

In the end, I got it working. Imperfectly. With help from forum posts and a developer friend who spent an afternoon looking at my configuration.

What Actually Automated

When it worked, it worked well.

Reservation confirmations and reminders — automated. The routine back-and-forth with the booking platform stopped landing in my inbox. My agent handled it.

Supplier inquiry responses for standard orders — automated. We had a handful of suppliers who needed predictable answers to predictable questions. My agent handled those while I slept.

Team briefings — the agent pulled together the day's bookings, any prep notes I'd logged, and weather (we had outdoor seating) into a morning brief. I stopped typing the same summary every morning.

By the time I closed the restaurant, these three automations were saving me roughly 90 minutes a day. That doesn't sound like much. In a restaurant, 90 minutes is a different service.
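The morning brief was the simplest of the three to reason about, and it illustrates the pattern behind all of them: pull from a few sources on a schedule, merge into one message, deliver before anyone asks. A minimal sketch of that pattern is below. The function names and data shapes (`fetch_bookings`, `fetch_prep_notes`, `fetch_weather`) are hypothetical stand-ins for whatever the booking platform, notes log, and weather service actually expose; this is not OpenClaw's API, just the shape of the idea.

```python
from datetime import date

# Hypothetical stand-ins for the real data sources. In practice these
# would call the booking platform, a notes file, and a weather service.
def fetch_bookings(day):
    return [
        {"time": "18:00", "name": "Kovac", "guests": 4},
        {"time": "19:30", "name": "Horvat", "guests": 2, "note": "anniversary"},
    ]

def fetch_prep_notes(day):
    return ["Butcher delivery arrives at 10:00", "86 the scallop starter"]

def fetch_weather(day):
    # Outdoor seating is why weather belongs in the brief at all.
    return {"summary": "clear", "high_c": 24}

def morning_brief(day=None):
    """Merge the day's bookings, prep notes, and weather into one text brief."""
    day = day or date.today()
    bookings = fetch_bookings(day)
    lines = [
        f"Brief for {day.isoformat()}",
        f"Covers booked: {sum(b['guests'] for b in bookings)}",
    ]
    for b in bookings:
        note = f" ({b['note']})" if "note" in b else ""
        lines.append(f"  {b['time']}  {b['name']}, {b['guests']} guests{note}")
    lines.append("Prep:")
    lines += [f"  - {n}" for n in fetch_prep_notes(day)]
    w = fetch_weather(day)
    lines.append(f"Weather: {w['summary']}, high {w['high_c']} C; terrace likely open")
    return "\n".join(lines)

print(morning_brief(date(2025, 6, 1)))
```

Run on a cron-style schedule and piped into the team chat, something this small replaces the summary I used to type by hand every morning.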

The Decision to Close

The closing wasn't a failure. That's not modesty — it's accurate.

I had been running for eight years at a level of intensity that I knew was unsustainable. The Michelin recognition created a longer runway to keep going, not a reason to. When I looked honestly at what I wanted to do with the next decade, I wanted to build the tool I had spent four months struggling to set up.

Not because I want to become a software developer — I still can't write meaningful code. But because I had felt the specific pain of being a non-technical person who needs capable AI tooling and can't access it due to technical complexity. And I thought: that's a solvable problem.

Building Without Being a Developer

Volos exists because I believed — and still believe — that you don't have to be a developer to build a software product. You have to think clearly about what the product does, who it's for, and what problem it solves. The code is a detail that can be handled by the right tools and the right help.

I build with Claude Code. I started using AI development tools about eight months before closing the restaurant. The learning curve was real. So was the productivity once I understood how to use them.

What I've learned about non-technical building:

Architecture thinking beats coding ability. If you understand what you're building and why each piece connects to the others, you can direct AI tools effectively. If you're just hoping the AI figures out the structure, you get chaos.

Clarity of requirement is the skill. The most valuable thing I contribute to Volos isn't code. It's knowing exactly what the product needs to do, for whom, and why. Good requirements produce good software. Vague requirements produce vague software.

Build in public removes the safety net — in a good way. Telling people what you're building before it's ready is uncomfortable. It's also the fastest feedback mechanism I've found. Every week of public building has produced more useful information than any amount of internal theorizing.

You can build things you couldn't use before. I couldn't have built Volos without having spent four months struggling with OpenClaw. The product is a direct response to specific, experienced friction. That's different from building a solution to a problem you've theorized.

What I Tell Other Non-Technical Founders

The conversation I have most often is with people who are smart and capable in their domain and scared that they can't build software because they're not developers.

My honest answer: you can. With caveats.

You need to be willing to think at the architecture level. If you treat AI coding tools as magic boxes, you'll get unpredictable outputs and no understanding of why. If you treat them as capable collaborators who need good direction, you can build real products.

You need to be willing to be wrong and iterate. My first version of everything I've built has been wrong. Wrong at the product level, wrong at the implementation level, wrong about what users needed. Iteration, which requires you to be honest about failures, is the mechanism that eventually produces something right.

You need to accept that there are real limits. Some things require a developer. I work with developers for the parts that are beyond what AI tools and I can figure out together. That's fine. Non-technical building isn't about doing everything yourself — it's about being able to contribute to and direct the process.

The restaurant taught me that the difference between a good restaurant and a great one is usually the owner's willingness to care about things that don't scale: specific ingredients, specific techniques, specific relationships. Software isn't so different. Caring deeply about specific problems that affect specific people is still the thing that produces products worth building.

I just do it in a different industry now.


Related posts

What I'm building and why →

The 70% problem I kept hitting →

Want the agent without the setup?

Volos handles everything. €99 setup + €49/month.

Join the waitlist