We don't have an office. We don't have employees in the traditional sense. What we do have is a small set of carefully chosen tools, a cluster of AI agents running on two VPS boxes in Europe, and a transparent commitment to showing exactly how we operate.
This is the full picture: every recurring cost, every tool we actually use (not just endorse), and honest notes on what we'd cut if the budget forced us to. "Build in public" means showing the plumbing, not just the pretty front end.
## The Philosophy: Boring Stack, Interesting Application
The temptation when building an AI-first company is to use every new tool that launches. We did that for about three months. It's exhausting, and the overhead of context-switching between 15 different services defeats the purpose of automation.
We eventually converged on a small set of tools that each do one thing well, integrate cleanly with each other, and have proper APIs. The goal isn't to have the most cutting-edge stack; it's to have AI agents that can work autonomously without breaking every time a tool changes its UX.
## Layer 1: The Brain – AI Models
Our agents run primarily on Claude from Anthropic. We've tested GPT-4, Gemini, and several open models. Claude wins on two things that matter most for autonomous agents: long context handling (agents need to read a lot before acting) and instruction following (agents need to do exactly what you said, not approximately what you said).
## Layer 2: Automation – Connecting Everything
AI agents make decisions. n8n is what turns those decisions into actions across the rest of our stack. It's the glue layer, connecting Telegram notifications, database writes, social media posts, webhook triggers, and API calls in a visual workflow builder that our agents can actually understand and modify.
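To make the glue concrete, here is a minimal sketch of how an agent decision might reach a self-hosted n8n instance through a webhook trigger node. The URL and the payload fields (`agent`, `action`, `detail`) are hypothetical placeholders, not our actual workflow schema:

```python
import json
import urllib.request

# Hypothetical webhook URL exposed by an n8n Webhook trigger node.
N8N_WEBHOOK = "https://n8n.example.com/webhook/agent-event"

def build_event(agent: str, action: str, detail: dict) -> bytes:
    """Serialize an agent decision into a JSON payload for the workflow."""
    return json.dumps({"agent": agent, "action": action, "detail": detail}).encode()

def notify(payload: bytes):
    """POST the payload to n8n; the workflow takes it from there."""
    req = urllib.request.Request(
        N8N_WEBHOOK,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```

The point of the pattern is that the agent only has to emit one small JSON event; routing it to Telegram, the database, or a social post is the workflow's job, which can be changed without touching agent code.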
We self-host n8n on our VPS, which keeps costs low and data under our control. The alternative is their cloud offering, which works fine if you'd rather not manage it yourself.
## Layer 3: Voice & Media – The Agent's Output
A lot of what our agents produce ends up as audio or video content. For voice synthesis (narration, voiceovers, agent voice personas) we use ElevenLabs. The quality gap between ElevenLabs and everything else is still measurable. We've tried six alternatives; none sound as natural for extended speech.
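For illustration, a sketch of building a text-to-speech request against the ElevenLabs v1 HTTP API. The endpoint shape and model name reflect our understanding of the public API and may have changed; the API key and voice ID are placeholders you would supply yourself:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"    # placeholder, not a real key
VOICE_ID = "YOUR_VOICE_ID"  # placeholder voice identifier

def tts_request(text: str, voice_id: str = VOICE_ID) -> urllib.request.Request:
    """Build (but don't send) a TTS request for the ElevenLabs v1 API."""
    body = json.dumps({
        "text": text,
        "model_id": "eleven_multilingual_v2",  # assumed model name
    }).encode()
    return urllib.request.Request(
        f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        data=body,
        headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    )
```

Sending the request with `urllib.request.urlopen` returns raw audio bytes, which our workflows then write to storage for downstream video assembly.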
## Layer 4: Data – Memory and Storage
Supabase is our primary database and backend-as-a-service. PostgreSQL under the hood means we're not locked into a proprietary query format, and the pgvector extension handles vector embeddings for agent memory search. Our agents query it directly via the REST API: no ORM overhead, no abstraction layers that could break silently.
## Layer 5: Infrastructure – Where It All Lives
Everything runs on two VPS boxes. Primary compute on Netcup (European provider, excellent €/resource), secondary on Hostinger. We wrote a full post on why VPS over cloud functions for AI agents; the short version is cost predictability and no execution time limits.
## What We've Cut (and Why)
A few things we tried and removed:
- Zapier: Per-action pricing is a trap when you're running automation at scale. n8n is free at our volume.
- Vercel for agents: Great for frontend deployment. Wrong tool for persistent agent processes with 15-minute function limits.
- Multiple LLM providers simultaneously: Routing logic adds complexity. Pick the best one and stick with it unless there's a specific reason to switch.
- Paid analytics services: Supabase + a few SQL queries tells us what we need to know. Don't pay for dashboards you don't look at.
## The Full Monthly Cost Breakdown
| Tool | Plan | Monthly Cost |
|---|---|---|
| Anthropic Claude API | Pay-as-you-go | ~€40–80 (variable) |
| n8n | Self-hosted (free) | €0 |
| ElevenLabs | Creator tier | ~€22 |
| Supabase | Pro | ~€25 |
| Netcup VPS | RS 1000 G11 | ~€9 |
| Hostinger VPS | KVM 2 | ~€8 |
| Total infrastructure | – | ~€104–144/mo |
Under €150/month to run an autonomous multi-agent company that operates 24/7, produces content, monitors markets, and responds to inquiries. The LLM API cost dominates, which means the best cost optimization is writing efficient prompts and using smaller models where appropriate, not switching tools.
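The "API cost dominates" point is easy to sanity-check with back-of-envelope arithmetic. The per-million-token prices below are illustrative placeholders, not quoted rates for any specific model:

```python
def monthly_llm_cost(calls_per_day: int, in_tokens: int, out_tokens: int,
                     in_price: float = 3.0, out_price: float = 15.0) -> float:
    """Estimate monthly API spend.

    in_price / out_price are illustrative € per million tokens, not real rates.
    """
    per_call = (in_tokens * in_price + out_tokens * out_price) / 1_000_000
    return round(per_call * calls_per_day * 30, 2)
```

With these placeholder rates, 100 calls a day at 2,000 input and 500 output tokens each lands around €40/month, squarely in the range above. Halving input tokens through tighter prompts moves the bill far more than switching any of the fixed-cost tools.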
## What's Next on the Stack
We're evaluating a few things we haven't committed to yet: browser automation tools for agent web interaction, a dedicated vector database as Supabase's pgvector starts to show limits at scale, and better multi-agent communication infrastructure. We'll write about whatever we actually ship.
If you want the always-current version of this list, it lives on our Tools page, updated as we add or drop things, with honest notes on each.