Building a Multi App AI Backend: How I Created One System to Power Multiple AI Products
Have you ever found yourself building an AI powered app, getting it to work beautifully, and then thinking, "okay, now I need another AI app... do I really have to build all of this from scratch again?" Yeah, me too. And honestly, the thought of duplicating infrastructure for every new product idea felt like copying your entire homework just to change the title. Not fun at all.
So I decided to build one AI backend that could power multiple apps, each with its own workflows, endpoints, and logic, but all sharing the same core services. Let me walk you through how I pulled it off with FastAPI, LangGraph, Google Gemini, Cloud Firestore, and a sprinkle of Celery + Redis for the heavy lifting.
The Problem: One Backend Per App Does Not Scale
Let me paint the picture. I had two AI product ideas:
- ClearReply: You paste a message you received (say, a recruiter's DM or a passive aggressive email from Dave in accounting), and the AI suggests response tones and generates multiple response options for you. Think of it as your personal diplomatic advisor.
- Fit My CV: You upload your CV along with a job description, and the AI tailors your CV to match that role, then generates a polished PDF. Basically, it does the tedious part of job hunting for you.
Now, both apps need authentication, subscription validation, Firestore storage, and AI processing. Building two separate backends would mean maintaining two sets of auth logic, two deployment pipelines, two of everything. That's not engineering, that's just suffering.
The Big Idea: A Multi App Architecture
Instead of duplicating, I designed a single backend with a clean separation between shared services and app specific logic. Every app gets its own folder under the main project for routes, models, services, and storage, but they all plug into the same FastAPI instance and share common utilities like subscription validation and authentication.
The URL structure keeps things tidy:
```
# App A endpoints
/api/v1/app-a/analyze
/api/v1/app-a/generate

# App B endpoints
/api/v1/app-b/jobs/create
/api/v1/app-b/jobs/{job_id}/status
/api/v1/app-b/upload
```
Adding a new app? Just register it in the app config, create its folder, wire up the router, and you're live. It's like adding a new tenant to an apartment building that already has plumbing and electricity. You just furnish the room.
Why LangGraph + Gemini?
Here's where it gets fun. For the AI workflows, I went with LangGraph, a framework for building stateful, graph based AI pipelines. If you've used LangChain before, think of LangGraph as the grown up version that lets you define your AI logic as a directed graph of nodes and edges.
Why does this matter? Because real AI workflows aren't just "send prompt, get response." They have steps, decisions, and state that needs to persist between them.
Take ClearReply's two step workflow:
- Step 1: User pastes a message, the AI analyzes it and returns 4 tone suggestions (Professional, Friendly, Casual, Enthusiastic).
- Step 2: User picks a tone, the AI generates 3 tailored response options in that tone.
With LangGraph, each step is a node in the graph, the conversation state flows between them, and everything is tracked cleanly. No spaghetti prompt chains. No duct taped logic. Just a clean, maintainable workflow graph.
And powering all of this? Google Gemini (specifically Gemini 3 Flash Preview). I'm using structured output so the AI returns proper Pydantic models, not raw text I have to pray is valid JSON. This means the AI's responses are typed, validated, and ready to use straight out of the box.
Background Jobs: The Unsung Hero
Fit My CV has a challenge that ClearReply doesn't: generating a tailored CV with a PDF takes time. You can't make a user stare at a loading spinner for 30+ seconds. That's a UX crime ☹️
So I went with Celery + Redis. When a user submits a CV tailoring job, here's what happens:
- The API creates a job record in Firestore and queues it in Celery.
- The client polls a status endpoint, getting real time progress updates.
- The Celery worker does the heavy lifting: AI processing, PDF generation from HTML templates, and uploading to Firebase Storage.
- Once done, the client fetches the result with the tailored CV data and a download URL.
The PDF templates themselves are pure HTML + CSS (I've got a few boring designs like Professional, Modern, Slate, and Suna), which makes them easy to tweak and version without touching any backend logic. It's the kind of separation of concerns that keeps you sane when things get complex.
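For a taste of what that separation buys you, here's a toy template being filled with tailored CV data using Jinja2. The template, field names, and data are made up for illustration, and the post doesn't pin down the exact templating or HTML-to-PDF stack; a tool like WeasyPrint would then render the filled HTML into the final PDF.

```python
from jinja2 import Template

# A drastically simplified stand-in for one of the HTML/CSS CV designs.
PROFESSIONAL_TEMPLATE = Template("""\
<html>
  <body style="font-family: Georgia, serif;">
    <h1>{{ name }}</h1>
    <p>{{ summary }}</p>
  </body>
</html>
""")

html = PROFESSIONAL_TEMPLATE.render(
    name="Jane Doe",
    summary="Backend engineer, tailored to the target role.",
)
```

Swapping designs is just picking a different template file; the worker code that renders and uploads never changes.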
What I Learned Along the Way
Building this system taught me a few things worth sharing:
- Design for multi tenancy early. Retrofitting a single app backend into a multi app one is painful. If there's even a chance you'll build a second product, plan the structure from day one.
- Graph based AI workflows are a game changer. LangGraph made my AI logic dramatically easier to reason about, test, and extend. Once you go graph, you don't go back.
- Structured output from LLMs saves hours. Instead of parsing messy text responses, getting typed Pydantic models directly from Gemini eliminated an entire class of bugs.
- Background jobs aren't optional for AI products. Anything that takes more than a couple of seconds needs to be async. Your users will thank you.
What's Next?
The architecture is built to grow. The next app I plug in will take a fraction of the time the first two took, and that's the whole point. Build the foundation once, then just keep building on top of it.
If you're thinking about building AI powered products, consider investing time in your backend architecture early. It's not the flashy part, but it's the part that lets you ship fast without burning out.
Oh, and if you're curious to try the apps that came out of all this, here you go:
- ClearReply: Your AI powered message tone advisor. Available on App Store and Google Play.
- Fit My CV: Tailor your CV to any job description in minutes. Available on App Store, Google Play, and on the web.
Give them a spin and let me know what you think!
Till next time! ✌️