Building a Full-Stack App: From Idea to Production
I’ve built enough full-stack applications at this point that my process is fairly repeatable. The specific features change per project, but the architecture, tooling, and deployment approach stay mostly the same. Here’s a practical walkthrough of how I go from an idea to a running application in production.
Starting Point: Define What You’re Building
Before touching any code, I need a clear picture of what the application does. Not a detailed spec — just enough to know the core entities, the main user flows, and any integrations.
For this walkthrough, imagine a simple project management tool: users create projects, add tasks, assign them to team members, and track progress. Nothing groundbreaking, but enough to illustrate real decisions.
I sketch the data model first. What are the entities? What are the relationships? For this app: Users, Projects, Tasks, and Assignments. This informs everything downstream — the database schema, the API shape, and the frontend components.
Backend: Node.js + TypeScript + MikroORM
I build backends with Node.js and TypeScript. TypeScript catches a class of bugs at compile time that would otherwise show up in production, and the developer experience with modern tooling is excellent.
For the ORM, I use MikroORM with PostgreSQL. The entity definitions are clean and the migration system is reliable:
```typescript
import { Entity, Enum, ManyToOne, PrimaryKey, Property } from '@mikro-orm/core';

export enum TaskStatus {
  TODO = 'todo',
  IN_PROGRESS = 'in_progress',
  DONE = 'done',
}

@Entity()
export class Task {
  @PrimaryKey()
  id!: number;

  @Property()
  title!: string;

  @Property({ type: 'text', nullable: true })
  description?: string;

  @Enum(() => TaskStatus)
  status: TaskStatus = TaskStatus.TODO;

  @ManyToOne(() => Project)
  project!: Project;

  @ManyToOne(() => User, { nullable: true })
  assignee?: User;

  @Property()
  createdAt: Date = new Date();
}
```
MikroORM’s Unit of Work pattern means database operations are collected and flushed in batches instead of firing queries one at a time. It also maintains an identity map, so the same entity is never accidentally loaded twice within a single request. These things matter as the application grows.
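To make the idea concrete, here’s a toy sketch of an identity map plus batched flush in plain TypeScript. This is an illustration of the pattern, not MikroORM’s actual implementation:

```typescript
// Toy illustration of an identity map + unit of work.
// NOT MikroORM's code — just the shape of the idea.
interface ToyTask {
  id: number;
  title: string;
}

class ToyEntityManager {
  private identityMap = new Map<number, ToyTask>();
  private pendingInserts: ToyTask[] = [];
  queriesIssued = 0;

  // Simulated findOne: only "hits the database" on an identity-map miss.
  find(id: number): ToyTask {
    const cached = this.identityMap.get(id);
    if (cached) return cached; // same instance, no second query
    this.queriesIssued++;
    const entity: ToyTask = { id, title: `task-${id}` };
    this.identityMap.set(id, entity);
    return entity;
  }

  // persist() only queues the write; nothing is sent yet.
  persist(entity: ToyTask): void {
    this.pendingInserts.push(entity);
  }

  // flush() sends all queued writes as one batched round trip.
  flush(): void {
    if (this.pendingInserts.length === 0) return;
    this.queriesIssued++; // one batched INSERT for everything pending
    for (const e of this.pendingInserts) this.identityMap.set(e.id, e);
    this.pendingInserts = [];
  }
}

const em = new ToyEntityManager();
const a = em.find(1);
const b = em.find(1); // identity-map hit: same object, no extra query
em.persist({ id: 2, title: 'write spec' });
em.persist({ id: 3, title: 'review spec' });
em.flush(); // both inserts go out in one batch

console.log(a === b, em.queriesIssued); // true 2
```

The real Unit of Work also tracks dirty entities and deletions, but the batching and identity-map behavior above is the part that saves queries.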
For the API layer, I typically use Express or Fastify — whichever fits the project. I structure the codebase by feature, not by type. So instead of a controllers/ folder and a services/ folder, I have a tasks/ folder with everything related to tasks in one place. This makes it easier to find things and easier to delete features cleanly.
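For the project management example, the backend tree might look something like this (file names are illustrative):

```
src/
  tasks/
    task.entity.ts
    task.routes.ts
    task.service.ts
    task.test.ts
  projects/
    project.entity.ts
    project.routes.ts
    project.service.ts
  users/
    ...
  shared/
    db.ts
```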
Database: PostgreSQL
PostgreSQL is my default database for almost everything. It’s reliable, the tooling is mature, and it handles both relational data and JSON fields well.
I generate migrations with MikroORM’s CLI. Every schema change gets a migration file that’s committed to the repo. No manual SQL in production, no “just run this ALTER TABLE” messages. Migrations run automatically during deployment.
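The day-to-day loop with the MikroORM CLI is short (shown with npx; a configured MikroORM config file is assumed):

```shell
# Generate a migration from the diff between entities and the current schema
npx mikro-orm migration:create

# Apply pending migrations (this is also what runs during deployment)
npx mikro-orm migration:up

# Roll back the latest migration if something goes wrong
npx mikro-orm migration:down
```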
For the project management app, the schema is straightforward — a few tables with foreign keys. But even for simple schemas, I always set up proper indexes on columns I’ll query frequently. A missing index on a frequently filtered column is the most common performance issue I see in client projects.
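In MikroORM that’s a decorator on the entity property (a sketch; Postgres does not create indexes on foreign-key columns by itself, so it’s worth being explicit):

```typescript
import { Entity, Index, ManyToOne } from '@mikro-orm/core';

@Entity()
export class Task {
  // ...other properties as in the entity above...

  // Task lists are filtered by project on almost every request,
  // so index the foreign-key column explicitly.
  @Index()
  @ManyToOne(() => Project)
  project!: Project;
}
```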
Frontend: React
For the frontend, I reach for React. It’s widely understood, well-documented, and the ecosystem covers nearly everything I need. I typically use Vite for the build tool — fast dev server, fast builds, minimal config.
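A minimal Vite config covers most of what a CRUD frontend needs. The `/api` proxy target here is an assumption about where the backend listens in development:

```typescript
// vite.config.ts — minimal setup; the proxy target is an assumption.
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
  plugins: [react()],
  server: {
    proxy: {
      // Forward API calls to the local backend during development
      '/api': 'http://localhost:3000',
    },
  },
});
```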
I keep the frontend simple. State management is usually just React’s built-in hooks plus a lightweight data fetching library. I don’t reach for Redux or complex state management unless the application genuinely needs it. Most CRUD apps don’t.
The frontend talks to the backend through a REST API. I define the API contract early so frontend and backend development can happen in parallel. If I’m working solo on both, I’ll build the API first, test it with curl or Hoppscotch, then build the frontend against it.
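Defining that contract can be as light as a file of shared types plus a small runtime guard. The names here are illustrative, not a published spec:

```typescript
// Hypothetical shared types for the tasks endpoints.
export type TaskStatus = 'todo' | 'in_progress' | 'done';

export interface TaskDto {
  id: number;
  title: string;
  description: string | null;
  status: TaskStatus;
  projectId: number;
  assigneeId: number | null;
  createdAt: string; // ISO 8601 timestamp
}

export interface CreateTaskRequest {
  title: string;
  description?: string;
  projectId: number;
}

// A small runtime guard the frontend can run before trusting a response.
export function isTaskDto(value: unknown): value is TaskDto {
  const v = value as TaskDto;
  return (
    typeof value === 'object' &&
    value !== null &&
    typeof v.id === 'number' &&
    typeof v.title === 'string' &&
    ['todo', 'in_progress', 'done'].includes(v.status)
  );
}
```

Sharing these types between backend and frontend (a workspace package, or a plain copied file) keeps the two sides honest while they’re developed in parallel.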
Deployment: Docker + Caddy + Drone CI
This is where everything comes together. My deployment setup is intentionally simple:
Docker — Both the backend and frontend get their own Dockerfiles. The backend runs as a Node.js container. The frontend is built as static files and served by Caddy. Multi-stage Docker builds keep the images small:
```dockerfile
# Backend
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
# Install production dependencies only — dev dependencies stay in the build stage
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
CMD ["node", "dist/index.js"]
```
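The frontend image follows the same multi-stage shape, ending in a Caddy stage that serves the built assets (a sketch; the paths assume a Vite build outputting to `dist/`):

```dockerfile
# Frontend
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Serve the static build with Caddy
FROM caddy:2-alpine
COPY --from=build /app/dist /srv
```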
Caddy — Caddy handles reverse proxying and automatic HTTPS. The Caddyfile is minimal:
```
app.example.com {
    # API requests go to the backend container
    handle /api/* {
        reverse_proxy backend:3000
    }

    # Everything else is the built frontend, with an SPA fallback
    handle {
        root * /srv
        try_files {path} /index.html
        file_server
    }
}
```
No manual certificate management. No Certbot. Caddy provisions and renews Let’s Encrypt certificates automatically.
Drone CI — When I push a git tag, Drone CI builds the Docker images and pushes them to the container registry. Watchtower runs on the VPS and detects new images, pulling and restarting containers automatically.
The pipeline looks like:
- Push a git tag (e.g., v1.2.0)
- Drone CI triggers, builds the Docker images, and pushes them to the registry
- Watchtower on the VPS detects the new image
- The container restarts with the new version
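A .drone.yml for that tag-triggered flow can stay small (a sketch; the registry URL, image name, and secret names are placeholders):

```yaml
kind: pipeline
type: docker
name: release

# Only run the pipeline for git tags
trigger:
  event:
    - tag

steps:
  - name: backend-image
    image: plugins/docker
    settings:
      repo: registry.example.com/app-backend
      registry: registry.example.com
      tags:
        - latest
        - ${DRONE_TAG}
      username:
        from_secret: registry_user
      password:
        from_secret: registry_password
```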
The entire infrastructure runs on a single VPS. No Kubernetes, no managed container services, no complicated orchestration. For most applications, a single server with Docker Compose is more than enough.
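One way to wire that single server is a docker-compose.yml along these lines (a sketch; image names, mounts, and credentials are placeholders):

```yaml
services:
  caddy:
    # Caddy serves the static frontend and proxies the API
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - ./frontend-dist:/srv
      - caddy_data:/data

  backend:
    image: registry.example.com/app-backend:latest
    environment:
      DATABASE_URL: postgres://app:${DB_PASSWORD}@db:5432/app
    depends_on:
      - db

  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: app
    volumes:
      - pgdata:/var/lib/postgresql/data

  watchtower:
    # Watchtower needs the Docker socket to pull and restart containers
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

volumes:
  caddy_data:
  pgdata:
```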
Tradeoffs and Decisions
Every stack has tradeoffs. Here are the ones I’ve accepted:
- Single server — This means no horizontal scaling. For the vast majority of projects I work on, a single VPS handles the load easily. If a project needs to scale horizontally, I’ll use a different architecture from the start.
- MikroORM over Prisma — Prisma is more popular, but MikroORM gives me more control over the Unit of Work and doesn’t require a separate binary. The migration tooling is also more predictable in my experience.
- No serverless — I prefer managing my own server over dealing with cold starts, vendor lock-in, and function-level debugging. The cost difference is negligible at the scale I work at.
- Caddy over Nginx — I switched to Caddy for the automatic HTTPS and simpler config. For raw throughput at massive scale, Nginx is still faster. But I’ve never hit that limit on a client project.
The Result
This stack lets me go from zero to a deployed, production-ready application in a few weeks. It’s boring in the best way — no surprises, no exotic dependencies, no “works on my machine” problems.
The whole thing is reproducible. Clone the repo, run docker compose up, and you have the same environment locally as production. New developers can onboard quickly. Clients can find other developers to maintain it if I’m not available.
That’s the goal — build something that works, ship it, and make sure it stays running without me babysitting it.
If you need a full-stack application built, I’m available for freelance work. Get in touch — I typically respond within 24 hours.