27/10/2025

How to build modern full stack applications

Learn how to set up a modern tech stack for building full-stack applications, using Bun as a package manager and runtime to build APIs, write scripts, and streamline development.

Chris Mitchell, Founder of Teardown

This blog is an extension of a talk I gave at a Vancouver.dev event on October 22nd, 2025, called "How I Built Teardown".

If you're looking for the slides, you can find them here.


About Me

Hey, I'm Chris!

I've spent close to 10 years building software in startups from seed to scale-up.

In that time, I've built everything from small websites to large-scale SaaS applications.

Currently, I work at Lazer Technologies as a Senior Software Engineer while building Teardown on the side.

If you want to learn more about me, visit my website delacour.co.nz or find me on LinkedIn.


Disclaimer

Everything here is based on my own experiences, opinions, and preferences, which may not be suitable for all projects or use cases.


Building from the Ground Up

When I first start building a new product — whether it's a new SaaS, mobile app, website, API, or even all of them combined — there are a few key areas I focus on to ensure I have a strong foundation from the start.

  • Repo Structure - How is the code organized?
  • Frontend - How does the user interact with the product?
  • Backend - API routes, background tasks, and more
  • Database - Where is the data being stored?


Repo Structure

This is arguably the first thing in any project — it's the foundation that everything else is built on top of.

There are pros and cons to any approach, but in the age of AI and LLMs, having everything in one monorepo makes tremendous sense. Setting up clear rules for AI to follow is crucial — otherwise, you're leaving it to guess where to look. Give your AI tools a head start by defining exactly where to find things.

Using Bun Workspaces

To achieve this, I use Bun Workspaces. My initial directory structure typically looks something like this:

my-project/
├── scripts/        # Shared scripts
├── apps/           # Deployable applications
│   ├── backend/    # API server
│   ├── web/        # Landing page app
│   └── dashboard/  # Dashboard app
└── packages/       # Shared libraries
    ├── types/      # Shared TypeScript types
    ├── sdk/        # API client
    ├── ui/         # React components
    ├── styles/     # Shared styles
    └── tsconfig/   # Shared TypeScript config

This structure is pretty self-explanatory, but let me break it down:

  • scripts/ - Shared scripts for running dev, build, test, lint, format, and other common tasks
  • apps/ - All deployable applications live here (web apps, mobile apps, APIs, etc.)
  • packages/ - All shared libraries, including shared types, SDKs, UI components, styles, and configuration

Most of the time, there will be at least one app, if not more (Dashboard, App, Admin, etc.).

With packages, I typically have a few shared types, SDKs, and UI components that are almost the same in every project, just with minor project-specific adjustments.
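Bun picks all of this up from the root package.json, which lists something like "workspaces": ["apps/*", "packages/*"] so every app can import the shared packages by name. As a hedged sketch (the @my-project/types package name and the Project type are placeholders for whatever your own packages export), consuming a shared package from an app looks like this:

// apps/backend/src/projects.ts - consuming a shared workspace package (illustrative)
// "@my-project/types" is a placeholder; use the "name" declared in packages/types/package.json.
import type { Project } from "@my-project/types";

export function describeProject(project: Project): string {
  return `${project.name} (${project.id})`;
}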


Scripts

Scripts are always part of any project, so establishing clear patterns for engineers to follow is essential.

For this, I also use Bun and leverage the Bun runtime to execute the scripts.

The reason I chose Bun here is that I can write my scripts in TypeScript and interact with the shell using Bun's $ template literal (docs).

Example: Generating Supabase Types

For example, when using Supabase, there isn't a built-in script to easily generate types for your project — but Supabase does provide a CLI.

By combining Bun and the Supabase CLI, you can write a script like this to generate types for your Supabase project:

#!/usr/bin/env bun

import { $ } from "bun";

const PROJECT_ID = "<<YOUR_PROJECT_ID>>";
const GENERATED_TYPES_FILE = "./src/generated.types.ts";

// Remove the previous types file if it exists
await $`rm -f ${GENERATED_TYPES_FILE}`;

// Generate updated types from Supabase
await $`supabase gen types typescript --project-id=${PROJECT_ID} --schema=public,v1 > ${GENERATED_TYPES_FILE}`;

// Automatically update the git repo (optional)
await $`git add ${GENERATED_TYPES_FILE}`;
await $`git commit -m "chore: update generated types from supabase"`;

When you run this script, it will generate the types for the specified Supabase project and commit them to the repository on the current branch.

You could even extend this to run automatically in a GitHub Action before a PR is merged, supplying the project ID via secrets.

You might also want to either exclude the generated file from your linting and formatting rules, or add a step in the script to format the file after generation. For this, you could use a tool like Biome.
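If you go the formatting route, it's a one-liner appended to the script above (assuming Biome is installed as a dev dependency in the workspace):

// Format the generated file so it matches the rest of the repo.
// `$` and GENERATED_TYPES_FILE come from the script above; Biome is assumed to be installed.
await $`bunx biome format --write ${GENERATED_TYPES_FILE}`;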


Frontend

When it comes to the frontend, I typically use TanStack Start, a full-stack framework powered by TanStack Router for React.

Why TanStack Start Over Next.js?

People often ask me why I pick TanStack Start over Next.js.

I find Next.js has too many footguns for my liking. I've seen countless Reddit posts from developers complaining about unexpected $10k+ bills from Vercel because of configuration mistakes or unoptimized code.

With TanStack Start, there's no "directive magic" to worry about. You get a clean, simple, and predictable file structure combined with strong type safety and excellent developer experience. The DevTools for all TanStack packages make debugging and inspecting your apps straightforward.
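To give you a feel for that file structure, here's a minimal sketch of a file-based route (the route path and component are illustrative; createFileRoute comes from TanStack Router, which Start is built on):

// src/routes/index.tsx - a minimal file-based route (illustrative)
import { createFileRoute } from "@tanstack/react-router";

export const Route = createFileRoute("/")({
  component: Home,
});

function Home() {
  return <h1>Hello from TanStack Start</h1>;
}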

Do your own research and decide what works best for you, but I highly recommend TanStack Start for new projects. If you're already using Next.js and satisfied with it, there's no need to switch.

I've built several projects with TanStack Start and have been extremely happy with the results.


Backend

Now let's talk about the backend. There's endless debate about monoliths versus microservices, but I think it's a bit of a false dichotomy.

Monoliths Are Fine (Actually, They're Great)

Monoliths are not only fine — they're often the right choice. I encourage people to start with a monolith. Microservices are complex and should only be used when you have a genuinely complex application that requires splitting into smaller, independent modules.

As your application grows, you may eventually need to split the backend into smaller modules, which can lead to a microservices architecture.

Most applications can run on a monolithic backend for quite some time. The right time to consider microservices is typically when you start splitting off dedicated teams to work on different parts of the application — for example, a "payments" team handling everything payment-related.

Both architectures have their place and should be used when appropriate, but for most applications, a monolithic backend is perfectly adequate.

Tech Stack: Elysia + Bun

When it comes to the tech stack, I've recently landed on Elysia and, again, Bun.

The flexibility Elysia provides, combined with the performance of Bun, is outstanding. It's fast, easy to use, and simple to deploy.

Using Bun, I compile the TypeScript server into a single binary file, which makes running multiple instances straightforward and provides a slight performance boost. But really, for me, it's simply easier to manage.
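For context, the ./src/index.ts that gets compiled in the Dockerfile below can be as small as this (the /health route and port 3000 are placeholders):

// src/index.ts - a minimal Elysia server (sketch)
import { Elysia } from "elysia";

const app = new Elysia()
  .get("/health", () => ({ status: "ok" }))
  .listen(3000);

console.log(`API listening on ${app.server?.hostname}:${app.server?.port}`);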

Here's an example Dockerfile for Bun + Elysia deployment (minimal, single binary):

# Example Dockerfile for Bun + Elysia deploy (minimal, single binary)

FROM oven/bun:1.3 AS builder
WORKDIR /app

COPY . .

RUN bun install --frozen-lockfile

# Compile Elysia app to a binary named `app` (no .js extension)
RUN bun build ./src/index.ts --outfile=dist/app --compile

FROM oven/bun:1.3 AS runner
WORKDIR /app

# Copy just the compiled binary (not JS, no node_modules, no source)
COPY --from=builder /app/dist/app ./

EXPOSE 3000

ENTRYPOINT ["./app"]

This Dockerfile builds your Bun + Elysia backend as a single binary named app, and the ENTRYPOINT runs that compiled binary directly, the same way you would run any executable from the shell: ./app.


Database

When it comes to databases, I typically use PostgreSQL.

The choice depends on the project and its requirements, but for most projects, I use Supabase because of its ease of use and the fact that it's a fully managed database.
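This also ties back nicely to the generated types from the Scripts section: supabase-js accepts the generated Database type as a generic, so every query comes back fully typed. A hedged sketch, assuming the environment variable names and the "projects" table are your own:

// Typed Supabase client (sketch); env var names and the "projects" table are placeholders.
import { createClient } from "@supabase/supabase-js";
import type { Database } from "./generated.types";

const supabase = createClient<Database>(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);

const { data, error } = await supabase.from("projects").select("*");
if (error) throw error;
console.log(data); // rows typed from the generated Database definition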

If I want to manage my own PostgreSQL instance rather than use Supabase, I typically deploy it on Render.com.

Why Not Just Use AWS Directly?

You might wonder why not use AWS or another cloud provider directly.

The main reason is simplicity. Setting up AWS is tedious, and I don't want to spend time managing infrastructure provisioning.

Render.com provides a simple, user-friendly interface for deploying databases, allowing me to focus on building the application instead of managing infrastructure.


Final Thoughts

Building new products doesn't have to be overwhelming. By establishing a solid foundation with:

  • A well-organized monorepo structure
  • Modern tooling like Bun for performance and developer experience
  • Frameworks that prioritize simplicity and speed
  • Infrastructure that stays out of your way

you can focus on what really matters: shipping fast and iterating faster to build products your customers love.

Have questions about this setup or want to share your own approach?
Let's connect on LinkedIn!

React Native developers, check out Teardown. We help you coordinate mobile releases and force updates when users are on outdated versions of your app.
