Schema & Migrations

Turbine supports two workflows for managing your database schema: database-first (introspect an existing database) and schema-first (define your schema in TypeScript and push it).


Database-First (Introspect)

This is the default workflow. Point Turbine at your existing database and generate types:

Terminal
npx turbine generate

Turbine reads information_schema and pg_catalog to discover:

  • All tables, columns, and their Postgres types
  • Primary keys, unique constraints, NOT NULL
  • Foreign key relationships between tables
  • Indexes

It then generates TypeScript types that exactly match your database.

Type Mapping

| Postgres Type | TypeScript Type |
|---|---|
| bigserial, bigint, integer, smallint | number |
| text, varchar, char | string |
| boolean | boolean |
| timestamptz, timestamp, date | Date |
| jsonb, json | unknown (or typed via generics) |
| uuid | string |
| real, double precision, numeric | number |
| bytea | Buffer |
| text[], integer[], etc. | string[], number[], etc. |
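The mapping above can be sketched as a small lookup function. This is an illustration of the rules in the table, not Turbine's actual source; the `tsTypeFor` helper and `PG_TO_TS` map are names invented here:

```typescript
// Illustrative sketch of the Postgres -> TypeScript type mapping above.
// Not Turbine's internals; names here are invented for the example.
const PG_TO_TS: Record<string, string> = {
  bigserial: 'number', bigint: 'number', integer: 'number', smallint: 'number',
  text: 'string', varchar: 'string', char: 'string', uuid: 'string',
  boolean: 'boolean',
  timestamptz: 'Date', timestamp: 'Date', date: 'Date',
  jsonb: 'unknown', json: 'unknown',
  real: 'number', 'double precision': 'number', numeric: 'number',
  bytea: 'Buffer',
};

function tsTypeFor(pgType: string): string {
  // Array types map element-wise: text[] -> string[], integer[] -> number[]
  if (pgType.endsWith('[]')) return `${tsTypeFor(pgType.slice(0, -2))}[]`;
  return PG_TO_TS[pgType] ?? 'unknown';
}
```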

Column Name Conversion

Turbine automatically converts between naming conventions:

  • Postgres: snake_case (e.g., created_at, org_id)
  • TypeScript: camelCase (e.g., createdAt, orgId)

This happens at the query builder level. Your database columns stay as-is.
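The conversion is mechanical and lossless in both directions for conventional names. A minimal sketch of the two directions (helper names are illustrative, not part of Turbine's public API):

```typescript
// Sketch of the snake_case <-> camelCase conversion applied at the query
// builder boundary. Illustrative helpers, not Turbine's actual API.
function toCamelCase(snake: string): string {
  return snake.replace(/_([a-z0-9])/g, (_, c: string) => c.toUpperCase());
}

function toSnakeCase(camel: string): string {
  return camel.replace(/([A-Z])/g, (c) => `_${c.toLowerCase()}`);
}
```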


Schema-First (defineSchema)

Define your database schema in TypeScript using defineSchema() and sync it to your database with turbine push.

defineSchema API

TypeScript
// turbine/schema.ts
import { defineSchema } from 'turbine-orm';

export default defineSchema({
  organizations: {
    id:        { type: 'serial', primaryKey: true },
    name:      { type: 'text', notNull: true },
    slug:      { type: 'varchar', maxLength: 100, unique: true, notNull: true },
    plan:      { type: 'varchar', maxLength: 50, default: "'free'" },
    createdAt: { type: 'timestamp', default: 'now()' },
  },

  users: {
    id:        { type: 'serial', primaryKey: true },
    email:     { type: 'text', unique: true, notNull: true },
    name:      { type: 'text', notNull: true },
    role:      { type: 'varchar', maxLength: 50, default: "'user'" },
    orgId:     { type: 'bigint', notNull: true, references: 'organizations.id' },
    createdAt: { type: 'timestamp', default: 'now()' },
  },

  posts: {
    id:        { type: 'serial', primaryKey: true },
    title:     { type: 'text', notNull: true },
    content:   { type: 'text' },
    published: { type: 'boolean', default: 'false' },
    userId:    { type: 'bigint', notNull: true, references: 'users.id' },
    orgId:     { type: 'bigint', notNull: true, references: 'organizations.id' },
    createdAt: { type: 'timestamp', default: 'now()' },
  },

  comments: {
    id:        { type: 'serial', primaryKey: true },
    body:      { type: 'text', notNull: true },
    postId:    { type: 'bigint', notNull: true, references: 'posts.id' },
    userId:    { type: 'bigint', notNull: true, references: 'users.id' },
    createdAt: { type: 'timestamp', default: 'now()' },
  },
});

Column Types

| Type Name | Postgres Type | Notes |
|---|---|---|
| 'serial' | BIGSERIAL | Auto-incrementing integer |
| 'bigint' | BIGINT | 64-bit integer |
| 'integer' | INTEGER | 32-bit integer |
| 'smallint' | SMALLINT | 16-bit integer |
| 'text' | TEXT | Unlimited text |
| 'varchar' | VARCHAR(n) | Use with maxLength |
| 'boolean' | BOOLEAN | true/false |
| 'timestamp' | TIMESTAMPTZ | Timestamp with time zone |
| 'date' | DATE | Date only |
| 'json' | JSONB | Binary JSON |
| 'uuid' | UUID | UUID type |
| 'real' | REAL | 32-bit float |
| 'double' | DOUBLE PRECISION | 64-bit float |
| 'numeric' | NUMERIC | Arbitrary precision |
| 'bytea' | BYTEA | Binary data |

Column Options

| Option | Type | Description |
|---|---|---|
| type | ColumnTypeName | Required. Column type |
| primaryKey | boolean | PRIMARY KEY constraint |
| notNull | boolean | NOT NULL constraint |
| nullable | boolean | Explicitly nullable |
| unique | boolean | UNIQUE constraint |
| default | string | DEFAULT expression (raw SQL, e.g., 'now()' or "'active'") |
| references | string | Foreign key in 'table.column' format |
| maxLength | number | Max length for varchar columns |
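To make the options concrete, here is a sketch of how a single column definition could be rendered into a DDL fragment. It mirrors the generated SQL shown later on this page, but it is an illustration, not Turbine's internals:

```typescript
// Illustrative sketch: rendering one column definition into a DDL fragment.
// Mirrors the generated SQL shown below, but is not Turbine's actual code.
interface ColumnDef {
  type: string;
  primaryKey?: boolean;
  notNull?: boolean;
  unique?: boolean;
  default?: string;     // raw SQL expression, e.g. "now()" or "'free'"
  references?: string;  // 'table.column' format
  maxLength?: number;
}

function renderColumn(name: string, col: ColumnDef): string {
  // Map the schema type name to its Postgres type (subset shown)
  const sqlType =
    col.type === 'serial' ? 'BIGSERIAL' :
    col.type === 'varchar' ? `VARCHAR(${col.maxLength})` :
    col.type === 'timestamp' ? 'TIMESTAMPTZ' :
    col.type.toUpperCase();
  const parts = [name, sqlType];
  if (col.primaryKey) parts.push('PRIMARY KEY');
  if (col.unique) parts.push('UNIQUE');
  if (col.notNull) parts.push('NOT NULL');
  if (col.default !== undefined) parts.push(`DEFAULT ${col.default}`);
  if (col.references) {
    const [table, column] = col.references.split('.');
    parts.push(`REFERENCES ${table}(${column})`);
  }
  return parts.join(' ');
}
```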

Push to Database

Terminal
# Preview the SQL that will run
npx turbine push --dry-run

# Apply changes
npx turbine push

# Regenerate types after pushing
npx turbine generate

The push command:

  1. Loads your schema file
  2. Introspects the current database state
  3. Computes a diff (new tables, new columns, type changes)
  4. Generates DDL statements in dependency order (referenced tables first)
  5. Creates indexes on foreign key columns automatically
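Step 3 is the heart of push. A minimal table-level sketch of that diff (column-type changes and drops omitted for brevity; this is a simplification, not Turbine's actual diff engine):

```typescript
// Minimal sketch of the diff step: compare the desired schema against the
// introspected database and emit DDL for what is missing. Simplified
// illustration only, not Turbine's actual diff engine.
type TableSet = Record<string, Record<string, { type: string }>>;

function diffTables(desired: TableSet, current: TableSet): string[] {
  const ddl: string[] = [];
  for (const [table, columns] of Object.entries(desired)) {
    if (!(table in current)) {
      // Whole table is new; column rendering elided here
      ddl.push(`CREATE TABLE ${table} (...)`);
      continue;
    }
    for (const column of Object.keys(columns)) {
      if (!(column in current[table])) {
        ddl.push(`ALTER TABLE ${table} ADD COLUMN ${column} ...`);
      }
    }
  }
  return ddl;
}
```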

Generated SQL Example

For the schema above, turbine push generates:

SQL
CREATE TABLE organizations (
    id BIGSERIAL PRIMARY KEY,
    name TEXT NOT NULL,
    slug VARCHAR(100) UNIQUE NOT NULL,
    plan VARCHAR(50) DEFAULT 'free',
    created_at TIMESTAMPTZ DEFAULT now()
);

CREATE TABLE users (
    id BIGSERIAL PRIMARY KEY,
    email TEXT UNIQUE NOT NULL,
    name TEXT NOT NULL,
    role VARCHAR(50) DEFAULT 'user',
    org_id BIGINT NOT NULL REFERENCES organizations(id),
    created_at TIMESTAMPTZ DEFAULT now()
);

CREATE INDEX idx_users_org_id ON users (org_id);

-- ... posts and comments tables follow

Migration Workflow

For production deployments, use migrations instead of push.

Create a Migration

Terminal
npx turbine migrate create add_users_table

This creates a timestamped SQL file:

turbine/migrations/20240315120000_add_users_table.sql
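Judging from the example filename, the prefix is a UTC timestamp in YYYYMMDDHHMMSS form. A sketch of how such a name could be derived (the exact format Turbine uses is an assumption based on the example above):

```typescript
// Sketch of deriving the timestamped migration filename (UTC, YYYYMMDDHHMMSS).
// The format is inferred from the example filename; treat it as an assumption.
function migrationFilename(name: string, now: Date): string {
  // toISOString() gives "2024-03-15T12:00:00.000Z"; strip separators,
  // keep the first 14 digits (date + time, no milliseconds)
  const stamp = now.toISOString().replace(/[-:T]/g, '').slice(0, 14);
  return `${stamp}_${name}.sql`;
}
```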

Edit the file with UP and DOWN sections:

SQL
-- UP
CREATE TABLE users (
    id BIGSERIAL PRIMARY KEY,
    email TEXT UNIQUE NOT NULL,
    name TEXT NOT NULL,
    org_id BIGINT NOT NULL REFERENCES organizations(id),
    created_at TIMESTAMPTZ DEFAULT now()
);

CREATE INDEX idx_users_org_id ON users (org_id);

-- DOWN
DROP TABLE IF EXISTS users;
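The -- UP and -- DOWN markers split the file into the two scripts the runner needs. A sketch of that split (the parsing helper is illustrative; only the marker convention comes from the example above):

```typescript
// Sketch of splitting a migration file on its "-- UP" / "-- DOWN" markers.
// The markers match the example above; the helper itself is illustrative.
function splitMigration(sql: string): { up: string; down: string } {
  const downIndex = sql.indexOf('-- DOWN');
  const up = sql.slice(
    sql.indexOf('-- UP') + '-- UP'.length,
    downIndex === -1 ? undefined : downIndex
  );
  const down = downIndex === -1 ? '' : sql.slice(downIndex + '-- DOWN'.length);
  return { up: up.trim(), down: down.trim() };
}
```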

Apply Migrations

Terminal
# Apply all pending migrations
npx turbine migrate up

# Apply only the next one
npx turbine migrate up --step 1

Each batch of migrations runs inside a single transaction: if any migration fails, the entire batch is rolled back.

Rollback

Terminal
# Rollback the last migration
npx turbine migrate down

# Rollback the last 3
npx turbine migrate down --step 3

Check Status

Terminal
npx turbine migrate status

Migration Tracking

Turbine creates a _turbine_migrations table in your database to track which migrations have been applied. This table is created automatically on the first migrate up.
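Conceptually, "pending" is just the set difference between the migration files on disk and the rows in the tracking table. A sketch of that selection (a simplification; the tracking table's actual columns are not documented here):

```typescript
// Sketch: pending migrations are files on disk with no row in the tracking
// table. Simplified illustration; actual tracking columns are an assumption.
function pendingMigrations(files: string[], applied: Set<string>): string[] {
  return files
    .filter((f) => !applied.has(f))
    .sort(); // timestamped filenames sort chronologically
}
```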


Recommended Workflow

Development

Use push for rapid iteration:

Terminal
# Edit turbine/schema.ts
# Preview changes
npx turbine push --dry-run
# Apply
npx turbine push
# Regenerate types
npx turbine generate

Production

Use migrations for reproducible, versioned changes:

Terminal
# Create a migration
npx turbine migrate create add_users_table
# Edit the SQL file
# Apply in CI/CD
npx turbine migrate up
# Regenerate types
npx turbine generate

Hybrid

Use push for prototyping, then generate a migration from the diff:

Terminal
# Prototype with push
npx turbine push

# When ready for production, create a migration with the same SQL
npx turbine push --dry-run > migration.sql
npx turbine migrate create my_feature
# Paste the SQL into the migration file