Your Music Time Machine works perfectly on your laptop. Users authenticate with Spotify, select a year, and watch their personalized playlist build with audio feature visualizations. Then you deploy it to Railway with three web workers. Suddenly, random errors appear: OperationalError: database is locked. Some users successfully create playlists. Others see error pages. The problem isn't your code. It's SQLite's architecture.
Chapter 24 taught you database migration patterns using a simple weather cache. That was deliberate. You learned the four-phase workflow (Preparation, Setup, Execution, Validation) with a straightforward single-table example. Now you'll apply those exact patterns to your Music Time Machine, a significantly more complex application with OAuth authentication, three related tables, JSON data, and a Flask web interface.
This chapter walks you through the complete migration. You'll analyze your existing schema, map SQLite types to PostgreSQL, write data migration scripts, update your Flask application code, and deploy to production with connection pooling. By the end, your Music Time Machine handles hundreds of concurrent users without blocking, stores audio features in queryable JSONB columns, and demonstrates production-grade database patterns in your portfolio.
Database migration isn't just a technical task. It's interview material. When you explain "I migrated a three-table OAuth application from SQLite to PostgreSQL with zero data loss," you demonstrate professional judgment, production experience, and systematic problem-solving. This chapter helps you document every decision for your portfolio and interviews.
Chapter Roadmap
This chapter takes you through a complete, real-world database migration — from analyzing your existing Music Time Machine schema all the way to deploying a production-ready PostgreSQL-backed application on Railway.
Assessment and Schema Mapping
Analyze your existing SQLite schema across three tables (artists, tracks, playlist_tracks), understand why PostgreSQL solves your concurrency problems, and map every SQLite column to its optimal PostgreSQL equivalent — including JSONB for audio features and proper timestamps.
PostgreSQL Setup and Data Migration
Set up PostgreSQL locally and on Railway, create the production schema, then write a multi-table migration script that transfers artists, tracks, and playlist entries while converting JSON strings to queryable JSONB columns and preserving all foreign key relationships.
Application Code Conversion
Replace sqlite3 with psycopg2 across your Flask application, implement connection pooling, convert query syntax from ? placeholders to %s, update your dashboard routes to use JSONB queries, and verify Chart.js visualizations still render correctly.
Production Deployment
Deploy your migrated Music Time Machine to Railway with a PostgreSQL addon, configure environment variables and connection strings, run production migrations, and verify the live application handles concurrent users without the "database is locked" errors that plagued SQLite.
Advanced PostgreSQL Features
Unlock PostgreSQL-exclusive capabilities for your Music Time Machine: implement full-text search across track and artist names, write advanced JSONB queries for music analysis (filtering by tempo, energy, danceability), and optimize slow queries with EXPLAIN ANALYZE.
Current Architecture Analysis
Before migrating, understand what you built in Chapters 14-18. Your Music Time Machine has three interconnected tables storing Spotify track data, artist information, and user-created playlists. Let's document the current schema systematically.
import sqlite3

# Connect to your Music Time Machine database
conn = sqlite3.connect('music_time_machine.db')
cursor = conn.cursor()

# Get all tables
cursor.execute("""
    SELECT name FROM sqlite_master
    WHERE type='table'
    ORDER BY name
""")
tables = cursor.fetchall()

print("Tables in Music Time Machine:")
for table_name, in tables:
    print(f"\n{'='*50}")
    print(f"Table: {table_name}")
    print('='*50)

    # Get schema
    cursor.execute(f"PRAGMA table_info({table_name})")
    columns = cursor.fetchall()
    for col in columns:
        col_id, name, type_, not_null, default, is_pk = col
        constraints = []
        if is_pk:
            constraints.append("PRIMARY KEY")
        if not_null:
            constraints.append("NOT NULL")
        if default is not None:  # "is not None" so DEFAULT 0 isn't skipped
            constraints.append(f"DEFAULT {default}")
        constraint_str = " ".join(constraints) if constraints else ""
        print(f"  {name}: {type_} {constraint_str}")

    # Get row count
    cursor.execute(f"SELECT COUNT(*) FROM {table_name}")
    count = cursor.fetchone()[0]
    print(f"\n  Rows: {count:,}")

    # Sample data
    cursor.execute(f"SELECT * FROM {table_name} LIMIT 1")
    sample = cursor.fetchone()
    if sample:
        print(f"  Sample: {sample[:3]}...")  # First 3 columns

conn.close()
Tables in Music Time Machine:
==================================================
Table: artists
==================================================
id: INTEGER PRIMARY KEY
spotify_id: TEXT NOT NULL
name: TEXT NOT NULL
created_at: TEXT NOT NULL
Rows: 847
Sample: (1, '4Z8W4fKeB5YxbusRsdQVPb', 'Radiohead')...
==================================================
Table: playlist_tracks
==================================================
id: INTEGER PRIMARY KEY
playlist_id: TEXT NOT NULL
track_id: INTEGER NOT NULL
position: INTEGER NOT NULL
created_at: TEXT NOT NULL
Rows: 2,341
Sample: (1, 'playlist_2020_december', 523)...
==================================================
Table: tracks
==================================================
id: INTEGER PRIMARY KEY
spotify_id: TEXT NOT NULL
name: TEXT NOT NULL
artist_id: INTEGER
album_name: TEXT
release_date: TEXT
duration_ms: INTEGER
popularity: INTEGER
audio_features: TEXT
created_at: TEXT NOT NULL
Rows: 1,872
Sample: (1, '3n3Ppam7vgaVa1iaRUc9Lp', 'Mr. Brightside')...
This output reveals your schema structure: three tables with foreign key relationships, timestamps stored as TEXT, and audio features stored as JSON strings. You have real data (1,872 tracks, 847 artists, 2,341 playlist entries) that must migrate without loss.
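As a preview of the mapping work ahead, here is one plausible PostgreSQL schema for these three tables, kept as a Python constant so it can live alongside a migration script. Column names follow the PRAGMA output above; the type choices (JSONB for audio_features, TIMESTAMPTZ for created_at, SERIAL keys) and the UNIQUE constraints on spotify_id are proposals this chapter will justify, not a finished design.

```python
# Draft PostgreSQL DDL -- a sketch, not the final schema.
POSTGRES_SCHEMA = """
CREATE TABLE artists (
    id          SERIAL PRIMARY KEY,
    spotify_id  TEXT NOT NULL UNIQUE,
    name        TEXT NOT NULL,
    created_at  TIMESTAMPTZ NOT NULL DEFAULT now()
);

CREATE TABLE tracks (
    id             SERIAL PRIMARY KEY,
    spotify_id     TEXT NOT NULL UNIQUE,
    name           TEXT NOT NULL,
    artist_id      INTEGER REFERENCES artists(id),
    album_name     TEXT,
    release_date   TEXT,  -- Spotify dates can be partial ('1999'), so TEXT stays
    duration_ms    INTEGER,
    popularity     INTEGER,
    audio_features JSONB, -- was a TEXT JSON string in SQLite
    created_at     TIMESTAMPTZ NOT NULL DEFAULT now()
);

CREATE TABLE playlist_tracks (
    id          SERIAL PRIMARY KEY,
    playlist_id TEXT NOT NULL,
    track_id    INTEGER NOT NULL REFERENCES tracks(id),
    position    INTEGER NOT NULL,
    created_at  TIMESTAMPTZ NOT NULL DEFAULT now()
);
"""
```

Notice that the foreign keys (tracks.artist_id, playlist_tracks.track_id) now carry explicit REFERENCES clauses, which PostgreSQL will actually enforce.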
Why PostgreSQL Solves Your Concurrency Problem
Remember the architectural difference from Chapter 24: SQLite is an embedded database that runs inside your application process. Only one process can write at a time. When you deploy your Flask application with three Gunicorn workers, each worker is a separate process. When two users simultaneously create playlists (writing to the database), one worker waits while the other writes. If the wait exceeds the timeout, you get "database is locked" errors.
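You can reproduce the lock with nothing but the standard library: one connection holds a write transaction while a second, standing in for another Gunicorn worker, attempts its own write with its timeout set to zero. The file and table names here are throwaways, not your real database.

```python
import os
import sqlite3
import tempfile

# Throwaway database file so we never touch real data
path = os.path.join(tempfile.mkdtemp(), "demo.db")

# isolation_level=None hands transaction control to us explicitly
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE playlists (id INTEGER PRIMARY KEY, name TEXT)")

# Worker 1 opens a write transaction and holds it
writer.execute("BEGIN IMMEDIATE")
writer.execute("INSERT INTO playlists (name) VALUES ('December 2020')")

# Worker 2 (a separate process in production) tries to write.
# timeout=0 means "fail immediately instead of waiting".
other = sqlite3.connect(path, timeout=0, isolation_level=None)
lock_error = ""
try:
    other.execute("BEGIN IMMEDIATE")
except sqlite3.OperationalError as exc:
    lock_error = str(exc)

print(lock_error)  # database is locked

writer.execute("ROLLBACK")
writer.close()
other.close()
```

In production the timeout is nonzero, so the second worker waits instead of failing instantly, but under sustained concurrent writes the waits eventually exceed the timeout and surface as the same error.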
PostgreSQL solves this with client-server architecture. The database runs as a separate server process that coordinates all writes internally. Your three Flask workers connect to this server simultaneously. When User A creates a playlist while User B analyzes their library, PostgreSQL handles both requests without blocking. The server manages concurrency, transactions, and locking automatically.
With SQLite, your Music Time Machine handles about 10-20 requests per minute before conflicts appear. With PostgreSQL and connection pooling, it handles hundreds of concurrent users creating personalized time machines simultaneously. This isn't theoretical — it's the difference between a portfolio project that impresses ("deployed, but has issues") and one that convinces ("production-ready at scale").
Beyond concurrency, PostgreSQL brings features your Music Time Machine can leverage: JSONB columns for queryable audio features, full-text search for track names, and proper timestamp handling with timezone support. These aren't just nice-to-haves. They're interview talking points that demonstrate you understand production database patterns.
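To make those talking points concrete, here are sketches of the kinds of queries PostgreSQL unlocks, written against the draft schema's column names. The `->>` operator extracts a JSONB field as text (cast with `::float` for comparison), and `to_tsvector`/`plainto_tsquery` power full-text search; the thresholds are arbitrary examples.

```python
# Filter tracks by audio features stored in JSONB.
HIGH_ENERGY = """
SELECT name
FROM tracks
WHERE (audio_features->>'energy')::float > 0.8
ORDER BY (audio_features->>'danceability')::float DESC
"""

# Full-text search across track names; %s is the psycopg2 placeholder.
TRACK_SEARCH = """
SELECT name
FROM tracks
WHERE to_tsvector('english', name) @@ plainto_tsquery('english', %s)
"""
```

Neither query is possible in SQLite without pulling every row into Python and filtering by hand; Chapter sections on advanced features return to both.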
Migration Timeline and Approach
This migration follows the four-phase workflow from Chapter 24, but scaled for a multi-table application with production deployment. Here's your realistic timeline:
Preparation and Schema Analysis (1-2 hours)
Document existing schema, analyze data types, plan PostgreSQL equivalents. Create backup of SQLite database. Set up test environment.
PostgreSQL Setup and Schema Creation (1-2 hours)
Install PostgreSQL locally, create production database on Railway, write PostgreSQL schema with proper types, set up Alembic.
Data Migration Script (2-3 hours)
Write migration script for all three tables, handle JSON conversion, preserve foreign keys, verify data integrity.
Application Code Updates (2-3 hours)
Convert database.py to psycopg2, update all queries, implement connection pooling, test OAuth flow thoroughly.
Testing and Validation (2 hours)
Test all Flask routes, verify Chart.js visualizations, test concurrent users, compare SQLite vs PostgreSQL behavior.
Production Deployment (1 hour)
Deploy to Railway with PostgreSQL addon, run migrations, configure environment variables, smoke test live application.
Total time: 8-12 hours spread across 2-3 focused sessions. This isn't a quick afternoon project. You're migrating production data and updating a complex application. Taking time prevents mistakes that require rollbacks.
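The trickiest part of the data-migration phase is converting each SQLite row's TEXT fields into PostgreSQL-ready values. A pure helper like the following (the function name and exact column order are illustrative) parses the JSON string and the ISO timestamp before insertion; with psycopg2 you would additionally wrap the parsed dict in psycopg2.extras.Json so it lands in the JSONB column.

```python
import json
from datetime import datetime

def convert_track_row(row):
    """Convert one SQLite `tracks` row (a tuple in column order)
    into values ready for a PostgreSQL INSERT.

    Sketch assumptions: created_at is an ISO-8601 string,
    audio_features is a JSON string or NULL.
    """
    (id_, spotify_id, name, artist_id, album_name,
     release_date, duration_ms, popularity,
     audio_features, created_at) = row

    features = json.loads(audio_features) if audio_features else None
    created = datetime.fromisoformat(created_at)

    return (id_, spotify_id, name, artist_id, album_name,
            release_date, duration_ms, popularity, features, created)

# Example row with made-up values in the SQLite schema's column order
row = (1, '3n3Ppam7vgaVa1iaRUc9Lp', 'Mr. Brightside', 12,
       'Hot Fuss', '2004-06-15', 222075, 83,
       '{"energy": 0.93, "tempo": 148.0}', '2024-11-02T14:30:00')
converted = convert_track_row(row)
print(converted[8]["energy"])  # 0.93
```

Keeping the conversion in a pure function like this makes it trivial to unit-test against sample rows before touching either database.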
Before starting, copy your music_time_machine.db file to a safe location: music_time_machine_backup_2024_12_08.db. Include the date so you know when you created it. If anything goes wrong during migration, you can restore from this backup instantly.
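A few lines of standard-library Python make that backup reproducible, stamping the copy with today's date automatically (the helper name is our own):

```python
import shutil
from datetime import date
from pathlib import Path

def backup_database(db_path):
    """Copy db_path next to itself with a date-stamped name,
    e.g. music_time_machine.db -> music_time_machine_backup_YYYY_MM_DD.db."""
    src = Path(db_path)
    backup = src.with_name(
        f"{src.stem}_backup_{date.today():%Y_%m_%d}{src.suffix}")
    shutil.copy2(src, backup)  # copy2 preserves file metadata too
    return backup

# backup_database("music_time_machine.db")
```

Close your Flask app (and any open sqlite3 connections) before copying, so the backup isn't taken mid-write.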
For Railway deployment, keep your SQLite version live while you test the PostgreSQL version on a separate Railway project. Once PostgreSQL works perfectly, switch traffic. This gives you a zero-downtime migration path and an instant rollback option if needed.
Learning Objectives
By the end of this chapter, you'll be able to:
- Assess multi-table applications for PostgreSQL migration and plan the migration systematically.
- Map complex SQLite schemas to PostgreSQL with proper data types including JSONB and timestamps.
- Write data migration scripts that preserve foreign key relationships and handle large datasets.
- Convert Flask applications from sqlite3 to psycopg2 with connection pooling.
- Migrate OAuth-enabled applications without breaking authentication flows.
- Deploy PostgreSQL-backed applications to production with zero downtime.
- Implement advanced PostgreSQL features like JSONB queries and full-text search.
- Document database migrations for portfolio presentation and technical interviews.