Local Development Setup¶
Overview¶
Local development setup encompasses all the tools, services, and configurations needed to run a Django application on a developer's machine. This includes Python environments, databases, caches, AWS service mocks, and supporting infrastructure.
Setup Philosophy¶
Core Principles
- Automation first - scripts handle complex setup
- Idempotent operations - safe to run multiple times
- Environment isolation - don't pollute global system
- Service parity - local mirrors production closely
- Fast feedback - quick iteration cycles
Local vs Container Development¶
Two development approaches exist:
- Native/Local: Install tools directly on host OS
- Container-based: Use devcontainers (Docker)
When to use each:
| Aspect | Native | Container |
|---|---|---|
| Setup time | Longer (OS-specific) | Shorter (Docker handles it) |
| Performance | Faster (native execution) | Slightly slower (container overhead) |
| Consistency | Varies by OS | Identical across team |
| Debugging | Full tool access | Limited by container |
| Resource usage | Lighter | Heavier (Docker daemon) |
Theory: Container-based development (devcontainers) is recommended for teams prioritizing consistency. Native development suits solo developers or those needing maximum performance for specific workflows (data science, machine learning).
Prerequisites¶
Required Tools¶
Before starting, install these tools:
Python Version Manager¶
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
# Windows
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
Theory: uv is a modern Python package and environment manager. It's 10-100x faster than pip and handles:
- Python version installation
- Virtual environment creation
- Dependency resolution
- Package installation
Alternative: pyenv + pip + virtualenv (traditional approach)
Docker and Docker Compose¶
# macOS
brew install --cask docker
# Linux
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
# Windows
# Download Docker Desktop from docker.com
Theory: Docker runs supporting services (MySQL, Redis, LocalStack) in containers. This avoids installing databases directly on the host and enables version-specific testing.
Just (Command Runner)¶
# macOS
brew install just
# Linux
curl --proto '=https' --tlsv1.2 -sSf https://just.systems/install.sh | bash
# Via npm (all platforms)
npm install -g just-install
Theory: just is a command runner (like make but better). It provides a consistent interface for common tasks:
just init-dev-local  # Initialize local environment
just test            # Run tests
just pcr             # Run pre-commit checks
direnv (Optional but Recommended)¶
# macOS
brew install direnv
# Linux
sudo apt install direnv
# Shell integration (add to ~/.bashrc or ~/.zshrc)
eval "$(direnv hook bash)" # for bash
eval "$(direnv hook zsh)" # for zsh
Theory: direnv automatically activates/deactivates virtual environments when entering/leaving directories. It loads .envrc files, setting environment variables and activating venvs automatically.
Without direnv: run `source venv-myapp-3.13.5-dev/bin/activate` manually in every new shell.
With direnv: `cd` into the project directory and the environment activates automatically (and deactivates when you leave).
Operating System Dependencies¶
Different operating systems require different packages:
macOS¶
# Install Homebrew if not present
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Install dependencies
brew update
brew install \
openssl \
readline \
sqlite3 \
xz \
zlib \
tcl-tk \
libxml2 \
libxmlsec1
Theory: Python C extensions require system libraries. OpenSSL provides SSL/TLS support, readline enables interactive shell features, libxml2/libxmlsec1 support XML processing (SAML, SOAP), etc.
Linux (Ubuntu/Debian)¶
sudo apt update
sudo apt install -y \
make \
build-essential \
libssl-dev \
zlib1g-dev \
libbz2-dev \
libreadline-dev \
libsqlite3-dev \
wget \
curl \
llvm \
libncursesw5-dev \
xz-utils \
tk-dev \
libxml2-dev \
libxmlsec1-dev \
libxmlsec1-openssl \
libffi-dev \
liblzma-dev \
pkg-config \
clang
Theory: Linux requires build tools (build-essential, clang) and library headers (-dev packages). The -dev packages contain header files needed to compile C extensions.
Windows (WSL2 Recommended)¶
For Windows, use WSL2 (Windows Subsystem for Linux). Install it from an elevated PowerShell with `wsl --install -d Ubuntu`, then follow the Linux instructions above inside the WSL shell.
Theory: Windows has fundamental differences from Unix (path separators, line endings, permissions). WSL2 provides a real Linux kernel, ensuring compatibility with production environments (which are typically Linux).
Bootstrap Script¶
Automated Setup¶
The bootstrap script automates environment creation:
#!/bin/bash
# bootstrap_venv.sh
set -e  # Exit on error
export UV_VENV_CLEAR=1  # Clear existing venvs before creating

echo "Starting development environment setup..."

# Detect OS and install dependencies
if [[ "$OSTYPE" == "darwin"* ]]; then
    # macOS
    if ! command -v brew &> /dev/null; then
        echo "Homebrew not found. Install from https://brew.sh"
        exit 1
    fi
    brew update
    brew install openssl readline sqlite3 xz zlib
elif [[ "$OSTYPE" == "linux"* ]]; then
    # Linux
    sudo apt update
    sudo apt install -y build-essential libssl-dev zlib1g-dev \
        libbz2-dev libreadline-dev libsqlite3-dev wget curl
fi

# Install uv if not present
if ! command -v uv &> /dev/null; then
    echo "Installing uv..."
    curl -LsSf https://astral.sh/uv/install.sh | sh
    export PATH="$HOME/.cargo/bin:$PATH"
fi

# Install direnv if not present
if ! command -v direnv &> /dev/null; then
    echo "Installing direnv..."
    if [[ "$OSTYPE" == "darwin"* ]]; then
        brew install direnv
    elif [[ "$OSTYPE" == "linux"* ]]; then
        sudo apt-get install -y direnv
    fi
    # Add shell hook
    if [[ "$SHELL" == */zsh ]]; then
        grep -q 'eval "$(direnv hook zsh)"' ~/.zshrc || \
            echo 'eval "$(direnv hook zsh)"' >> ~/.zshrc
    else
        grep -q 'eval "$(direnv hook bash)"' ~/.bashrc || \
            echo 'eval "$(direnv hook bash)"' >> ~/.bashrc
    fi
fi

# Create virtual environment
TARGET_PYTHON="3.13.5"
VENV_NAME="venv-myapp-${TARGET_PYTHON}-dev"

echo "Installing Python $TARGET_PYTHON"
uv python install "$TARGET_PYTHON"

echo "Creating virtual environment: $VENV_NAME"
uv venv "$VENV_NAME" --python "$TARGET_PYTHON"

# Install dependencies
echo "Installing dependencies..."
uv pip install -r requirements/requirements-dev.txt --python "$VENV_NAME/bin/python"

# Install Playwright browsers for testing
echo "Installing Playwright browsers..."
"$VENV_NAME/bin/python" -m playwright install --with-deps chromium

# Create .envrc for direnv auto-activation
cat > .envrc << EOF
# Auto-activate virtual environment
VENV_PATH="$VENV_NAME"
if [ -d "\$VENV_PATH" ]; then
    source "\$VENV_PATH/bin/activate"
    export VIRTUAL_ENV_PROMPT="(myapp-dev) "
else
    echo "Warning: Virtual environment not found at \$VENV_PATH"
fi
EOF

# Allow direnv
direnv allow

echo "✅ Setup completed successfully!"
echo "Python version: $("$VENV_NAME/bin/python" --version)"
echo "Virtual environment: $VENV_NAME"
Key Components:
Error Handling¶
Theory: `set -e` makes the script exit immediately if any command fails, preventing cascading errors. Without it, failed steps might go unnoticed, leaving the setup incomplete.
OS Detection¶
if [[ "$OSTYPE" == "darwin"* ]]; then
    # macOS-specific
elif [[ "$OSTYPE" == "linux"* ]]; then
    # Linux-specific
fi
Theory: Different operating systems require different package managers and package names. Detection enables cross-platform compatibility.
Tool Installation¶
Theory: Check if tool exists before installing. Makes the script idempotent (safe to run multiple times). command -v checks if a command is available in PATH.
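The same existence check is available from Python via `shutil.which`, which mirrors `command -v`; a small sketch (the `ensure_tool` helper is illustrative, not part of any library):

```python
import shutil

def ensure_tool(name: str) -> str:
    """Return the full path to a command, or raise with a setup hint."""
    path = shutil.which(name)  # equivalent of `command -v name`
    if path is None:
        raise RuntimeError(f"{name} not found on PATH; install it before continuing")
    return path

# `env` exists on any POSIX system, so this resolves to e.g. /usr/bin/env
print(ensure_tool("env"))
```

This keeps Python-based setup scripts as idempotent as the shell version: the check is read-only and safe to repeat.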
Virtual Environment Creation¶
Theory: uv downloads and installs the exact Python version, then creates a venv using it. This ensures:
- Correct Python version (matches production)
- Isolated dependencies (doesn't affect system Python)
- Reproducible environments (same version across team)
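A quick sanity check that the running interpreter really is the venv's (and not the system Python) compares `sys.prefix` with `sys.base_prefix`:

```python
import sys

def in_virtualenv() -> bool:
    # Inside a venv, sys.prefix points at the venv directory while
    # sys.base_prefix still points at the base Python installation.
    return sys.prefix != sys.base_prefix

print("venv active:", in_virtualenv())
```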
direnv Configuration¶
cat > .envrc << 'EOF'
source venv-myapp-3.13.5-dev/bin/activate
export VIRTUAL_ENV_PROMPT="(myapp-dev) "
EOF
direnv allow
Theory: .envrc runs when entering the directory. It activates the venv automatically. direnv allow is required for security (prevents malicious .envrc files from running without confirmation).
Requirements Compilation¶
Before installing dependencies, compile requirements files:
# Generate requirements for different environments
TARGET_PYTHON="3.13.5"

uv pip compile \
    requirements/requirements-production.in \
    -o requirements/requirements-production-py-${TARGET_PYTHON}.txt \
    --python-version=${TARGET_PYTHON}

uv pip compile \
    requirements/requirements-dev.in \
    -o requirements/requirements-dev-py-${TARGET_PYTHON}.txt \
    --python-version=${TARGET_PYTHON}

# Create symlinks for convenience
ln -sf requirements-production-py-${TARGET_PYTHON}.txt \
    requirements/requirements-production.txt
ln -sf requirements-dev-py-${TARGET_PYTHON}.txt \
    requirements/requirements-dev.txt
Theory: Compiling requirements:
- Resolves dependencies: Finds compatible versions of all packages
- Pins versions: Creates reproducible installs
- Platform-specific: Compiles for target Python version and OS
- Fast installation: Pre-resolved dependencies install faster
Input files (.in): High-level dependencies
Output files (.txt): Fully resolved with all transitive dependencies
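For illustration only (the package names and versions here are made up), an input file and a fragment of its compiled output might look like:

```
# requirements/requirements-dev.in — high-level, loosely constrained
django>=5.0,<6.0
pytest

# requirements/requirements-dev-py-3.13.5.txt — generated, fully pinned
django==5.0.6
    # via -r requirements/requirements-dev.in
asgiref==3.8.1
    # via django
pytest==8.2.0
    # via -r requirements/requirements-dev.in
```

Only the `.in` files are edited by hand; the `.txt` files are regenerated whenever dependencies change.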
Supporting Services¶
Docker Network¶
Create a shared network for services:
docker network create dev-network
Theory: Docker networks enable service discovery. Containers on the same network can reference each other by service name (e.g. db, redis, localstack) rather than by IP address.
Database Setup¶
MySQL Container¶
# docker-compose.dev.local.yml
services:
  db:
    image: mysql:8.0
    restart: always
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: myapp
      MYSQL_USER: myapp
      MYSQL_PASSWORD: myapp
    volumes:
      - db-data:/var/lib/mysql
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 5s
      retries: 10
      interval: 10s
    networks:
      - dev-network

volumes:
  db-data:

networks:
  dev-network:
    external: true
Theory:
- Image tag: Use specific versions (`:8.0`), not `:latest`, for reproducibility
- Port mapping: `3306:3306` allows host access (for GUI tools like MySQL Workbench)
- Environment variables: Initialize the database and user on first run
- Volume: `db-data` persists data across container restarts
- Healthcheck: Ensures the database is ready before dependent services start
Start the database:
docker compose -f docker-compose.dev.local.yml up -d db
Theory: -d runs in detached mode (background). Services start and keep running until stopped.
Database Initialization¶
After starting the database, initialize schema:
# Run migrations
DJANGO_SETTINGS_MODULE='myapp.settings.development' \
python manage.py migrate
# Create test database (if using separate DB for tests)
DJANGO_SETTINGS_MODULE='myapp.settings.development' \
python manage.py migrate --database=test_db
# Create cache tables
DJANGO_SETTINGS_MODULE='myapp.settings.development' \
python manage.py createcachetable
Theory: Django migrations are idempotent. Running them multiple times is safe; Django tracks which migrations have been applied. The --database flag targets specific database configurations (for multi-database setups).
Redis Setup¶
# docker-compose.dev.local.yml
services:
  redis:
    image: redis:7-alpine
    restart: always
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    command: redis-server --appendonly yes
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - dev-network

volumes:
  redis-data:
Theory:
- alpine: Minimal image variant (~5MB vs ~100MB)
- appendonly: Enables persistence (writes to disk)
- Volume: Persists Redis data across restarts
- Healthcheck: `redis-cli ping` returns PONG when ready
LocalStack (AWS Services)¶
LocalStack mocks AWS services for local development:
# docker-compose.dev.local.yml
services:
  localstack:
    image: localstack/localstack:latest
    ports:
      - "4566:4566"
    environment:
      - SERVICES=ssm,s3,sqs,ses
      - DEBUG=1
      - DATA_DIR=/var/lib/localstack/data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:4566/_localstack/health"]
      timeout: 10s
      retries: 10
      interval: 5s
      start_period: 30s
    volumes:
      - localstack-data:/var/lib/localstack
    networks:
      - dev-network

volumes:
  localstack-data:
Theory: LocalStack provides:
- SSM: Parameter Store for configuration
- S3: Object storage for files
- SQS: Message queues
- SES: Email sending
This eliminates the need for AWS accounts during development and enables offline work.
LocalStack Initialization¶
After starting LocalStack, initialize resources:
# Wait for LocalStack to be ready
sleep 30
# Create S3 bucket
aws --endpoint-url=http://localhost:4566 s3 mb s3://myapp-private
# Initialize SSM parameters
python scripts/init-localstack-ssm.py
init-localstack-ssm.py:
import boto3

# Connect to LocalStack
ssm = boto3.client(
    'ssm',
    endpoint_url='http://localhost:4566',
    region_name='us-east-1',
    aws_access_key_id='test',
    aws_secret_access_key='test',
)

# Create parameters
parameters = {
    '/myapp/dev/database-host': 'db',
    '/myapp/dev/database-name': 'myapp',
    '/myapp/dev/redis-url': 'redis://redis:6379/0',
    '/myapp/dev/secret-key': 'dev-secret-key-change-in-production',
}

for name, value in parameters.items():
    ssm.put_parameter(
        Name=name,
        Value=value,
        Type='String',
        Overwrite=True,
    )
    print(f"Created parameter: {name}")
Theory: Django applications read configuration from SSM in production. LocalStack enables the same pattern locally, ensuring configuration code paths are tested.
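The parameter names above follow a `/app/environment/key` convention; a small helper (hypothetical, not part of any library) keeps that convention in one place:

```python
def param_name(app: str, env: str, key: str) -> str:
    """Build an SSM parameter name like '/myapp/dev/database-host'."""
    for part in (app, env, key):
        if not part or "/" in part:
            raise ValueError(f"invalid path segment: {part!r}")
    return f"/{app}/{env}/{key}"

print(param_name("myapp", "dev", "database-host"))  # /myapp/dev/database-host
```

Centralizing the naming scheme means the init script and the Django settings loader cannot drift apart.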
Email Testing (Mailpit)¶
# docker-compose.dev.local.yml
services:
  mailpit:
    image: axllent/mailpit
    restart: unless-stopped
    ports:
      - "8025:8025"  # Web UI
      - "1025:1025"  # SMTP
    environment:
      MP_MAX_MESSAGES: 5000
      MP_SMTP_AUTH_ACCEPT_ANY: 1
      MP_SMTP_AUTH_ALLOW_INSECURE: 1
    volumes:
      - mailpit-data:/data
    networks:
      - dev-network

volumes:
  mailpit-data:
Theory: Mailpit captures all outgoing email, preventing accidental sends to real addresses during development. The web UI (http://localhost:8025) displays the captured messages; Django only needs its SMTP settings pointed at Mailpit's SMTP port (1025).
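A development settings sketch for Mailpit (the `mailpit` hostname assumes Django also runs inside the compose network; use `localhost` when running on the host):

```python
# settings/development.py — route all email through Mailpit
EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
EMAIL_HOST = "localhost"  # "mailpit" from inside the Docker network
EMAIL_PORT = 1025         # Mailpit's SMTP port
EMAIL_USE_TLS = False     # Mailpit accepts plain SMTP locally
DEFAULT_FROM_EMAIL = "dev@example.com"
```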
Certificate Generation¶
For HTTPS development, generate self-signed certificates:
#!/bin/bash
# scripts/setup-certificates.sh
CERT_DIR="$HOME/certs"
mkdir -p "$CERT_DIR"
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout "$CERT_DIR/selfsigned.key" \
    -out "$CERT_DIR/selfsigned.crt" \
    -subj "/CN=localhost" \
    -addext "subjectAltName = DNS:localhost,DNS:*.localhost,DNS:*.local"
chmod 644 "$CERT_DIR/selfsigned.crt"
chmod 600 "$CERT_DIR/selfsigned.key"
echo "✅ Certificates created in $CERT_DIR"
Theory:
- -x509: Create self-signed certificate (not a CSR)
- -nodes: No passphrase (for development convenience)
- -days 365: Valid for one year
- -newkey rsa:2048: Generate 2048-bit RSA key
- subjectAltName: Support localhost and wildcard subdomains
Security Note: Self-signed certificates trigger browser warnings. This is expected. For team-wide development, consider a local CA or mkcert.
Using Certificates¶
Configure Django to use certificates:
# settings/development.py
import os
SECURE_SSL_REDIRECT = False # Don't force HTTPS (development)
# For runserver_plus (django-extensions)
RUNSERVERPLUS_SERVER_ADDRESS_PORT = '0.0.0.0:443'
RUNSERVERPLUS_CERT_FILE = '/certs/selfsigned.crt'
RUNSERVERPLUS_KEY_FILE = '/certs/selfsigned.key'
Run with HTTPS:
python manage.py runserver_plus --cert-file ~/certs/selfsigned.crt \
--key-file ~/certs/selfsigned.key
Theory: Testing HTTPS locally ensures cookies (Secure flag), CORS, and third-party integrations (OAuth) work correctly before deployment.
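The settings whose behavior changes under HTTPS are worth toggling during local HTTPS testing; an illustrative sketch (these values exercise Secure-cookie behavior, they are not production hardening):

```python
# settings/development.py — flags that only take effect over HTTPS
SESSION_COOKIE_SECURE = True  # session cookie sent only over HTTPS
CSRF_COOKIE_SECURE = True     # same for the CSRF cookie
```

With these enabled, a plain-HTTP runserver session will silently lose its cookies, which is exactly the class of bug local HTTPS testing catches before deployment.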
Environment Variables¶
.env File Management¶
Create a .env.local file for development:
# .env.local
# Django Settings
DJANGO_SETTINGS_MODULE=myapp.settings.development
DJANGO_SECRET_KEY=dev-secret-key-change-in-production
LOG_LEVEL=DEBUG
ALLOW_PYTEST_BYPASS=true
# Database
DATABASE_HOST=db
DATABASE_NAME=myapp
DATABASE_USER=myapp
DATABASE_PASSWORD=myapp
# Redis
REDIS_URL=redis://redis:6379/0
# AWS (LocalStack)
AWS_DEFAULT_REGION=us-east-1
AWS_ENDPOINT_URL=http://localstack:4566
AWS_ACCESS_KEY_ID=test
AWS_SECRET_ACCESS_KEY=test
LOCALSTACK_HOST=localstack
# Email
EMAIL_HOST=mailpit
EMAIL_PORT=1025
EMAIL_USE_TLS=false
Theory: .env files centralize configuration. Keep them out of version control by listing `.env*` in `.gitignore` (committing only the template below).
Provide an .env.example template for new developers:
# .env.example
DJANGO_SETTINGS_MODULE=myapp.settings.development
DJANGO_SECRET_KEY=
DATABASE_HOST=db
DATABASE_NAME=myapp
# ... (with blank/placeholder values)
Loading Environment Variables¶
Django can load .env files automatically:
# settings/base.py
from pathlib import Path
import environ
env = environ.Env()
# Read .env file
environ.Env.read_env(Path(__file__).resolve().parent.parent / '.env.local')
# Use environment variables
SECRET_KEY = env('DJANGO_SECRET_KEY')
DEBUG = env.bool('DEBUG', default=False)
Theory: django-environ provides type-safe environment variable access. The bool(), int(), list() methods convert strings to appropriate types.
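The coercion django-environ performs can be sketched with the stdlib, which also shows why raw `os.environ` values need it (everything arrives as a string); the helper names here are illustrative:

```python
import os

TRUTHY = {"1", "true", "yes", "on"}

def env_bool(name: str, default: bool = False) -> bool:
    """Interpret an environment variable as a boolean."""
    val = os.environ.get(name)
    return default if val is None else val.strip().lower() in TRUTHY

def env_list(name: str, default: tuple = ()) -> list:
    """Split a comma-separated environment variable into a list."""
    val = os.environ.get(name)
    if val is None:
        return list(default)
    return [item.strip() for item in val.split(",") if item.strip()]

os.environ["DEBUG"] = "true"
os.environ["ALLOWED_HOSTS"] = "localhost, 127.0.0.1"
print(env_bool("DEBUG"))          # True
print(env_list("ALLOWED_HOSTS"))  # ['localhost', '127.0.0.1']
```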
Complete Setup Script¶
Combine all steps into a single initialization command:
# justfile
init-dev-local:
    @echo "🚀 Initializing local development environment..."
    # Setup Python environment
    @echo "🐍 Setting up Python environment..."
    bash bootstrap_venv.sh
    # Create Docker network
    @echo "🌐 Creating Docker network..."
    docker network inspect dev-network >/dev/null 2>&1 || \
        docker network create dev-network
    # Start services
    @echo "🐳 Starting Docker services..."
    docker compose -f docker-compose.dev.local.yml up -d
    # Wait for services to be ready
    @echo "⏳ Waiting for services to start..."
    sleep 30
    # Initialize LocalStack
    @echo "☁️ Initializing LocalStack..."
    awslocal s3 mb s3://myapp-private
    python scripts/init-localstack-ssm.py
    # Run migrations
    @echo "🗄️ Running database migrations..."
    DJANGO_SETTINGS_MODULE='myapp.settings.development' \
        python manage.py migrate
    # Create cache tables
    @echo "💾 Creating cache tables..."
    DJANGO_SETTINGS_MODULE='myapp.settings.development' \
        python manage.py createcachetable
    # Create superuser
    @echo "👤 Creating superuser..."
    DJANGO_SETTINGS_MODULE='myapp.settings.development' \
        python manage.py createsuperuser --noinput \
        --username admin --email admin@example.com || true
    @echo "✅ Local development environment ready!"
    @echo "Run 'python manage.py runserver' to start the application"
Theory: A single command (just init-dev-local) handles the entire setup. This:
- Reduces onboarding time (minutes vs hours)
- Ensures consistency (everyone follows the same steps)
- Documents the process (justfile is self-documenting)
- Enables automation (CI can use the same commands)
Development Workflow¶
Daily Workflow¶
After initial setup, the daily workflow is:
# 1. Start Docker services (if not already running)
docker compose -f docker-compose.dev.local.yml up -d
# 2. Activate virtual environment (or let direnv do it)
cd myproject # direnv activates automatically
# 3. Pull latest code
git pull
# 4. Run migrations (if schema changed)
python manage.py migrate
# 5. Start development server
python manage.py runserver
# 6. In another terminal: watch Tailwind CSS
just tailwind-watch
Theory: Most days, steps 1-4 are instant (no changes). Step 5 starts the server, step 6 watches for CSS changes and rebuilds automatically.
Common Commands¶
Organize common tasks in justfile:
# Run development server
run:
    python manage.py runserver 0.0.0.0:8000

# Run with HTTPS
run-https:
    python manage.py runserver_plus --cert-file ~/certs/selfsigned.crt \
        --key-file ~/certs/selfsigned.key

# Run tests
test:
    pytest

# Run specific test file
test-file FILE:
    pytest {{FILE}}

# Run pre-commit checks
pcr:
    pre-commit run --all-files

# Build Tailwind CSS
tailwind-build:
    npx tailwindcss -i ./static/css/input.css \
        -o ./static/css/output.css \
        --minify

# Watch Tailwind CSS
tailwind-watch:
    npx tailwindcss -i ./static/css/input.css \
        -o ./static/css/output.css \
        --watch

# Create migration
makemigrations:
    python manage.py makemigrations

# Apply migrations
migrate:
    python manage.py migrate

# Create superuser
createsuperuser:
    python manage.py createsuperuser

# Django shell
shell:
    python manage.py shell_plus

# Collect static files
collectstatic:
    python manage.py collectstatic --noinput

# Database shell
dbshell:
    python manage.py dbshell

# Clear cache
clear-cache:
    python manage.py clear_cache
Theory: Justfile provides a discoverable command interface. Run just to see available commands. This eliminates the need to remember complex command syntax.
Troubleshooting¶
Port Conflicts¶
Symptom: docker compose up fails with "port already in use"
Diagnosis:
# Find process using port
lsof -i :3306 # macOS/Linux
netstat -ano | findstr :3306 # Windows
# Check running containers
docker ps
Solutions:
- Stop the conflicting service
- Change the port mapping in `docker-compose.yml`
- Stop unused containers: `docker stop $(docker ps -q)`
Database Connection Failures¶
Symptom: Django can't connect to database
Diagnosis:
# Check container status
docker ps
# Check container logs
docker logs myapp-db
# Test connection from host
mysql -h 127.0.0.1 -P 3306 -u myapp -p
Solutions:
- Ensure the database container is running and healthy
- Verify credentials in `.env.local`
- Check `DATABASE_HOST` (use `localhost` or `127.0.0.1` from the host, `db` from containers)
- Wait longer (database startup can take 10-30s)
Virtual Environment Issues¶
Symptom: ModuleNotFoundError despite installing package
Diagnosis:
# Check which Python is active
which python
# Check installed packages
pip list
# Verify virtual environment is activated
echo $VIRTUAL_ENV
Solutions:
- Activate the virtual environment: `source venv-myapp-3.13.5-dev/bin/activate`
- Reinstall dependencies: `uv pip install -r requirements/requirements-dev.txt`
- Clear and recreate the venv: `rm -rf venv-* && bash bootstrap_venv.sh`
LocalStack Not Ready¶
Symptom: AWS SDK errors when accessing S3/SSM
Diagnosis:
# Check LocalStack health
curl http://localhost:4566/_localstack/health
# Check LocalStack logs
docker logs localstack
Solutions:
- Wait longer (LocalStack takes 20-60s to start)
- Restart LocalStack: `docker restart localstack`
- Check resource limits (LocalStack needs adequate memory)
Permission Errors¶
Symptom: "Permission denied" when running scripts or accessing files
macOS/Linux:
# Make script executable
chmod +x bootstrap_venv.sh
# Fix file ownership (if running as different user)
sudo chown -R $USER:$USER ~/src/myproject
Windows (WSL2):
# Give newly created files sane permissions in WSL
echo "umask 022" >> ~/.profile
Theory: File permissions differ between operating systems. Scripts need execute permission (+x). WSL2 sometimes sets overly permissive defaults; setting the umask fixes this.
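The `chmod +x` step can also be done portably from a Python setup script using the `stat` permission bits (the `make_executable` helper is illustrative):

```python
import os
import stat

def make_executable(path: str) -> None:
    """Add the execute bit wherever the read bit is set (like `chmod +x`)."""
    mode = os.stat(path).st_mode
    # 0o444 masks the three read bits; shifting right by 2 turns
    # each read bit into the matching execute bit (0o111).
    mode |= (mode & 0o444) >> 2
    os.chmod(path, mode)

# e.g. make_executable("bootstrap_venv.sh")
```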
Best Practices¶
Do's¶
Recommended Practices
- Script everything: Automate setup and common tasks
- Document prerequisites: List required tools in the README
- Use .env files: Keep configuration out of code
- Pin versions: Specify exact versions for reproducibility
- Test initialization: Regularly test setup on fresh machines
- Provide examples: Include `.env.example` and sample data
- Use just/make: Standardize the command interface
- Enable direnv: Automatic environment activation
Don'ts¶
Avoid These Patterns
- Don't commit secrets: Keep `.env` files out of git
- Don't use latest tags: Pin Docker image versions
- Don't skip health checks: Wait for services to be ready
- Don't install globally: Use virtual environments
- Don't assume OS: Write cross-platform scripts
- Don't hardcode paths: Use environment variables
- Don't skip documentation: Explain non-obvious steps
Platform-Specific Notes¶
macOS¶
- Performance: Docker Desktop on macOS is slower than Linux due to VM overhead
- File watching: Use `:cached` volume mount consistency for better performance
- SSL: May need to add certificates to Keychain Access for browser trust
Linux¶
- Performance: Native Docker performance (no VM)
- Permissions: May need to add your user to the `docker` group: `sudo usermod -aG docker $USER`
- systemd: Enable Docker on boot: `sudo systemctl enable docker`
Windows (WSL2)¶
- Line endings: Configure git to use LF: `git config --global core.autocrlf input`
- File system: Keep code in the WSL file system (`~/project`), not Windows (`/mnt/c/`)
- Performance: The WSL2 file system is much faster than `/mnt/c/`
Next Steps¶
- Devcontainers: Container-based development environment
- Docker Configuration: Deep dive into Dockerfile patterns
- Django Settings: Multi-environment configuration
- Testing Setup: Configure test environment