Django Testing with pytest

Testing Django applications with pytest provides powerful fixtures, clearer test organization, and better integration with modern Python testing tools. This guide covers production-ready testing patterns for Django 5.2+ projects using pytest-django.

Why pytest for Django Testing?

pytest offers significant advantages over Django's built-in unittest-based test framework:

  • Fixture system: Reusable, composable test dependencies with clear scope management
  • Parameterization: Test the same logic with multiple inputs without code duplication
  • Better assertions: Clear, readable assertion failures with detailed error messages
  • Plugin ecosystem: Integration with coverage, VCR, Playwright, and other testing tools
  • Cleaner syntax: Less boilerplate code compared to class-based unittest tests
  • Parallel execution: Run tests concurrently for faster feedback cycles
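The parameterization point above is easy to make concrete. A minimal, self-contained sketch (the `word_count` function is a hypothetical stand-in for real business logic):

```python
import pytest

def word_count(text):
    """Hypothetical business logic under test."""
    return len(text.split())

# One test function covers several input/expected pairs; pytest
# reports each case as a separate test.
@pytest.mark.parametrize(
    "text,expected",
    [
        ("hello world", 2),
        ("", 0),
        ("  padded   input  ", 2),
    ],
)
def test_word_count(text, expected):
    assert word_count(text) == expected
```

Running `pytest -v` lists each parametrized case individually, so a failure pinpoints the exact input that broke.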

Django Testing Philosophy

Django testing should verify behavior at the appropriate level: unit tests for business logic, integration tests for database operations, and UI tests for user interactions. pytest's fixture system naturally supports this layered approach.
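A rough sketch of those three layers, using markers from the configuration below (`total_price` and the test bodies are illustrative, not from a real project):

```python
import pytest

def total_price(unit_price, quantity, tax_rate):
    """Pure business logic: the natural target for fast unit tests."""
    return round(unit_price * quantity * (1 + tax_rate), 2)

def test_total_price():
    # Unit level: no database, no Django machinery.
    assert total_price(10.0, 3, 0.2) == 36.0

@pytest.mark.django_db
def test_order_total_persists():
    """Integration level: would exercise the ORM (body omitted in this sketch)."""

@pytest.mark.ui
def test_checkout_in_browser():
    """UI level: would drive a real browser via Playwright (body omitted)."""
```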

pytest-django Setup and Configuration

Installation

Install pytest-django and related testing tools:

pip install pytest-django pytest-cov pytest-xdist pytest-recording

pytest Configuration (pytest.ini)

Create a pytest.ini file in your Django project root:

[pytest]

python_files = tests.py test_*.py *_tests.py
addopts =
    --cov-report=html
    --cov-config=.coveragerc
    --strict-markers
    --create-db
    --cov=myproject
    --exitfirst

markers =
    email: marks tests that send email (deselect with '-m "not email"')
    live_test: marks tests that require live external services
    vcr: marks tests that use VCR cassettes for API mocking
    ui: marks tests as UI tests that require a browser
    e2e: marks tests as end-to-end tests
    slow: marks tests as slow (taking more than 2 seconds)
    tenant: marks tests that need a specific tenant database

filterwarnings =
    default::DeprecationWarning:__main__
    ignore::DeprecationWarning
    ignore::PendingDeprecationWarning
    ignore::ImportWarning
    ignore::ResourceWarning

Key configuration options:

  • python_files: Defines test file naming patterns
  • --strict-markers: Requires all markers to be defined, preventing typos
  • --create-db: Re-creates the test database on every run, even if one exists from a previous session
  • --exitfirst: Stops after the first test failure for faster feedback
  • markers: Custom markers for categorizing and filtering tests

conftest.py Structure

The conftest.py file provides shared fixtures and configuration for all tests. Create this file at the root of your test directory:

"""Test fixtures and configuration for the Django test suite."""

import os
import pytest
from django.contrib.auth import get_user_model
from django.conf import settings

def pytest_sessionstart(session):
    """Validate test environment settings before any tests run."""
    # Verify correct settings module
    allowed_settings = ["myproject.settings.development"]
    current_settings = os.environ.get("DJANGO_SETTINGS_MODULE", "")
    if current_settings not in allowed_settings:
        pytest.exit(
            f"pytest must be run with DJANGO_SETTINGS_MODULE={allowed_settings[0]}. "
            f"Current: '{current_settings}'"
        )

    # Safety check: verify database hosts are localhost
    SAFE_HOSTS = ["127.0.0.1", "localhost", "::1", "db", "mysql"]
    unsafe_dbs = []
    for alias, config in settings.DATABASES.items():
        host = config.get("HOST", "")
        if host and host not in SAFE_HOSTS:
            unsafe_dbs.append((alias, host))

    if unsafe_dbs:
        unsafe_details = ", ".join([f"{alias} -> {host}" for alias, host in unsafe_dbs])
        pytest.exit(
            f"SAFETY ERROR: Tests cannot run against non-local databases!\n"
            f"Found: {unsafe_details}\n"
            f"Allowed hosts: {', '.join(SAFE_HOSTS)}"
        )

def pytest_configure(config):
    """Configure test-specific Django settings."""
    # Enable test-specific settings overrides
    settings.ALLOW_PYTEST_BYPASS = True

    # Silence system checks not relevant for testing
    settings.SILENCED_SYSTEM_CHECKS += [
        "security.W004",  # SECRET_KEY disclosure
        "security.W008",  # SSL redirect disabled
        "security.W012",  # SECURE_HSTS_SECONDS not set
    ]

    # Use dummy cache backend for tests
    settings.CACHES = {
        "default": {
            "BACKEND": "django.core.cache.backends.dummy.DummyCache",
        }
    }

@pytest.fixture
def api_client():
    """Create a REST framework API test client."""
    from rest_framework.test import APIClient
    return APIClient()

@pytest.fixture
def test_password():
    """Provide a standard test password."""
    return "strong-test-pass"  # nosec B105

@pytest.fixture
def create_user(db, django_user_model, test_password):
    """Factory function for creating test users."""
    def make_user(**kwargs):
        kwargs.setdefault("password", test_password)
        if "username" not in kwargs:
            import uuid
            kwargs["username"] = str(uuid.uuid4())
        return django_user_model.objects.create_user(**kwargs)
    return make_user

@pytest.fixture
def authenticated_client(client, create_user):
    """Returns a client authenticated as a test user."""
    user = create_user()
    client.force_login(user)
    return client

Database Safety Checks

Always implement safety checks to prevent tests from running against production databases. The pytest_sessionstart hook validates database hosts before any tests execute.
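The host validation itself is plain Python and easy to unit test in isolation. A distilled version of the check, independent of Django but using the same SAFE_HOSTS list as the conftest.py above:

```python
SAFE_HOSTS = {"127.0.0.1", "localhost", "::1", "db", "mysql"}

def find_unsafe_databases(databases):
    """Return (alias, host) pairs pointing at non-local hosts.

    `databases` has the same shape as Django's settings.DATABASES.
    An empty HOST means a local socket connection, which is safe.
    """
    unsafe = []
    for alias, cfg in databases.items():
        host = cfg.get("HOST", "")
        if host and host not in SAFE_HOSTS:
            unsafe.append((alias, host))
    return unsafe
```

Keeping the logic in a standalone function like this lets you test the safety net itself without spinning up a Django test session.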

Test Database Configuration

Single Database Setup

For projects with a single database, configure test database settings:

# settings/testing_databases.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'test_myproject',
        'USER': 'postgres',
        'PASSWORD': 'postgres',
        'HOST': 'localhost',
        'PORT': '5432',
        'TEST': {
            'NAME': 'test_myproject',
        },
    }
}

Multiple Database Configuration

Django applications often use multiple databases for different purposes. Configure each database with appropriate test settings:

# settings/testing_databases.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'test_main',
        'USER': 'test',
        'PASSWORD': 'test',
        'HOST': 'localhost',
        'PORT': '3306',
        'TEST': {'NAME': 'test_main'},
    },
    'tenant': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'test_tenant',
        'USER': 'test',
        'PASSWORD': 'test',
        'HOST': 'localhost',
        'PORT': '3306',
        'TEST': {'NAME': 'test_tenant'},
    },
    'readonly': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'test_tenant',  # Points to same DB as tenant
        'USER': 'test',
        'PASSWORD': 'test',
        'HOST': 'localhost',
        'PORT': '3306',
        'TEST': {
            'MIRROR': 'tenant',  # Mirror tenant database
        },
    },
}

Database Setup Fixture

Control database setup with a session-scoped fixture:

@pytest.fixture(scope="session")
def django_db_setup(django_db_blocker):
    """Set up test databases with migrations and unmanaged models."""
    with django_db_blocker.unblock():
        from django.core.management import call_command

        # Run migrations for all databases
        for alias in settings.DATABASES.keys():
            call_command('migrate', database=alias, interactive=False)

        # Load initial test fixtures if needed
        call_command('loaddata', 'test_data.json')

Marking Tests for Multiple Databases

Mark tests that require specific databases:

# Test requiring multiple databases
@pytest.mark.django_db(databases=["default", "tenant", "readonly"])
def test_cross_database_query():
    """Test operations spanning multiple databases."""
    from django.contrib.auth import get_user_model

    # Create data in default database
    user = get_user_model().objects.create_user(username="testuser")

    # Create data in tenant database
    from myapp.models import TenantData
    data = TenantData.objects.using("tenant").create(name="Test")

    # Query from readonly mirror
    readonly_data = TenantData.objects.using("readonly").get(pk=data.pk)
    assert readonly_data.name == "Test"

Tenant-Specific Database Fixture

Create fixtures for tenant database access:

@pytest.fixture
def tenant_db(request, django_db_blocker):
    """Access a specific tenant database connection.

    Use with @pytest.mark.tenant("ALIAS") to specify database.
    """
    marker = request.node.get_closest_marker("tenant")
    alias = marker.args[0] if marker and marker.args else "tenant"

    with django_db_blocker.unblock():
        from django.db import connections
        return connections[alias]

# Usage in tests
@pytest.mark.tenant("tenant")
def test_tenant_specific(tenant_db):
    """Test tenant-specific database operations."""
    with tenant_db.cursor() as cursor:
        cursor.execute("SELECT COUNT(*) FROM tenant_table")
        count = cursor.fetchone()[0]
        assert count >= 0

Fixtures for Django Models

User Creation Fixtures

Create reusable fixtures for common user scenarios:

@pytest.fixture
def fake():
    """Create a consistently seeded Faker instance."""
    from faker import Faker
    faker = Faker()
    Faker.seed(12345)  # Fixed seed for reproducibility
    return faker

@pytest.fixture
def test_users(db, fake):
    """Create test users with groups and permissions."""
    from django.contrib.auth.models import Group

    # Create groups
    groups = {
        "Admins": Group.objects.get_or_create(name="Admins")[0],
        "Editors": Group.objects.get_or_create(name="Editors")[0],
    }

    # Create users
    User = get_user_model()
    users = {}

    admin = User.objects.create_user(
        username="admin",
        email=fake.email(),
        first_name=fake.first_name(),
        last_name=fake.last_name()
    )
    admin.groups.add(groups["Admins"])
    admin.save()
    users["admin"] = admin

    editor = User.objects.create_user(
        username="editor",
        email=fake.email(),
        first_name=fake.first_name(),
        last_name=fake.last_name()
    )
    editor.groups.add(groups["Editors"])
    editor.save()
    users["editor"] = editor

    return users

@pytest.fixture
def authenticated_admin(client, test_users):
    """Returns client authenticated as admin user."""
    client.force_login(test_users["admin"])
    return client

Model Factory Fixtures

Create factory fixtures for complex model creation:

@pytest.fixture
def article_factory(db, test_users):
    """Factory for creating Article instances."""
    from myapp.models import Article

    def make_article(**kwargs):
        kwargs.setdefault("author", test_users["editor"])
        kwargs.setdefault("title", "Test Article")
        kwargs.setdefault("content", "Test content")
        kwargs.setdefault("published", True)
        return Article.objects.create(**kwargs)

    return make_article

# Usage in tests
def test_article_creation(article_factory):
    """Test article creation with factory."""
    article = article_factory(title="Custom Title")
    assert article.title == "Custom Title"
    assert article.published is True

Fixture Data Loading

Load test data from fixtures:

@pytest.fixture
def load_test_data(django_db_setup, django_db_blocker):
    """Load test fixtures into database."""
    with django_db_blocker.unblock():
        from django.core.management import call_command

        # Clear existing data for isolation
        from myapp.models import Article
        Article.objects.all().delete()

        # Load fresh fixture data
        call_command('loaddata', 'articles.json')

@pytest.mark.django_db
def test_with_fixture_data(load_test_data):
    """Test using pre-loaded fixture data."""
    from myapp.models import Article
    assert Article.objects.count() > 0

Testing Views and APIs

Testing Django Views

Test view functions and class-based views:

from django.urls import reverse

def test_home_view(client):
    """Test home page renders correctly."""
    response = client.get(reverse('home'))
    assert response.status_code == 200
    assert b"Welcome" in response.content

def test_login_required_view(authenticated_client):
    """Test view requires authentication."""
    from django.test import Client

    url = reverse('dashboard')

    # Unauthenticated request: use a fresh client here, because the
    # authenticated_client fixture has already logged in the shared one
    response = Client().get(url)
    assert response.status_code == 302  # Redirect to login

    # Authenticated request
    response = authenticated_client.get(url)
    assert response.status_code == 200

def test_view_with_post(client, test_users):
    """Test view handling POST requests."""
    client.force_login(test_users["editor"])
    url = reverse('article_create')

    data = {
        'title': 'New Article',
        'content': 'Article content',
    }
    response = client.post(url, data)

    assert response.status_code == 302  # Redirect after success

    from myapp.models import Article
    assert Article.objects.filter(title='New Article').exists()

Testing REST Framework APIs

Test Django REST Framework viewsets and serializers:

from rest_framework import status

@pytest.fixture
def api_client_with_credentials(api_client, create_user):
    """Create authenticated API client."""
    user = create_user()
    api_client.force_authenticate(user=user)
    yield api_client
    api_client.force_authenticate(user=None)

def test_api_list_endpoint(api_client_with_credentials, article_factory):
    """Test API list endpoint."""
    # Create test data
    article_factory(title="Article 1")
    article_factory(title="Article 2")

    url = reverse('article-list')
    response = api_client_with_credentials.get(url)

    assert response.status_code == status.HTTP_200_OK
    assert len(response.data['results']) == 2

def test_api_detail_endpoint(api_client_with_credentials, article_factory):
    """Test API detail endpoint."""
    article = article_factory(title="Test Article")

    url = reverse('article-detail', kwargs={'pk': article.pk})
    response = api_client_with_credentials.get(url)

    assert response.status_code == status.HTTP_200_OK
    assert response.data['title'] == "Test Article"

def test_api_create_endpoint(api_client_with_credentials):
    """Test API create endpoint."""
    url = reverse('article-list')
    data = {
        'title': 'New Article',
        'content': 'Article content',
    }
    response = api_client_with_credentials.post(url, data, format='json')

    assert response.status_code == status.HTTP_201_CREATED
    assert response.data['title'] == 'New Article'

def test_api_filtering(api_client_with_credentials, article_factory):
    """Test API filtering capabilities."""
    article_factory(title="Python Article", published=True)
    article_factory(title="Django Article", published=False)

    url = reverse('article-list')
    response = api_client_with_credentials.get(url, {'published': 'true'})

    assert response.status_code == status.HTTP_200_OK
    assert len(response.data['results']) == 1
    assert response.data['results'][0]['title'] == "Python Article"

def test_api_pagination(api_client_with_credentials, article_factory):
    """Test API pagination."""
    for i in range(25):
        article_factory(title=f"Article {i}")

    url = reverse('article-list')
    response = api_client_with_credentials.get(url, {'page': 1})

    assert response.status_code == status.HTTP_200_OK
    assert 'next' in response.data
    assert 'previous' in response.data

Testing with Mocks

Use unittest.mock to isolate external dependencies:

from unittest.mock import patch, Mock

@patch('myapp.views.external_api_call')
def test_view_with_external_api(mock_api, authenticated_client):
    """Test view that calls external API."""
    # Configure mock
    mock_api.return_value = {'status': 'success', 'data': [1, 2, 3]}

    url = reverse('api_view')
    response = authenticated_client.get(url)

    assert response.status_code == 200
    mock_api.assert_called_once()

@patch('myapp.models.send_email')
def test_model_method_sends_email(mock_send_email, article_factory):
    """Test model method that sends email."""
    article = article_factory()
    article.publish()

    # Verify email was sent
    assert mock_send_email.called
    call_args = mock_send_email.call_args
    assert 'Article Published' in call_args[1]['subject']

Testing with Multiple Databases

Database Routing

Test code that uses multiple databases:

@pytest.mark.django_db(databases=["default", "tenant"])
def test_multi_database_operation():
    """Test operations across multiple databases."""
    from myapp.models import MainData, TenantData

    # Create in default database
    main = MainData.objects.create(name="Main")

    # Create in tenant database
    tenant = TenantData.objects.using("tenant").create(
        name="Tenant",
        main_id=main.id
    )

    # Query from both databases
    assert MainData.objects.count() == 1
    assert TenantData.objects.using("tenant").count() == 1

@pytest.mark.django_db(databases=["default", "readonly"])
def test_readonly_database():
    """Test readonly database mirror."""
    from myapp.models import Article

    # Create in default database
    article = Article.objects.create(title="Test")

    # Read from readonly mirror
    readonly_article = Article.objects.using("readonly").get(pk=article.pk)
    assert readonly_article.title == "Test"

Transaction Testing

Test transaction behavior with multiple databases:

from django.db import transaction

@pytest.mark.django_db(transaction=True, databases=["default", "tenant"])
def test_transaction_rollback():
    """Test transaction rollback across databases."""
    from myapp.models import MainData, TenantData

    try:
        with transaction.atomic(using="default"):
            MainData.objects.create(name="Main")

            with transaction.atomic(using="tenant"):
                TenantData.objects.using("tenant").create(name="Tenant")
                raise ValueError("Force rollback")
    except ValueError:
        pass

    # Verify rollback
    assert MainData.objects.count() == 0
    assert TenantData.objects.using("tenant").count() == 0

Coverage Strategy for Django

Coverage Configuration (.coveragerc)

Configure coverage to focus on application code:

[run]
source = myproject
omit =
    */migrations/*
    */settings/*
    */tests/*
    */venv/*
    manage.py
    */wsgi.py

[report]
skip_empty = True
skip_covered = True

exclude_lines =
    pragma: no cover
    pragma: nocover
    def __repr__
    def __str__
    if self.debug:
    if settings.DEBUG
    raise AssertionError
    raise NotImplementedError
    if __name__ == .__main__.:

Running Tests with Coverage

Execute tests with coverage reporting:

# Run all tests with coverage
DJANGO_SETTINGS_MODULE='myproject.settings.development' \
    pytest --cov-config=.coveragerc --cov=myproject

# Run specific test file
DJANGO_SETTINGS_MODULE='myproject.settings.development' \
    pytest --cov-config=.coveragerc \
    -c pytest.ini \
    myproject/tests/unit/test_views.py

# Run specific test function
DJANGO_SETTINGS_MODULE='myproject.settings.development' \
    pytest --cov-config=.coveragerc \
    myproject/tests/unit/test_views.py::test_home_view

# Generate HTML coverage report
pytest --cov-config=.coveragerc --cov=myproject --cov-report=html

Coverage Best Practices

Focus coverage on meaningful code:

# Exclude boilerplate from coverage
def __repr__(self):  # pragma: no cover
    return f"<Article: {self.title}>"

def __str__(self):  # pragma: no cover
    return self.title

# Exclude debug-only code
if settings.DEBUG:  # pragma: no cover
    print(f"Processing {item}")

Coverage Goals

Aim for 80-90% coverage for business logic and views. Don't obsess over 100% coverage - focus on testing critical paths and edge cases. Some code (migrations, settings, simple properties) doesn't benefit from testing.

VCR for External API Testing

VCR Setup and Configuration

Use pytest-recording (vcrpy) to record and replay HTTP interactions:

# conftest.py
@pytest.fixture(scope="session")
def vcr_config():
    """Global VCR configuration."""
    return {
        'cassette_library_dir': 'tests/fixtures/cassettes',
        'record_mode': 'once',  # Record once, then replay
        'filter_headers': ['authorization', 'api-key'],
        'filter_query_parameters': ['api_key', 'token'],
        'serializer': 'yaml',
        'match_on': ['uri', 'method'],
        'ignore_hosts': ['s3.amazonaws.com'],  # exact hostnames; wildcards are not matched
    }

Recording API Calls

Record external API interactions:

import pytest

@pytest.mark.vcr
def test_external_api_call():
    """Test external API with VCR recording."""
    import requests

    response = requests.get('https://api.example.com/data')
    assert response.status_code == 200
    assert 'results' in response.json()

First run creates a cassette file at tests/fixtures/cassettes/test_external_api_call.yaml:

interactions:
- request:
    method: GET
    uri: https://api.example.com/data
    headers:
      User-Agent: python-requests/2.31.0
  response:
    status:
      code: 200
      message: OK
    body:
      string: '{"results": [{"id": 1, "name": "Test"}]}'
    headers:
      Content-Type: application/json

Subsequent runs replay the recorded interaction without hitting the real API.

VCR with Django Integration Tests

Test Django code that makes external API calls:

@pytest.mark.vcr
@pytest.mark.django_db
def test_sync_external_data():
    """Test syncing data from external API."""
    from myapp.tasks import sync_external_data

    # First run: records API calls
    # Subsequent runs: replays from cassette
    result = sync_external_data()

    assert result['status'] == 'success'
    assert result['items_synced'] > 0

    # Verify data was saved
    from myapp.models import ExternalData
    assert ExternalData.objects.count() > 0

Managing VCR Cassettes

Organize cassettes by test module:

tests/
  fixtures/
    cassettes/
      test_external_api.yaml
      test_integration/
        test_user_sync.yaml
        test_data_import.yaml

Update cassettes when API changes:

# Delete cassettes to re-record
rm -rf tests/fixtures/cassettes/

# Run tests to create new cassettes
pytest tests/integration/

Testing Middleware and Permissions

Testing Custom Middleware

Test middleware behavior in isolation and integration:

from django.test import RequestFactory

def test_custom_middleware():
    """Test custom middleware processing."""
    from myapp.middleware import CustomMiddleware

    factory = RequestFactory()
    request = factory.get('/test/')

    # Create middleware with mock get_response
    def get_response(request):
        from django.http import HttpResponse
        return HttpResponse("OK")

    middleware = CustomMiddleware(get_response)
    response = middleware(request)

    # Verify middleware added expected attributes
    assert hasattr(request, 'custom_attribute')
    assert response.status_code == 200

def test_middleware_integration(client):
    """Test middleware in full request cycle."""
    response = client.get('/test/')

    # Verify middleware effects
    assert 'X-Custom-Header' in response
    assert response['X-Custom-Header'] == 'expected-value'

Testing Django REST Framework Permissions

Test permission classes:

from unittest.mock import Mock

from rest_framework.test import APIRequestFactory
from myapp.permissions import IsOwnerOrReadOnly

@pytest.fixture
def api_request_factory():
    """Create APIRequestFactory for testing permissions."""
    return APIRequestFactory()

def test_permission_owner_can_edit(api_request_factory, create_user, article_factory):
    """Test owner can edit their own article."""
    user = create_user()
    article = article_factory(author=user)

    # Create mock request
    request = api_request_factory.put('/api/articles/1/')
    request.user = user

    # Create mock view
    from unittest.mock import Mock
    view = Mock()
    view.action = 'update'

    # Test permission
    permission = IsOwnerOrReadOnly()
    assert permission.has_object_permission(request, view, article) is True

def test_permission_non_owner_cannot_edit(api_request_factory, create_user, article_factory):
    """Test non-owner cannot edit article."""
    owner = create_user(username="owner")
    other_user = create_user(username="other")
    article = article_factory(author=owner)

    request = api_request_factory.put('/api/articles/1/')
    request.user = other_user

    view = Mock()
    view.action = 'update'

    permission = IsOwnerOrReadOnly()
    assert permission.has_object_permission(request, view, article) is False

def test_permission_anyone_can_read(api_request_factory, create_user, article_factory):
    """Test any user can read article."""
    owner = create_user(username="owner")
    reader = create_user(username="reader")
    article = article_factory(author=owner)

    request = api_request_factory.get('/api/articles/1/')
    request.user = reader

    view = Mock()
    view.action = 'retrieve'

    permission = IsOwnerOrReadOnly()
    assert permission.has_object_permission(request, view, article) is True

Testing Access Control

Test view-level access control:

from django.contrib.auth.models import Group

def test_group_based_access(client, create_user):
    """Test access based on group membership."""
    # Create groups
    admin_group = Group.objects.create(name="Admins")

    # Create users
    admin = create_user(username="admin")
    admin.groups.add(admin_group)

    regular_user = create_user(username="regular")

    # Admin can access
    client.force_login(admin)
    response = client.get('/admin-only/')
    assert response.status_code == 200

    # Regular user cannot access
    client.force_login(regular_user)
    response = client.get('/admin-only/')
    assert response.status_code == 403

def test_api_permission_classes(api_client, create_user):
    """Test API permission class enforcement."""
    from rest_framework.authtoken.models import Token

    user = create_user()
    token = Token.objects.create(user=user)

    # Request without authentication
    response = api_client.get('/api/protected/')
    assert response.status_code == 401

    # Request with authentication
    api_client.credentials(HTTP_AUTHORIZATION=f'Token {token.key}')
    response = api_client.get('/api/protected/')
    assert response.status_code == 200

Testing Management Commands

Basic Command Testing

Test Django management commands:

from io import StringIO
from django.core.management import call_command

def test_management_command_output():
    """Test management command produces expected output."""
    out = StringIO()
    call_command('mycommand', stdout=out)

    output = out.getvalue()
    assert 'Success' in output
    assert 'processed 10 items' in output

@pytest.mark.django_db
def test_management_command_creates_data():
    """Test management command creates database records."""
    from myapp.models import Article

    initial_count = Article.objects.count()
    call_command('import_articles', 'test_data.csv')

    final_count = Article.objects.count()
    assert final_count > initial_count

def test_management_command_with_arguments():
    """Test command with arguments."""
    out = StringIO()
    call_command('mycommand', '--verbose', '2', stdout=out)

    output = out.getvalue()
    assert 'Verbose output' in output

Testing Command Options

Test commands with various options:

@pytest.mark.django_db
def test_command_dry_run():
    """Test command in dry-run mode."""
    from myapp.models import Article

    initial_count = Article.objects.count()
    out = StringIO()
    call_command('delete_old_articles', '--dry-run', stdout=out)

    # Verify no changes in dry-run mode
    assert Article.objects.count() == initial_count
    assert 'Would delete' in out.getvalue()

@pytest.mark.django_db
def test_command_with_filters():
    """Test command with filtering options."""
    from myapp.models import Article

    # Create test data
    Article.objects.create(title="Keep", published=True)
    Article.objects.create(title="Delete", published=False)

    call_command('cleanup_articles', '--unpublished-only')

    # Verify only unpublished articles deleted
    assert Article.objects.filter(published=True).exists()
    assert not Article.objects.filter(published=False).exists()

Testing Command Error Handling

Test error conditions:

from django.core.management.base import CommandError

def test_command_missing_required_argument():
    """Test command fails with missing argument."""
    with pytest.raises(CommandError) as exc_info:
        call_command('mycommand')

    assert 'required argument' in str(exc_info.value).lower()

def test_command_invalid_input():
    """Test command handles invalid input."""
    with pytest.raises(CommandError) as exc_info:
        call_command('import_data', 'nonexistent.csv')

    # call_command raises CommandError rather than writing to stderr,
    # so assert on the exception message
    assert 'file not found' in str(exc_info.value).lower()

UI Testing with Playwright

Playwright Setup

Install Playwright for browser testing:

pip install pytest-playwright
playwright install

Create UI test fixtures in tests/ui/conftest.py:

"""Pytest configuration for UI tests."""

import os
import pytest

# Enable Django async operations for Playwright
os.environ["DJANGO_ALLOW_ASYNC_UNSAFE"] = "true"

def pytest_sessionfinish(*_):
    """Clean up environment after tests."""
    if "DJANGO_ALLOW_ASYNC_UNSAFE" in os.environ:
        del os.environ["DJANGO_ALLOW_ASYNC_UNSAFE"]

@pytest.fixture(scope="function")
def live_url(live_server):
    """Provide live server URL for UI tests."""
    return live_server.url

Writing Playwright Tests

Test UI interactions:

import pytest
from playwright.sync_api import Page, expect

@pytest.mark.ui
def test_login_page(page: Page, live_url):
    """Test login page loads and functions."""
    page.goto(f"{live_url}/login/")

    # Verify page title
    expect(page).to_have_title("Login - MyApp")

    # Fill login form
    page.fill('input[name="username"]', 'testuser')
    page.fill('input[name="password"]', 'testpass')
    page.click('button[type="submit"]')

    # Verify redirect to dashboard
    expect(page).to_have_url(f"{live_url}/dashboard/")

@pytest.mark.ui
@pytest.mark.django_db
def test_article_creation_flow(page: Page, live_url, authenticated_admin):
    """Test complete article creation workflow."""
    # force_login only authenticates the Django test client; copy its
    # session cookie into the browser context so the page is logged in too
    session_cookie = authenticated_admin.cookies['sessionid']
    page.context.add_cookies([{
        'name': 'sessionid',
        'value': session_cookie.value,
        'url': live_url,
    }])

    page.goto(f"{live_url}/articles/new/")

    # Fill article form
    page.fill('#id_title', 'Test Article')
    page.fill('#id_content', 'Article content here')
    page.select_option('#id_category', 'Technology')

    # Submit form
    page.click('button:has-text("Publish")')

    # Verify success message
    expect(page.locator('.alert-success')).to_contain_text('Article published')

    # Verify article appears in list
    expect(page.locator('.article-list')).to_contain_text('Test Article')

Testing HTMX Interactions

Test HTMX partial updates:

@pytest.mark.ui
@pytest.mark.django_db
def test_htmx_search(page: Page, live_url, article_factory):
    """Test HTMX-powered search."""
    # Create test articles
    article_factory(title="Python Tutorial")
    article_factory(title="Django Guide")

    page.goto(f"{live_url}/articles/")

    # Type in search box (triggers HTMX request)
    search_input = page.locator('#search')
    search_input.fill('Python')

    # Wait for HTMX update
    page.wait_for_selector('.article-list')

    # Verify filtered results
    articles = page.locator('.article-item')
    expect(articles).to_have_count(1)
    expect(articles.first).to_contain_text('Python Tutorial')

Seeding Test Data for UI Tests

Create test data for UI tests:

@pytest.fixture
def setup_ui_test_data(db, django_db_blocker):
    """Set up test data for UI tests."""
    with django_db_blocker.unblock():
        from myapp.models import Article, Category

        tech = Category.objects.create(name="Technology")
        science = Category.objects.create(name="Science")

        Article.objects.create(
            title="AI Advances",
            category=tech,
            published=True
        )
        Article.objects.create(
            title="Climate Research",
            category=science,
            published=True
        )

    yield

    with django_db_blocker.unblock():
        Article.objects.all().delete()
        Category.objects.all().delete()

@pytest.mark.ui
def test_category_filtering(page: Page, live_url, setup_ui_test_data):
    """Test filtering articles by category."""
    page.goto(f"{live_url}/articles/")

    # Select category filter
    page.select_option('#category-filter', label='Technology')

    # Verify filtered results
    expect(page.locator('.article-item')).to_have_count(1)
    expect(page.locator('.article-item')).to_contain_text('AI Advances')

CI/CD Integration

GitHub Actions Configuration

Configure pytest in CI pipeline:

name: Django Tests

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest

    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: test_db
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432

    steps:
    - uses: actions/checkout@v3

    - name: Set up Python
      uses: actions/setup-python@v4
      with:
        python-version: '3.12'

    - name: Install dependencies
      run: |
        pip install -r requirements.txt
        pip install pytest pytest-django pytest-cov

    - name: Run tests
      env:
        DJANGO_SETTINGS_MODULE: myproject.settings.development
        DATABASE_URL: postgresql://postgres:postgres@localhost:5432/test_db
      run: |
        pytest --cov=myproject --cov-report=xml

    - name: Upload coverage
      uses: codecov/codecov-action@v3
      with:
        file: ./coverage.xml

Parallel Test Execution

Run tests in parallel for faster CI:

# Install pytest-xdist
pip install pytest-xdist

# Run tests in parallel (auto-detect CPU count)
pytest -n auto

# Run with specific worker count
pytest -n 4

Configure in pytest.ini:

[pytest]
addopts = -n auto --maxfail=1

Docker Test Environment

Create consistent test environment with Docker:

# Dockerfile.test
FROM python:3.12-slim

WORKDIR /app

# Install dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt pytest pytest-django

# Copy application code
COPY . .

# Run tests
CMD ["pytest", "--cov=myproject"]

# docker-compose.test.yml
version: '3.8'

services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: test_db

  test:
    build:
      context: .
      dockerfile: Dockerfile.test
    depends_on:
      - db
    environment:
      DJANGO_SETTINGS_MODULE: myproject.settings.development
      DATABASE_HOST: db
    command: pytest --cov=myproject

Run tests in Docker:

docker-compose -f docker-compose.test.yml up --abort-on-container-exit

Best Practices

Test Organization

Organize tests by type and purpose:

tests/
  conftest.py           # Shared fixtures

  unit/                 # Unit tests
    models/
      test_article.py
      test_user.py
    utils/
      test_helpers.py
    views/
      test_api.py

  integration/          # Integration tests
    test_user_workflow.py
    test_data_sync.py

  ui/                   # UI tests
    conftest.py         # UI-specific fixtures
    test_login.py
    test_articles.py

  fixtures/             # Test data
    cassettes/          # VCR cassettes
    test_data.json      # Django fixtures

Test Naming Conventions

Use descriptive, consistent test names:

# Good test names
def test_user_creation_sets_default_values():
    """Test user creation initializes defaults."""
    pass

def test_article_publish_date_validation_fails_for_past_dates():
    """Test publish date must be in future."""
    pass

def test_api_returns_404_for_nonexistent_article():
    """Test API returns 404 for missing article."""
    pass

# Poor test names
def test_user():
    pass

def test_1():
    pass

def test_article_date():
    pass
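Parameterized tests keep names descriptive too: pytest appends each parameter id to the test name, so one well-named test covers many cases with readable failure output. A minimal sketch with a hypothetical validate_title helper:

```python
import pytest

def validate_title(title: str, max_length: int = 255) -> bool:
    """Hypothetical validator: a title must be non-empty after
    stripping whitespace and within max_length characters."""
    stripped = title.strip()
    return 0 < len(stripped) <= max_length

@pytest.mark.parametrize(
    "title, expected",
    [
        ("A valid headline", True),
        ("", False),
        ("   ", False),
        ("x" * 300, False),
    ],
    ids=["normal", "empty", "whitespace-only", "overlong"],
)
def test_title_validation(title, expected):
    # A failure reads e.g. "test_title_validation[overlong]",
    # which pinpoints the case without digging through arguments.
    assert validate_title(title) is expected
```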

Test Independence

Ensure tests don't depend on each other:

# Bad: Tests depend on execution order
def test_create_article():
    article = Article.objects.create(title="Test")
    assert article.id == 1

def test_update_article():
    article = Article.objects.get(id=1)  # Fails if run independently
    article.title = "Updated"
    article.save()

# Good: Each test is self-contained
def test_create_article(article_factory):
    article = article_factory(title="Test")
    assert article.title == "Test"

def test_update_article(article_factory):
    article = article_factory(title="Original")
    article.title = "Updated"
    article.save()

    article.refresh_from_db()
    assert article.title == "Updated"

Fixture Scope Management

Choose appropriate fixture scopes:

# Session scope: expensive setup, shared across all tests.
# Overriding pytest-django's django_db_setup (and requesting the
# original by the same name) extends test-database creation rather
# than replacing it.
@pytest.fixture(scope="session")
def django_db_setup(django_db_setup, django_db_blocker):
    """Extend once-per-session database setup, e.g. to load seed data."""
    ...

# Module scope: shared within a test module
@pytest.fixture(scope="module")
def external_api_client():
    """Create API client once per module."""
    pass

# Function scope (default): fresh for each test
@pytest.fixture
def article():
    """Create fresh article for each test."""
    return Article.objects.create(title="Test")

Test Data Factories

Prefer factories over hard-coded fixtures:

# Good: Flexible factory
@pytest.fixture
def article_factory(db):
    def make_article(**kwargs):
        defaults = {
            'title': 'Default Title',
            'content': 'Default content',
            'published': False,
        }
        defaults.update(kwargs)
        return Article.objects.create(**defaults)
    return make_article

def test_published_article(article_factory):
    article = article_factory(published=True)
    assert article.published

def test_draft_article(article_factory):
    article = article_factory()  # Uses default published=False
    assert not article.published

# Less flexible: Hard-coded fixture
@pytest.fixture
def article(db):
    return Article.objects.create(
        title='Fixed Title',
        published=True
    )

Assertion Quality

Write clear, specific assertions:

# Good: Specific assertions
def test_article_serialization(article_factory, api_client):
    article = article_factory(title="Test", author__username="john")

    response = api_client.get(f'/api/articles/{article.id}/')

    assert response.status_code == 200
    assert response.data['title'] == "Test"
    assert response.data['author']['username'] == "john"
    assert 'created_at' in response.data

# Poor: Vague assertions
def test_article_serialization(article_factory, api_client):
    article = article_factory()
    response = api_client.get(f'/api/articles/{article.id}/')
    assert response.status_code == 200
    assert response.data  # Too vague

Performance Optimization

Optimize test execution time:

# Plain @pytest.mark.django_db wraps each test in a transaction and
# rolls it back, which is fast. transaction=True uses real transactions
# and flushes the database afterwards (slower); reserve it for code
# that depends on actual commit behavior.
@pytest.mark.django_db(transaction=True)
def test_bulk_operations():
    """Exercise bulk_create; needs real transaction semantics."""
    articles = [
        Article(title=f"Article {i}")
        for i in range(1000)
    ]
    Article.objects.bulk_create(articles)
    assert Article.objects.count() == 1000

# Limit database queries
def test_api_with_select_related(api_client, article_factory, django_assert_num_queries):
    """Test optimized queries."""
    article = article_factory()

    with django_assert_num_queries(2):  # Verify query count
        response = api_client.get('/api/articles/')
        assert len(response.data['results']) > 0

Test Performance

Fast tests encourage running them frequently. Use pytest --durations=10 to identify slow tests. Consider using pytest-xdist for parallel execution in large test suites.
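Assuming a standard project layout, the slowest tests can be surfaced directly from the command line with pytest's built-in flags:

```shell
# Report the 10 slowest setup/call/teardown phases
pytest --durations=10

# Only report phases slower than one second
pytest --durations=10 --durations-min=1.0

# Combine with pytest-xdist for parallel execution
pytest -n auto --durations=10
```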

Debug-Friendly Tests

Make tests easy to debug:

# Good: Clear failure messages
def test_article_validation():
    article = Article(title="")

    with pytest.raises(ValidationError) as exc_info:
        article.full_clean()

    errors = exc_info.value.message_dict
    assert 'title' in errors, f"Expected title error, got: {errors}"
    assert 'required' in str(errors['title']).lower()

# Poor: Unclear failures
def test_article_validation():
    article = Article(title="")
    with pytest.raises(ValidationError):
        article.full_clean()

Use pytest -vv for verbose output and pytest --pdb to drop into debugger on failure.
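These and a few related debugging flags are built into pytest:

```shell
# Full assertion introspection and per-test outcomes
pytest -vv

# Stop at the first failure and open pdb there
pytest -x --pdb

# Show local variables in tracebacks
pytest --showlocals

# Re-run only the tests that failed last time
pytest --lf
```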

Summary

Testing Django applications with pytest provides powerful fixtures, better organization, and comprehensive coverage capabilities. Key takeaways:

  • pytest-django seamlessly integrates pytest with Django's test infrastructure
  • Fixtures enable reusable test components with clear dependency management
  • Multiple databases require explicit marking and careful transaction handling
  • VCR records external API interactions for reliable, fast integration tests
  • Coverage focuses on meaningful code, not arbitrary percentage goals
  • Playwright enables robust UI testing with Django's live server
  • CI/CD automation ensures tests run consistently across environments

Well-tested Django applications ship confidently, refactor safely, and maintain quality as they grow.