Testing with pytest¶
Testing is the foundation of reliable software. pytest is the de facto standard testing framework for Python, combining a simple API with extensive capabilities for unit, integration, and end-to-end testing.
Why pytest?¶
Key Benefits
- Minimal boilerplate - Write tests as simple functions (see the sketch after this list)
- Powerful fixtures - Reusable test dependencies
- Rich plugin ecosystem - Django, coverage, VCR, parallel execution
- Excellent failure reporting - Clear, actionable error messages
- Flexible organization - Unit, integration, UI test separation
- Industry standard - Used by FastAPI, Flask, and countless Django projects (via pytest-django)
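As a minimal sketch of that first point: a pytest test is just a function whose name starts with `test_`, using plain `assert` statements (`add` here is a stand-in for your own code):

# test_math.py — discovered automatically via the test_*.py naming convention
def add(a, b):
    return a + b

def test_add_returns_sum():
    # pytest's assertion rewriting shows both operands when this fails
    assert add(2, 3) == 5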
Philosophy and Mindset¶
Test Pyramid¶
graph TD
A[UI Tests: Few, Critical Paths] --> B[Integration Tests: API, DB, External Services]
B --> C[Unit Tests: Business Logic, Models, Utils]
style A fill:#ff6b6b
style B fill:#ffd93d
style C fill:#6bcf7f
Test Distribution
- 70% Unit Tests - Fast, isolated, test individual functions/classes
- 20% Integration Tests - Test component interactions, database, APIs
- 10% UI/E2E Tests - Critical user journeys only
Coverage Goals¶
Pragmatic Coverage Target: 80%+
- Not 100%: Diminishing returns, focus on critical paths
- Exclude: Migrations, settings, management commands, third-party integrations
- Prioritize: Business logic, API endpoints, data transformations
- Use # pragma: no cover for defensive code and error-handling edge cases
# .coveragerc
[run]
source = myapp
omit =
    */migrations/*
    */settings/*
    */management/*
    */tests/*
    */integrations/*
    wsgi.py
    manage.py

[report]
skip_empty = True
skip_covered = True
exclude_lines =
    pragma: no cover
    pragma: nocover
    def __repr__
    def __str__
    if self.debug:
    if settings.DEBUG
    raise AssertionError
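A sketch of how the pragma exclusion is used in code — the guard below is defensive, deliberately unreachable in normal flow, and excluded from the coverage report:

def load_config(path):
    """Load configuration, with a defensive guard."""
    if path is None:  # unreachable in normal flow
        raise AssertionError("path must not be None")  # pragma: no cover
    with open(path) as f:
        return f.read()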
Installation and Configuration¶
Install pytest and Plugins¶
# Core testing tools
uv pip install pytest pytest-cov pytest-django
# Optional but recommended
uv pip install pytest-xdist # Parallel test execution
uv pip install pytest-timeout # Timeout long-running tests
uv pip install vcrpy # Record/replay HTTP interactions
uv pip install pytest-mock # Enhanced mocking
pytest Configuration (pytest.ini)¶
Create pytest.ini in your project root:
[pytest]
# Test discovery patterns
python_files = tests.py test_*.py *_tests.py conftest.py

# Default options
addopts =
    --cov-report=html
    --cov-config=.coveragerc
    --strict-markers
    --exitfirst
    --cov=myapp
    --verbose

# Custom markers
markers =
    slow: marks tests as slow (taking more than 2 seconds)
    integration: marks tests as integration tests
    ui: marks tests as UI tests that require a browser
    e2e: marks tests as end-to-end tests
    vcr: marks tests that use VCR for recording HTTP interactions
    live_test: marks tests that hit real external APIs

# Warning filters
filterwarnings =
    error::DeprecationWarning
    ignore::PendingDeprecationWarning
    ignore::ImportWarning
Django Integration (pytest-django)¶
For Django projects, configure database access:
[pytest]
DJANGO_SETTINGS_MODULE = myproject.settings.development
python_files = tests.py test_*.py *_tests.py
addopts =
    --reuse-db
    --nomigrations
    --cov=myproject
--reuse-db keeps the test database between runs (pytest-django still creates it on the first run), and --nomigrations skips migrations for speed. Pass --create-db on the command line when you need to force re-creation; it takes precedence over --reuse-db. Note that ini-style addopts values cannot carry inline # comments.
Database Safety
Always validate database settings before running tests. Never run tests against production databases.
Test Organization¶
Directory Structure¶
myproject/
├── tests/
│   ├── __init__.py
│   ├── conftest.py                  # Shared fixtures
│   ├── unit/                        # Unit tests - fast, isolated
│   │   ├── __init__.py
│   │   ├── models/
│   │   │   └── test_user.py
│   │   ├── api/
│   │   │   └── test_views.py
│   │   └── utils/
│   │       └── test_helpers.py
│   ├── integration/                 # Integration tests - DB, API, external services
│   │   ├── __init__.py
│   │   ├── test_api_endpoints.py
│   │   ├── test_database.py
│   │   └── external/
│   │       └── test_stripe_integration.py
│   └── ui/                          # UI/E2E tests - browser automation
│       ├── __init__.py
│       ├── conftest.py              # UI-specific fixtures
│       └── test_user_flow.py
├── conftest.py                      # Root fixtures (optional)
└── pytest.ini
Test File Naming¶
# ✅ Good: Clear, descriptive names
test_user_authentication.py
test_payment_processing.py
test_email_notifications.py
# ❌ Bad: Vague names
test_stuff.py
tests.py
my_test.py
Test Function Naming¶
# ✅ Good: Describes what is tested
def test_user_creation_with_valid_email():
    pass

def test_payment_fails_with_invalid_card():
    pass

def test_api_returns_404_for_missing_resource():
    pass

# ❌ Bad: Unclear purpose
def test_user():
    pass

def test_case_1():
    pass

def test_it_works():
    pass
Writing Tests¶
Basic Test Structure¶
"""Test module for user authentication.
This module tests login, logout, password reset, and session management.
"""
import pytest
from myapp.models import User
def test_user_creation():
"""Test that users can be created with valid data."""
user = User.objects.create(
username="testuser",
email="test@example.com"
)
assert user.username == "testuser"
assert user.email == "test@example.com"
def test_user_email_validation():
"""Test that invalid emails are rejected."""
with pytest.raises(ValidationError):
User.objects.create(
username="testuser",
email="not-an-email"
)
Test with Setup and Teardown¶
def test_user_with_explicit_cleanup():
    """Test user creation with manual cleanup."""
    # Setup
    user = User.objects.create(username="testuser")
    try:
        # Test
        assert user.is_active is True
    finally:
        # Teardown
        user.delete()
Use Fixtures Instead
Fixtures are better than explicit setup/teardown — a fixture-based version of this test is sketched below, and the Fixtures section covers them in depth.
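A minimal sketch of the same test rewritten with a yield fixture — everything after yield is teardown and runs even when the test fails:

import pytest

@pytest.fixture
def temp_user():
    user = User.objects.create(username="testuser")  # setup
    yield user                                       # the test runs here
    user.delete()                                    # teardown, always runs

def test_user_is_active(temp_user):
    assert temp_user.is_active is True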
Parametrized Tests¶
Test multiple scenarios without repeating code:
import pytest

@pytest.mark.parametrize("email,is_valid", [
    ("user@example.com", True),
    ("test@test.org", True),
    ("invalid-email", False),
    ("@example.com", False),
    ("user@", False),
])
def test_email_validation(email, is_valid):
    """Test email validation with multiple inputs."""
    result = validate_email(email)
    assert result == is_valid
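When individual cases need their own treatment, pytest.param can attach readable IDs and per-case marks — a sketch marking one hypothetical known-broken input as an expected failure:

import pytest

@pytest.mark.parametrize("email,is_valid", [
    pytest.param("user@example.com", True, id="plain-address"),
    pytest.param("user@sub.example.com", True, id="subdomain"),
    pytest.param("user@localhost", True, id="bare-host",
                 marks=pytest.mark.xfail(reason="bare hosts not yet supported")),
])
def test_email_validation_cases(email, is_valid):
    assert validate_email(email) == is_valid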
Testing Exceptions¶
import pytest
from django.http import Http404

def test_division_by_zero():
    """Test that division by zero raises ValueError."""
    with pytest.raises(ValueError) as exc_info:
        divide(10, 0)
    assert "Cannot divide by zero" in str(exc_info.value)

def test_api_raises_404():
    """Test API raises 404 for missing resource."""
    with pytest.raises(Http404):
        get_user_by_id(999999)
Fixtures and conftest.py¶
Fixtures are pytest's powerful dependency injection system. They provide reusable test data and setup logic.
Basic Fixtures¶
# conftest.py
import pytest
from django.contrib.auth import get_user_model

@pytest.fixture
def user():
    """Create a test user."""
    User = get_user_model()
    return User.objects.create(
        username="testuser",
        email="test@example.com"
    )

@pytest.fixture
def admin_user():
    """Create an admin user."""
    User = get_user_model()
    return User.objects.create(
        username="admin",
        email="admin@example.com",
        is_staff=True,
        is_superuser=True
    )
Use in tests:
def test_user_has_email(user):
    """Test user fixture provides email."""
    assert user.email == "test@example.com"

def test_admin_permissions(admin_user):
    """Test admin user has elevated permissions."""
    assert admin_user.is_staff is True
    assert admin_user.is_superuser is True
Fixture Scopes¶
Control fixture lifecycle:
@pytest.fixture(scope="function") # Default: runs for each test
def temp_user():
user = User.objects.create(username="temp")
yield user
user.delete()
@pytest.fixture(scope="module") # Runs once per module
def database_connection():
conn = create_connection()
yield conn
conn.close()
@pytest.fixture(scope="session") # Runs once per test session
def browser():
driver = webdriver.Chrome()
yield driver
driver.quit()
Scope Hierarchy
- function (default) - Per test function
- class - Per test class
- module - Per test file
- package - Per test package
- session - Once for entire test run
Fixture Dependencies¶
Fixtures can depend on other fixtures:
@pytest.fixture
def user():
    """Create basic user."""
    return User.objects.create(username="testuser")

@pytest.fixture
def authenticated_client(client, user):
    """Create authenticated client."""
    client.force_login(user)
    return client

def test_profile_access(authenticated_client):
    """Test authenticated user can access profile."""
    response = authenticated_client.get('/profile/')
    assert response.status_code == 200
Autouse Fixtures¶
Automatically run fixtures without explicit declaration:
@pytest.fixture(autouse=True)
def reset_cache():
    """Clear cache before each test."""
    cache.clear()
    yield
    cache.clear()

def test_something():
    """Cache is automatically cleared before this test."""
    pass
Django Database Fixtures¶
import pytest

@pytest.fixture
def db_with_data(db):
    """Populate database with test data."""
    User.objects.create(username="user1")
    User.objects.create(username="user2")
    Product.objects.create(name="Product 1")
    return db

@pytest.mark.django_db
def test_user_count(db_with_data):
    """Test database contains expected users."""
    assert User.objects.count() == 2
conftest.py Hierarchy¶
pytest discovers fixtures in conftest.py files at multiple levels:
myproject/
├── conftest.py                  # Session-wide fixtures
├── tests/
│   ├── conftest.py              # All tests fixtures
│   ├── unit/
│   │   └── conftest.py          # Unit tests only
│   └── integration/
│       └── conftest.py          # Integration tests only
Fixture Visibility
Fixtures are available to tests in the same directory and subdirectories. Use this to organize fixtures by scope.
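A sketch of that visibility rule with hypothetical fixtures — base_url in tests/conftest.py is visible to every test, while sample_payload in tests/unit/conftest.py is visible only to unit tests and can itself depend on the parent fixture:

# tests/conftest.py — visible to every test under tests/
import pytest

@pytest.fixture
def base_url():
    return "http://testserver"

# tests/unit/conftest.py — visible only to tests under tests/unit/
import pytest

@pytest.fixture
def sample_payload(base_url):
    # Child conftest fixtures can depend on parent conftest fixtures.
    return {"url": f"{base_url}/api/", "retries": 0}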
Django Testing Patterns¶
Database Access¶
import pytest
from django.db import transaction

# Automatic database access
@pytest.mark.django_db
def test_create_user():
    """Test requires database access."""
    user = User.objects.create(username="test")
    assert user.pk is not None

# Multi-database access
@pytest.mark.django_db(databases=["default", "analytics"])
def test_cross_database():
    """Test requires multiple databases."""
    user = User.objects.create(username="test")
    analytics_data = AnalyticsEvent.objects.using("analytics").create(
        user_id=user.id
    )
    assert analytics_data.pk is not None

# Transaction control
@pytest.mark.django_db(transaction=True)
def test_with_transactions():
    """Test that requires transaction support."""
    with transaction.atomic():
        User.objects.create(username="test")
Django Test Client¶
import pytest
from django.test import Client

@pytest.fixture
def client():
    """Django test client.

    Note: pytest-django already ships a built-in `client` fixture;
    defining your own like this is only needed for customization.
    """
    return Client()

def test_homepage(client):
    """Test homepage returns 200."""
    response = client.get('/')
    assert response.status_code == 200

def test_api_endpoint(client):
    """Test API endpoint with JSON."""
    response = client.post(
        '/api/users/',
        data={'username': 'test'},
        content_type='application/json'
    )
    assert response.status_code == 201
    assert response.json()['username'] == 'test'
Django Authentication¶
@pytest.fixture
def authenticated_client(client, user):
    """Client authenticated as user."""
    client.force_login(user)
    return client

def test_protected_view(authenticated_client):
    """Test authenticated access to protected view."""
    response = authenticated_client.get('/dashboard/')
    assert response.status_code == 200

def test_unauthorized_access(client):
    """Test unauthenticated user is redirected."""
    response = client.get('/dashboard/')
    assert response.status_code == 302
    assert '/login/' in response.url
Testing Models¶
@pytest.mark.django_db
def test_user_str_representation():
    """Test User __str__ method."""
    user = User.objects.create(
        username="testuser",
        email="test@example.com"
    )
    assert str(user) == "testuser"

@pytest.mark.django_db
def test_user_full_name():
    """Test User get_full_name() method."""
    user = User.objects.create(
        username="testuser",
        first_name="John",
        last_name="Doe"
    )
    assert user.get_full_name() == "John Doe"
Testing Views¶
@pytest.mark.django_db
def test_list_view(client):
    """Test list view returns all objects."""
    # Create test data
    User.objects.create(username="user1")
    User.objects.create(username="user2")

    response = client.get('/users/')
    assert response.status_code == 200
    assert len(response.context['users']) == 2

@pytest.mark.django_db
def test_detail_view(client):
    """Test detail view returns correct object."""
    user = User.objects.create(username="testuser")
    response = client.get(f'/users/{user.pk}/')
    assert response.status_code == 200
    assert response.context['user'].username == "testuser"
VCR for API Testing¶
VCR (Video Cassette Recorder) records HTTP interactions and replays them in tests. Perfect for testing external API integrations without hitting live services.
Installation¶
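vcrpy does the recording; the @pytest.mark.vcr marker and the vcr_config fixture used below come from a pytest plugin layered on top — pytest-recording is assumed here (pytest-vcr is a similar alternative):

uv pip install vcrpy
uv pip install pytest-recording  # supplies @pytest.mark.vcr and vcr_config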
Basic Usage¶
import pytest
import requests

@pytest.mark.vcr
def test_api_call():
    """Test external API call with VCR recording."""
    response = requests.get('https://api.example.com/users/1')
    assert response.status_code == 200
    assert response.json()['id'] == 1
First run: Records HTTP interaction to cassettes/test_api_call.yaml
Subsequent runs: Replays recorded interaction (no network call)
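With pytest-recording (the plugin assumed above), the record mode can also be overridden from the command line — useful in CI, where tests should only ever replay:

# Locally: record any missing cassettes
pytest --record-mode=once

# CI: replay only; a test that tries to hit the network fails
pytest --record-mode=none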
Custom Cassette Configuration¶
# conftest.py
import pytest

@pytest.fixture(scope='module')
def vcr_config():
    """Configure VCR for all tests."""
    return {
        'cassette_library_dir': 'tests/fixtures/cassettes',
        'record_mode': 'once',  # once, new_episodes, none, all
        'match_on': ['uri', 'method'],
        'filter_headers': ['authorization'],  # Don't record auth tokens
    }

# Use in tests
@pytest.mark.vcr(cassette_name='custom_name.yaml')
def test_with_custom_cassette():
    """Test with custom cassette name."""
    response = requests.get('https://api.example.com/data')
    assert response.status_code == 200
VCR Best Practices¶
VCR Configuration
- cassette_library_dir: Store cassettes in tests/fixtures/cassettes/
- record_mode = 'once': Record once, replay always (fastest)
- record_mode = 'new_episodes': Record new requests, replay existing
- record_mode = 'none': Never record, only replay (CI/CD)
- filter_headers: Remove sensitive data (API keys, tokens)
# Integration test with VCR
import stripe

@pytest.mark.vcr
@pytest.mark.integration
def test_stripe_payment():
    """Test Stripe payment with recorded API calls."""
    customer = stripe.Customer.create(
        email="test@example.com",
        source="tok_visa"
    )
    assert customer.email == "test@example.com"

    # This API call is recorded and replayed
    charge = stripe.Charge.create(
        amount=1000,
        currency='usd',
        customer=customer.id
    )
    assert charge.status == 'succeeded'
Cassette Organization¶
tests/
└── fixtures/
    └── cassettes/
        ├── integration/
        │   ├── test_stripe_payment.yaml
        │   └── test_sendgrid_email.yaml
        └── unit/
            └── test_api_client.yaml
Custom Markers¶
Organize and selectively run tests using markers:
# pytest.ini
[pytest]
markers =
    slow: marks tests as slow (deselect with '-m "not slow"')
    integration: marks tests as integration tests
    ui: marks UI tests requiring browser
    vcr: marks tests using VCR cassettes
    live_test: marks tests hitting real APIs
Using Markers¶
import pytest
from selenium.webdriver.common.by import By

@pytest.mark.slow
def test_expensive_operation():
    """Test that takes a long time."""
    result = compute_expensive_thing()
    assert result is not None

@pytest.mark.integration
@pytest.mark.django_db
def test_full_workflow():
    """Integration test for complete workflow."""
    user = User.objects.create(username="test")
    order = create_order(user)
    assert order.status == "pending"

@pytest.mark.ui
def test_login_flow(browser):
    """UI test for login flow."""
    browser.get('http://localhost:8000/login')
    # find_element_by_name() was removed in Selenium 4; use find_element(By.NAME, ...)
    browser.find_element(By.NAME, 'username').send_keys('test')
    browser.find_element(By.NAME, 'password').send_keys('password')
    browser.find_element(By.NAME, 'submit').click()
    assert 'dashboard' in browser.current_url
Running Tests by Marker¶
# Run only fast tests (exclude slow)
pytest -m "not slow"
# Run only integration tests
pytest -m integration
# Run unit tests only (not integration or ui)
pytest -m "not integration and not ui"
# Run specific markers
pytest -m "vcr and integration"
Running Tests¶
Basic Execution¶
# Run all tests
pytest
# Run specific file
pytest tests/unit/test_users.py
# Run specific test
pytest tests/unit/test_users.py::test_user_creation
# Run tests matching pattern
pytest -k "user" # Runs all tests with "user" in name
Verbose and Reporting¶
# Verbose output
pytest -v
# Very verbose (show print statements)
pytest -vv -s
# Show slowest tests
pytest --durations=10
# Stop on first failure
pytest -x
# Stop after N failures
pytest --maxfail=3
Coverage Reports¶
# Generate HTML coverage report
pytest --cov=myapp --cov-report=html
# Terminal coverage report
pytest --cov=myapp --cov-report=term
# Coverage with missing lines
pytest --cov=myapp --cov-report=term-missing
# Fail if coverage below threshold
pytest --cov=myapp --cov-fail-under=80
Parallel Execution¶
# Install pytest-xdist
uv pip install pytest-xdist
# Run tests in parallel (auto-detect CPUs)
pytest -n auto
# Run with specific worker count
pytest -n 4
Django-Specific¶
# Run with Django settings
DJANGO_SETTINGS_MODULE=myproject.settings.testing pytest
# Reuse database between runs (faster)
pytest --reuse-db
# Create database if missing
pytest --create-db
# Skip migrations (much faster)
pytest --nomigrations
# Full Django test command
DJANGO_SETTINGS_MODULE=myproject.settings.development \
pytest --cov-config=.coveragerc \
-c pytest.ini \
tests/unit/
Advanced Patterns¶
Mocking and Patching¶
from unittest.mock import Mock, patch
import pytest

def test_with_mock():
    """Test using mock objects."""
    mock_service = Mock()
    mock_service.get_data.return_value = {"id": 1, "name": "Test"}

    result = process_data(mock_service)
    assert result['name'] == "Test"
    mock_service.get_data.assert_called_once()

@patch('myapp.services.external_api_call')
def test_with_patch(mock_api):
    """Test with patched function."""
    mock_api.return_value = {"status": "success"}

    result = perform_action()
    assert result['status'] == "success"
    mock_api.assert_called()
Fixture Factories¶
import pytest
from factory import Faker
from factory.django import DjangoModelFactory

class UserFactory(DjangoModelFactory):
    """Factory for creating test users."""
    class Meta:
        model = User

    username = Faker('user_name')
    email = Faker('email')

@pytest.fixture
def user_factory():
    """Provide user factory to tests."""
    return UserFactory

@pytest.mark.django_db
def test_with_factory(user_factory):
    """Test using factory fixture."""
    user1 = user_factory.create()
    user2 = user_factory.create()
    assert user1.username != user2.username
    # DjangoModelFactory (not plain Factory) persists objects, so the count works
    assert User.objects.count() == 2
Snapshot Testing¶
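The snapshot fixture is not built into pytest: it comes from a plugin. The assert_match call below matches snapshottest's API (assumed here); alternatives such as syrupy expose a slightly different one.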
def test_api_response_snapshot(client, snapshot):
    """Test API response matches snapshot."""
    response = client.get('/api/users/')
    snapshot.assert_match(response.json())
Testing Async Code¶
import httpx
import pytest

# These tests require the pytest-asyncio plugin.
@pytest.mark.asyncio
async def test_async_function():
    """Test async function."""
    result = await async_operation()
    assert result is not None

@pytest.mark.asyncio
async def test_async_api_call():
    """Test async API call."""
    async with httpx.AsyncClient() as client:
        response = await client.get('https://api.example.com/data')
    assert response.status_code == 200
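To avoid marking every coroutine test, pytest-asyncio can be switched to auto mode in configuration — a minimal sketch:

# pytest.ini — treat all async def tests as asyncio tests (pytest-asyncio)
[pytest]
asyncio_mode = auto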
Testing Background Tasks¶
@pytest.mark.django_db
def test_celery_task():
    """Test Celery task execution."""
    # Assumes CELERY_TASK_ALWAYS_EAGER = True in the test settings,
    # so .delay() executes the task synchronously in-process.
    result = send_email_task.delay(
        to="test@example.com",
        subject="Test"
    )
    assert result.successful()
    assert Email.objects.count() == 1
Test Safety and Best Practices¶
Database Safety Checks¶
# conftest.py
import pytest

def pytest_sessionstart(session):
    """Validate test environment before running tests."""
    import os
    from django.conf import settings

    # Check settings module
    allowed_settings = ["myproject.settings.testing"]
    current_settings = os.environ.get("DJANGO_SETTINGS_MODULE", "")
    if current_settings not in allowed_settings:
        pytest.exit(
            f"Tests must use {allowed_settings}. "
            f"Current: {current_settings}"
        )

    # Check database hosts
    safe_hosts = ["127.0.0.1", "localhost", "testdb"]
    for alias, config in settings.DATABASES.items():
        host = config.get("HOST", "")
        if host and host not in safe_hosts:
            pytest.exit(
                f"Unsafe database host: {host} for {alias}. "
                f"Tests can only use: {safe_hosts}"
            )
Fixture Cleanup¶
import os

import pytest

@pytest.fixture
def temp_file():
    """Create temporary file with guaranteed cleanup."""
    filepath = "/tmp/test_file.txt"

    # Setup
    with open(filepath, 'w') as f:
        f.write("test data")

    yield filepath

    # Teardown (always runs)
    if os.path.exists(filepath):
        os.remove(filepath)
Isolation Between Tests¶
@pytest.fixture(autouse=True)
def isolate_tests():
    """Ensure tests don't affect each other."""
    # Before each test
    cache.clear()
    mock_service.reset()

    yield

    # After each test
    cache.clear()
    mock_service.reset()
CI/CD Integration¶
GitHub Actions¶
name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest

    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432  # expose the service to the runner at localhost
        options: >-
          --health-cmd pg_isready
          --health-interval 10s

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.13'

      - name: Install uv
        run: pip install uv

      - name: Install dependencies
        run: uv pip install -r requirements.txt

      - name: Run tests
        run: |
          pytest --cov=myapp \
            --cov-report=xml \
            --cov-report=term-missing
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost/test

      - name: Upload coverage
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage.xml
GitLab CI¶
test:
  image: python:3.13-slim
  services:
    - postgres:15
  variables:
    POSTGRES_DB: test
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: postgres
    DATABASE_URL: postgresql://postgres:postgres@postgres/test
  before_script:
    - pip install uv
    - uv pip install -r requirements.txt
  script:
    - pytest --cov=myapp --cov-report=term-missing
  coverage: '/TOTAL.*\s+(\d+%)$/'
Troubleshooting¶
Tests Run Slowly¶
# Find slow tests
pytest --durations=20
# Run in parallel
pytest -n auto
# Skip slow tests
pytest -m "not slow"
# Use --reuse-db for Django
pytest --reuse-db
# Skip migrations
pytest --nomigrations
Database Issues¶
# Force re-creation of the test database
pytest --create-db

# Fresh database with migrations skipped
pytest --no-migrations --create-db

# Note: --create-db takes precedence when combined with --reuse-db
pytest --create-db --reuse-db
Import Errors¶
# Check Python path
pytest --collect-only
# Run from project root
cd /path/to/project
pytest
# Set PYTHONPATH
PYTHONPATH=/path/to/project pytest
Fixture Not Found¶
# Check conftest.py location
# Fixtures must be in:
# - Same directory as test
# - Parent directory conftest.py
# - Root conftest.py
# Debug fixture discovery
pytest --fixtures # List all available fixtures
Best Practices Summary¶
✅ Do¶
- Organize by type: Separate unit, integration, UI tests
- Use fixtures: Avoid explicit setup/teardown
- Parametrize tests: Test multiple scenarios efficiently
- Use markers: Tag and selectively run tests
- Mock external services: Use VCR for HTTP, mocks for APIs
- Test behaviors: Not implementation details
- Keep tests isolated: No dependencies between tests
- Run tests before committing: Catch issues early
- Maintain coverage: 80%+ for business logic
- Document complex tests: Explain why, not just what
❌ Don't¶
- Don't test third-party code (Django, libraries)
- Don't aim for 100% coverage (diminishing returns)
- Don't write slow tests (mock I/O, use fixtures)
- Don't couple tests (one test per function)
- Don't ignore failing tests (fix or skip explicitly)
- Don't test implementation (test behavior)
- Don't duplicate test logic (use fixtures, factories)
- Don't commit without running tests
Integration with Other Tools¶
Pre-commit Hooks¶
# .pre-commit-config.yaml
repos:
  - repo: local
    hooks:
      - id: pytest
        name: pytest
        entry: pytest
        language: system
        pass_filenames: false
        always_run: true
        args: ['-x', '--tb=short']
justfile Commands¶
# Run all tests
test:
    pytest

# Run with coverage
test-cov:
    pytest --cov=myapp --cov-report=html
    open htmlcov/index.html

# Run specific test file
test-file file:
    pytest {{file}} -vv

# Run fast tests only
test-fast:
    pytest -m "not slow" -n auto

# Run integration tests
test-integration:
    pytest -m integration
VS Code Configuration¶
{
  "python.testing.pytestEnabled": true,
  "python.testing.pytestArgs": [
    "tests",
    "-v",
    "--tb=short"
  ],
  "python.testing.autoTestDiscoverOnSaveEnabled": true
}
Real-World Example¶
From the zenith production project:
pytest.ini¶
[pytest]
python_files = tests.py test_*.py *_tests.py conftest.py conftest_playwright.py
addopts =
    --cov-report=html
    --cov-config=.coveragerc
    --strict-markers
    --create-db
    --cov=poseidon
    --exitfirst
markers =
    email: marks tests that send emails
    live_test: marks tests that hit live APIs
    vcr: marks tests using VCR
    ui: marks UI tests requiring browser
    e2e: marks end-to-end tests
    slow: marks slow tests (>2 seconds)
    tenant: marks tests needing specific tenant database
filterwarnings =
    default::DeprecationWarning:__main__
    ignore::DeprecationWarning
    ignore::PendingDeprecationWarning
conftest.py Pattern¶
# tests/conftest.py
import pytest
from django.contrib.auth import get_user_model

def pytest_sessionstart(session):
    """Safety checks before running tests."""
    # Validate settings module
    # Check database hosts
    # Ensure test environment

@pytest.fixture
def user(db):
    """Create test user."""
    User = get_user_model()
    return User.objects.create(
        username="testuser",
        email="test@example.com"
    )

@pytest.fixture
def authenticated_client(client, user):
    """Authenticated Django test client."""
    client.force_login(user)
    return client

@pytest.fixture(scope='module')
def vcr_config():
    """VCR configuration for API tests."""
    return {
        'cassette_library_dir': 'tests/fixtures/cassettes',
        'record_mode': 'once',
        'match_on': ['uri', 'method'],
        'filter_headers': ['authorization', 'cookie'],
    }
Further Reading¶
Next Steps¶
- Security Scanning - Find vulnerabilities with Bandit and Vulture
- Django Testing - Django-specific testing patterns
- Pre-commit Hooks - Automate testing in Git workflow
- CI/CD Integration - Continuous testing in deployment pipelines