Module 3: Testing Framework – Quality Assurance with Bazel


Learning Objectives

By the end of this module, you will:

  • Master Bazel’s testing framework and test execution
  • Implement comprehensive unit, integration, and end-to-end testing
  • Configure test sharding, parallelization, and coverage reporting
  • Set up continuous integration with quality gates
  • Build a robust testing pipeline for production-ready applications

3.1 Bazel Testing Fundamentals

Understanding Bazel Test Rules

Bazel offers several ways to define Python tests:

  • py_test: the standard Python test rule from rules_python
  • py_pytest_test: pytest integration provided by the community rules_python_pytest ruleset
  • test_suite: Bazel's native rule for grouping multiple tests

Basic Test Structure

# BUILD file
load("@rules_python//python:defs.bzl", "py_test", "py_library")

py_library(
    name = "calculator",
    srcs = ["calculator.py"],
    visibility = ["//visibility:public"],
)

py_test(
    name = "calculator_test",
    srcs = ["calculator_test.py"],
    deps = [":calculator"],
    python_version = "PY3",  # deprecated in newer rules_python; safe to omit
)

# calculator.py
class Calculator:
    def add(self, a: int, b: int) -> int:
        return a + b

    def subtract(self, a: int, b: int) -> int:
        return a - b

    def multiply(self, a: int, b: int) -> int:
        return a * b

    def divide(self, a: int, b: int) -> float:
        if b == 0:
            raise ValueError("Cannot divide by zero")
        return a / b

# calculator_test.py
import unittest
from calculator import Calculator

class TestCalculator(unittest.TestCase):
    def setUp(self):
        self.calc = Calculator()

    def test_add(self):
        self.assertEqual(self.calc.add(2, 3), 5)
        self.assertEqual(self.calc.add(-1, 1), 0)

    def test_subtract(self):
        self.assertEqual(self.calc.subtract(5, 3), 2)
        self.assertEqual(self.calc.subtract(1, 1), 0)

    def test_multiply(self):
        self.assertEqual(self.calc.multiply(3, 4), 12)
        self.assertEqual(self.calc.multiply(0, 5), 0)

    def test_divide(self):
        self.assertEqual(self.calc.divide(10, 2), 5.0)
        with self.assertRaises(ValueError):
            self.calc.divide(5, 0)

if __name__ == '__main__':
    unittest.main()
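
With the BUILD file in place, the test runs hermetically under Bazel (the package path below is a placeholder for wherever these files live):

# Run a single test target and stream its output
bazel test //path/to/package:calculator_test --test_output=all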

3.2 Advanced Testing with pytest

Setting up pytest with Bazel

# WORKSPACE file additions
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

# pytest integration (sha256 and urls elided here; pin a released
# version of the community rules_python_pytest ruleset)
http_archive(
    name = "rules_python_pytest",
    sha256 = "...",
    urls = ["..."],
)

load("@rules_python_pytest//:repositories.bzl", "rules_python_pytest_dependencies")
rules_python_pytest_dependencies()

pytest Configuration

# BUILD file
load("@rules_python//python:defs.bzl", "py_test")
load("@rules_python_pytest//python:pytest.bzl", "py_pytest_test")

py_pytest_test(
    name = "advanced_calculator_test",
    srcs = ["advanced_calculator_test.py"],
    deps = [
        ":calculator",
        "@pip//pytest",
        "@pip//pytest-cov",
        "@pip//pytest-mock",
    ],
    args = [
        "--verbose",
        "--tb=short",
        "--cov=calculator",
        "--cov-report=term-missing",
    ],
)

# advanced_calculator_test.py
import pytest
from unittest.mock import Mock, patch
from calculator import Calculator

class TestAdvancedCalculator:
    @pytest.fixture
    def calculator(self):
        return Calculator()

    @pytest.mark.parametrize("a,b,expected", [
        (2, 3, 5),
        (-1, 1, 0),
        (0, 0, 0),
        (100, -50, 50),
    ])
    def test_add_parametrized(self, calculator, a, b, expected):
        assert calculator.add(a, b) == expected

    @pytest.mark.parametrize("a,b,expected", [
        (10, 2, 5.0),
        (9, 3, 3.0),
        (1, 2, 0.5),
    ])
    def test_divide_parametrized(self, calculator, a, b, expected):
        assert calculator.divide(a, b) == expected

    def test_divide_by_zero_exception(self, calculator):
        with pytest.raises(ValueError, match="Cannot divide by zero"):
            calculator.divide(5, 0)

    # patch-injected mocks come before pytest fixtures in the signature
    @patch('calculator.Calculator.add')
    def test_add_mocked(self, mock_add, calculator):
        mock_add.return_value = 42
        result = calculator.add(1, 2)
        mock_add.assert_called_once_with(1, 2)
        assert result == 42
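
Extra pytest flags can also be passed at invocation time instead of being hard-coded in the BUILD file; Bazel forwards each --test_arg value to the test binary:

# Forward additional flags to pytest at run time
bazel test :advanced_calculator_test --test_arg=--maxfail=1 --test_arg=-k --test_arg=divide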

3.3 Test Organization and Structure

Test Suites and Categories

# BUILD file
load("@rules_python//python:defs.bzl", "py_test")

# Unit tests
# NOTE: when srcs holds multiple files, py_test needs a `main` attribute
# (or a src named after the target) to locate its entry point; the same
# applies to the integration and e2e targets below.
py_test(
    name = "unit_tests",
    srcs = glob(["*_unit_test.py"]),
    deps = [
        "//src:core_library",
        "@pip//pytest",
    ],
    size = "small",
    tags = ["unit"],
)

# Integration tests
py_test(
    name = "integration_tests",
    srcs = glob(["*_integration_test.py"]),
    deps = [
        "//src:core_library",
        "//src:database",
        "@pip//pytest",
        "@pip//testcontainers",
    ],
    size = "medium",
    tags = ["integration"],
    data = ["//testdata:sample_data"],
)

# End-to-end tests
py_test(
    name = "e2e_tests",
    srcs = glob(["*_e2e_test.py"]),
    deps = [
        "//src:application",
        "@pip//selenium",
        "@pip//pytest",
    ],
    size = "large",
    tags = ["e2e"],
    timeout = "long",
)

# Test suite combining all tests
test_suite(
    name = "all_tests",
    tests = [
        ":unit_tests",
        ":integration_tests",
        ":e2e_tests",
    ],
)
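
The tags attribute pays off at invocation time: --test_tag_filters selects or excludes whole categories without editing BUILD files:

# Run only the unit-tagged tests
bazel test //tests:all_tests --test_tag_filters=unit

# Run everything except the slow e2e tests
bazel test //tests:all_tests --test_tag_filters=-e2e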

Test Data Management

# BUILD file for test data
filegroup(
    name = "test_fixtures",
    srcs = glob(["fixtures/**/*.json"]),
    visibility = ["//tests:__subpackages__"],
)

filegroup(
    name = "sample_database",
    srcs = ["sample.db"],
    visibility = ["//tests:__subpackages__"],
)
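
Targets that list these filegroups in their data attribute can locate the files at run time through the TEST_SRCDIR environment variable Bazel sets for every test. A minimal sketch (the workspace name and fixture path below are placeholders); the runfiles library shipped with rules_python is a more robust alternative:

# fixture_loading_test.py: a minimal sketch of loading declared test data
import json
import os

def load_fixture(relative_path: str):
    # TEST_SRCDIR points at the runfiles tree Bazel builds for the test;
    # "my_workspace" stands in for your actual WORKSPACE name.
    root = os.environ["TEST_SRCDIR"]
    with open(os.path.join(root, "my_workspace", relative_path)) as f:
        return json.load(f)

users = load_fixture("tests/fixtures/users.json")  # hypothetical fixture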

3.4 Test Coverage and Quality Metrics

Coverage Configuration

# .coveragerc
[run]
source = src/
omit = 
    */tests/*
    */test_*
    setup.py
    */__pycache__/*

[report]
exclude_lines =
    pragma: no cover
    def __repr__
    raise AssertionError
    raise NotImplementedError

[html]
directory = coverage_html_report

Bazel Coverage Integration

# BUILD file
py_test(
    name = "coverage_test",
    srcs = ["test_coverage.py"],
    deps = [
        "//src:main_library",
        "@pip//coverage",
        "@pip//pytest",
        "@pip//pytest-cov",
    ],
    # these pytest flags assume the test's entry point invokes pytest,
    # e.g. pytest.main(sys.argv[1:])
    args = [
        "--cov=src",
        "--cov-report=xml:coverage.xml",
        "--cov-report=html:htmlcov",
        "--cov-fail-under=80",
    ],
)
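
Coverage can also be collected by Bazel itself, independent of pytest-cov, and the combined LCOV file rendered with standard tooling:

# Collect coverage for all tests into one LCOV report
bazel coverage //... --combined_report=lcov

# Render HTML from the combined report (requires the lcov package)
genhtml bazel-out/_coverage/_coverage_report.dat -o coverage_html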

3.5 Test Sharding and Parallelization

Sharding Configuration

# BUILD file
py_test(
    name = "large_test_suite",
    srcs = ["large_test_suite.py"],
    deps = [":test_dependencies"],
    shard_count = 4,  # Split test into 4 shards
    size = "large",
)

# Custom sharding strategy: Bazel exports TEST_TOTAL_SHARDS and
# TEST_SHARD_INDEX to the test environment automatically whenever
# shard_count is set, so no env stanza is needed.
py_test(
    name = "custom_sharded_test",
    srcs = ["custom_test.py"],
    deps = [":dependencies"],
    shard_count = 4,
)
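
Inside the test, Bazel's sharding contract is simple: read the two environment variables and run only your slice. A minimal sketch (round-robin assignment is one common strategy):

# shard_selection_test.py: a minimal sketch of honoring Bazel's sharding protocol
import os
import unittest

SHARD_INDEX = int(os.environ.get("TEST_SHARD_INDEX", "0"))
TOTAL_SHARDS = int(os.environ.get("TEST_TOTAL_SHARDS", "1"))

# Touching this file tells Bazel the test understands sharding.
_status_file = os.environ.get("TEST_SHARD_STATUS_FILE")
if _status_file:
    open(_status_file, "w").close()

class ShardedTest(unittest.TestCase):
    def test_my_slice(self):
        cases = list(range(100))  # stand-in for expensive test cases
        for i in cases:
            if i % TOTAL_SHARDS != SHARD_INDEX:
                continue  # another shard owns this case
            self.assertGreaterEqual(i, 0)

if __name__ == "__main__":
    unittest.main()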

Parallel Test Execution

# Run tests in parallel
bazel test //tests/... --jobs=8 --test_output=errors

# Run with specific test strategy
bazel test //tests/... --test_strategy=standalone --jobs=auto

3.6 Integration Testing Patterns

Database Integration Tests

# database_integration_test.py
import pytest
import testcontainers.postgres
from sqlalchemy import create_engine
from src.database import DatabaseManager

class TestDatabaseIntegration:
    @pytest.fixture(scope="class")
    def postgres_container(self):
        with testcontainers.postgres.PostgresContainer() as postgres:
            yield postgres

    @pytest.fixture
    def db_manager(self, postgres_container):
        connection_url = postgres_container.get_connection_url()
        engine = create_engine(connection_url)
        return DatabaseManager(engine)

    def test_user_creation(self, db_manager):
        user_id = db_manager.create_user("test@example.com", "Test User")
        assert user_id is not None

        user = db_manager.get_user(user_id)
        assert user.email == "test@example.com"
        assert user.name == "Test User"

    def test_user_deletion(self, db_manager):
        user_id = db_manager.create_user("delete@example.com", "Delete User")
        assert db_manager.delete_user(user_id) is True
        assert db_manager.get_user(user_id) is None

API Integration Tests

# api_integration_test.py
import pytest
from src.api import create_app

class TestAPIIntegration:
    @pytest.fixture(scope="class")
    def client(self):
        app = create_app(testing=True)
        with app.test_client() as client:
            yield client

    def test_health_endpoint(self, client):
        response = client.get('/health')
        assert response.status_code == 200
        assert response.json['status'] == 'healthy'

    def test_user_crud_operations(self, client):
        # Create user
        user_data = {'name': 'Test User', 'email': 'test@example.com'}
        response = client.post('/api/users', json=user_data)
        assert response.status_code == 201
        user_id = response.json['id']

        # Get user
        response = client.get(f'/api/users/{user_id}')
        assert response.status_code == 200
        assert response.json['name'] == 'Test User'

        # Update user
        updated_data = {'name': 'Updated User'}
        response = client.put(f'/api/users/{user_id}', json=updated_data)
        assert response.status_code == 200

        # Delete user
        response = client.delete(f'/api/users/{user_id}')
        assert response.status_code == 204

3.7 End-to-End Testing

Selenium E2E Tests

# e2e_web_test.py
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

class TestWebApplicationE2E:
    @pytest.fixture(scope="class")
    def driver(self):
        options = webdriver.ChromeOptions()
        options.add_argument("--headless")
        options.add_argument("--no-sandbox")
        options.add_argument("--disable-dev-shm-usage")

        driver = webdriver.Chrome(options=options)
        driver.implicitly_wait(10)
        yield driver
        driver.quit()

    def test_user_registration_flow(self, driver):
        driver.get("http://localhost:8080/register")

        # Fill registration form
        driver.find_element(By.ID, "name").send_keys("Test User")
        driver.find_element(By.ID, "email").send_keys("test@example.com")
        driver.find_element(By.ID, "password").send_keys("password123")

        # Submit form
        driver.find_element(By.ID, "submit").click()

        # Wait for success message
        wait = WebDriverWait(driver, 10)
        success_message = wait.until(
            EC.presence_of_element_located((By.CLASS_NAME, "success"))
        )
        assert "Registration successful" in success_message.text

    def test_login_flow(self, driver):
        driver.get("http://localhost:8080/login")

        driver.find_element(By.ID, "email").send_keys("test@example.com")
        driver.find_element(By.ID, "password").send_keys("password123")
        driver.find_element(By.ID, "login").click()

        # Verify redirect to dashboard
        wait = WebDriverWait(driver, 10)
        wait.until(EC.url_contains("/dashboard"))
        assert "/dashboard" in driver.current_url

3.8 Test Configuration and Environment Management

Test Environment Configuration

# test_config.py
import os
from dataclasses import dataclass

@dataclass
class TestConfig:
    database_url: str = "sqlite:///:memory:"
    redis_url: str = "redis://localhost:6380"
    api_base_url: str = "http://localhost:8080"
    test_data_dir: str = "testdata"

    @classmethod
    def from_env(cls):
        return cls(
            database_url=os.getenv("TEST_DATABASE_URL", cls.database_url),
            redis_url=os.getenv("TEST_REDIS_URL", cls.redis_url),
            api_base_url=os.getenv("TEST_API_BASE_URL", cls.api_base_url),
            test_data_dir=os.getenv("TEST_DATA_DIR", cls.test_data_dir),
        )

pytest Configuration

# conftest.py
import pytest
import tempfile
import shutil
from test_config import TestConfig

@pytest.fixture(scope="session")
def test_config():
    return TestConfig.from_env()

@pytest.fixture
def temp_dir():
    temp_dir = tempfile.mkdtemp()
    yield temp_dir
    shutil.rmtree(temp_dir)

@pytest.fixture(autouse=True)
def setup_test_environment(test_config):
    # Setup code before each test
    yield
    # Teardown code after each test

3.9 Continuous Integration Integration

GitHub Actions with Bazel

# .github/workflows/test.yml
name: Test Suite

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # quote versions so YAML doesn't parse 3.10 as the float 3.1
        python-version: ['3.8', '3.9', '3.10', '3.11']

    steps:
    - uses: actions/checkout@v3

    # without this step the version matrix above has no effect
    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v4
      with:
        python-version: ${{ matrix.python-version }}

    - name: Setup Bazel
      uses: bazelbuild/setup-bazelisk@v2

    - name: Cache Bazel
      uses: actions/cache@v3
      with:
        path: ~/.cache/bazel
        key: ${{ runner.os }}-bazel-${{ hashFiles('**/BUILD', '**/WORKSPACE') }}
        restore-keys: |
          ${{ runner.os }}-bazel-

    - name: Run unit tests
      run: |
        bazel test //tests:unit_tests --test_output=errors --test_summary=detailed

    - name: Run integration tests
      run: |
        bazel test //tests:integration_tests --test_output=errors

    - name: Generate coverage report
      run: |
        bazel coverage //tests:all_tests --combined_report=lcov

    - name: Upload coverage to Codecov
      uses: codecov/codecov-action@v3
      with:
        file: ./bazel-out/_coverage/_coverage_report.dat

    - name: Run E2E tests
      run: |
        bazel test //tests:e2e_tests --test_output=streamed
      if: github.event_name == 'push' && github.ref == 'refs/heads/main'

3.10 Quality Gates and Metrics

Quality Gate Configuration

# BUILD file
load("@rules_python//python:defs.bzl", "py_test")

py_test(
    name = "quality_gates",
    srcs = ["quality_gates_test.py"],
    deps = [
        "//src:all_modules",
        "@pip//pytest",
        "@pip//pytest-cov",
        # the flags below come from pytest plugins, which must be in deps
        "@pip//pytest-flake8",
        "@pip//pytest-mypy",
        "@pip//pytest-black",
    ],
    args = [
        "--cov=src",
        "--cov-fail-under=80",
        "--flake8",  # pytest-flake8
        "--mypy",    # pytest-mypy
        "--black",   # pytest-black (the plugin's flag is --black)
    ],
    tags = ["quality"],
)
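
In CI, the quality tag makes gate enforcement a single invocation:

# Enforce all quality-tagged checks; any failure blocks the pipeline
bazel test //... --test_tag_filters=quality --test_output=errors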

Performance Testing

# performance_test.py
import pytest
import time
from src.calculator import Calculator

class TestPerformance:
    def test_calculation_performance(self):
        calc = Calculator()

        start_time = time.time()
        for i in range(10000):
            calc.add(i, i + 1)
        end_time = time.time()

        execution_time = end_time - start_time
        assert execution_time < 1.0, f"Performance test failed: {execution_time}s"

    # the `benchmark` fixture comes from the pytest-benchmark plugin
    @pytest.mark.benchmark
    def test_benchmark_operations(self, benchmark):
        calc = Calculator()
        result = benchmark(calc.multiply, 123, 456)
        assert result == 56088

3.11 Test Reporting and Analytics

Custom Test Reporter

# test_reporter.py
import json
from datetime import datetime

class TestReporter:
    def __init__(self):
        self.results = {
            'timestamp': datetime.utcnow().isoformat(),
            'tests': [],
            'summary': {
                'total': 0,
                'passed': 0,
                'failed': 0,
                'skipped': 0
            }
        }

    def add_test_result(self, test_name, status, duration, error=None):
        """Record one result; status must be 'passed', 'failed', or 'skipped'."""
        if status not in ('passed', 'failed', 'skipped'):
            raise ValueError(f"Unknown test status: {status}")
        self.results['tests'].append({
            'name': test_name,
            'status': status,
            'duration': duration,
            'error': error
        })

        self.results['summary']['total'] += 1
        self.results['summary'][status] += 1

    def generate_report(self, output_file='test_report.json'):
        with open(output_file, 'w') as f:
            json.dump(self.results, f, indent=2)

        print(f"Test Report Generated: {output_file}")
        print(f"Total: {self.results['summary']['total']}")
        print(f"Passed: {self.results['summary']['passed']}")
        print(f"Failed: {self.results['summary']['failed']}")
        print(f"Skipped: {self.results['summary']['skipped']}")
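
Wired into a test run, the reporter accumulates per-test entries and emits a machine-readable summary:

# Example usage of TestReporter
reporter = TestReporter()
reporter.add_test_result("test_add", "passed", duration=0.012)
reporter.add_test_result("test_divide_by_zero", "failed", duration=0.034,
                         error="ValueError not raised")
reporter.generate_report("test_report.json")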

3.12 Best Practices and Advanced Patterns

Test Utilities and Helpers

# test_utils.py
import functools
import time
from typing import Callable, Any

def retry(max_attempts: int = 3, delay: float = 1.0):
    """Decorator to retry flaky tests"""
    def decorator(func: Callable) -> Callable:
        @functools.wraps(func)
        def wrapper(*args, **kwargs) -> Any:
            last_exception = None
            for attempt in range(max_attempts):
                try:
                    return func(*args, **kwargs)
                except Exception as e:
                    last_exception = e
                    if attempt < max_attempts - 1:
                        time.sleep(delay)
                    continue
            raise last_exception
        return wrapper
    return decorator

def timeout(seconds: int):
    """Decorator to add a timeout to tests.

    Note: relies on SIGALRM, so it is Unix-only and must run in the
    main thread.
    """
    def decorator(func: Callable) -> Callable:
        @functools.wraps(func)
        def wrapper(*args, **kwargs) -> Any:
            import signal

            def timeout_handler(signum, frame):
                raise TimeoutError(f"Test timed out after {seconds} seconds")

            signal.signal(signal.SIGALRM, timeout_handler)
            signal.alarm(seconds)

            try:
                return func(*args, **kwargs)
            finally:
                signal.alarm(0)
        return wrapper
    return decorator
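
Both decorators compose with ordinary test functions; a short example (the endpoint is hypothetical):

# Example: stabilize a flaky check and cap its runtime
import requests
from test_utils import retry, timeout

@retry(max_attempts=3, delay=0.5)
@timeout(30)
def test_external_service_responds():
    response = requests.get("http://localhost:8080/health")
    assert response.status_code == 200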

Test Data Factories

# test_factories.py
import factory
from datetime import datetime
from src.models import User, Product, Order

class UserFactory(factory.Factory):
    class Meta:
        model = User

    id = factory.Sequence(lambda n: n)
    name = factory.Faker('name')
    email = factory.Faker('email')
    created_at = factory.LazyFunction(datetime.utcnow)
    is_active = True

class ProductFactory(factory.Factory):
    class Meta:
        model = Product

    id = factory.Sequence(lambda n: n)
    name = factory.Faker('word')
    price = factory.Faker('pydecimal', left_digits=3, right_digits=2, positive=True)
    description = factory.Faker('text', max_nb_chars=200)

class OrderFactory(factory.Factory):
    class Meta:
        model = Order

    id = factory.Sequence(lambda n: n)
    user = factory.SubFactory(UserFactory)
    products = factory.List([factory.SubFactory(ProductFactory) for _ in range(3)])
    total_amount = factory.LazyAttribute(lambda obj: sum(p.price for p in obj.products))
    created_at = factory.LazyFunction(datetime.utcnow)
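
Factories keep test setup terse while still allowing targeted overrides:

# Example usage of the factories
user = UserFactory(name="Alice Example")   # override a single field
order = OrderFactory(user=user)            # nested objects are built for you
assert order.total_amount == sum(p.price for p in order.products)

users = UserFactory.build_batch(5)         # five distinct users in one call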

Practical Exercise: Complete Testing Pipeline

Create a comprehensive testing pipeline for a simple e-commerce application:

  1. Setup Project Structure:
   ecommerce/
   ├── WORKSPACE
   ├── BUILD
   ├── src/
   │   ├── BUILD
   │   ├── models/
   │   ├── services/
   │   └── api/
   ├── tests/
   │   ├── BUILD
   │   ├── unit/
   │   ├── integration/
   │   └── e2e/
   └── testdata/
  2. Implement Test Categories:

    • Unit tests for business logic
    • Integration tests for database operations
    • API integration tests
    • End-to-end user journey tests
  3. Configure Quality Gates:

    • 80% code coverage requirement
    • Performance benchmarks
    • Code quality checks (linting, type checking)
  4. Setup CI/CD Pipeline:

    • Automated test execution
    • Coverage reporting
    • Quality gate enforcement

Module Summary

In this module, you’ve learned:

  • Comprehensive testing strategies with Bazel
  • Advanced testing patterns and best practices
  • Test organization, sharding, and parallelization
  • Integration and end-to-end testing approaches
  • Quality gates and continuous integration
  • Performance testing and monitoring

https://www.linkedin.com/in/sushilbaligar/
https://github.com/sushilbaligar
https://dev.to/sushilbaligar
https://medium.com/@sushilbaligar
