System Design

Packages & APIs

UnitPylot is developed as a VS Code extension using TypeScript for the frontend, Node.js for the backend, and Python for extracting test suite data.

The Node.js ecosystem provides the core functionality required by the extension, with modules used to call APIs, perform file system and path operations, and interact with SQLite databases. The key modules are:
  • vscode: provides the APIs for extension interaction and Copilot integration with VS Code.
  • fs: handles file system operations such as reading and writing workspace files.
  • path: manages directory structures and retrieves file paths.
  • sqlite3: facilitates database management for reading records generated by pytest-monitor.
Through the vscode module, several APIs are utilised to implement core functionalities:
  • Extension API: manages the activation, deactivation, and life cycle of the extension, and handles the registration and management of custom commands.
  • Language Model API: facilitates the creation of specialised agents through interaction with a cloud-based LLM (GitHub Copilot, powered by GPT-4o).
  • TextEditor API: provides interaction with open code files, used for displaying AI-generated suggestions.
  • Workspace API: allows access to user project structures and file management.
  • Tree View API: implements hierarchical organisation, enabling the structured display of the test suite.
  • Web View API: enables interactive UI elements, such as test history graphs.
  • Chat API: allows the user to interact with a chat participant that suggests test optimisations.

Design Patterns

UnitPylot is modular, with distinct components for test execution, history management, and user interface integration. It employs several well-known design patterns which are detailed below:

Singleton Pattern

The `TestRunner` class implements the Singleton pattern to ensure that only one instance of the test runner exists throughout the extension's lifecycle.

Purpose: Centralises test execution and result management. Also prevents multiple instances of the `TestRunner` from being created, which could lead to inconsistent test results or redundant resource usage.
Example:

const testRunner = TestRunner.getInstance(context.workspaceState);
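A minimal sketch of how such a Singleton can be structured in TypeScript is shown below. The class body here is illustrative only (the real `TestRunner` also manages workspace state and pytest execution, which is omitted):

```typescript
// Hypothetical sketch of the Singleton pattern; not the actual TestRunner.
class TestRunnerSketch {
  private static instance: TestRunnerSketch | undefined;

  // A private constructor prevents direct `new` calls from outside the class.
  private constructor(private readonly state: Map<string, unknown>) {}

  static getInstance(state: Map<string, unknown>): TestRunnerSketch {
    if (!TestRunnerSketch.instance) {
      TestRunnerSketch.instance = new TestRunnerSketch(state);
    }
    return TestRunnerSketch.instance;
  }
}

const first = TestRunnerSketch.getInstance(new Map());
const second = TestRunnerSketch.getInstance(new Map());
console.log(first === second); // true: both calls return the same instance
```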


Observer Pattern

The `SidebarViewProvider` and `FailingTestsProvider` classes use the Observer pattern to update the UI dynamically based on changes in test results or coverage data.

Purpose: Ensures that the sidebar and tree views are updated whenever new test results or coverage data become available, and decouples the UI components from the underlying logic.
Example:

vscode.commands.registerCommand('UnitPylot.updateSidebar', () => {
  webviewProvider.update();
  failingTestsProvider.refresh();
});
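Outside the VS Code API, the same subject/listener relationship can be sketched as follows (all names here are illustrative, not taken from the codebase):

```typescript
// Hypothetical Observer sketch: UI providers subscribe to a subject that
// notifies them whenever fresh test results arrive.
type ResultListener = (passed: number, failed: number) => void;

class TestResultSubject {
  private listeners: ResultListener[] = [];

  subscribe(listener: ResultListener): void {
    this.listeners.push(listener);
  }

  // Called by the test runner when a run completes.
  notify(passed: number, failed: number): void {
    for (const listener of this.listeners) {
      listener(passed, failed);
    }
  }
}

const subject = new TestResultSubject();
const updates: string[] = [];
subject.subscribe((p, f) => updates.push(`sidebar: ${p} passed, ${f} failed`));
subject.subscribe((_p, f) => updates.push(`tree: ${f} failing tests`));
subject.notify(10, 2);
console.log(updates);
```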


Command Pattern

The extension registers multiple commands (e.g., `runTests`, `runAllTests`, `updateSidebar`) that encapsulate specific actions triggered by user interactions or events.

Purpose: Decouples the invocation of commands from their implementation. Makes it easy to add new commands or modify existing ones without affecting other parts of the system.
Example:

vscode.commands.registerCommand('UnitPylot.vscode-run-tests.runTests', async () => {
  const { passed, failed } = await testRunner.getResultsSummary();
  vscode.commands.executeCommand('UnitPylot.vscode-run-tests.updateResults', { passed, failed });
});
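The decoupling that `registerCommand`/`executeCommand` provides can be sketched with a plain registry (a simplified stand-in for the VS Code command machinery, with hypothetical names):

```typescript
// Hypothetical Command-pattern sketch: handlers are registered under string
// identifiers and invoked indirectly by id, never called directly.
const commands = new Map<string, (...args: unknown[]) => unknown>();

function registerCommand(id: string, handler: (...args: unknown[]) => unknown): void {
  commands.set(id, handler);
}

function executeCommand(id: string, ...args: unknown[]): unknown {
  const handler = commands.get(id);
  if (!handler) {
    throw new Error(`Unknown command: ${id}`);
  }
  return handler(...args);
}

registerCommand("UnitPylot.runTests", () => "running all tests");
const result = executeCommand("UnitPylot.runTests");
console.log(result); // "running all tests"
```

Because callers only know the command id, a handler can be swapped or a new command added without touching any invocation site.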


Factory Pattern

The `SidebarViewProvider` and `FailingTestsProvider` classes act as factories for creating and managing UI components like webviews and tree views.

Purpose: Simplifies the creation of complex UI components by encapsulating their initialisation logic. Ensures consistency in how UI components are created and managed.
Example:

vscode.window.registerWebviewViewProvider(SidebarViewProvider.viewType, webviewProvider);
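The factory role can be sketched as below: the provider owns all construction logic for its UI items, so callers never assemble them by hand (the item shape and method name here are illustrative):

```typescript
// Hypothetical Factory sketch: a provider encapsulates the construction of
// tree items so every item is created consistently.
interface TreeItemSketch {
  label: string;
  icon: string;
}

class FailingTestsProviderSketch {
  // Factory method: all initialisation logic for a failing-test item
  // lives in one place.
  createItem(testName: string): TreeItemSketch {
    return { label: testName, icon: "error" };
  }
}

const provider = new FailingTestsProviderSketch();
const item = provider.createItem("test_login_fails");
console.log(item.label, item.icon);
```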


Strategy Pattern

The `HistoryProcessor` class uses different strategies to process historical test data, such as calculating pass/fail counts or generating trends.

Purpose: Encapsulates different algorithms for processing historical data, making it easy to switch or extend them.
Example:

const passFailHistory = HistoryProcessor.getPassFailHistory();
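A minimal sketch of interchangeable history-processing strategies (with hypothetical names and a simplified snapshot shape) might look like this:

```typescript
// Hypothetical Strategy sketch: interchangeable algorithms for processing
// historical test data, selected at the call site.
interface Snapshot {
  passed: number;
  failed: number;
}

type HistoryStrategy = (history: Snapshot[]) => number[];

const passRatio: HistoryStrategy = (h) =>
  h.map((s) => s.passed / (s.passed + s.failed));

const failureCounts: HistoryStrategy = (h) => h.map((s) => s.failed);

function processHistory(history: Snapshot[], strategy: HistoryStrategy): number[] {
  return strategy(history);
}

const history: Snapshot[] = [
  { passed: 8, failed: 2 },
  { passed: 10, failed: 0 },
];
const ratios = processHistory(history, passRatio);   // [0.8, 1]
const failures = processHistory(history, failureCounts); // [2, 0]
console.log(ratios, failures);
```

Adding a new trend calculation then means adding one more `HistoryStrategy` value, with no change to `processHistory`.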


Template Method Pattern

The `ReportGenerator` class uses a template method to define the steps for generating reports, while allowing specific details (e.g., JSON or Markdown format) to be customised.

Purpose: Provides a skeleton for the report generation process, ensuring consistency while allowing flexibility in specific steps.
Example:

ReportGenerator.generateSnapshotReport();
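The skeleton/step relationship can be sketched as follows (class and method names are illustrative, not the real `ReportGenerator` API):

```typescript
// Hypothetical Template Method sketch: the base class fixes the report
// generation steps; subclasses supply only the format-specific rendering.
interface Results {
  passed: number;
  failed: number;
}

abstract class BaseReportGenerator {
  // Template method: the overall workflow is fixed here.
  generate(results: Results): string {
    const header = this.renderHeader();
    const body = this.renderBody(results);
    return `${header}\n${body}`;
  }

  protected abstract renderHeader(): string;
  protected abstract renderBody(results: Results): string;
}

class MarkdownReportGenerator extends BaseReportGenerator {
  protected renderHeader(): string {
    return "# Test Report";
  }
  protected renderBody(r: Results): string {
    return `- Passed: ${r.passed}\n- Failed: ${r.failed}`;
  }
}

class JsonReportGenerator extends BaseReportGenerator {
  protected renderHeader(): string {
    return "";
  }
  protected renderBody(r: Results): string {
    return JSON.stringify(r);
  }
}

const md = new MarkdownReportGenerator().generate({ passed: 9, failed: 1 });
const json = new JsonReportGenerator().generate({ passed: 9, failed: 1 });
console.log(md);
console.log(json);
```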



Summary of Design Patterns

| Pattern | Purpose | Where used |
| --- | --- | --- |
| Singleton | Ensures a single instance of the test runner. | `TestRunner` |
| Observer | Enables automatic UI updates. | `SidebarViewProvider`, `FailingTestsProvider` |
| Command | Encapsulates user actions as commands. | `runTests`, `runAllTests`, etc. |
| Factory | Simplifies UI component creation. | `SidebarViewProvider`, `FailingTestsProvider` |
| Strategy | Provides flexible data processing strategies. | `HistoryProcessor` |
| Template Method | Defines a standardised report generation workflow. | `ReportGenerator` |

Design Goals

The project was designed with the following key goals in mind to ensure it provides a seamless, efficient, and extensible testing experience for Python developers:

Usability

Objective: Provide a seamless integration with Visual Studio Code and Pytest to enhance the developer experience.

Implementation:

  • The extension integrates directly with VS Code's command palette, sidebar, and tree views, allowing users to interact with test results and coverage data without leaving the editor.
  • Commands like runTests, runAllTests, and runSpecificTest are easily accessible, enabling users to execute tests with minimal effort.
  • Features like code coverage highlighting and inline suggestions ensure that developers can quickly identify and address issues in their code.


Automation

Objective: Minimise user intervention for running and modifying tests, enabling a more streamlined workflow.

Implementation:

  • Background tasks such as periodic test execution and snapshot saving are automated using interval-based scheduling.
  • The extension automatically updates the sidebar and tree views with the latest test results and coverage data after each test run or file save.
  • AI-powered commands, such as fixing failing tests or optimising slow tests, automate common tasks, reducing the manual effort required by developers.
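The interval-based scheduling mentioned above can be sketched like this (the function name and disposer convention are illustrative; the disposer mirrors VS Code's `Disposable` style):

```typescript
// Hypothetical sketch of interval-based background scheduling: run the test
// suite every `intervalMs` until the returned disposer is called.
function schedulePeriodicRuns(runTests: () => void, intervalMs: number): () => void {
  const timer = setInterval(runTests, intervalMs);
  // Returning a disposer lets the extension stop the task on deactivation.
  return () => clearInterval(timer);
}

let runs = 0;
const dispose = schedulePeriodicRuns(() => { runs += 1; }, 1000);
dispose(); // disposed immediately here, so no run has fired yet
console.log(runs); // 0
```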



Performance

Objective: Ensure efficient test execution and result visualisation, even for large codebases.

Implementation:

  • The TestRunner class is optimised to execute tests selectively, focusing on modified files or specific test cases, thereby reducing unnecessary overhead.
  • Test results and coverage data are cached and merged incrementally, avoiding redundant computations.
  • The sidebar and tree views are designed to update dynamically without blocking the editor, ensuring a smooth user experience.
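The incremental merging of cached results can be sketched as follows (the data shape is a simplification; the real coverage records carry more detail):

```typescript
// Hypothetical sketch of incremental coverage merging: only re-computed files
// overwrite their cached entries, so unchanged results are reused as-is.
type Coverage = Map<string, number>; // file path -> percent covered

function mergeCoverage(cached: Coverage, fresh: Coverage): Coverage {
  const merged = new Map(cached);
  for (const [file, pct] of fresh) {
    merged.set(file, pct); // newer data wins only for files that were re-run
  }
  return merged;
}

const cached: Coverage = new Map([["a.py", 80], ["b.py", 60]]);
const fresh: Coverage = new Map([["b.py", 75]]); // only b.py was re-run
const merged = mergeCoverage(cached, fresh);
console.log(merged.get("a.py"), merged.get("b.py")); // 80 75
```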



Extensibility

Objective: Provide a flexible architecture that allows for the integration of additional AI models or test frameworks in the future.

Implementation:

  • The extension uses modular components, such as the TestRunner, HistoryManager, and SidebarViewProvider, which can be extended or replaced without affecting other parts of the system.
  • AI-powered features are designed to work with both GitHub Copilot and custom LLM endpoints, making it easy to integrate new AI models.
  • The command-based architecture allows for the addition of new commands, such as support for alternative test frameworks like unittest or nose, with minimal changes to the existing codebase.


By focusing on these design goals, UnitPylot ensures that developers can efficiently manage their testing workflows while benefiting from advanced features like AI-powered suggestions and automated test management.

System Structure Overview

System structure diagram


The diagram above illustrates the overall design of the system. Our features are categorised into the subsections shown, each of which draws on several parts of the codebase. The Test Management container gives an overview of how test results and code coverage are gathered; this is a vital part of the system, as all other features rely on it. Test History Management contains the code that stores and processes historical testing data, and relies on the Test Runner to obtain that data. The AI functionality is abstracted so that either Copilot or other third-party APIs can be used.

Flow Diagrams

Some of the more complex functionalities are illustrated below:


Running Tests

Running Tests flow diagram


Finding Minimum Tests to Run

Finding Minimum Tests flow diagram

UML class diagrams

UI Elements

Sidebar View UML

Tree View UML

Element in Tree View


LLM Integration

LLM Integration UML


Test Runner

Test Runner UML


History Manager

History Manager UML