DUFT Server Unit Testing#
DUFT Server features extensive unit tests. Almost all aspects of DUFT Server are covered by unit tests, including services, APIs, and helper functions. Unit tests are generally stored close to the code they test, i.e. in the same directory as the main code file or a subdirectory of it. In addition, DUFT Server provides a few utilities and helper classes to make unit testing easier. For example, it is possible to turn features on or off during a unit test, allowing the test to run with or without a feature enabled. There are also utilities to simulate user accounts and user roles. When DUFT Server unit tests run, an in-memory database with common users and roles is created, which is accessible to all the unit tests.
Note
There are currently a few iterations of unit tests in DUFT Server. All unit tests will eventually migrate to the paradigms shown in this document, but you may still see older unit test paradigms.
The main change is that tests are moving away from django.test.TestCase in favour of the more flexible pytest fixtures.
Look for tests using test_environment and api_client, for example in test_security_api_view.
Overall test configuration#
The main test configuration is controlled in conftest.py which is stored directly in duft-server.
Setting up a testing environment and a test database#
conftest.py sets up a comprehensive testing environment for a Django REST API using pytest and pytest-django. It focuses on creating users with specific permissions, generating authentication tokens, and enabling feature-flagged API tests.
conftest.py begins by ensuring the Django database is properly configured for testing with the setup_django_environment fixture. This fixture ensures that the test database is ready before executing any tests. It uses the django_db_blocker to control access to the database during the test setup.
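A minimal sketch of what such a fixture can look like is shown below; the actual fixture in conftest.py is more involved, and the body here is illustrative only:
import pytest

@pytest.fixture(scope="session", autouse=True)
def setup_django_environment(django_db_setup, django_db_blocker):
    # django_db_blocker allows database access outside of an individual test,
    # so shared users, roles and other common test data can be created up front
    with django_db_blocker.unblock():
        # create common users, roles and other shared data here (illustrative)
        yield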
Additional fixtures that may be of use to other tests in the future should also be added to conftest.py.
Creating a test API client#
The api_client fixture provides an instance of APIClient from Django REST Framework, which is used to make API requests in tests.
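The fixture itself is very small; a sketch of what it can look like (the exact definition lives in conftest.py):
import pytest
from rest_framework.test import APIClient

@pytest.fixture
def api_client():
    # Return a fresh DRF APIClient for each test
    return APIClient()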
The code below shows how this api_client can be used to test an API call. It uses the reverse lookup of v2-settings, which resolves to api/v2/settings (defined in services/config/api/urls.py).
from django.urls import reverse

def test_get_settings(api_client):
    """
    Test retrieving settings without authentication.
    """
    url = reverse("v2-settings")
    response = api_client.get(url)
    assert response.status_code == 200
    data = response.json()
    assert data is not None
    assert data["unittest"] is False
Creating test users#
The create_test_users fixture is responsible for creating test users with specific permissions. It iterates over API permissions and creates a user for each permission. Each user is assigned the corresponding permission and a JWT token is generated using the token_obtain_pair endpoint. The fixture also creates an authenticated user, an admin user, and a user without any permissions. It returns a dictionary containing the created users and their respective tokens.
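A heavily simplified sketch of such a fixture is shown below. The real fixture in conftest.py does more (admin and no-permission users, error handling), and the import paths, permission content type, and token endpoint name are assumptions based on the description above:
import pytest
from django.contrib.auth.models import Permission, User
from django.contrib.contenttypes.models import ContentType
from django.urls import reverse

@pytest.fixture
def create_test_users(db, api_client):
    """Simplified sketch: create one user and one JWT per ApiPermission entry."""
    users, tokens = {}, {}
    content_type = ContentType.objects.get_for_model(User)  # illustrative choice
    for perm in ApiPermission:  # ApiPermission is defined in security_fixtures.py
        user = User.objects.create_user(username=f"{perm.value}_user", password="password")
        permission, _ = Permission.objects.get_or_create(
            codename=perm.value, name=perm.value, content_type=content_type
        )
        user.user_permissions.add(permission)
        # Obtain a JWT for this user via the token_obtain_pair endpoint
        response = api_client.post(
            reverse("token_obtain_pair"),
            {"username": user.username, "password": "password"},
        )
        users[user.username] = user
        tokens[f"{perm.value}_token"] = response.json()["access"]
    return {"user_list": users, "user_tokens": tokens}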
Creating a test environment#
The test_environment fixture combines the API client and user data, providing a unified environment for tests. It includes a helper function get_token that retrieves tokens for different user types or specific permissions. This makes it easy to perform authenticated API requests in tests.
The test below shows an example of using this environment. It shows how to use the API client, as well as how to obtain a token for an authenticated user.
@override_feature(user_authentication=True)
def test_security_test_api_view_authenticated(self, test_environment):
    client = test_environment["client"]
    token = test_environment["authenticated_token"]
    headers = {"HTTP_AUTHORIZATION": f"Bearer {token}"}
    response = client.get(reverse("security-test-not-authenticated"), **headers)
    assert response.status_code == 200
    response_data = response.json()
    assert response_data["message"] == "Authenticated"
    assert response_data["user"] == "authenticated"
For every Django Auth permission in the system (as defined in the ApiPermission enum in security_fixtures.py), the fixture generates an authenticated user. That means you can obtain a token for a user that holds only a particular permission. For example:
token1 = test_environment["user_tokens"][f"{ApiPermission.TEST_PERMISSION.value}_token"]
token2 = test_environment["user_tokens"][f"{ApiPermission.VIEW_DASHBOARD.value}_token"]
This code returns tokens for users that have (and only have) the TEST_PERMISSION and VIEW_DASHBOARD Django Auth permissions, respectively.
This means that, as long as you add the appropriate permission to the ApiPermission enum, the test framework creates a user named after that permission, and test tokens are generated automatically for those users. To be specific, the tokens above are created from a test_permission_user and a view_dashboard_permission_user, both with password as their password, and these users are generated and available only during unit testing.
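As an alternative to indexing user_tokens directly, the get_token helper mentioned earlier can retrieve the same tokens. A sketch of its use, assuming the helper is exposed on the environment dictionary and accepts a user type or permission name (the exact key and signature are defined in conftest.py):
# Sketch only: exact key and signature are defined in conftest.py
admin_token = test_environment["get_token"]("admin")
permission_token = test_environment["get_token"](ApiPermission.VIEW_DASHBOARD.value)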
Parameter-driven Testing#
A core testing function, api_service_tester, is a parameterized test helper. It allows tests to specify feature flags, target URLs, HTTP methods, user tokens, payloads, expected status codes, and custom assertions. The function constructs API requests, applies feature flags using context managers, and verifies responses against expected outcomes. It also supports dynamic URL reversing and includes error handling for missing tokens.
The code makes use of Django’s built-in permission system, ContentType framework, and REST Framework’s token-based authentication.
This allows for more advanced, declarative testing scenarios, for example in dashboard3dl_api_tests, which contains tests such as the following:
@pytest.mark.parametrize(
    "description, feature_flags, url_name, method, token_name, payload, expected_status, assertions, reverse_kwargs",
    [
        (
            "1 - test_get_3dl_dashboard_without_authorisation",
            {"user_authentication": False},
            "v2-3dldashboard",
            "get",
            None,
            None,
            HTTP_200_OK,
            [lambda response: len(response) > 0],
            {"dashboard_name": "3dlsample"},
        ),
        # ... further scenarios omitted ...
    ],
)
In this case:
| Parameter | Description |
|---|---|
| Description | A description of the scenario to be tested, e.g. testing a 3DL dashboard without authorisation |
| Feature flags | Which feature flags should be turned on or off, for example disabling user authentication for this scenario |
| URL name | The name of the URL to test, for example v2-3dldashboard |
| Method | The HTTP method to use for this request, e.g. get |
| Token name | The token name if a user is to be authenticated, for example authenticated_token, or None for an unauthenticated request |
| Payload | A payload, mostly for post requests |
| Expected status | The expected status code to be returned, for example HTTP_200_OK |
| Assertions | An array of lambda expressions containing additional assertions, for example lambda response: len(response) > 0 |
| Keyword arguments | Keyword arguments expected by the URL, for example {"dashboard_name": "3dlsample"} |
The test framework will execute this test by calling the API with the specified method and keyword arguments, authenticating the user first if required and with features turned on or off as specified, and it will assert the returned status code and run any additional assertions.
This approach makes it much easier to run similar testing scenarios, reducing the need to write boilerplate code.
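For completeness, the test function that the parametrize decorator above attaches to is typically little more than a pass-through to api_service_tester; the sketch below assumes keyword argument names that may differ from the real helper:
def test_3dl_dashboard_api(
    test_environment, description, feature_flags, url_name, method,
    token_name, payload, expected_status, assertions, reverse_kwargs,
):
    # Delegate the scenario to the shared helper (argument names illustrative)
    api_service_tester(
        test_environment=test_environment,
        feature_flags=feature_flags,
        url_name=url_name,
        method=method,
        token_name=token_name,
        payload=payload,
        expected_status=expected_status,
        assertions=assertions,
        reverse_kwargs=reverse_kwargs,
    )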
Legacy tests#
DUFT Server still has many older tests, based on AuthenticatedBaseTestCase. Under the hood, this test case already uses the new test paradigms:
@pytest.mark.django_db
class AuthenticatedBaseTestCase(TestCase):
    @pytest.fixture(autouse=True)
    def setup_test_environment(self, test_environment: Dict[str, Any]):
        # Store environment in class variable so setUp can access it
        self._test_environment = test_environment

    def setUp(self):
        """
        Sets up the test environment by combining pytest fixtures with traditional setUp.
        This allows both pytest and unittest patterns to work together.
        """
        super().setUp()
        if hasattr(self, '_test_environment'):
            self.client = self._test_environment["client"]
            self.user_tokens = self._test_environment["user_tokens"]
            self.user_list = self._test_environment["user_list"]
            self.authenticated_user = self._test_environment["authenticated_user"]
            self.authenticated_token = self._test_environment["authenticated_token"]
            self.admin_user = self._test_environment["admin_user"]
            self.admin_token = self._test_environment["admin_token"]
            self.no_permission_user = self._test_environment["no_permission_user"]
            self.no_permission_token = self._test_environment["no_permission_token"]
Tests that derive from this base class will have access to the same features:
@pytest.mark.django_db
class TestRunQueryAPI(AuthenticatedBaseTestCase):
    def setUp(self):
        super().setUp()
        self.client = APIClient()
        # Set up valid query data and variations
        self.valid_query_data = {"query": "SELECT * FROM dim_age_group", "data_connection_id": "ANA"}
        self.unauthorised_query_data = {"query": "SELECT * FROM dim_age_group", "data_connection_id": "EPMS"}
        self.invalid_query_data = {"query": ""}
        self.valid_query_data_csv = {"query": "SELECT * FROM dim_age_group", "data_connection_id": "ANA", "format": "csv"}
        # Use query_data_user from AuthenticatedBaseTestCase
        self.auth_headers = {"HTTP_AUTHORIZATION": f'Bearer {self.user_tokens["query_data_token"]}'}

    ...

    @override_feature(user_authentication=True)
    def test_post_valid_data_csv_download(self):
        url = reverse("run-query-format", kwargs={"data_format": "csv"})
        response = self.client.post(url, self.valid_query_data, format="json", **self.auth_headers)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(response["Content-Type"], "text/csv")
        self.assertTrue(response.content.startswith(b"age_group_id"))
        self.assertIn("Content-Disposition", response)
        self.assertTrue(response["Content-Disposition"].startswith("attachment"))
The reason these tests will be phased out is that they do not support parametrized unit testing, which makes them less flexible and declarative. However, it is likely both paradigms will coexist for a while.
Enabling or disabling features#
To test multiple scenarios, DUFT testing allows most features to be explicitly set before a test runs, and it is highly recommended to do so. You should not rely on the feature flags set in the .env file, as these may differ between users, yielding inconsistent results and failing tests.
Enabling or disabling features is easy using the override_feature decorator:
@override_feature(user_authentication=False)
Feature overrides can also be combined, as shown here in the test of the decorator itself (yes, the decorator also gets tested):
@override_feature(data_tasks=False, user_authentication=False, task_scheduler=True)
def test_override_feature_decorator_multiple():
    """Test that decorator correctly overrides multiple features."""
    assert features.data_tasks is False
    assert features.user_authentication is False
    assert features.task_scheduler is True
In many cases it is recommended to test code with authentication both enabled and disabled, to ensure the code performs as expected in both scenarios.
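As a pattern, the same endpoint can be exercised twice, once with authentication disabled and once enabled; the sketch below uses a hypothetical protected-resource URL name, and the expected status codes depend on how the endpoint is configured:
@override_feature(user_authentication=False)
def test_protected_resource_without_authentication(api_client):
    # No token needed when authentication is disabled (URL name is hypothetical)
    response = api_client.get(reverse("protected-resource"))
    assert response.status_code == 200

@override_feature(user_authentication=True)
def test_protected_resource_with_authentication(test_environment):
    client = test_environment["client"]
    token = test_environment["authenticated_token"]
    headers = {"HTTP_AUTHORIZATION": f"Bearer {token}"}
    response = client.get(reverse("protected-resource"), **headers)
    assert response.status_code == 200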
Another example is shown here, in data_connection_parameters_api_tests:
@override_feature(user_authentication=True, enforce_user_authentication=True)
def test_set_user_parameters_for_data_connection_not_existing_missing_permission(
    self,
):
    """
    Test setting user parameters for a non-existing data connection with authentication but missing permissions.
    """
    headers = {"HTTP_AUTHORIZATION": f"Bearer {self.authenticated_token}"}
    response = self.client.get(
        "/api/v2/data-connections/UNITTEST2/parameters", **headers  # type: ignore
    )
    assert response.status_code == 404  # Not found

    current_datetime_str = str(datetime.datetime.now())
    parameters = {"unitTested": current_datetime_str}
    response = self.client.post(
        "/api/v2/data-connections/UNITTEST2/parameters", data=parameters, **headers  # type: ignore
    )
    assert response.status_code == 403  # Forbidden
This is a common security pattern designed to avoid leaking information. In API design, it’s often considered best practice to return 403 Forbidden instead of 404 Not Found when a user isn’t authorised to know whether a resource exists. This prevents malicious actors from probing the system to discover valid resource identifiers.
Conditional Testing and Mocking Tests#
Mocking tests is not recommended. Mocking significantly reduces the reliability of tests, especially when overused. There are some tests where patching and mocking are unavoidable, but in most cases they should be avoided.
A better option is to use conditional testing. The FeatureFlags class (used through the features object) has been updated to support conditional testing. These flags are not feature flags in the sense that they enable additional functionality; instead, they instruct the testing framework to skip certain tests.
For example, if the AI Engine’s required local language models are not installed, the tests for the AI Engine should be skipped. This can be accomplished as follows:
Update your .env file by adding a specific test flag:
FEATURE_USER_AUTHENTICATION=True
FEATURE_SERVER_UPLOADS=True
FEATURE_TASK_SCHEDULER=False
DUFT_UPLOADS=uploads
DUFT_DATA=data
AI_ENGINE=True
TEST_AI_ENGINE=False # <-------
Add the feature to the FeatureFlags class:
class FeatureFlags:
    # Explicitly define attributes for all feature flags with defaults
    data_tasks: bool = True
    user_authentication: bool = False  # Indicates whether user authentication is enabled or not. If it is not enabled, no API will require authentication, except a small subset
    enforce_user_authentication: bool = False  # Indicates whether user authentication is enforced or not, if True, all APIs will require authentication
    server_uploads: bool = False
    task_scheduler: bool = False
    duft_uploads: str = "uploads"
    duft_data: str = "data"
    log_level: str = "INFO"
    ai_engine: bool = True

    # Test specific features, only used by the unit test framework
    test_ai_engine: bool = False  # <-------
Add an instruction to the test to skip if a feature is not enabled:
@pytest.mark.skipif(
    not features.test_ai_engine, reason="Feature 'test_ai_engine' is disabled"
)
In this example, the test will be skipped by the framework if the feature is not enabled.
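Putting it together, a guarded test might look like the following; the test body here is illustrative only:
@pytest.mark.skipif(
    not features.test_ai_engine, reason="Feature 'test_ai_engine' is disabled"
)
def test_ai_engine(test_environment):
    # Hypothetical body: only runs when TEST_AI_ENGINE is enabled in .env
    assert features.ai_engine is True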