Background
Managing and making sense of health data isn’t easy. Many health facilities rely on multiple systems—electronic medical records (EMRs), laboratory systems, pharmacy databases—each designed for a specific purpose with its own data structures. While great for transactional processing, these systems don’t naturally work together, making it difficult to get a unified view of the data.
Some organisations use reporting tools like JasperReports or Apache Superset to visualise data, but these tools don’t solve the deeper issue of fragmented datasets. Before meaningful analysis can happen, data often needs to go through extract, transform, and load (ETL) processes to clean, merge, and structure it properly. Building these pipelines can be complex, requiring modifications to source systems or workarounds that aren’t always feasible. Integrating external data sources, like viral load results from national repositories, adds another layer of difficulty.
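To make the ETL idea concrete, here is a minimal sketch of the clean-merge-structure step described above. All record shapes and field names here are hypothetical illustrations, not DUFT's data model: an EMR export and a national viral load repository export use different conventions, so the external records are normalised before being joined onto patient visits.

```python
# Hypothetical EMR export: patient visits, using the EMR's field names.
emr_visits = [
    {"patient_id": "P001", "visit_date": "2024-03-01", "regimen": "TLD"},
    {"patient_id": "P002", "visit_date": "2024-03-02", "regimen": "TLE"},
]

# Hypothetical national repository export: its own field names and
# string-typed values, as external sources often have.
national_vl_results = [
    {"pid": "P001", "result_date": "2024-02-20", "vl_copies": "45"},
    {"pid": "P003", "result_date": "2024-02-25", "vl_copies": "1200"},
]

def transform_vl(record):
    """Transform step: normalise an external record to the EMR's conventions."""
    return {
        "patient_id": record["pid"],
        "viral_load": int(record["vl_copies"]),
    }

def merge(visits, vl_results):
    """Load step: left-join visits with their normalised viral load result."""
    vl_by_patient = {r["patient_id"]: r for r in map(transform_vl, vl_results)}
    merged = []
    for visit in visits:
        vl = vl_by_patient.get(visit["patient_id"])
        merged.append({**visit, "viral_load": vl["viral_load"] if vl else None})
    return merged

unified = merge(emr_visits, national_vl_results)
```

Even this toy version shows why ETL belongs in a dedicated layer: the field renaming, type coercion, and join logic would otherwise be scattered across source systems or report definitions.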
Developers and analysts often struggle with advanced indicators that require complex calculations. Without a dedicated transformation layer, they may end up writing intricate SQL queries buried deep within the system—hard to read, hard to verify, and nearly impossible to maintain. Many reporting systems are also rigid, requiring code changes and full redeployments just to update a single dashboard. Over time, this leads to fragmented and outdated reports.
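As an illustration of what a transformation layer buys you, the sketch below expresses an advanced indicator as a small, named, testable function instead of an opaque SQL query embedded in a report. The indicator and its field names are hypothetical examples, not DUFT's API: a viral load suppression rate, where the subtle reporting choice (patients without a result count in the denominator but not the numerator) is documented right where it is made.

```python
def suppression_rate(records, threshold=1000):
    """Share of patients with a documented viral load below `threshold`.

    Patients with no result are counted in the denominator but not the
    numerator -- the kind of easy-to-miss choice that gets buried when
    the indicator lives inside a long SQL string.
    """
    if not records:
        return 0.0
    with_result = [r for r in records if r.get("viral_load") is not None]
    suppressed = sum(1 for r in with_result if r["viral_load"] < threshold)
    return suppressed / len(records)

# Hypothetical cohort data for illustration.
cohort = [
    {"patient_id": "P001", "viral_load": 45},
    {"patient_id": "P002", "viral_load": 1200},
    {"patient_id": "P003", "viral_load": None},
    {"patient_id": "P004", "viral_load": 300},
]

rate = suppression_rate(cohort)  # 2 of 4 patients suppressed -> 0.5
```

Because the calculation is an ordinary function, it can be unit-tested and reviewed on its own, rather than verified only by eyeballing a dashboard.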
DUFT is built to solve these challenges. It acts as a centralised data platform, handling everything from data transformation and integration to visualisation and reporting. Instead of relying on complex workarounds, DUFT makes data engineering, analytics, and automation seamless. With a flexible, configuration-driven approach, it allows users to modify dashboards, queries, and workflows without touching the underlying code. Currently in beta, DUFT is expected to launch in 2025, offering a streamlined way to manage and use health data more effectively.