Managing Change in the Data Warehouse Without Breaking Reports

Making changes in a data warehouse is unavoidable. Business definitions evolve, data models improve, and naming conventions change. However, uncontrolled changes to views or tables can easily break live reports, especially when those objects are already being consumed by Power BI, Excel, APIs, or downstream models. This article describes how we manage change safely, using … Read more
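One simple safeguard in this spirit (a minimal sketch of my own, not the process from the article) is to diff a view's column contract before deploying a change, and block the deployment if columns that downstream reports may reference would disappear. The function name and column lists here are illustrative:

```python
def breaking_changes(old_columns, new_columns):
    """Return the set of columns downstream consumers would lose.

    Renames and removals both show up as losses; newly added
    columns are treated as non-breaking for existing reports.
    """
    return set(old_columns) - set(new_columns)


# Hypothetical example: renaming CustomerName silently breaks
# every report that references the old column.
old = ["CustomerID", "CustomerName", "Region"]
new = ["CustomerID", "Customer_Name", "Region", "Segment"]

lost = breaking_changes(old, new)
if lost:
    print(f"Blocked: reports may reference dropped columns {sorted(lost)}")
```

A check like this can run in CI before any view definition is altered, turning an accidental breaking change into a deliberate, reviewed one.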

Building a Robust Data Engineering Utility Layer in Microsoft Fabric

Modern data platforms are not built on single scripts or ad‑hoc notebooks. They rely on reusable, well‑designed utility functions that handle extraction, transformation, auditing, and historical tracking in a consistent way. This article walks through a real‑world Python utility module used in Microsoft Fabric, explaining every major function, what problem it solves, and how it … Read more
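To illustrate the idea of a reusable utility layer (this is a hedged sketch, not the module from the article), a common pattern is a small auditing decorator that wraps each extraction or transformation step and records its outcome. Here the audit sink is just an in-memory list standing in for whatever table a Fabric notebook would write to:

```python
import time
from functools import wraps

# Stand-in for an audit table; a real utility module would
# persist these entries to a Lakehouse or Warehouse table.
AUDIT_LOG = []


def audited(fn):
    """Record step name, duration, and success/failure of a utility step."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        entry = {"step": fn.__name__, "started": time.time()}
        try:
            result = fn(*args, **kwargs)
            entry["status"] = "success"
            return result
        except Exception as exc:
            entry["status"] = f"failed: {exc}"
            raise
        finally:
            entry["duration_s"] = time.time() - entry["started"]
            AUDIT_LOG.append(entry)
    return wrapper


@audited
def extract_orders():
    # Hypothetical extraction step returning rows.
    return [{"order_id": 1}, {"order_id": 2}]


rows = extract_orders()
```

Because every step goes through the same wrapper, auditing stays consistent across the whole pipeline instead of being re-implemented in each notebook.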

How to Parameterise Dataflow Gen2 Destinations for Seamless Dev/Prod Deployment

Managing Dataflow Gen2 across multiple environments (Development and Production) can be a headache. By default, the destination Lakehouse or Warehouse IDs are hardcoded, so when you deploy to a Production workspace, your dataflow might still try to write back to your Development environment. In this guide, I’ll show you a workaround to parameterise your … Read more
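The core idea can be sketched in a few lines of Python (this is purely illustrative: the dictionary shape and `lakehouseId` key are hypothetical, not the real Dataflow Gen2 definition format). Given an exported definition and a dev-to-prod ID map, swap the hardcoded destination IDs before deploying:

```python
import copy


def retarget_destinations(definition, id_map):
    """Return a copy of a (hypothetical) dataflow definition with
    destination lakehouse IDs swapped per id_map, e.g. dev -> prod."""
    patched = copy.deepcopy(definition)
    for dest in patched.get("destinations", []):
        if dest.get("lakehouseId") in id_map:
            dest["lakehouseId"] = id_map[dest["lakehouseId"]]
    return patched


# Illustrative dev definition with a hardcoded destination ID.
dev_def = {"destinations": [{"table": "sales", "lakehouseId": "dev-123"}]}
prod_def = retarget_destinations(dev_def, {"dev-123": "prod-456"})
```

The deep copy keeps the original definition untouched, so the same source artefact can be retargeted at any number of environments.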

🚫 Why Sending Local Links to Power BI Files Isn’t an Enterprise Solution

Sharing local file links to Power BI (.pbix) reports may seem convenient, but it creates significant risks for your data, your reports, and your organisation. Here’s why this approach should be avoided in any enterprise environment. 1️⃣ It Creates Multiple, Conflicting Versions of the Same Report When a .pbix file is shared via local file … Read more

Using the SalesPipeline Template (pbit)

This template is designed to work as a reusable starting point for Sales Pipeline reporting across Busopp, Order and Invoice, using consistent datamart views and a parameter-driven connection pattern. The template can open and run “out of the box” if default values are provided (or if you enter the parameters when prompted). What you get … Read more

Pivot-me: Working with long, thin meta tables

The meta_codes, meta_dates (and any derived meta_values) views are intentionally long and thin. This structure has its advantages, but long, thin tables aren’t always the easiest shape to work with directly in a reporting tool. Shaping metadata into a report-friendly form: once you’ve applied some basic filtering (for example, keeping only the code or date types … Read more
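The reshaping step can be sketched with pandas (column names such as `code_type` and `code_value` are illustrative here, not necessarily the columns of the real meta views). A pivot turns one row per (record, attribute) pair into one report-friendly row per record:

```python
import pandas as pd

# Long, thin shape: one row per (record, code type) pair.
meta = pd.DataFrame({
    "record_id": [1, 1, 2, 2],
    "code_type": ["status", "priority", "status", "priority"],
    "code_value": ["Open", "High", "Closed", "Low"],
})

# Pivot into one row per record, one column per code type.
wide = (
    meta.pivot(index="record_id", columns="code_type", values="code_value")
        .reset_index()
)
```

The equivalent step in Power Query is a "Pivot Column" transformation on the type column; the pandas version above just makes the shape change explicit.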

Microsoft Fabric Deployment Pipelines

Topics covered: What are Deployment Pipelines? · Pipeline Structure · Stage Comparison in UI · Item Pairing & Status · Selective Deployment · Deployment Rules · Lakehouse & Deployment Pipelines in Fabric. Deployment pipelines move Lakehouse metadata, including shortcuts, but do not copy data or table schemas. After deployment, shortcuts still point to the original source, and the Lakehouse will be empty unless … Read more

cdm_Archive_to_STG

Overview The cdm_Archive_to_STG notebook is a critical component in the cdm_today and cdm_Archive pipeline. Its primary role is to create a staging table that represents a point-in-time snapshot of source data, which is then used by the cdm__Archive_upsert notebook to accurately update the main dimension table. This staging layer acts as a buffer between raw … Read more
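The snapshot-then-upsert pattern described above can be sketched in a few lines (a simplified pandas illustration of the general technique, not the actual notebook code, which would typically operate on Lakehouse tables): rows from the point-in-time staging snapshot replace matching keys in the dimension, and new keys are inserted.

```python
import pandas as pd


def upsert(dim, staging, key="id"):
    """Apply a staging snapshot to a dimension table: rows whose key
    already exists are replaced by the staging row, new keys are added."""
    kept = dim[~dim[key].isin(staging[key])]
    return pd.concat([kept, staging]).sort_values(key).reset_index(drop=True)


# Illustrative data: id 2 is updated, id 3 is new.
dim = pd.DataFrame({"id": [1, 2], "name": ["Alice", "Bob"]})
stg = pd.DataFrame({"id": [2, 3], "name": ["Bobby", "Cara"]})
result = upsert(dim, stg)
```

Materialising the snapshot as a staging table first, as the notebook does, means the upsert always runs against a stable, point-in-time view of the source rather than data that may still be changing.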