Power BI’s AI Toolkit: Evaluating Copilot for DAX Development in Power BI Desktop

If you have ever felt that slight surge of hope when clicking the Copilot button in Power BI Desktop, you are not alone. You have a specific piece of business logic in mind, and you are curious whether Copilot can actually save you time writing, formatting, and documenting a DAX measure.

The honest frustration in 2026 is not that AI does not work. It is the uncertainty of when to rely on it. If you have ever prompted Copilot for a measure only to find yourself spending more time validating its response than writing the measure yourself, you know the feeling.

It is not about being for or against AI. It is about knowing exactly where the tool hits its stride and where it starts to trip over its own logic.

To cut through that uncertainty, we put Copilot to work on a practical sales model across five realistic scenarios: writing measures, building advanced logic, explaining complex DAX, debugging errors, and documenting the semantic model. Along the way, each result gets a simple rating:

🟢 Green Light — Copilot handles it reliably. Safe to use with a quick review.

🟡 Yellow Light — Copilot provides a strong starting point, but the output needs validation before it goes into production.

🔴 Red Light — Copilot struggles or misleads. A human needs to drive.

A sample report is available for download at the end of this post so you can follow along and test these prompts against the same data.

Let’s find out where each scenario lands.


The Data Model: A Standard Sales Model

Instead of testing Copilot on isolated or theoretical formulas, we are putting it to work on a simplified but practical dataset.

The model consists of four core tables connected through a standard star schema:

  • Sales: The fact table. Each row is a single transaction with a SalesID, Amount, and SalesDate, along with the ProductID, EmployeeID, and RegionID keys that link to the dimensions.
  • Product: 15 products across six categories: Smartphones, Laptops, Tablets, TVs, Headphones, and Watches, each with a Product Code and Product Image.
  • Employee: 17 employees spanning five departments (Retail, Sales, Marketing, HR, and R&D), with JobTitle, Department, and a ManagerID for hierarchy.
  • Region: Three regions: the United States, Europe, and Asia.

The Sales table connects to each dimension through its ProductID, EmployeeID, and RegionID. Column names are descriptive, and the schema is intentionally straightforward.

Why does that matter? Copilot’s output quality is directly tied to how well the model is structured. Column names like Amount and SalesDate give it far more to work with than Col1 or Fld_003. A little upfront investment in clean, readable naming pays off quickly when we start prompting.

The Five Scenarios

The goal is to move beyond simple sums and see how Copilot handles the tasks that actually eat up our time. Here is what we are testing:

  1. Establishing the Basics: Foundational measures like total sales and department-filtered aggregations, to confirm Copilot correctly reads the model structure and relationships.
  2. Building Advanced Logic: More complex calculations like year-over-year growth and percentage of total, where filter context starts to matter.
  3. Explaining Existing Measures: Handing Copilot DAX that is already in the model and asking it to translate the logic into plain language.
  4. Debugging Logical Errors: Providing a measure that runs without error but returns incorrect results, and seeing whether Copilot can identify the real problem.
  5. Generating Measure Descriptions & Comments: Using Copilot to auto-document completed measures directly in the model, and evaluating whether the output is accurate and usable.

Because of the generative nature of Copilot, it may not produce the exact same DAX or explanations for you as shown here. The patterns and behaviors should be consistent, but the specific wording may vary.

This is not an attempt to break the tools with impossible requests. It is a straightforward evaluation of how much repetitive, manual DAX work we can safely offload to Copilot in a standard reporting environment.


TL;DR — Key Findings at a Glance

If you are short on time, here is the summary. We tested Copilot in Power BI Desktop across five scenarios using the sales model described above. Each scenario was tested using both the Copilot pane and the DAX query view inline Copilot where applicable.

The findings:

  • 🟢 Establishing the Basics: Copilot correctly identified table relationships and column names without being told. Basic measures and filtered aggregations came back clean and accurate.
  • 🟡 Building Advanced Logic – Copilot Pane: Strong DAX patterns for YoY growth and percentage contribution, but both measures carry dependencies that Copilot does not resolve on its own. The Date table requirement and the ALL vs ALLSELECTED design decision still need a human in the loop.
  • 🟢 Building Advanced Logic – DAX Query View: Writing measures through the inline Copilot added a meaningful validation step. The % of Total Sales by Product measure used ALLSELECTED correctly without prompting, and the Suggest measures feature returned six well-structured analytical measures in a single query with inline comments and a preview ready to run.
  • 🟡 Explaining Complex DAX Measures – Copilot Pane: Once the full DAX is pasted in, Copilot handles even the most complex measures accurately. A multi-variable measure combining SUMMARIZE, TOPN, CONCATENATEX, and HTML rendering was broken down step by step in plain language.
  • 🟢 Explaining Complex DAX Measures – DAX Query View: The Define and Evaluate workflow eliminates the manual paste step and produces a more structured, variable-by-variable explanation. The better entry point for thorough measure audits.
  • 🟡 Debugging DAX Logical Errors: Correctly diagnosed an ALL vs ALLSELECTED filter context issue and provided two well-reasoned solutions with clear guidance on when to use each. Yellow light only because the fix still requires the developer to understand which option fits the specific report design.
  • 🟢 Generating Measure Descriptions – Model View: Accurate, readable, and business-aware descriptions generated in seconds. Descriptions travel with the model metadata and surface as tooltips in the Fields pane.
  • 🟢 Adding Inline Comments – DAX Query View: Copilot added meaningful, explanatory comments to an existing measure, explaining the reasoning behind each step rather than just restating variable names. Combined with measure descriptions, this covers documentation for both developers and report authors.

Copilot earned five Green Lights and three Yellow Lights across eight tests. No Red Lights. The gaps that produced Yellow Lights were not about DAX correctness. They were about model architecture decisions and report design context that Copilot simply does not have access to. The developers who get the most value from Copilot are those who know enough DAX to validate the results.


Getting Started: How to Access Copilot in Power BI Desktop

Before jumping into the scenarios, it helps to know where Copilot lives in Power BI Desktop and how to reach it for each type of task. There are three entry points, and each one serves a different purpose.

Note: Copilot in Power BI Desktop requires access to a paid Fabric capacity (F2 or higher) or Power BI Premium capacity (P1 or higher). A Power BI Pro or Premium Per User (PPU) license alone is not sufficient. If the Copilot button in your ribbon appears greyed out or disabled, confirm with your administrator that Copilot is enabled at the tenant level. Review licensing and other requirements or limitations here: Copilot for Power BI overview.

The Copilot Pane (Report View)

This is the primary entry point for writing measures, debugging logic, and asking questions about the data model.

  1. Open the Copilot pane: Open the report in Power BI Desktop and select Copilot in the ribbon. The first time we use Copilot, we will be prompted to select a compatible workspace to associate with the report.
  2. Describe the measure: Once connected, the Copilot pane opens on the right side of the screen. Describe the measure in plain language. Reference the relevant table or column and be as specific as the business requirement calls for.
  3. Debug with Copilot: For debugging, paste the broken measure directly into the chat and describe what it should return versus what it is actually returning.

The more specific our prompt, the better the output. “Calculate total sales” will get us a basic SUM. “Calculate total sales for employees in the Retail department only” gives Copilot enough context to apply the correct filter across the right table relationship.

The DAX Query View (Inline Copilot)

The DAX query view has its own separate inline Copilot, distinct from the report view pane. It is particularly useful for writing DAX queries, understanding existing measures, and learning DAX concepts in context.

  1. Open the inline Copilot. Switch to DAX query view using the left navigation bar. It is the fourth icon from the top. Open the inline Copilot by clicking the Copilot button in the query editor or pressing CTRL + I.
  2. Use the inspire buttons. Three inspire buttons give us a quick starting point for the most common tasks:
    • Write DAX query: describe what you want in plain language, and Copilot generates the DAX, with automatic syntax validation and a retry if the first attempt contains errors.
    • Suggest measures: reviews your data and suggests new measures in a DAX query for further analysis.
    • Explain a DAX topic: ask about a specific function or concept and get a contextual explanation tied to your actual model.
  3. Work with an existing measure. To use Copilot with an existing measure, right-click it in the Data pane and select Define and Evaluate. This loads the full measure definition into the query editor, where we can then use the Explain or Write prompts to work with it.

The Model View (Measure Descriptions)

This entry point is specifically for generating measure descriptions and is easy to miss if you spend most of your time in Report view.

  1. Select the measure. Switch to Model view in Power BI Desktop. In the Data pane, select the measure we want to document.
  2. Generate the description. In the Properties pane on the right, locate the Description field. Click Create with Copilot to generate a description, then select Keep it to save it.

The description is stored directly in the .pbix file and surfaces as a tooltip when users hover over the measure in the Fields pane. It is a small detail that makes a meaningful difference in shared models and self-service environments.

Note: Copilot will only generate descriptions for measures in a valid state with no errors. Resolve any measure issues before attempting to generate a description.


Scenario 1: Establishing the Basics
Does Copilot Understand Your Data Model?

Before throwing complex logic at Copilot, the first step is confirming it understands the data model. If it cannot identify the correct columns for a simple aggregation, it will not stand a chance with more complex requests.

Test 1: Total Sales Amount

The Prompt: Provide a DAX measure that calculates the Total Sales Amount

Copilot’s Response:


🟢 Green Light
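For readers following along without the sample file, the response followed the pattern below (representative; because of Copilot's generative nature, your exact output may vary):

Total Sales Amount =
SUM(Sales[Amount])

Copilot picked the Amount column from the Sales fact table without being told which table to aggregate.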

Test 2: Total Retail Sales

The Prompt: Create a measure for Total Retail Sales that calculates the total sales amount for employees in the Retail department only

Copilot’s Response:

🟢 Green Light
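A representative response (exact output may vary):

Total Retail Sales =
CALCULATE(
[Total Sales Amount],
Employee[Department] = "Retail"
)

Note that the filter is applied on the Employee dimension rather than the Sales table, which confirms Copilot traversed the Sales-to-Employee relationship correctly.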


Scenario 2: Building Advanced DAX Logic
Where Filter Context Starts to Matter

With the basics confirmed, the next step is to see how Copilot handles more complex measures. This is where filter context starts to matter and where the quality of the data model structure begins to influence the output.

Test 1: Year-Over-Year Growth

Calculating Year-over-Year (YoY) growth is one of the more common requirements in sales reporting. It requires a solid understanding of date filters and is one of the first places where DAX relies on model structure rather than just formula syntax.

The Prompt: Create a measure that calculates year-over-year growth for total sales.

Copilot’s Response:

🟡 Yellow Light — Strong DAX, external dependency required
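The response followed the standard time intelligence pattern. A representative shape (this assumes a date table named DateTable; the name in your model may differ):

Total Sales YoY Growth % =
VAR CurrentSales = [Total Sales Amount]
VAR LastYearSales =
CALCULATE(
[Total Sales Amount],
SAMEPERIODLASTYEAR(DateTable[Date])
)
RETURN
DIVIDE(CurrentSales - LastYearSales, LastYearSales)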

Overlooked Detail: One thing that is easy to miss here is the Date Table assumption. Copilot explicitly noted this measure requires a “proper date table.”

In our sample dataset, we have a SalesDate column in the Sales table, but for time intelligence functions like SAMEPERIODLASTYEAR to work reliably, Power BI requires a continuous date column and a correctly configured, marked Date table. If the measure references SalesDate directly from the fact table, it may appear to work but can return incomplete or incorrect results under certain filter conditions. This dependency is not always obvious from the generated DAX alone.
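If the model does not have one yet, a minimal date table can be sketched as a calculated table (a sketch, not the only approach; the table and column names here are our choice):

DateTable =
CALENDAR(
DATE(YEAR(MIN(Sales[SalesDate])), 1, 1),
DATE(YEAR(MAX(Sales[SalesDate])), 12, 31)
)

After creating it, mark it via Table tools > Mark as date table and relate DateTable[Date] to Sales[SalesDate] so that time intelligence functions resolve against a continuous date column.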

Alternative Approach:

Copilot used SAMEPERIODLASTYEAR, which is a common and readable pattern. An alternative for more complex or flexible requirements is DATEADD:

Total Sales YoY Growth % =
VAR CurrentSales = [Total Sales Amount]
VAR LastYearSales =
CALCULATE(
[Total Sales Amount],
DATEADD(DateTable[Date], -1, YEAR)
)
RETURN
DIVIDE(CurrentSales - LastYearSales, LastYearSales)

Both approaches achieve similar results. SAMEPERIODLASTYEAR is more concise and easier to read. DATEADD provides more flexibility and uses the same structure whether shifting by days, months, quarters, or years. In practice, the choice often comes down to readability versus flexibility and how the Date table is structured.

Test 2: Percentage Contribution by Region

Let’s look at how Copilot handles a different kind of problem: calculating how much a specific segment contributes to the overall total. In this case, we want to understand how much each region contributes to total sales.

This requires calculating a value in the current filter context and comparing it against a total that ignores that filter.

The Prompt: Provide a measure that calculates the percentage contribution of each region to total sales.

Copilot’s Response:

🟡 Yellow Light — Correct answer, incomplete solution
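The response followed the classic percent-of-total pattern (representative; exact output may vary):

% of Total Sales by Region =
DIVIDE(
[Total Sales Amount],
CALCULATE(
[Total Sales Amount],
ALL(Region)
)
)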

Overlooked Detail: The difference between ALL and ALLSELECTED is easy to miss here, but it matters significantly in interactive reports. ALL(Region) removes every filter on the Region table, including filters applied by slicers, meaning the denominator is always the grand total regardless of what users have selected. ALLSELECTED would preserve slicer selections while still ignoring the visual’s own row context. Copilot applied a standard pattern correctly, but it did not account for the nuances of how the report will actually be used.
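For comparison, the slicer-aware variant changes only one function (a sketch; verify the behavior against your own report's slicer setup):

% of Total Sales by Region (Selected) =
DIVIDE(
[Total Sales Amount],
CALCULATE(
[Total Sales Amount],
ALLSELECTED(Region)
)
)

With a slicer limited to Europe and Asia, this version treats the two selected regions as 100%, while the ALL version still divides by the three-region grand total.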

Alternative Approach:

An alternative is to use REMOVEFILTERS, which makes the intent of the measure clearer to anyone reading it later:

% of Total Sales by Region =
DIVIDE(
[Total Sales Amount],
CALCULATE(
[Total Sales Amount],
REMOVEFILTERS(Region)
)
)

REMOVEFILTERS is generally preferred in team environments because its name explicitly describes what it does. ALL can be used both to clear filters and to return a table of values, which can lead to confusion when someone else reads the measure later.

DAX Query View Approach
Writing Measures with Inline Copilot

The Copilot pane is not the only way to generate DAX. For developers who prefer to validate output before committing it to the model, the DAX query view inline Copilot offers a useful alternative. Write the query, run it, review the results, and keep it if it looks right.

Here is a % of Total Sales by Product measure generated using DAX query view:

  1. Open a new query tab. Switch to DAX query view using the left navigation bar. Open a new query tab and press CTRL + I to open the inline Copilot.
  2. Enter the prompt and press Enter.
  3. Preview the results. Before keeping the query, select Run or press F5 to preview the results directly in the query editor.
  4. Add the measure to the model. Once satisfied with the output, select Keep query. Then use the Update model with changes button to add the measure to the data model.

The Prompt: Create a % of Total Sales by Product measure

Copilot’s Response:


🟢 Green Light
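The measure Copilot generated followed this shape (representative; the exact formatting may vary):

% of Total Sales by Product =
DIVIDE(
[Total Sales Amount],
CALCULATE(
[Total Sales Amount],
ALLSELECTED(Product)
)
)

Notably, the inline Copilot reached for ALLSELECTED here without being prompted, preserving slicer selections in the denominator.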

Bonus: Let Copilot Suggest What to Build Next

One of the more practical features in DAX query view is the Suggest measures inspire button. Rather than prompting Copilot for a specific measure, we can ask it to analyze the model and recommend what to build next.

The Prompt: Based on my data suggest new measures in a DAX query for further analysis and try them out with one or more suitable columns

Copilot’s Response:

🟢 Green Light

Overlooked Detail: Two of the six suggested measures — Sales Last 12 Months and Sales 3M Rolling Avg — reference 'DateTable'[Date], which assumes a properly configured Date table is present in the model.

As noted in the YoY Growth scenario, these measures will not work reliably without one. Copilot surfaced the right patterns, but the model still needs the right foundation beneath them.
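For reference, a last-12-months measure typically follows this shape (representative; it assumes 'DateTable' is present and marked as a date table):

Sales Last 12 Months =
CALCULATE(
[Total Sales Amount],
DATESINPERIOD('DateTable'[Date], MAX('DateTable'[Date]), -12, MONTH)
)

Without a continuous, marked date table behind 'DateTable'[Date], DATESINPERIOD can evaluate over an incomplete date range and return misleading results, which is exactly the failure mode flagged in the YoY scenario.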


Scenario 3: Explaining Complex DAX Measures
From Black Box to Plain Language

One of the most practical ways to use Copilot is not writing new DAX. It is making sense of DAX that already exists in the model. Whether we are inheriting a report from a colleague or revisiting logic written six months ago, Copilot can help translate complex measures into plain language.

This test uses the [Top Performers] measure — a multi-variable measure that summarizes sales by region and employee, identifies the top three performers, and returns a fully formatted HTML string for use in a custom visual. It is a realistic example of the kind of measure that is genuinely difficult to read cold.

Test 1: Asking Copilot to Explain Without the DAX

The Prompt: Explain the logic of the [Top Performers] measure. What is it calculating, and how is it displaying results?

Copilot’s Response:

The Follow-Up Prompt: Here is the DAX formula for [Top Performers] so you can explain it:

Top Performers =
VAR _groupedByRegionEmployee =
SUMMARIZE(
Sales,
Region[Region],
Employee[EmployeeID],
"EmployeeTotalSalesMetricCY", [Sales Metric (CY)]
)
VAR _top3Performers =
TOPN(
3,
_groupedByRegionEmployee,
[EmployeeTotalSalesMetricCY], DESC,
Employee[EmployeeID], ASC
)
VAR _belowTop5Count =
COUNTROWS(_groupedByRegionEmployee) - COUNTROWS(_top3Performers)
VAR _top5HTML =
CONCATENATEX(
TOPN(3, _top3Performers, [EmployeeTotalSalesMetricCY], DESC),
VAR _imageURL = LOOKUPVALUE(Employee[Image], Employee[EmployeeID], Employee[EmployeeID])
VAR _salesMetricCY = [EmployeeTotalSalesMetricCY]
VAR _rank = RANKX(_top3Performers, [EmployeeTotalSalesMetricCY],,DESC,Dense)
RETURN
"<div style='width:50px; height:50px; border-radius:50%; overflow:visible; border:2px solid #0b4d78;'>
<img src='" & _imageURL & "' style='width:100%; height:100%; object-fit:cover; border-radius:50%;' />
<div style='position:absolute; bottom:-8px; right:-8px; width:35px; height:35px; border-radius:50%; background-color:#0b4d78; color:white; font-size:9px;'>
" & FORMAT(_salesMetricCY, "$#,0,.0K") & "
</div>
</div>",
"",
[EmployeeTotalSalesMetricCY], DESC
)
VAR _additionalCircle =
IF(
_belowTop5Count > 0,
"<div style='width:30px; height:30px; border-radius:50%; background-color:#bfbfbf; color:white; font-size:16px; margin-left:5px;'>
+" & FORMAT(_belowTop5Count, "0") & "
</div>",
""
)
VAR _finalOutput =
"<div style='display:flex; align-items:center; justify-content:flex-end; overflow:hidden; max-width:400px; padding-right:5px; padding-bottom:8px'>
" & _top5HTML & _additionalCircle &"
</div>"
RETURN
_finalOutput

Copilot’s Response:

Certainly! Here’s a step-by-step explanation of what the [Top Performers] measure is doing and how it displays results:

What is it Calculating?

  1. Grouping Sales by Region and Employee:
    • The measure first creates a table (_groupedByRegionEmployee) that summarizes sales data by both region and employee. For each combination, it calculates the [Sales Metric (CY)]
  2. Selecting the Top 3 Performers:
    • It then identifies the top 3 employees (per region) based on their [Sales Metric (CY)] using the TOPN function. If there are ties it uses EmployeeID as a secondary sort.
  3. Counting Additional Employees:
    • It calculates how many employees are not in the top 3 by subtracting the count of top performers from the total count.
  4. Building the HTML for Display
    • For each of the top 3 employees, it constructs an HTML snippet:
      • Shows the employee’s profile image (from Employee[Image]).
      • Displays their sales metric as a badge
      • Uses CSS to style the image and badge for a visually appealing display
    • If there are more employees beyond the top 3, it adds a grey circle with a “+N” indicator, where N is the number of additional employees.
  5. Combining Everything for Output:
    • The final output is a single HTML string that visually displays the top 3 performers (with image and sales badges) and, if applicable, a circle indicating how many more employees there are.

How is it displaying the results?

Visual Output: The measure returns an HTML string designed for use in a Power BI visual that supports HTML rendering.

What You See:

  • Up to three employee profile images, each with a badge showing their sales metric
  • If there are more than three employees, a grey circle with “+N” to indicate additional performers not shown.
  • The layout is styled to be compact, visually appealing, and aligned to the right.

Summary

  • Purpose: Highlights the top 3 sales performers per region, visually, with profile images and sales badges
  • Display: Uses HTML and CSS for a rich, custom visual experience, including a “+N” indicator for additional employees beyond the top 3.

If you need a breakdown of any specific part of the DAX or want to know how to adapt this logic, let me know!

🟡 Yellow Light — Accurate explanation, but only after the full DAX was pasted in manually

Overlooked Detail: Copilot explained what the measure is doing accurately, but it did not surface deeper technical considerations unprompted, such as the performance implications of nesting TOPN inside CONCATENATEX, whether repeated TOPN calls could be consolidated, or the assumption that LOOKUPVALUE will always return a single match.

These are the kinds of questions that should always be asked when reviewing measure definitions.

If we need that level of analysis, we can follow up with a specific prompt such as “Are there any performance concerns with this measure?” Copilot responds well to targeted follow-up questions; it just does not volunteer that analysis on its own.
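Acting on that follow-up would likely surface the redundancy noted above: _top3Performers already holds at most three rows, so the inner TOPN inside CONCATENATEX adds work without changing the result. A sketch of one possible consolidation (our suggestion, not something Copilot volunteered):

-- Original: re-ranks a table that is already limited to three rows
CONCATENATEX(
TOPN(3, _top3Performers, [EmployeeTotalSalesMetricCY], DESC),
...

-- Consolidated: iterate the variable directly
CONCATENATEX(
_top3Performers,
...

The rest of the row expression and the sort arguments stay unchanged.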

Alternative Approach — Explaining Measures in DAX Query View

In the main test, explaining the [Top Performers] measure required pasting the full DAX formula directly into the Copilot pane. The DAX query view offers a cleaner path to the same result and goes one step further by letting us run the measure and explain it in the same workflow.

  1. Load the measure into DAX query view. Switch to DAX query view using the left navigation bar. In the Data pane, right-click the [Top Performers] measure and select Define and Evaluate. This automatically loads the full measure definition into the query editor as a DEFINE MEASURE and EVALUATE statement, with no manual copying required.
  2. Open the inline Copilot and explain the query. Select the DAX query in the editor and press CTRL + I to open the inline Copilot. Use the Explain this query inspire button or type the prompt directly.

The Prompt: Explain this DAX query

Copilot’s Response:

🟢 Green Light

Overlooked Detail: Compare the two explanation workflows side by side. The report view pane required the full DAX to be pasted manually, and Copilot’s response, while accurate, was structured around the user’s framing of the question.

The DAX query view response was driven by the query structure itself, which produced a more methodical, variable-by-variable breakdown.

Neither is strictly better. The pane is faster for a quick question, while the query view is better for a thorough audit. Knowing which to reach for depending on the situation is the more important skill.


Scenario 4: Debugging DAX Logical Errors
Finding What the Red Squiggle Misses

Identifying syntax errors is straightforward. Power BI flags them immediately with a red underline. The real challenge is logical debugging: when a measure runs without error but returns results that are incorrect for the business requirement. This test determines whether Copilot can identify why a measure is technically functional but practically broken.

The Setup

For this test, we want to create a dynamic % of Total Sales measure that displays correctly across multiple dimensions — Products, Regions, and Departments — while also respecting report-level filters like Year. Here is the measure as it currently exists in the model:

% of Total Sales =
VAR CurrentSales = SUM('Sales'[Amount])
VAR TotalSales = CALCULATE(SUM('Sales'[Amount]), ALL('Sales'))
RETURN
DIVIDE(CurrentSales, TotalSales)

The measure uses ALL(Sales), which removes every filter from the Sales table, including slicers and report-level filters.

For our requirements, this is a common mistake that effectively freezes the denominator to the grand total of the entire dataset, regardless of what filters are applied.

For example, when users apply a Year slicer to focus on the previous year, the numerator correctly reflects only the previous year’s sales, but the denominator still reflects all years. If the previous year accounts for $2M of a $10M all-time total, even a region responsible for every one of those sales would show 20% rather than 100%.

The result is percentages that appear unexpectedly small and never add up to 100% within the filtered context.

We see the percentages are smaller than expected and bring the problem to Copilot.

The Prompt:

I have a measure for % of Total Sales, but when I filter my report to the year 2025, the percentages are small when viewed on multiple visuals across various dimensions and don’t add up to 100%. How do I make this dynamic so I can display the percent of total for multiple dimensions?

% of Total Sales =
VAR CurrentSales = SUM('Sales'[Amount])
VAR TotalSales = CALCULATE(SUM('Sales'[Amount]), ALL('Sales'))
RETURN
DIVIDE(CurrentSales, TotalSales)

Copilot’s Response:

The issue you’re experiencing is due to the use of ALL('Sales') in your measure. This removes all filters from the Sales table—including filters on Year, Product, Department, etc.—so your denominator is always the grand total sales for all years and all data, not just for the filtered year (e.g., 2025). As a result, when you filter to 2025, the numerator is only 2025 sales, but the denominator is still the total for all years, making the percentages appear small and not sum to 100% within the filtered context.

To make the measure dynamic and context-aware (so that it always sums to 100% within the current filter, such as year, product, or department), you should use the REMOVEFILTERS or ALLSELECTED function, depending on your needs:

  • REMOVEFILTERS: Removes filters only from the specified columns or tables, but keeps filters from outside the specified scope (such as report/page/visual filters).
  • ALLSELECTED: Keeps filters applied by slicers and visuals, but ignores filters from the current visual’s breakdown.

Here’s how you can rewrite your measure for dynamic percent of total within the current filter context:

Option 1: Using REMOVEFILTERS (recommended for most scenarios)

% of Total Sales (Dynamic) =
VAR CurrentSales = SUM('Sales'[Amount])
VAR TotalSales = CALCULATE(
SUM('Sales'[Amount]),
REMOVEFILTERS('Sales')
)
RETURN
DIVIDE(CurrentSales, TotalSales)

But this still ignores filters on the Sales table. If you want the denominator to respect external filters (like Year = 2025), but ignore breakdowns within the visual (like Product or Department), use ALLSELECTED.

Option 2: Using ALLSELECTED (best for visuals with slicers/filters)

% of Total Sales (Dynamic) =
VAR CurrentSales = SUM('Sales'[Amount])
VAR TotalSales = CALCULATE(
SUM('Sales'[Amount]),
ALLSELECTED('Sales')
)
RETURN
DIVIDE(CurrentSales, TotalSales)

How to choose:

  • Use REMOVEFILTERS if you want the denominator to always be the total for the current report/page/visual filter context (e.g., Year = 2025).
  • Use ALLSELECTED if you want the denominator to respect slicers and filters, but ignore breakdowns within the visual.

Try replacing your measure with the ALLSELECTED version above. When you filter to 2025, the percentages across your breakdown (e.g., by Product, Department, etc.) will sum to 100% for that year.

🟡 Yellow Light

The design of those slicers matters just as much as the DAX behind them. In reports where slicers play a central role in user decision-making, giving users meaningful context before they make a selection can change how the report gets used. For a deeper look at turning static slicers into interactive decision aids, check out Design Meets Data: From Static Slicers to Interactive Decision Aids.

Overlooked Detail: The distinction between REMOVEFILTERS and ALLSELECTED is subtle but consequential.

If a filter is applied at the visual level, ALLSELECTED will calculate the percentage based on the filtered data, whereas REMOVEFILTERS may still calculate against the broader filter context.

In an interactive report with multiple slicers and cross-filtering visuals, these two measures can return noticeably different results from the same visual.


Scenario 5: Generating Measure Descriptions and Comments
Documenting the Semantic Model

One of the most tedious parts of semantic model development is documentation. Measures get built, reports get published, and descriptions rarely get written. This test determines how useful Copilot can be in handling that documentation work and whether the output is accurate and professional enough to actually use.

Test 1 — Generating Measure Descriptions in Model View

For this test, we navigate to Power BI Desktop’s Model view, select the Total Sales YoY Growth % measure, and use the Create with Copilot option in the Properties pane.

Copilot’s Response:

🟢 Green Light

Test 2: Adding Inline Comments in DAX Query View

Measure descriptions in the Properties pane are valuable for self-service users browsing the Fields pane, but they are not visible to the developer editing the measure.

Inline comments inside the DAX formula serve a different purpose. They document the logic for anyone who opens the measure editor, making complex calculations easier to read, audit, and maintain over time.

DAX query view Copilot can add those comments automatically.

  1. Load the measure. Switch to DAX query view using the left navigation bar. In the Data pane, right-click the Total Sales YoY Growth % measure and select Define and Evaluate. This loads the full measure definition into the query editor.
  2. Add comments with Copilot. Press CTRL + I to open the inline Copilot. Enter the prompt and press Enter.
  3. Commit the changes. Select Keep query and then Update model with changes to commit the commented measure definition back to the model.

The Prompt: Add comments to this MEASURE

Copilot’s Response:

🟢 Green Light
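The commented definition came back along these lines (representative; the underlying measure and Copilot's wording will differ in your model):

DEFINE
MEASURE Sales[Total Sales YoY Growth %] =
// Total sales in the current filter context
VAR CurrentSales = [Total Sales Amount]
// Same period shifted back one year via the marked date table
VAR LastYearSales =
CALCULATE(
[Total Sales Amount],
SAMEPERIODLASTYEAR(DateTable[Date])
)
RETURN
// DIVIDE guards against division by zero when last year has no sales
DIVIDE(CurrentSales - LastYearSales, LastYearSales)

The comments explain intent, not just syntax, which is what makes them worth committing back to the model.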

Overlooked Detail: Copilot scoped the comments specifically to the DEFINE MEASURE block. It did not attempt to comment the EVALUATE portion of the query loaded by Define and Evaluate.

This is the right behavior. The comments that travel with the measure when we select Update model with changes are the ones inside the DEFINE MEASURE block. Copilot correctly focused its documentation effort where it counts.


Practitioner’s Verdict:
How Far Can You Trust Copilot for DAX Development?

After running Copilot through five scenarios — writing measures, building advanced logic, explaining existing DAX, debugging logical errors, and documenting the semantic model — here is the honest assessment.

Copilot is a capable and practical assistant for semantic model development. Across the five scenarios, it earned five Green Lights and three Yellow Lights. No Red Lights, but the Yellow Lights carry real weight and are worth understanding before we start offloading DAX work to AI.

Where Copilot hits its stride:

The results were strongest where the task was well-defined and the model was clean. Basic and intermediate measures came back quickly and accurately. The debugging scenario performed well — correctly diagnosing an ALL vs ALLSELECTED issue, explaining the underlying cause clearly, and offering two well-reasoned solutions with guidance on when to use each. Measure descriptions were accurate, readable, and business-aware rather than just restating the formula name.

The explanation scenario highlighted something equally valuable. Feed Copilot a complex, multi-variable measure with HTML rendering logic and it will walk through it step by step in plain language. For onboarding new team members, auditing inherited models, or simply revisiting logic written months ago, that capability is genuinely useful.

The DAX query view inline Copilot adds further value. The Suggest measures feature returned six well-structured analytical measures in a single prompt, complete with inline comments and a live preview. The Add comments workflow produced useful documentation rather than just variable name restatements. These are the features that make the biggest difference in team environments where maintainability matters.

Where we still have to drive:

The three Yellow Lights pointed to the same underlying pattern: Copilot answers the question we ask, not necessarily the question we should have asked.

The YoY Growth measure was syntactically strong but assumed a properly configured Date table without helping us verify or build one.
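
For context, "properly configured" here means a dedicated date table that is related to Sales[SalesDate] and marked as a date table. A minimal sketch as a calculated table, with column names assumed to match this model:

DateTable =
-- One contiguous row per date covering the fact table's range;
-- mark this table as a date table on the [Date] column afterwards
ADDCOLUMNS (
    CALENDAR ( MIN ( Sales[SalesDate] ), MAX ( Sales[SalesDate] ) ),
    "Year", YEAR ( [Date] ),
    "Month", FORMAT ( [Date], "YYYY-MM" )
)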

The percentage contribution measure was correct for the prompt, but did not anticipate how ALL versus ALLSELECTED would behave in an interactive report with slicers.
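
The difference is easiest to see side by side. A sketch of the two denominator variants, assuming this model's Product table and [Total Sales] measure:

% of All Products =
-- Denominator ignores every filter on Product, including slicer selections
DIVIDE ( [Total Sales], CALCULATE ( [Total Sales], ALL ( 'Product' ) ) )

% of Selected Products =
-- Denominator respects slicer selections but ignores filters
-- coming from the visual itself
DIVIDE ( [Total Sales], CALCULATE ( [Total Sales], ALLSELECTED ( 'Product' ) ) )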

In both cases, Copilot did its job. The gap was in the prompt, the model structure, or the report design context that Copilot simply does not have access to.

This is the part that matters most: Copilot does not know our report. It does not know how users interact with slicers or what the business actually needs versus what was typed into the prompt. That context lives with the developer, not the AI.

The broader DAX community reinforces this point. The continued investment in deep technical content covering advanced concepts signals that DAX expertise is not becoming less relevant in an AI-assisted world. If anything, it is becoming more important. We need to know what correct DAX looks like to evaluate what Copilot hands us.

Copilot in Power BI Desktop is a practical productivity tool for Power BI semantic model development. It handles the repetitive, time-consuming parts of DAX work reliably enough to trust, with a close eye and a solid foundation in the fundamentals.

The tool earns its keep. Your expertise still earns its keep, too.

The sample report is available at the link below!

In an era where AI is moving from experimental to essential, this toolkit is designed to help Power BI developers explore AI workflows, learn Power BI AI tools, and get started on their journey to modernize the way they build!


What’s Next: A Look at What’s Coming

This post focused on one specific question: how useful is Copilot in Power BI Desktop for the day-to-day work of semantic model development?

Across five scenarios, the answer is encouraging. Copilot is a practical addition to a developer’s workflow, particularly for first drafts, documentation, and explaining DAX logic to team members.

But this post only covers one piece of what AI-assisted Power BI development looks like in 2026.

There is a meaningful difference between using AI as a conversational assistant inside a tool and configuring AI with the right context, tools, and environment to take real action on a model.

As that setup becomes more capable, the tasks we can delegate to AI grow significantly, from generating a single measure to planning and executing complex modeling workflows with minimal manual intervention.

That progression is what the rest of this series will explore:

  • Copilot in Power BI Desktop for Report Development: staying within the built-in experience but shifting focus from semantic model development to report design, visual creation, and data exploration
  • AI-Assisted Development: stepping outside Power BI Desktop to pair a more capable AI interface with direct, programmatic access to the semantic model. Two practical approaches worth exploring here: VS Code with GitHub Copilot and the Power BI Modeling MCP Server, and Claude Desktop with MCP Servers, both of which unlock capabilities the built-in Copilot pane cannot match.
  • Agentic Development: the most advanced tier, where community projects like GitHub – data-goblin/power-bi-agentic-development or articles such as Introducing AI and agentic development for Power BI explore what fully autonomous Power BI development workflows look like when agents plan and execute tasks end to end.

Each post in the series will follow the same format used here — real prompts, real outputs, and a clear-eyed view of where AI earns its keep and where the human still needs to drive.


Thank you for reading! Stay curious, and until next time, happy learning.

And, remember, as Albert Einstein once said, “Anyone who has never made a mistake has never tried anything new.” So, don’t be afraid of making mistakes; practice makes perfect. Continuously experiment, explore, and challenge yourself with real-world scenarios.

If this sparked your curiosity, keep that spark alive and check back frequently. Better yet, be sure not to miss a post by subscribing! With each new post comes an opportunity to learn something new.

Design Meets Data: From Static Slicers to Interactive Decision Aids


Slicers are some of the most commonly used elements in Power BI, yet they often receive less attention than they deserve. They are frequently added to a report, filled with data, and then left for users to navigate on their own.

Users understand their selections, but often lack additional context to inform their decisions. Choosing a category or product line is straightforward, yet the impact of that choice remains unclear until after the selection is made.

We can design our slicers to do more. By displaying preview counts, averages, or percentages, they give users a better understanding of scale and significance before making decisions. This results in smoother exploration and more confident choices, allowing slicers to actively guide analysis rather than merely filtering data.

A sample report is available at the end of this post. Visit the sample report below to explore the examples in action!

From Static Slicers to Interactive Decision Aids – Sample Power BI Report


Building the First Dynamic Slicer with Count of Sales

Let’s begin with a straightforward example of this method, where we display counts next to each slicer value. Instead of selecting an option and waiting for a visual update, or using limited canvas space to show a simple count, the user can see the amount of data associated with each choice directly within the slicer.

Thanks to Davide Bacci (GitHub – PBI-David/PBI-Core-Visuals-SVG-HTML) for the idea and the DAX measures that got us started.

We are developing a sales region slicer. Typically, a user would only see the list of regions. By adding a measure that calculates the number of related sales records for each option, we can transform that list into a preview of the selection’s impact.

The pattern uses the Power BI List slicer together with an SVG measure assigned to its image property for each button state (default, hover, pressed, selected).

Step #1 – Base Metric

The base metric is the measure displayed to the right of the sales region name. For our first example, this is a simple count of our sales records.

Sales Count = COUNTROWS(Sales)
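
Any measure can serve as the base metric here. For example, a distinct-customer count works with the exact same pattern; CustomerID comes from the Sales table in this model.

Customer Count =
-- Number of unique customers behind each slicer option
DISTINCTCOUNT ( Sales[CustomerID] )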

Step #2 – Centralized Styling

We style the SVG with CSS, allowing us to set colors, fonts, and other aesthetics for the different button states (default, hover, selected, etc.).

SVG Slicer Styling = "
<style>
.slicerTextDefault {
    font:9pt Segoe UI Regular;
    fill:#9FC2D9;
    text-anchor: end;
    alignment-baseline: middle
    }

.slicerTextHover {
    font:9pt Segoe UI Regular;
    fill:#FFFFFF;
    text-anchor: end;
    alignment-baseline: middle
    }

.slicerTextPressed {
    font:9pt Segoe UI Regular;
    fill:#FFFFFF;
    text-anchor: end;
    alignment-baseline: middle
    }

.slicerTextSelected {
    font:9pt Segoe UI Regular;
    fill:#073450;
    text-anchor: end;
    alignment-baseline: middle
    }
</style>
"
    

Step #3 – SVG Measure

Next, we create the SVG measure for each state we are targeting.

SVG Slicer Default =
"data:image/svg+xml;utf8,
<svg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 100% 100%'>" & [SVG Slicer Styling] & "
<rect x='0' y='0' width='100%' height='100%' fill='transparent'></rect>
<text x='100%' y='55%' class='slicerTextDefault'>(" & FORMAT([Sales Count], "#,#") & ")</text>
</svg>"

SVG Slicer Hover =
"data:image/svg+xml;utf8,
<svg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 100% 100%'>" & [SVG Slicer Styling] & "
<rect x='0' y='0' width='100%' height='100%' fill='transparent'></rect>
<text x='100%' y='55%' class='slicerTextHover'>(" & FORMAT([Sales Count], "#,#") & ")</text>
</svg>"

SVG Slicer Pressed =
"data:image/svg+xml;utf8,
<svg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 100% 100%'>" & [SVG Slicer Styling] & "
<rect x='0' y='0' width='100%' height='100%' fill='transparent'></rect>
<text x='100%' y='55%' class='slicerTextPressed'>(" & FORMAT([Sales Count], "#,#") & ")</text>
</svg>"

SVG Slicer Selected =
"data:image/svg+xml;utf8,
<svg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 100% 100%'>" & [SVG Slicer Styling] & "
<rect x='0' y='0' width='100%' height='100%' fill='transparent'></rect>
<text x='100%' y='55%' class='slicerTextSelected'>(" & FORMAT([Sales Count], "#,#") & ")</text>
</svg>"

Step #4 – Build the List Slicer

We add the List Slicer visual to our report and set the Slicer settings, Layout, Callout values, and Buttons properties to meet our reporting needs and styling requirements.

Then we move to the Images property to take our slicer from basic to a decision aid. Using the State dropdown menu, we set the following measures for the field property.

  • Default: SVG Slicer Default
  • Hover: SVG Slicer Hover
  • Pressed: SVG Slicer Pressed
  • Selected: SVG Slicer Selected

The result is a dynamic slicer that provides context before a choice is made. This enhances the data exploration experience, allowing for informed and confident selections.


From Totals to Insight: Rich Previews on Your Slicer

Displaying record counts is a good starting point, but we can expand on this idea. Instead of showing just totals, we can surface insights that make the slicer a guide rather than just a filter.

Make it Reusable & Maintainable

Before jumping into examples, let’s first make our SVG Slicer measures easier to reuse and update. We will replace the explicit measure used for display (e.g. [Sales Count]) with a single new measure, [SVG Slicer Display].

SVG Slicer Default =
"data:image/svg+xml;utf8,
<svg xmlns='http://www.w3.org/2000/svg'  viewBox='0 0 100% 100%'  >" & [SVG Slicer Styling] &"
<rect x='0' y='0' width='100%' height='100%' fill='transparent'> </rect>
<text x='100%' y='55%' class='slicerTextDefault'>" &  [SVG Slicer Display] & "</text>
</svg>"

// Do the same swap in Hover / Pressed / Selected

Referencing this new measure will allow us to set what we want to display in one location, and all of our SVG Slicer measures (Default, Hover, Pressed, Selected) will update accordingly.
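
For the count example above, the centralized display measure simply wraps the formatting that previously lived inside each SVG string. A sketch:

SVG Slicer Display =
-- Single place to control what the slicer shows;
-- all four state measures (Default, Hover, Pressed, Selected) reference this
"(" & FORMAT ( [Sales Count], "#,#" ) & ")"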

Example #1: % of Total (Within Selection vs Overall)

Our report presents the user with the total sales amounts for each region and product. But totals don’t always tell the story. Showing percentages helps answer two key questions quickly: How big is this option compared to the others? and How much does it contribute overall?

We can provide better insights at a glance by presenting a % of Total measure for each slicer option.

SVG Slicer % of Total = 
-- % of Total (Within Selection vs Overall)

-- Total sales for the current slicer item
VAR _contextTotal = [Sales Metric (CY)]
-- Total sales of the dimension within scope
VAR _selectedTotal = 
    SWITCH(
        TRUE(),
        ISINSCOPE(Regions[Region]), CALCULATE([Sales Metric (CY)], REMOVEFILTERS(Regions[Region])),
        ISINSCOPE(Products[Product]), CALCULATE([Sales Metric (CY)], REMOVEFILTERS(Products[Product]))
    )
-- Overall total sales across all regions and products
VAR _overallTotal = CALCULATE([Sales Metric (CY)], REMOVEFILTERS(Products[Product]), REMOVEFILTERS(Regions[Region]))
-- The two percentage calculations to show
VAR _percentageWithinSelection = DIVIDE(_contextTotal, _selectedTotal)
VAR _percentageOverall = DIVIDE(_contextTotal, _overallTotal)
-- Logic determining which calculation to show
---- Region slicer while Products are filtered -> show within selection
---- Product slicer while Regions are filtered -> show within selection
---- Otherwise -> show overall
VAR _percentageDisplay = 
    SWITCH(
        TRUE(), 
        // Regional slicer conditions
        ISINSCOPE(Regions[Region]) && ISFILTERED(Products), _percentageWithinSelection,
        // Products slicer conditions
        ISINSCOPE(Products[Product]) && ISFILTERED(Regions), _percentageWithinSelection,
        // Default
        _percentageOverall
    )
RETURN
    IF( 
        NOT ISBLANK(_percentageDisplay) && _percentageDisplay <> 0, 
        FORMAT(_percentageDisplay, "0.0%"), 
        "--"
    )

To avoid confusion and clarify the meaning of each percentage within its current context, we also add the following measures to the subtitle property of our slicers.

SVG Slicer Region SubTitle = IF(ISFILTERED(Products), "% w/in Product Selection", "% Overall")

SVG Slicer Product SubTitle = IF(ISFILTERED(Regions), "% w/in Regional Selection", "% Overall")

BONUS!: Curious about the Top Performers on the Regional data cards? Check out My Best Power BI KPI Card (So Far 😅) by Isabelle Bittar for the details!

Why show both “overall” and “within selection”?

  • Overall answers: “How important or significant is this choice globally?” Helpful in spotting major contributors or outliers without any filtering required.
  • Within selection answers: “Inside my current regional selection, how much does this item contribute or matter?” Perfect when users have already narrowed the context (e.g. filtered to a specific region and are now choosing products within it).

Example #2: Tracking Sales Trends

Totals and percentages show size and share, but they don’t show direction. By adding arrows and percent change to slicer items, users see momentum at a glance. What’s rising, what’s falling, and what’s holding steady.

Step #1: Calculating the 6-Month Trend

This measure compares sales from the past six months to those from the previous six months. The outcome is a percent change that serves as the basis for our trend indicator.

Sales Trend % (6M vs Prev 6M) = 
-- Trend preview (6M vs prior 6M)

-- Anchor date for the window (the latest sales date in the current filter context)
VAR _anchor = MAX(Sales[SalesDate])
-- Total sales for the last 6 months ending at the anchor date
VAR _last6m =
    CALCULATE ([Total Sales], DATESINPERIOD (DateTable[Date], _anchor, -6, MONTH))
-- Total sales for the prior 6 months (months -12 to -6 from the anchor);
-- the start is shifted by one day so both windows span exactly six months
VAR _prev6m =
    CALCULATE (
        [Total Sales],
        DATESBETWEEN (DateTable[Date], EDATE (_anchor, -12) + 1, EDATE (_anchor, -6))
    ) 

RETURN 
DIVIDE ( _last6m - _prev6m, _prev6m )

Step #2: Enhancing Percent Change With A Visual Indicator

Once we have identified the trend, the next step is to add a visual indicator to quickly determine whether the trend is moving up or down. A positive change is represented by an upward arrow, a negative change by a downward arrow, and any changes that fall within our defined noise band are indicated with a flat arrow.

SVG Slicer Trend Indicator = 
-- Render trend as arrow + % change label

-- Sales trend % from the prior measure
VAR _pct   = [Sales Trend % (6M vs Prev 6M)]
-- Noise band threshold (avoid flipping arrows on tiny changes)
VAR _band  = 0.02
-- Select arrow symbol based on trend direction
VAR _arrow =
    SWITCH ( TRUE(),
        ISBLANK ( _pct ),  "",
        _pct >  _band,     UNICHAR(11165),
        _pct < -_band,     UNICHAR(11167),
                           UNICHAR(11166)
    )
-- Format percentage text, fallback to "--" if blank
VAR _pctTxt =
    IF (ISBLANK(_pct), "--", FORMAT(_pct, "0.0%"))
RETURN 
_arrow & " " & _pctTxt

Step #3: Assigning Classes for Styling

To ensure our indicator remains readable across various slicer states (e.g. default, hover, pressed, selected), each state is assigned a simple class key: up, down, or flat. These keys correspond to different color rules in our SVG Slicer Styling measure.

SVG Trend Class Key = 
-- Assign trend class (up, down, flat) for slicer styling

-- Sales trend %
VAR pct  = [Sales Trend % (6M vs Prev 6M)]
-- Noise band threshold (avoid class flips on tiny changes)
VAR band = 0.02
-- Return class string for styling: up / down / flat
RETURN
    SWITCH ( TRUE(),
        ISBLANK ( pct ), "flat",
        pct >  band,     "up",
        pct < -band,     "down",
                         "flat"
    )

Step #4: Add Styling for State and Trend

We enhance the styling of our slicers by adding classes for the trend and selection states. Defining these colors ensures readability on both dark and light slicer backgrounds.

SVG Slicer Styling = 
"
<style>
.slicerTextDefault {
    font:9pt Segoe UI Regular; 
    fill:#9FC2D9;
    text-anchor:end; 
    alignment-baseline:middle
    }
.slicerTextHover {
    font:9pt Segoe UI Regular; 
    fill:#FFFFFF;
    text-anchor:end; 
    alignment-baseline:middle
    }
.slicerTextPressed {
    font:9pt Segoe UI Regular; 
    fill:#FFFFFF;
    text-anchor:end; 
    alignment-baseline:middle
    }
.slicerTextSelected {
    font:9pt Segoe UI Regular; 
    fill:#073450;
    text-anchor:end; 
    alignment-baseline:middle
    }

.up-default   { fill:#79D3B5; }  
.down-default { fill:#FFA19A; }  
.flat-default { fill:#BFD4E2; }

.up-hover     { fill:#79D3B5; }  
.down-hover   { fill:#FFA19A; }  
.flat-hover   { fill:#BFD4E2; }

.up-pressed   { fill:#79D3B5; }  
.down-pressed { fill:#FFA19A; }  
.flat-pressed { fill:#BFD4E2; }

.up-selected   { fill:#0F7B5F; }  
.down-selected { fill:#B23D33; } 
.flat-selected { fill:#4A6373; }
</style>
"

Step #5: Wrap Our Trend Indicator in Our SVG Slicer Measures

Finally, we update our display measure, which determines what appears in our slicer. This wrapper combines the class key with the slicer’s state.

SVG Slicer Display = 
-- Display sales trend comparing the last 6M to the 6M prior 
    [SVG Slicer Trend Indicator] 

SVG Slicer Default = 
VAR _state = "default"
VAR _cls = [SVG Trend Class Key] & "-" & _state        
RETURN
"data:image/svg+xml;utf8,
<svg xmlns='http://www.w3.org/2000/svg'  viewBox='0 0 100% 100%'  >" & [SVG Slicer Styling] &"
<rect x='0' y='0' width='100%' height='100%' fill='transparent'> </rect>
<text x='100%' y='55%' class='slicerTextDefault " & _cls & "'>" &  [SVG Slicer Display] & "</text>
</svg>"

SVG Slicer Hover = 
VAR _state = "hover"
VAR _cls = [SVG Trend Class Key] & "-" & _state   
RETURN
"data:image/svg+xml;utf8,
<svg xmlns='http://www.w3.org/2000/svg'  viewBox='0 0 100% 100%'  >" & [SVG Slicer Styling] &"
<rect x='0' y='0' width='100%' height='100%' fill='transparent'> </rect>
<text x='100%' y='55%' class='slicerTextHover " & _cls & "'>" & [SVG Slicer Display] & "</text>
</svg>"

SVG Slicer Pressed = 
VAR _state = "pressed"
VAR _cls = [SVG Trend Class Key] & "-" & _state   
RETURN
"data:image/svg+xml;utf8,
<svg xmlns='http://www.w3.org/2000/svg'  viewBox='0 0 100% 100%'  >" & [SVG Slicer Styling] &"
<rect x='0' y='0' width='100%' height='100%' fill='transparent'> </rect>
<text x='100%' y='55%' class='slicerTextPressed " & _cls & "'>" &  [SVG Slicer Display] & "</text>
</svg>"

SVG Slicer Selected = 
VAR _state = "selected"
VAR _cls = [SVG Trend Class Key] & "-" & _state   
RETURN
"data:image/svg+xml;utf8,
<svg xmlns='http://www.w3.org/2000/svg'  viewBox='0 0 100% 100%'  >" & [SVG Slicer Styling] &"
<rect x='0' y='0' width='100%' height='100%' fill='transparent'> </rect>
<text x='100%' y='55%' class='slicerTextSelected " & _cls & "'>" &  [SVG Slicer Display] & "</text>
</svg>"

Why This Matters

By incorporating trend indicators directly into the slicer, users can see not only the magnitude of sales but also the momentum behind them. A mid-level product that is experiencing rising sales can stand out just as much as a top seller in decline. This additional context helps focus attention on what is changing, rather than just what is popular.


Slicers can be more than just passive filters in our reports. By incorporating a few DAX measures and some simple SVG styling, we can transform them into interactive visuals that provide insights and help users better understand their selections. Since our reports have limited canvas space, enhancing our slicers with these features enables us to utilize the entire canvas area effectively.

The next time you add a slicer, consider going beyond a simple list. Consider how the slicer can facilitate a dynamic interaction with the data, helping to inform decisions and answer key questions before users take action. This subtle design shift can significantly enhance users’ ability to explore our reports.

The example report can be found at the link below!

A practical series on blending UX and analytics in Power BI. Each entry includes a working PBIX and sample data so you can explore design patterns directly in your own environment.



Power BI Object-Level Security Explained: Control Table and Column Access by Role


This is the third part of a series on security and design approaches in Power BI.

In the article Power BI Row-Level Security Explained: Protect Data by User Role, we examined Row-Level Security (RLS) and how to restrict access to specific rows of data based on the user’s identity.

Discover how to personalize your reports and show each user only the data they require in just a few clicks.

In the article Partial RLS Explained: Let Users See the Bigger Picture, we explored Partial RLS, a design pattern that allows users to view high-level context, such as company-wide totals, while still enforcing Row-Level Security (RLS) on detailed sales data.

Explore how to deliver personalized insights without losing the bigger picture.

We will now focus on another important aspect of data model security: Object-Level Security (OLS).

While RLS controls which rows within a table a user can access, OLS restricts visibility and interaction with specific tables and columns within the data model.

In this post, we will cover the following topics:

  • An overview of Object-Level Security (OLS)
  • A use case demonstrating how different user roles can view different data model objects
  • A step-by-step guide to implementing OLS
  • Key considerations and limitations

What is Object-Level Security

In Power BI, Object-Level Security (OLS) enables data modelers to restrict access to specific tables or columns based on the roles assigned to report viewers.

The key difference between RLS and OLS lies in what they restrict:

  • RLS controls which rows of data a user can access within a table.
  • OLS determines whether a user can see the table or specific columns.

OLS cannot be used to secure or hide measures directly. However, measures are impacted by OLS. If a measure references a column or table that is hidden for a specific role, the measure will also be automatically hidden for that role. It is important to consider this when designing experiences tailored to specific roles.
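
For example, any measure that touches a secured column is implicitly secured along with it. A hypothetical sketch using this post's model (the measure itself is illustrative, not part of the sample report):

Contactable Customers =
-- References the secured ContactInformation column, so any role with
-- ContactInformation set to None also loses access to this measure
COUNTROWS (
    FILTER ( Customers, NOT ISBLANK ( Customers[ContactInformation] ) )
)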

Power BI manages these data dependencies for us, ensuring that calculations based on secured data remain safe from exposure. However, there is a potential risk that some visuals in our report may not display correctly for viewers who do not have access to specific measures.


Use Case: Hide Reviews Tables and Sensitive Customer Columns

To examine and understand the value of OLS, let’s go through a scenario using a sample report.

Interested in following along? The Power BI sample report is available here: EMGuyant GitHub – Power BI Security Patterns.

We are developing a Power BI report for a sales organization. The data model includes two restricted areas:

  1. The Reviews table contains product reviews from customers and their demographic information.
  2. The Customers table includes several columns with customer details that should only be accessible to specific roles.

Access Requirements
Access to the report is structured around four user roles.

The Regional Sales Basic role serves as the foundational level, providing minimal access. Users assigned this role can view sales data related to their sales region and basic customer information. They are restricted from viewing the Reviews table and the detailed customer information columns.

Next is the Regional Sales Advanced role. Users in this role have the same access as Regional Sales Basic users, plus the ability to view the detailed customer information columns.

The Product Analyst role has access to the Reviews table but cannot view the detailed customer information columns. They can also view the sales and review data for any region they are assigned to.

Finally, there is the Leadership role. These users can see all the data for any region they are assigned to.


Step-by-Step: Configure OLS in Power BI

After creating our data model and defining the tables and columns to which we plan to restrict access, we can begin configuring OLS.

To configure OLS, we will use the Tabular Editor Power BI external tool. There are many external tools for Power BI Desktop; visit Featured open-source tools to view a list of common and popular external tools.

Tabular Editor is a lightweight tool that allows us to build, maintain, and manage tabular models efficiently.

1) Create Roles in Power BI Desktop
In Power BI Desktop, we navigate to the Modeling tab and select “Manage Roles.” We then create the four roles using the following DAX expression for RLS filtering on the User Access table. This table contains the user’s User Principal Name (UPN), region ID, and role for that region.

'User Access'[UPN] = USERPRINCIPALNAME()

2) Open Tabular Editor and Configure OLS
We navigate to External tools in Power BI Desktop and then open Tabular Editor. Under Model, select Roles. The roles we created in Step 1 will appear.

We expand the Table Permissions to set the permissions for each role we want to configure OLS for.

  • None: OLS is enforced, and the table or column is hidden from that role.
  • Read: The table or column is visible to the role.

3) Secure Specific Tables
To configure OLS for the Reviews table, we need to ensure that only users with the Product Analyst or Leadership roles have access to this table.

First, select the Reviews table and navigate to Object Level Security options under Translations, Perspectives, and Security. Set the permissions to “None” for the Regional Sales Basic and Regional Sales Advanced roles.

4) Secure Specific Columns
Next, we secure the Address, PreferredContactMethod, and ContactInformation columns within the Customers table. To do this, we locate the Customers table and expand it to view its columns.

Then, we select each column we want to secure and set each role’s permissions under Object Level Security. For each column above, we set the permissions for the Regional Sales Basic and Product Analyst roles to None.

Once we finish configuring our OLS rules, we save the changes in Tabular Editor and then publish the semantic model to the Power BI service. Depending on our combination of RLS and OLS, testing within Power BI Desktop using View as > Other user may not function as expected, so we will test and validate our OLS rules in the Power BI service.

Note: If using the sample report, the UPN column within the User Access table must contain valid user UPNs before testing in the Power BI service.

5) Assign Users to Roles in the Power BI Service
To add users to a role in the Power BI Service, we need to navigate to the workspace where the semantic model has been published. First, locate the semantic model, click on the “More options” ellipsis (…), and then select “Security.”

In the Row-Level Security screen, we can add users or security groups to each role we’ve created.

We have four users to test the OLS (with RLS) implementation:

  1. Talia Norridge: Leadership role for all regions
  2. Lena Marwood: Product Analyst for Europe and Asia regions
  3. Jasper Kellin: Regional Sales Advanced for North America
  4. Elara Voss: Regional Sales Basic for Asia

6) Test OLS Implementation
On the Security screen, we select the More options ellipsis (…) next to a role and then Test as role.

Then, at the top, we select Now viewing as and then Select a person to validate that the OLS rules function as expected.

Leadership Role
When we view the report as Talia Norridge in the Leadership role, we can see that all the regional sales information is displayed in the data cards at the bottom.

We confirm that Talia also has access to the Reviews table by hovering over the sales by product bar chart. The tooltip for this visual contains measures based on the product review data (e.g. average review rating).

Next, we verify that Talia has access to detailed customer information by hovering over a customer in the sales by customer bar chart. The tooltip for this visual shows the customer’s name and contact information (ContactInformation is a secured column).

Product Analyst Role
Reviewing the report as Lena Marwood in the Product Analyst role, we see that her assignment is limited to the Asia and Europe sales regions. As a result, the total sales value reflects only these regions, and the top performers on the North America data card are hidden.

We confirm that Lena can access the Reviews table by checking the sales by product tooltip, and we see that, like the Leadership role, the data appears as expected.

We then confirm that Lena does not have access to detailed customer information. When we hover over the sales by customer visual, the tooltip shows an error when displaying the customer’s contact information.

The customer’s name is displayed without issue because this is not a secured column. However, Lena’s role does not have permission to access the ContactInformation column, which prevents the report from retrieving this data.

Regional Sales Advanced
When we view the report as Jasper Kellin, who holds the Regional Sales Advanced role, we confirm that the sales data only reflects his assigned region.

Next, we check the tooltips that display the review data and detailed customer information.

We verify that the review data produces the expected error: Jasper cannot access the Reviews table, so he also loses access to any measure that depends on it, such as the Average Score.

The Average Score measure is defined using the following DAX expression.

Average Score = AVERAGE(Reviews[SatisfactionScore])

Reviewing the customer sales data, we confirm that the contact information is presented to Jasper in the tooltip without any errors.

Regional Sales Basic
When we view the report as Elara Voss, who holds the Regional Sales Basic role, we confirm that the sales data only reflects her assigned region.

Next, we check the tooltips that display the review data and detailed customer information.

Both tooltips display the expected error since Elara does not have permissions to the Reviews table or the detailed customer information columns.


Considerations and Limitations

OLS in Power BI offers a robust layer of protection, but there are important limitations to consider before deploying it.

1) OLS applies only to users with the Viewer workspace role. Workspace members with Admin, Member, or Contributor roles have edit permissions on the semantic model, and OLS does not apply to them.

2) Combining OLS and RLS from different roles is not allowed; doing so may cause unintended access and generate an error.

3) Power BI automatically hides measures referencing a column or table restricted by OLS. Although Power BI does not offer a direct way to secure a measure, measures can be implicitly secured if they reference a secure table or column.

4) When users attempt to view visuals that depend on a secured object with OLS configured, they encounter an error message, so the report can appear broken to these users. For certain roles, however, this is expected behavior. For example, the Regional Sales Basic role does not have permission to the Reviews table, so that table should not be available to these viewers.


BONUS: Mask Visual Errors and Control Page Navigation

When OLS hides a table or column, any visual that relies on that data will become unusable for users without access (see limitation 4 above). While this error is anticipated, it may confuse users who might think the report is broken.

One possible workaround is to use a DAX measure and conditionally formatted shapes to cover the visual for users who cannot access the data.

In our sample report, we can create the following DAX measures to manage the visibility of the data on our tooltips.

Customer Detail Visible =
IF([UserRole] <> "Regional Sales Basic", "#FFFFFF00", "#f1f9ff")

Rating Detail Visible =
IF([UserRole] = "Product Analyst" || [UserRole] = "Leadership", "#FFFFFF00", "#f1f9ff")
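These measures reference a [UserRole] helper measure that is not defined above. One possible definition, assuming an Employee table with UPN and Role columns as used in the earlier RLS posts, is:

-- Illustrative helper: resolves the current viewer's role from the
-- Employee table (assumes UPN and Role columns exist in the model).
UserRole =
LOOKUPVALUE(
    Employee[Role],
    Employee[UPN],
    USERPRINCIPALNAME()
)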

We place a rectangle shape over the visuals that certain users cannot access, and then we conditionally format the fill color based on the measures.

It’s important to note that this is not an additional security layer or a replacement for OLS. This method only hides the error message to create a cleaner user experience.

However, this approach has a significant limitation. Our example works because the visuals underneath the shapes are not intended for user interaction. If the visuals are interactive for users with access to the data, the transparent shape overlay will prevent them from selecting or interacting with the visual. This means this workaround has a limited use case.

Certain design approaches can help manage which pages users can navigate to within a report. DAX-driven navigation buttons can create a user-friendly navigation experience, allowing users to navigate to the pages with data they have permission to view.

It’s important to note again that this approach does not provide security. However, it can help reduce the chances of users encountering error messages related to their access level based on typical report usage. Here is a brief walkthrough on this approach: RLS and OLS—Page Navigation.

While various design methods can enhance the user experience, OLS and RLS remain the only secure methods for controlling data access.


Wrapping Up

OLS in Power BI gives us a model-driven way to control access to specific tables and columns. Unlike Row-Level Security (RLS), which filters rows for authorized users, OLS prevents users from seeing certain objects of the model, removing entire tables and columns from the data experience.

When creating reports for broad audiences with different access needs, OLS can become essential to meet the requirements.


Thank you for reading! Stay curious, and until next time, happy learning.

And, remember, as Albert Einstein once said, “Anyone who has never made a mistake has never tried anything new.” So, don’t be afraid of making mistakes; practice makes perfect. Continuously experiment, explore, and challenge yourself with real-world scenarios.

If this sparked your curiosity, keep that spark alive and check back frequently. Better yet, be sure not to miss a post by subscribing! With each new post comes an opportunity to learn something new.

Power BI Partial RLS Explained: Let Users See the Bigger Picture


This post is part two of a series on Power BI security patterns. Check out part one for a beginner-friendly overview of row-level security in Power BI.

Discover how to personalize your reports and show each user only the data they require in just a few clicks.

This post will take our security design pattern one step further.

While row-level security (RLS) allows us to restrict data based on user roles and identities, there are times when we don’t want everything locked down. Occasionally, users need to see their data in relation to broader metrics, such as company-wide totals or averages.

In the sample report, regional sales teams should see detailed information only about their assigned region and the total sales for other regions.

In the previous post, we noted that standard RLS filters out other regional sales, preventing the totals from other regions from being displayed. This is where partial RLS becomes useful and can fulfill this requirement.

In this post, we will walk through:

  • What Partial RLS is and when to use it
  • A real-world scenario that calls for it
  • Key limitations and design tips

What is Partial Row-Level Security

Row-Level Security (RLS) applies filters at the dataset level, throughout the entire data model, for a given user. This means that if RLS restricts a user to view only sales data for the North America region, every visual and measure in the report will be automatically limited to data associated with the North America region.

Using RLS is effective for data protection, but it can be a limitation when a report needs to provide a broader context to the user.

DAX provides functions like ALL() and REMOVEFILTERS() that can remove report-level filters and slicers, but no DAX expression can bypass RLS.

Partial RLS is a design approach in Power BI that helps mitigate RLS’s limitations when necessary. The concept involves separating secured data from summary data, allowing users to view filtered and unfiltered insights side by side.

To achieve this, we will create a summary table (Annual Sales Summary (Regional)) within our data model. This summary table will aggregate the total sales across all regions and will not be affected by RLS filters. It provides overall totals that offer essential context when needed, while the RLS still restricts access to detailed sales information within our sales table.

In our sample report, RLS is applied to the Region table, and the RLS filter propagates to the Sales table.

In the data model, filters applied to the Region table do not directly impact the rows in our summary table. This means that users can see the total aggregated sales for all regions while still having RLS filters applied to the detailed sales data.


Use Case: Display Regional Total Sales and Percentage of Company-Wide Sales

We are designing a Power BI report for a global sales team. Each regional sales team member should only be able to:

  • View detailed transaction-level data for their assigned region
  • See key metrics, like total sales for all regions, to provide a broader context

Step 1: Apply RLS rules to the Region table

First, we define our RLS rules in Power BI Desktop (see Power BI Row-Level Security Explained: Protect Data by User Role).

The implemented RLS rules filter the Sales table and limit access to the appropriate region per user.

A challenge arises when we implement RLS while also trying to meet the second requirement. As shown in the image above, the total sales figures for Asia and North America appear blank when viewing the report with the Europe role.

This is because the measure used to calculate the totals uses the following expression.

Total Sales = SUM(Sales[Amount])

When RLS is implemented, users only have access to data specific to their region. For instance, when the Total Sales measure is evaluated, the Sales table is filtered to include only the sales data associated with the Europe sales region. As a result, the Total Sales measure reflects the total sales relevant to the user’s region rather than the entire dataset.

Step 2: Create a summary table

To address this issue and meet our requirements, we will create a calculated summary table in our data model. This table will store pre-aggregated total sales and total transactions by year and region.

Annual Sales Summary (Regional) =
SUMMARIZECOLUMNS(
    Sales[SalesDate].[Year],
    Regions[Region],
    "TotalRegionalSales", SUM(Sales[Amount]),
    "TotalRegionalTransactions", COUNTROWS(Sales),
    "DateKey", FORMAT(DATE(MAX(Sales[SalesDate].[Year]), 12, 31), "YYYYMMDD")
)

This table has no direct relationship with the RLS-controlled Region table, so our RLS roles will not filter it.

Step 3: Build dynamic DAX measures

We can now utilize this table in our measures to establish company-wide or cross-region metrics while ensuring the security of the underlying transactional data.

We first create two new measures within our data model to calculate the entire company’s total sales and transaction counts.

Total Sales (nonRLS) =
SUM('Annual Sales Summary (Regional)'[TotalRegionalSales])

Transaction Count (nonRLS) =
SUM('Annual Sales Summary (Regional)'[TotalRegionalTransactions])

We can use these base measures to dynamically display total sales or total transactions for all regions in the data cards at the bottom of the report, utilizing the following expression and visual filters.

Regional Total Sales =
VAR _currentYear = YEAR(MAX(Sales[SalesDate]))
VAR _selectedMetric = SELECTEDVALUE('Sales Metric'[Sales Metric Fields])

RETURN
IF(
    _selectedMetric = "'_Measures'[Total Sales]",
    CALCULATE(
        [Total Sales (nonRLS)],
        'Annual Sales Summary (Regional)'[Year] = _currentYear),
    CALCULATE(
        [Transaction Count (nonRLS)],
        'Annual Sales Summary (Regional)'[Year] = _currentYear)
)

Note: because RLS also filters the Region table, we can use expressions such as COUNTROWS() or SELECTEDVALUE() to hide or show the top performers data card.
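As a sketch of that idea (the measure name and the exact condition are illustrative, not taken from the sample file):

-- RLS already restricts the Regions table, so VALUES() returns only
-- the regions the current viewer can see.
Top Performers Visible =
IF(
    "North America" IN VALUES(Regions[Region]),
    1,
    0
)

A measure like this could then drive a visual-level filter or conditional formatting on the data card.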

RLS still applies to the top-row visuals and bar charts, which provide detailed breakdowns of regional sales. However, the summary table enables us to present the total sales for all regions within the data card along the bottom of our report.

Step 4: Combine RLS-filtered and unfiltered measures

The non-RLS base measures can also compare regional total sales or transactions based on the current user (RLS-filtered) as a percentage of the company-wide measure (unfiltered).

% of CompanyWide Sales =
DIVIDE([Total Sales], [Total Sales (nonRLS)])

Considerations and Limitations

While the partial RLS pattern can enhance the usability and insightfulness of our Power BI report, we must consider its capabilities and limitations, as well as the associated technical and design trade-offs.

Partial RLS does not override existing RLS filters; instead, it isolates high-level summary data in a separate table unaffected by these filters. This allows partial RLS to be used for comparisons or to add additional global context without exposing detailed row-level information.

1) When implementing partial RLS, it’s important to remember that data model relationships matter. The summary table must be added to the data model in a way that ensures cross-filtering from tables affected by RLS does not reach it. If the summary table is related to a table with RLS filters applied, it will also be subject to those filters.

2) When combining measures filtered by RLS with unfiltered measures, users may need assistance interpreting the visuals associated with these measures. Visual cues or proper labeling, such as Total Sales Across All Regions versus Your Regional Sales, may be necessary to help clarify what users are seeing.

3) Partial RLS can be implemented efficiently when the summary table is small and pre-aggregated without complex DAX filtering. However, keep in mind that if the summary table grows too large or includes too much granularity, it may negatively impact the performance of our reports.

4) Implementing partial RLS can add complexity when creating DAX measures. Since RLS is enforced on our Sales table, any attempts to calculate totals, even when using functions like ALL() or REMOVEFILTERS(), will still be subject to our RLS rules. While partial RLS offers additional insights into our data, it does not grant any additional access.

5) We must assess edge cases, such as data gaps or undefined user roles. If a user is assigned a role that is not properly mapped in our data model, they may see an empty report or, worse, gain unintended access to detailed data. We should always validate our RLS roles to ensure they function as expected.
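To illustrate limitation 4, the following measure (the name is illustrative) looks like it should return company-wide totals, but it does not: REMOVEFILTERS() clears report and visual filters, while the role’s security filter on the Region table remains in effect.

-- Still limited by RLS: REMOVEFILTERS clears report/visual filters
-- only, not the security filter applied by the user's role.
Total Sales (Attempted All Regions) =
CALCULATE([Total Sales], REMOVEFILTERS(Regions))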


Wrapping Up

Partial RLS is a design approach used in cases where RLS filtering restricts our ability to give users a broader context for their data. This approach allows us to ensure secure access to detailed, role-specific data while providing users insight into the overall picture and how their data fits into a larger context.

We can provide contextual insights without revealing specific row details by utilizing partial row-level security, enabling us to create more comprehensive and insightful reports.

Row-Level Security (RLS) enables us to filter data at the row level, but it does not allow us to secure entire tables or columns within our data model. Make sure to check back for the next post, or better yet, subscribe so you don’t miss it!

In the next post, we will explore Object-Level Security (OLS) in Power BI. OLS is essential because it allows us to secure specific tables and columns from report viewers.

If you’d like to follow along and practice these techniques, Power BI sample reports are available here: EMGuyant GitHub – Power BI Security Patterns.



Power BI Row-Level Security Explained: Protect Data by User Role


When creating reports for teams across various departments or regions, it’s important to recognize that not everyone needs or wants to see the same data. This is where Row-Level Security (RLS) becomes essential.

We can use RLS to restrict data access for report viewers based on their identity. This eliminates the need to create separate reports for different departments or regions. With RLS, we can use a single report and dataset to generate personalized views for each individual or role.

In this guide, we will walk through:

  • What RLS is and why we should use it
  • Static vs Dynamic RLS
  • Step-by-step examples for each
  • Limitations and Considerations when implementing RLS

By the end of this guide, you will better understand RLS, its applications, and its limitations. Additionally, a PBIX example file will be provided for in-depth, hands-on exploration of this topic.


What is Row-Level Security

Row-Level Security (RLS) in Power BI restricts data access for report viewers based on their identity by letting us define rules within our data model that filter data rows.

Think of it as an extra filter that is always applied for the appropriate users. These filters operate at the dataset level, ensuring that users see only the information they are permitted to access, regardless of how they interact with the report.

It is important to note that users who have more than viewer permissions to the Power BI workspace have access to the entire semantic model. RLS only restricts data access for users with Viewer permissions.


Why use Row-Level Security

There are several key advantages to using Row-Level Security (RLS) when creating our reports.

Security and Privacy

RLS helps prevent users from seeing data they shouldn’t have access to. This can become especially important when the dataset includes sensitive data.

Efficiency and Scalability

Managing separate reports for different departments or roles can be cumbersome. With RLS, we can maintain a single report and dataset while customizing what each user role sees. This approach is easier to maintain, scales better, and reduces the likelihood of inconsistencies and errors.

Improved User Experience

Users no longer need to apply slicers or filters to view only their data of interest. RLS automatically handles this when the report is rendered, resulting in a cleaner and more user-friendly report.


Static Row-Level Security

Static Row-Level Security (RLS) is the most straightforward form of RLS. In this approach, we manually define which users can access specific rows of data, usually by applying filters based on criteria such as Region or Department. This method works well when there is a small group of users and the access rules do not change frequently.

Step-by-Step: Static RLS

Static RLS applies a fixed value in the DAX filter when rendering our report. Configuring static RLS involves several steps.

1. Creating RLS roles in Power BI Desktop
To create RLS roles in Power BI Desktop, navigate to the Modeling tab and select Manage Roles.

Then, we select + New under Roles in the Manage security roles dialog box. For this sample report, we create a role for each sales region.

We give each role a descriptive name, select the table to which the filter is applied, and create the filter condition.
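For example, the Europe role might use a table filter DAX expression like the following on the Region table (the exact column name depends on the sample model):

-- Illustrative static filter condition for the Europe role
[Region] = "Europe"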

After each role is created, any user we assign to the Asia, Europe, or United States role will only see data filtered to that specific region.

This approach is considered Static RLS since we use a static value to define each role.

2. Testing the roles in Power BI Desktop
Within Power BI Desktop, we can test each role to validate it by selecting the View as option on the Modeling tab.

After selecting a role, we can see the report rendered as if we are a member of that role.

3. Deploying the report to the Power BI Service
Once we create and validate the roles, we publish the report to the Power BI Service as with any other report.

4. Add members to the role in the Power BI Service
To add a member to the role in the Power BI Service, we must navigate to the workspace where we published the report. We locate the semantic model and select the More options ellipsis (…) and then Security.

Within the Row-Level Security screen, we add users or security groups to each role we created. RLS rules will not apply to users who have access to the report but have not been added to a role.

5. Testing the roles in the Power BI Service
On the Security screen, we select the More options ellipsis (…) next to each role and then Test as role.

Once all roles have been validated, we have successfully implemented static row-level security in Power BI.


Dynamic Row-Level Security

While static RLS can be great for smaller teams, it doesn’t scale well. Manually assigning roles to individuals quickly becomes unmanageable.

Dynamic RLS solves this issue using a User Access Table or a field that defines each user’s role inside our data model. The report utilizes this table to filter itself based on who is viewing the report.

For this example, our data model has an Employee table that contains each employee’s user principal name and role (e.g., Asia Sales, Europe Sales, US Sales, Management, or blank). We utilize this information to implement dynamic RLS by looking up the user’s role based on the user principal name of the viewer currently signed in.

Step-by-Step: Dynamic RLS

1. Set up a dynamic role using DAX
Navigate to the Modeling tab and select Manage Roles. Then, in the Manage Security Roles dialog box, select + New under Roles.

We name the role, select the table to which the filter is applied, and, this time, rather than creating a static filter condition, select Switch to DAX editor to create our dynamic rule.

We filter the Region table based on the current user’s role using the following DAX expression:

SWITCH(
    LOOKUPVALUE(
        Employee[Role], 
        Employee[UPN], 
        USERPRINCIPALNAME()
    ),
    "Asia Sales", Regions[RegionID]=3,
    "Europe Sales", Regions[RegionID]=2,
    "US Sales", Regions[RegionID]=1,
    "Management", TRUE(),
    FALSE()
)

The DAX expression looks up the user’s Role based on the current user’s user principal name (e.g. email).
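When validating this setup, it can help to display the identity that USERPRINCIPALNAME() returns. A simple diagnostic measure (illustrative, not part of the sample file) placed in a card visual shows which identity the rule evaluates for the current viewer:

-- Illustrative diagnostic measure for testing dynamic RLS
Current User =
USERPRINCIPALNAME()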

2. Testing the dynamic filter
After creating the rule, we must test and validate that it functions as expected. We navigate to the Modeling tab and select View as. Then, in the dialog box, we check the Other User option and enter a user’s email. We also check the dynamic filter rule we just created and select OK.

The report will refresh, and we will validate that the TEMP-UPN-2 user, who has the role of US Sales, sees data only for their assigned region.

3. Publish the report and add members to the role
We publish the report to the Power BI Service. Then, to add members to the role in the Power BI Service, we must navigate to the workspace where we published the report. We locate the semantic model and select the More options ellipsis (…) and then Security.

Within the Row-Level Security screen, similar to what we did for static RLS, we can add users or security groups to each role we created.

Since we typically implement dynamic row-level security for better scalability, assigning a security group when adding members can be beneficial. Better yet, this security group can also grant users their Viewer access to the report or app.

4. Validate roles in the Power BI Service
On the Security screen, we select the More options ellipsis (…) next to each role and then Test as role.

Then, at the top, we select Now viewing as and Select a person to validate that the RLS is functioning as expected. We will view the report as Jasper (TEMP-UPN-2) as we did in Power BI Desktop.

Note: The TEMP-UPN-# provided in the sample file will only function for testing in Power BI Desktop without requiring an actual email address. The UPN field must contain actual user email addresses to validate in the Power BI Service.

We can also validate the report for Diego (TEMP-UPN-4), who is assigned the Management role and should be able to see data for all sales regions.

One last consideration is what happens when a user intentionally or unintentionally has access to view the report but does not have a Role defined in the Employee table.

If we review the DAX expression used for the filter:

SWITCH(
    LOOKUPVALUE(
        Employee[Role], 
        Employee[UPN], 
        USERPRINCIPALNAME()
    ),
    "Asia Sales", Regions[RegionID]=3,
    "Europe Sales", Regions[RegionID]=2,
    "US Sales", Regions[RegionID]=1,
    "Management", TRUE(),
    FALSE()
)

The last value, or the default if no other condition is true, is set to FALSE(). This means that if a user is in the Employee table but either does not have a role or their role doesn’t match one defined in the rule, the report will be empty.

When testing the report, Grady (TEMP-UPN-5) is contained in the Employee table but does not have a role assigned.

Now that everything is working as expected, we have successfully implemented dynamic row-level security. The benefit is that by using dynamic RLS, we replace four roles with a single rule applied dynamically based on the current user. Additionally, we can add an extra layer and hide data for users who have access to the report but are not assigned a role in the Employee table.


Considerations and Limitations

Before implementing RLS across our reports, it is important to consider and evaluate the impacts of RLS limitations.

A list of limitations can be viewed here:

Learn more about: Row-Level Security with Power BI

It’s also important to understand when Row-Level Security (RLS) is and is not the right tool.

We may consider publishing multiple semantic models if we have only a few simple RLS rules. For example, if we have just two sales regions, we might publish a separate semantic model for each region. Although these semantic models do not enforce RLS, we can use query parameters to filter the source data to a specific sales region, allowing us to publish the same model multiple times while displaying only the relevant data for each region.

Advantages of not using Row-Level Security (RLS) include:

  • Improved Query Performance: With fewer filters applied to the data model, queries can run faster.
  • Smaller Models: While avoiding RLS may increase the number of models, each individual model is generally smaller, which can enhance query performance and data refresh responsiveness.
  • Full Access to Power BI Features: Certain features, like “Publish to the Web”, do not work with RLS.

However, there are also disadvantages to not implementing RLS:

  • Multiple Workspaces Required: Each user audience for reports may require its own workspace.
  • Content Duplication: Reports and dashboards must be created in each workspace, leading to redundancy and increased maintenance efforts.
  • Lack of Consolidated View: Users who belong to multiple report user audiences must open various reports, resulting in no single, consolidated view of their data.

Wrapping Up

Row-Level Security (RLS) is a valuable feature in Power BI that enables us to protect and customize report data according to the identity of the report viewer. With static RLS, we can implement straightforward, fixed access rules, while dynamic RLS offers a more scalable and flexible solution. RLS allows us to provide tailored insights that meet the specific needs of different users.

It’s important to remember that Row-Level Security only filters rows in a table and does not limit access to model objects like tables or columns. If your requirements involve hiding entire tables or specific columns within a table, you should consider using Power BI Object-Level Security (OLS).

RLS can sometimes be overly restrictive when it comes to broader calculations. For example, we could display the total sales for each region to all users or calculate the percentage of total sales across all regions. However, RLS rules filter these total sales values, leading to complications. This is where the concept of partial RLS comes into play, allowing us to secure specific data while still accessing global context for certain calculations.

Stay tuned and subscribe so you won’t miss upcoming posts in this series, focusing on the partial RLS design pattern and object-level security.

If you’d like to follow along and practice these techniques, Power BI sample reports are available here: EMGuyant GitHub – Power BI Security Patterns.

