Design Meets Data: A Guide to Building Engaging Power BI Report Navigation


Before we begin developing the navigational element of our reports, we must decide what tools to use. The Design Meets Data series started with two previous posts in which we explored two approaches: using Figma to design and customize our navigation and using Power BI’s built-in shapes, bookmarks, and buttons to create a similar experience with native tools.

From Sketch to Screen: Bringing your Power BI report navigation to life for an enhanced user experience.

User-Centric Design: Next level reporting with interactive navigation for an enhanced user experience

This post will explore using Power BI’s Page Navigator to simplify our navigation setup. Each approach has its own level of flexibility and ease of maintenance, so it’s important to choose the right method based on the project’s requirements.


The Power BI Page Navigator

Power BI’s Page Navigator lets us create a dynamic and interactive navigation menu in our reports. It is an out-of-the-box solution that simplifies the navigation experience.

Since the Page Navigator is a native feature, it requires minimal setup and automatically adjusts when new pages are added to the report. However, some adjustments may still be needed, such as resizing or formatting changes to accommodate report updates.

To add a Page Navigator to our report, we go to the Insert tab in Power BI Desktop, then select Navigator > Page navigator under the Buttons dropdown.

Customization Options

Power BI provides various options to customize the appearance of our Page Navigator, such as changing the font, colors, shapes, borders, and even what pages are listed. This allows us to create a navigational element that fits seamlessly within our report’s aesthetic.

Although the design flexibility is not as extensive as in our previous example using Figma, the Page Navigator option strikes a balance between ease of use and visual appeal.


Building the Navigation Pane

Now that we have covered the basics of the Page Navigator, we can begin building the navigation pane for our report. The navigation pane will be located on the left-hand side of our report and will serve as both the page navigation and container for our page slicers.

Below is the final product. Follow the detailed step-by-step guide to create your own navigation pane.


We begin building the navigational pane on the left-hand side of our report canvas. First, we add a rectangle to the report canvas and style it to meet our needs. The navigation pane has a height equal to the report canvas and a width of 140. We set the fill color to dark blue, which is our Theme color 1, 50% darker.

Next, we add a rectangle at the top of the navigation pane to provide an accent and space for displaying an organizational logo within our report. This rectangle has a height of 40 and a width of 140, and its fill color is set to the accent color of our report theme.

With the basic structure of our navigation pane in place, we then move on to adding the Page Navigator.

Formatting the Page Navigator

We begin by selecting the Insert tab on the main ribbon. Then, under the Buttons dropdown, we select Navigator > Page navigator.

Once the Page Navigator is added to the report canvas, go to the Format pane and select Vertical from the Orientation dropdown under the Grid layout property.

Under the Pages property, we find options to control what pages are displayed within our report navigation. In the Options group, we can set general options such as Show hidden pages, Show tooltip pages, and Show all by default. Additionally, in the Show group, we can explicitly set which pages are shown in the navigation by using the toggles next to each report page.

Once our navigation displays the required pages, we adjust the width of the navigator object to 130 and the height to 105. Note that the height should be adjusted based on the number of pages in the navigation. In this example, we calculate the height by multiplying the number of pages by 35 (3 pages x 35 = 105).

Lastly, we set the horizontal position of the navigator object to 10 and the vertical position to 100.

Styling the Page Navigator

Once we finish the basic formatting, we customize the navigator to match the look of our report. Our aim with the styling is to make it easier to identify the current page.

We do this by adjusting the Style properties of the Page Navigator and using the Apply settings to State dropdown.

Below are the key differences between our page navigator’s Default, Hover, Press, and Selected states.

Default
Text > Font: Segoe UI
Text > Font Color: light blue, Theme color 1, 60% lighter
Text > Horizontal alignment: left
Text > Padding: left 5 px
Fill > Color: dark blue (#06435F)
Border: off
Shadow: off
Glow: off
Accent bar: off

Hover
Text > Font: Segoe UI Semibold
Text > Font Color: white
Text > Horizontal alignment: left
Text > Padding: left 5 px
Fill > Color: dark blue, Theme color 1, 25% darker
Border: off
Shadow: off
Glow: off
Accent bar: off

Press
Text > Font: Segoe UI Semibold
Text > Font Color: dark blue, Theme color 1, 50% darker
Text > Horizontal alignment: left
Text > Padding: left 5 px
Fill > Color: white
Border: off
Shadow: off
Glow: off
Accent bar: off

Selected
Text > Font: Segoe UI Semibold
Text > Font Color: dark blue, Theme color 1, 50% darker
Text > Horizontal alignment: left
Text > Padding: left 5 px
Fill > Color: white
Border: off
Shadow: off
Glow: off
Accent bar > Position: right
Accent bar > Color: accent green (#6CBE4B)
Accent bar > Width: 6px

After configuring all the styling options, our report page will have an engaging and interactive navigation panel.


Comparing the Approaches

When building a navigational component within our reports, we have a wide variety of options and approaches to choose from. Each report has unique requirements, so it’s important to compare and contrast these different approaches based on performance, ease of maintenance, and design flexibility.

Performance Comparison

Figma: Our Figma-based designs require importing images into Power BI. This can improve the load time of our reports because the report does not have to render each shape and component independently.

Power BI Native Shapes and Buttons: This option has the advantage of only requiring our development work to occur in Power BI. However, each shape and component added to build our navigation pane has to be loaded each time our report page is viewed.

Page Navigator: This offers a fully integrated approach with minimal overhead. However, we lose full control over design and aesthetics.

Ease of Maintenance

Figma: Our Figma-based approach is the most complex to maintain. Every design update requires going back to Figma to redesign and re-export, then re-importing and integrating the assets into our Power BI report.

Power BI Native Shapes and Buttons: This approach has the advantage of only using Power BI to maintain our navigational element but still requires manual updates when adding or removing pages from our report.

Page Navigator: This is the easiest option to maintain because the page navigator updates with any page additions or changes. However, the size of the object may still need to be adjusted.

Design Flexibility

Figma: Offers the highest level of design customization. We can create any design we can think up; however, this comes with added complexity and time requirements.

Power BI Native Shapes and Buttons: Provides us more flexibility than the page navigator, allowing us to customize our design and layout. However, we may still encounter design limitations depending on the options provided by Power BI.

Page Navigator: The most straightforward to implement but offers limited customization options, which restricts design flexibility. It prioritizes ease of maintenance over complex designs.


Final Thoughts

Deciding on the proper navigation for our Power BI reports depends on balancing design flexibility, ease of maintenance, and performance. Leveraging Figma allows for fully custom designs but comes with more complex maintenance. Power BI offers various native tools to help design navigational elements, although building a custom navigational pane will still require manual updates. Power BI’s Page Navigator stands out for its simplicity, dynamic updating, and minimal maintenance effort. However, the cost of the simplicity is limited customization and design flexibility.

The Design Meets Data series explored three approaches to building a Power BI report navigational pane.

From Sketch to Screen: Bringing your Power BI report navigation to life for an enhanced user experience.

User-Centric Design: Next level reporting with interactive navigation for an enhanced user experience

There are seemingly endless other approaches to creating this critical part of our Power BI reports. How do you approach your report’s navigation?


Thank you for reading! Stay curious, and until next time, happy learning.

And, remember, as Albert Einstein once said, “Anyone who has never made a mistake has never tried anything new.” So, don’t be afraid of making mistakes, practice makes perfect. Continuously experiment, explore, and challenge yourself with real-world scenarios.

If this sparked your curiosity, keep that spark alive and check back frequently. Better yet, be sure not to miss a post by subscribing! With each new post comes an opportunity to learn something new.

Power Query Functions: Building Flexible and Reusable Data Cleaning Routines


When extracting and transforming data in Power BI, we often repeat the same steps across different projects. This repetition raises the question: can these steps be generalized and reused in future projects?

This post will explore a solution that breaks down these steps into flexible and reusable parameters and functions, eliminating the need to recreate these common steps across our projects. The solution leverages parameters, functions, and queries to standardize various parts of our data preparation process when CSV files in a SharePoint document library are the data source.

Don’t just read; get hands-on and view all the details in the template report available here:

This repository provides a Power BI template and sample CSV files for automating and standardizing CSV data transformation from SharePoint using Power Query functions.


Power Query Parameters

Parameters help keep our data preparation process as dynamic as possible. They provide a level of flexibility to our Power Query functions and allow us to modify their behavior without altering the underlying function definition. This flexibility enables this solution to adapt to different requirements and increases the reusability of our work.

In our solution, seven parameters manage how data is loaded, cleaned, and transformed.

SharePoint Site Url (Text): specifies the SharePoint site URL where the CSV files are stored.

https://yourcompany.sharepoint.com/sites/yoursite

SharePoint Folder Path (Text): defines the folder path within the SharePoint site where the CSV files are located.

https://yourcompany.sharepoint.com/sites/yoursite/yourfolder/

textColumns (Text): a comma-separated list of column names that require text transformations.

Column Name 1, Column Name 2, Column Name 3

textFormatCase (Text): a text value that specifies how text columns should be formatted. Options include Proper, Lower, and Upper case.

columnDataTypes (Text): a semicolon-separated list of column names paired with their corresponding data type description, used to dynamically set the data types of each column.

Column Name 1,date;
Column Name 2,whole number;
Column Name 3,text;
Column Name 4,text;
Column Name 5,text;
Column Name 6,whole number;
Column Name 7,decimal number

errorHandlingOption (Text): sets how errors are handled within the dataset and provides the options of RemoveErrors, ReplaceErrors, or None.

viewErrorSummary (TRUE/FALSE): toggles between displaying a summary of rows that contain errors and returning the cleaned dataset.

Once we create our parameters and set their values, we can reference them throughout our functions and queries. This enables us to adjust the entire data transformation process by updating the parameter values.
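
For example, a parameter behaves like any other Power Query value once defined. Here is a minimal sketch (assuming the parameters above exist in the file) showing how they are referenced, including the #"..." syntax required for names containing spaces:

let
    // Parameter names containing spaces use the #"..." syntax
    SourceFiles = SharePoint.Files(#"SharePoint Site Url", [ApiVersion = 15]),
    // Simple parameter names are referenced directly
    ColumnList = Text.Split(textColumns, ",")
in
    ColumnList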


Supporting Utility Functions

To enhance the reusability of our Power Query solution, we first break down complex data transformations into smaller, modular functions. Each utility function handles a specific task, improving maintainability and allowing us to apply the functions individually or collectively across different datasets.

fxParameterToList

(parameterName as text) as list => 
   List.Transform(Text.Split(parameterName, ","), each Text.Trim(_))

Many of the parameters in this solution are comma-separated strings that must be converted to list objects before they can be used in list-based transformations within Power Query. Because this conversion is needed so often, wrapping it in a function lets us repeat the operation easily.

The function has a single input parameter (parameterName), a comma-separated string of values. It then splits this input by commas, trims surrounding spaces, and converts the result into a list. This is used when calling and passing parameter values to other utility functions such as fxTransformTextFormat and fxTransformTextTrim.
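
As a quick illustration, calling the function with a padded, comma-separated string returns a clean list:

// Returns {"Event Name", "Location", "Organizer Name"}
ColumnList = fxParameterToList("Event Name, Location ,Organizer Name")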

fxTransformTextTrim

The fxTransformTextTrim function trims any leading and trailing spaces from text values in the specified columns. Trimming can be essential when cleaning user-entered data or other text extracted from external sources.

(inputTable as table, columnsToTrim as list) as table =>
    Table.TransformColumns(inputTable, List.Transform(columnsToTrim, each {_, Text.Trim, type text}))

This function has two input parameters:

  • inputTable: a table containing the data to be trimmed
  • columnsToTrim: a list of text columns that require trimming

The fxTransformTextTrim function applies Text.Trim to each column specified in columnsToTrim, ensuring our text data is clean and consistent.
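
Here is a small, self-contained illustration using hypothetical inline sample data:

let
    // Inline sample data with stray leading/trailing spaces
    SampleTable = #table({"Name", "City"}, {{"  Ana ", " Oslo "}, {" Bo", "Rome "}}),
    // Returns "Ana"/"Oslo" and "Bo"/"Rome"
    Trimmed = fxTransformTextTrim(SampleTable, {"Name", "City"})
in
    Trimmed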

fxTransformTextFormat

The fxTransformTextFormat function standardizes the case formatting of our text data based on the caseType input parameter. Depending on our requirements, the function can convert text data to Proper Case, Lower Case, or Upper Case.

(inputTable as table, columnsToTransform as list, caseType as text) as table =>
    let
        // Determine the transformation function based on caseType input
        CaseFunction = if caseType = "Proper" then Text.Proper
                       else if caseType = "Lower" then Text.Lower
                       else if caseType = "Upper" then Text.Upper
                       else error "Unsupported case type: Use 'Proper', 'Lower', or 'Upper'",

        // Apply the selected case transformation to each column in the list
        TransformTextCase = Table.TransformColumns(inputTable, List.Transform(columnsToTransform, each {_, CaseFunction, type text}))
    in
        TransformTextCase

This function has three input parameters:

  • inputTable: a table containing the data to be transformed
  • columnsToTransform: a list of columns that require text formatting
  • caseType: a text value specifying the desired text format (Proper, Lower, or Upper)

Using the caseType parameter, the function identifies the transformation type and then applies the transformation to the specified columns.
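
As a quick sketch with hypothetical data, applying the function with the Proper option:

// Returns "New York" and "London"
Formatted = fxTransformTextFormat(#table({"City"}, {{"new york"}, {"LONDON"}}), {"City"}, "Proper")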

fxTransformRemoveDuplicates

fxTransformRemoveDuplicates is a straightforward function that removes duplicated rows from our dataset.

(inputTable as table) as table =>
    Table.Distinct(inputTable)

This function has a single parameter:

  • inputTable: a data table to remove duplicate rows from

The function uses Table.Distinct to identify and remove duplicated rows, returning a table in which every row is unique.
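
Note that Table.Distinct also accepts an optional list of key columns. If a project ever needs duplicates judged on specific columns only, the function could be extended along these lines (a hypothetical variation, not part of the solution above):

(inputTable as table, optional keyColumns as list) as table =>
    // When key columns are provided, only those columns are compared
    if keyColumns = null then Table.Distinct(inputTable)
    else Table.Distinct(inputTable, keyColumns)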

fxTransformSetDataTypes

The fxTransformSetDataTypes function maps a textual description of the data type to the Power Query data type. The function allows dynamically setting column data types based on our columnDataTypes Power Query parameter.

(columnType as text) as type =>
    let
        lowerColumnType = Text.Lower(columnType),
        columnTypeResult = 
            if lowerColumnType = "decimal number" then type number
            else if lowerColumnType = "fixed decimal number" then type number
            else if lowerColumnType = "whole number" then Int64.Type
            else if lowerColumnType = "percentage" then type number
            else if lowerColumnType = "date/time" then type datetime
            else if lowerColumnType = "date" then type date
            else if lowerColumnType = "time" then type time
            else if lowerColumnType = "date/time/timezone" then type datetimezone
            else if lowerColumnType = "duration" then type duration
            else if lowerColumnType = "text" then type text
            else if lowerColumnType = "true/false" then type logical
            else if lowerColumnType = "binary" then type binary
            else type text // Default to text if the type is unrecognized
    in
        columnTypeResult

The function has a single input parameter:

  • columnType: a text value of the data type to set for a column

This function helps us leverage common textual descriptions to set Power Query data types across our dataset columns.
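
Because the function lowercases its input first, the mapping is case-insensitive:

// Returns Int64.Type regardless of input casing
ColumnType = fxTransformSetDataTypes("Whole Number")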

fxErrorHandling

The fxErrorHandling function allows us to manage how our dataset handles errors based on our requirements. We can use the function’s input parameter to remove rows containing errors, replace error values with null, or retain the errors within the dataset.

(inputTable as table, errorHandlingOption as text) as table =>
let
    ErrorRows = Table.SelectRowsWithErrors(inputTable),
    
    // Determine the action based on the error handling option provided
    ErrorHandlingResult = 
    if errorHandlingOption = "RemoveErrors" then
        // Remove rows with any errors
        Table.RemoveRowsWithErrors(inputTable)
    else if errorHandlingOption = "ReplaceErrors" then
        // Replace error values with null
        let
            ReplaceErrorsStep = 
                let
                    // Get a list of column names
                    columnNames = Table.ColumnNames(inputTable),
                    // Create a list of replacement rules where each column is mapped to null in case of error
                    errorReplacementList = List.Transform(columnNames, each {_, null})
                in
                    Table.ReplaceErrorValues(inputTable, errorReplacementList)
        in
            ReplaceErrorsStep
    else
        // If the parameter is not set correctly, return the data as is
        inputTable,

    Output = if viewErrorSummary then ErrorRows else ErrorHandlingResult
in
    Output

The function has two input parameters:

  • inputTable: a data table to be evaluated for errors
  • errorHandlingOption: specifies how the errors within inputTable should be handled (RemoveErrors, ReplaceErrors, None)

The function also utilizes the viewErrorSummary Power Query parameter. This parameter can be set to TRUE/FALSE and determines whether to return an error summary or the cleaned dataset.

This function identifies rows with errors. Depending on the errorHandlingOption, it will remove the rows containing errors, replace the error values with null, or return inputTable without handling the errors.

If viewErrorSummary is set to TRUE, the function returns only the rows that contain errors, allowing us to inspect the errors further within our dataset.
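
For example, with the viewErrorSummary parameter set to FALSE, the call below replaces any error values with null and returns the cleaned table (SourceTable is a hypothetical input):

// Replace error values with null rather than removing the rows
CleanedTable = fxErrorHandling(SourceTable, "ReplaceErrors")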


Core Function – fxTransformCSV

The fxTransformCSV function is the core of our standardized data processing solution, designed to handle our CSV files in a modular and reusable way.

It takes the raw contents of our CSV files, applies a defined set of transformations, and then outputs a clean data table ready for our analysis. This function leverages the utility functions discussed in the previous section.

(FileContent as binary) =>
let
    //STEP 01: Importing CSV files
    ImportCSV = Csv.Document(FileContent,[Delimiter=",", Encoding=1252, QuoteStyle=QuoteStyle.None]),
    PromoteHeaders = Table.PromoteHeaders(ImportCSV, [PromoteAllScalars=true]),
    
    //STEP 02: Cleaning blank rows/columns
    //Clean blank rows
    RemoveBlankRows = Table.SelectRows(PromoteHeaders, each not List.IsEmpty(List.RemoveMatchingItems(Record.FieldValues(_), {"", null}))),
    //Clean blank columns
    RemoveBlankColumns = Table.SelectColumns(RemoveBlankRows, List.Select(Table.ColumnNames(RemoveBlankRows), each not List.IsEmpty(List.RemoveMatchingItems(Table.Column(RemoveBlankRows, _), {""," ", null})))),
    
    //STEP 03: Replace empty strings with null
    ReplaceValue = Table.ReplaceValue(RemoveBlankColumns,"",null,Replacer.ReplaceValue,Table.ColumnNames(RemoveBlankColumns)),
    
    //STEP 04: Remove duplicates
    RemoveDuplicates = fxTransformRemoveDuplicates(ReplaceValue),

    //STEP 05: Text transformations
    // Convert the textColumns parameter into a list
    TextColumnsToTransform = fxParameterToList(textColumns),
    // Apply the TrimTextColumns function
    TrimTextColumns = fxTransformTextTrim(RemoveDuplicates, TextColumnsToTransform),
    // Apply the TransformTextCase function for Proper Case (can also be "Lower" or "Upper")
    FormatTextColumns = fxTransformTextFormat(TrimTextColumns, TextColumnsToTransform, textFormatCase)

in
    FormatTextColumns
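
Before wiring the function up to SharePoint, it can be tested in isolation by passing it the binary contents of a single local CSV file (the path below is hypothetical):

// Apply the full transformation pipeline to one local file
TestResult = fxTransformCSV(File.Contents("C:\Data\sample.csv"))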

Breaking Down fxTransformCSV

STEP 01: Importing the CSV files

The function begins by reading the contents of the CSV files and then promotes the first row to headers.

//STEP 01: Importing CSV files
    ImportCSV = Csv.Document(FileContent,[Delimiter=",", Encoding=1252, QuoteStyle=QuoteStyle.None]),
    PromoteHeaders = Table.PromoteHeaders(ImportCSV, [PromoteAllScalars=true]),

STEP 02: Cleaning blank rows/columns

The next step, which begins our transformations, removes any empty rows or columns. By eliminating these blanks early in the process, we avoid potential issues downstream and reduce clutter in the dataset.

    //STEP 02: Cleaning blank rows/columns
    //Clean blank rows
    RemoveBlankRows = Table.SelectRows(PromoteHeaders, each not List.IsEmpty(List.RemoveMatchingItems(Record.FieldValues(_), {"", null}))),
    //Clean blank columns
    RemoveBlankColumns = Table.SelectColumns(RemoveBlankRows, List.Select(Table.ColumnNames(RemoveBlankRows), each not List.IsEmpty(List.RemoveMatchingItems(Table.Column(RemoveBlankRows, _), {""," ", null})))),

STEP 03: Replace empty strings with null

Empty strings can cause subtle issues in our analysis. This step replaces empty string values in the dataset with null, making the data more consistent and easier to work with.

    //STEP 03: Replace empty strings with null
    ReplaceValue = Table.ReplaceValue(RemoveBlankColumns,"",null,Replacer.ReplaceValue,Table.ColumnNames(RemoveBlankColumns)),

STEP 04: Remove duplicates

Using the fxTransformRemoveDuplicates utility function, this step removes duplicated rows from the dataset.

//STEP 04: Remove duplicates
    RemoveDuplicates = fxTransformRemoveDuplicates(ReplaceValue),

STEP 05: Text transformations

The text transformations start with trimming our text values to ensure there are no leading or trailing spaces, using the fxTransformTextTrim utility function.

Then, the text values get standardized using the fxTransformTextFormat utility function to make our data more readable and consistent.

//STEP 05: Text transformations
    // Convert the textColumns parameter into a list
    TextColumnsToTransform = fxParameterToList(textColumns),
    // Apply the TrimTextColumns function
    TrimTextColumns = fxTransformTextTrim(RemoveDuplicates, TextColumnsToTransform),
    // Apply the TransformTextCase function for Proper Case (can also be "Lower" or "Upper")
    FormatTextColumns = fxTransformTextFormat(TrimTextColumns, TextColumnsToTransform, textFormatCase)

Core Function – fxLoadCSVData

The fxLoadCSVData function is the layer connecting the different elements of our Power Query solution. It loads multiple CSV files from our SharePoint source, applies the fxTransformCSV function to each file, and handles the final data preparations.

This function is adaptable by leveraging Power Query parameters, allowing us to easily modify the data preparation process to meet our project’s requirements (e.g. switching to different SharePoint sites or folders).

() as table =>
let
    
    //STEP 01: Load files from SharePoint
    SharePointSite = SharePoint.Files(#"SharePoint Site Url", [ApiVersion = 15]),
    SharePointFolder = Table.SelectRows(SharePointSite, each ([Folder Path] = #"SharePoint Folder Path")),
    
    //STEP 02: Invoke the fxTransformCSV function for each file
    TransformCSVResult = Table.AddColumn(SharePointFolder, "TransformCSV", each fxTransformCSV([Content])),
    
    //STEP 03: Renaming and removing columns
    // Rename the "Name" column for clarity
    PrefixSourceName = Table.RenameColumns(TransformCSVResult,{{"Name", "source.Name"}}),
    // Remove other columns to focus on the transformed data
    RemoveColumns = Table.SelectColumns(PrefixSourceName,{"source.Name", "TransformCSV"}),
    
    //STEP 04: Expand the transformed CSV data
    ExpandTransformCSV = Table.ExpandTableColumn(RemoveColumns, "TransformCSV", Table.ColumnNames(TransformCSVResult{0}[TransformCSV])),

    //STEP 05: Set data types
    ColumnTypePairs = Text.Split(columnDataTypes, ";"),
    // Trim spaces and split each pair into a list of {column name, data type}
    ColumnTypeList = List.Transform(ColumnTypePairs, each 
        let
            PairParts = List.Transform(Text.Split(_, ","), Text.Trim),
            ColumnName = List.First(PairParts),
            ColumnType = fxTransformSetDataTypes(List.Last(PairParts))
        in
            {ColumnName, ColumnType}
    ),
    // Dynamically set the column data types
    SetDataTypes = Table.TransformColumnTypes(ExpandTransformCSV, ColumnTypeList),

    //STEP 06: Error handling
    ErrorHandlingOutput = fxErrorHandling(SetDataTypes, errorHandlingOption)
in
    ErrorHandlingOutput

Breaking Down fxLoadCSVData

STEP 01: Load files from SharePoint

The function begins by connecting to the specified SharePoint site using the SharePoint Site Url parameter. Then, it filters the files to those located within the specified SharePoint Folder Path. Using both parameters allows us to target the specific CSV files required for our analysis.

//STEP 01: Load files from SharePoint
SharePointSite = SharePoint.Files(#"SharePoint Site Url", [ApiVersion = 15]),
SharePointFolder = Table.SelectRows(SharePointSite, each ([Folder Path] = #"SharePoint Folder Path"))

STEP 02: Invoke the fxTransformCSV function for each file

For each file in our SharePoint source, we add a new column and invoke our custom fxTransformCSV function. This step ensures each file is processed consistently.

//STEP 02: Invoke the fxTransformCSV function for each file
TransformCSVResult = Table.AddColumn(SharePointFolder, "TransformCSV", each fxTransformCSV([Content]))

STEP 03: Renaming and removing columns

To improve the clarity of our columns, we rename the Name column to source.Name, indicating that it holds the original file name. This makes the column’s purpose clear and ties each row of data back to its source file.

The function then removes the unnecessary columns, keeping only the source.Name and TransformCSV columns.

//STEP 03: Renaming and removing columns
// Rename the "Name" column for clarity
PrefixSourceName = Table.RenameColumns(TransformCSVResult,{{"Name", "source.Name"}}),
// Remove other columns to focus on the transformed data
RemoveColumns = Table.SelectColumns(PrefixSourceName,{"source.Name", "TransformCSV"})

STEP 04: Expand the transformed CSV data

We expand the TransformCSV column, which combines all of the transformed CSV data into a single table.

//STEP 04: Expand the transformed CSV data
ExpandTransformCSV = Table.ExpandTableColumn(RemoveColumns, "TransformCSV", Table.ColumnNames(TransformCSVResult{0}[TransformCSV]))

STEP 05: Set data types

In this step, the function dynamically sets the data types for each column based on the columnDataTypes Power Query parameter. The fxTransformSetDataTypes function maps each column to its specified data type.

//STEP 05: Set data types
ColumnTypePairs = Text.Split(columnDataTypes, ";"),
// Trim spaces and split each pair into a list of {column name, data type}
ColumnTypeList = List.Transform(ColumnTypePairs, each 
    let
        PairParts = List.Transform(Text.Split(_, ","), Text.Trim),
        ColumnName = List.First(PairParts),
        ColumnType = fxTransformSetDataTypes(List.Last(PairParts))
    in
       {ColumnName, ColumnType}
),
// Dynamically set the column data types
SetDataTypes = Table.TransformColumnTypes(ExpandTransformCSV, ColumnTypeList)
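
As a worked example, if columnDataTypes were set to "Event Date,date;Event ID,whole number", ColumnTypeList would evaluate to the nested list below, which is exactly the shape Table.TransformColumnTypes expects:

{ {"Event Date", type date}, {"Event ID", Int64.Type} }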

STEP 06: Error handling

The function concludes by handling errors in the dataset according to the errorHandlingOption parameter. If viewErrorSummary is set to TRUE, the function returns a summary of rows that contain errors; otherwise, it returns the cleaned dataset.

//STEP 06: Error handling
ErrorHandlingOutput = fxErrorHandling(SetDataTypes, errorHandlingOption)

Applying and Extending the Power Query Solution

We have now constructed a flexible approach that standardizes our data preparation steps and ensures they are applied consistently within and across our Power BI projects.

The final query loaded to our Power BI data model is rpt_SharePointCSVData. This query utilizes the fxLoadCSVData function to bring each step together and deliver a clean and consistent data set to our Power BI report.

let
    Source = fxLoadCSVData()
in
    Source

This simple query calls the fxLoadCSVData function, which loads and processes the CSV files according to previously defined transformations and parameters. The result is a clean, ready-to-use dataset.

This solution is designed to be adaptable and can be applied to different Power BI projects that require loading CSV files from a SharePoint source.

A Power BI template and sample data files are available at the link below to view an example of how this solution can be applied. This file can be used as a starting point for our Power BI projects, or you can copy and paste specific folders or elements of it into an existing project.

This repository provides a Power BI template and sample CSV files for automating and standardizing CSV data transformation from SharePoint using Power Query functions.

To successfully apply this solution:

Start with the Parameters: define the parameters relevant to the project.

Open the template file and, when prompted, populate each parameter with the required information.

For example, when using the sample files, the parameters can be set as follows.

SharePoint Site Url: 
https://yourcompany.sharepoint.com/sites/yoursite

SharePoint Folder Path:
https://yourcompany.sharepoint.com/sites/yoursite/yourfolder/

textFormatCase:
Proper

textColumns:
Event Name, Location, Organizer Name

columnDataTypes:
Event Date,date;Event ID,whole number;Event Name,text;Location,text;Organizer Name,text;Number of Attendees,whole number;Feedback Score,decimal number

errorHandlingOption:
ReplaceErrors

viewErrorSummary:
FALSE

Then, under the Load dropdown, select Edit to explore this solution in the Power Query Editor.

Once the data is loaded, you can modify the textFormatCase, error handling option, and viewErrorSummary parameters in the Power Query Editor to view the results when you select different options.

Customize the Utility Functions: if different projects have unique requirements, consider modifying or extending the utility functions.

Leverage the modular structure: This solution’s modular design makes adding or removing steps easy based on the project’s specific needs.

Extend with new functions: as our data processing needs evolve, we can extend this solution by adding new functions or adding more flexibility to our existing ones.
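
For instance, a new utility function that drops a configurable set of columns could follow the same pattern as the existing utilities (a hypothetical sketch, not included in the template):

fxTransformRemoveColumns

(inputTable as table, columnsToRemove as list) as table =>
    // MissingField.Ignore prevents errors when a listed column is absent
    Table.RemoveColumns(inputTable, columnsToRemove, MissingField.Ignore)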


Wrapping Up

This post explores a solution that leverages Power Query parameters, functions, and queries to streamline our data preparation process. Combining custom functions with parameters provides us with a framework that can be adapted to meet the demands of different projects.

The design approach used in this solution aims to enhance the maintainability and reusability of each of its components. As we continue to work with and expand this solution, we will build a library of reusable functions to increase our efficiency in future projects.


Thank you for reading! Stay curious, and until next time, happy learning.

And, remember, as Albert Einstein once said, “Anyone who has never made a mistake has never tried anything new.” So, don’t be afraid of making mistakes, practice makes perfect. Continuously experiment, explore, and challenge yourself with real-world scenarios.

If this sparked your curiosity, keep that spark alive and check back frequently. Better yet, be sure not to miss a post by subscribing! With each new post comes an opportunity to learn something new.

Explore Power BI Core Visualizations: Part 2 – Line and Area Charts


Welcome back to our exploration of the core visuals in Power BI. In Explore Power BI Core Visualizations Part 1, we explored bar and column charts and how they can be used and customized within our reports to provide insights.

Now, we will explore another set of visuals in Power BI: Line and Area charts. These visuals are essential tools for displaying trends and patterns over time, making them ideal for analyzing time series data, monitoring performance, or illustrating cumulative values.

Let’s dive in and explore how to create, customize, and enhance our Line and Area Charts in Power BI to elevate our reports.


Types of Line Charts

Line charts are a fundamental tool in data visualization, especially when it comes to displaying trends and changes over time. A line chart presents our viewers with a continuous flow of connected data points that is easy to follow, making it ideal for visualizing trends.

In Power BI, we have several varieties of line charts, each serving a unique purpose and offering a different way to analyze and display data. Let’s explore the main types.

Line Chart

A standard line chart is the simplest form and is typically used to display time on the x-axis and a measured metric on the y-axis. This type of chart is excellent for tracking sales or other metrics over months, quarters, or years.

Line and Stacked Column Chart

The line and stacked column chart combines a line chart with a stacked column chart, providing a dual perspective on our data. The line represents one metric, while the stacked columns represent another. This type of chart can be useful for comparing the overall distribution of different categories along with the trend of a key metric.

For example, we can use a line and stacked column chart to display our year-to-date cumulative sales as a continuous series while showing the monthly contribution of each sales region.

Line and Clustered Column Chart

The line and clustered column chart is another hybrid visual, combining a line chart with a clustered column chart. In this chart type, columns represent individual categories clustered together, while the line shows the trend of a specific metric. For example, we can show the previous year’s and current year’s monthly total sales as clustered columns and the average sales for each year over time on the line axis.

Each of these chart types provides a unique way to visualize data, offering flexibility and depth to our analysis. Understanding when and how to use these different line charts can greatly enhance our ability to communicate data insights effectively.


Customizing Line Charts

Customizing line charts in Power BI allows us to create visually appealing and informative visuals that effectively communicate data trends to our viewers. There are many opportunities for out-of-the-box thinking when it comes to customization. Here we will focus on some of the main customization options available across all of the line chart types.

Line Color and Styles

To make our lines more visually distinct, we can leverage the properties found under the Line property group. These options let us define the color, style, and thickness of all the lines on our chart or of specific series.

Let’s explore these options to improve the Total & Average Sales Line and Clustered Column visual shown above.

We start by adjusting our colors: we will keep a blue color theme for the 2024 data and use a gray color theme for the 2023 data. Under the Lines property settings, we select Average Sale 2024 in the Series dropdown and choose the 60% lighter shade of our theme’s dark blue, keeping it consistent with the corresponding 2024 column.

Next, we select the Average Sale 2023 series in the Series dropdown and set the color to gray. We choose a shade light enough to contrast with the dark blue bars, yet dark enough to stand out against the visual’s light background.

This slight change already improves the visual’s clarity, but we can go one step further. In addition to color, we can use different line styles to distinguish our series. With the Average Sale 2023 series selected, under the Line options we set the Line style to Dashed and reduce the width to 2px.

Data Markers and Labels

In addition to setting different colors and styles for our lines, we can also set the data marker style or not show them at all.

Data markers make it easier to identify specific values and can help compare different values. For example, in the above visual we only show data labels for the Average Sale 2024 series. However, since we show data markers for both series, we can present the current year’s average with its value while still allowing an easy comparison against the previous year’s data marker.

Let’s further distinguish our series by setting the data marker for Average Sale 2023 to a diamond shape and reducing its size to 4px. We select the series we want to modify from the Series dropdown, choose Diamond in the shape dropdown, and then reduce the size.

Adding data labels to our visuals provides our viewers additional context and precise information by displaying the value of each data point on the chart. The options to customize data labels live under the Data labels property group: we can set the position of the labels, display a title, customize the value displayed and how it is formatted, add a detail field to display another value, set the label background, and define whether the label appears on a single line or multiple lines.

To avoid cluttering our Total & Average Sales visual, we turn off the labels for the 2023 column and series. To do this, we select each series in the Series dropdown and toggle off the Show for this series option.

Then we modify our data labels for our 2024 columns and Average Sale 2024 series. We start with our 2024 column data label and select it in the Series dropdown. Then we set the background of the data label to the same color as the column, set the value color to white, and under the options settings we select Outside end in the Position (column) dropdown.

Then we select our Average Sale 2024 in the Series dropdown, set the background to the same color as the line, the value color to a dark blue, and select Above in the Position (line) dropdown.

By customizing line color, style, data markers, and data labels, our visual now clearly differentiates between the 2024 and 2023 data. Clearly labeling the 2024 data points provides precise information for the current year while still allowing a quick relative comparison against the previous year.

Other Customizations

There are many other customization options available, including line chart specific options such as series labels and shading the area under a line series. There are also options common across many visual types, including formatting the x- and y-axes, the legend, gridlines, tooltips, and more. For examples and an exploration of these common options, visit Part 1 of this series.


By customizing our line charts, we can create compelling visuals that highlight key trends and insights in our data, enhancing the clarity and impact of our reports.


Types of Area Charts

Area charts are a variation of line charts that fill the area beneath the line with color, emphasizing the magnitude of the values. They are particularly useful for visualizing cumulative data, where the area fill highlights the overall value in addition to the trend.

Area Chart

A standard area chart fills the area below the series line, making it easy to see both the trend and the volume of the dataset over time. We can use this chart to display continuous data where the emphasis is on the total value represented by the area. For example, we can use an area chart to visualize our year-to-date running total sales.

Stacked Area Chart

The Stacked Area Chart is an extension of the basic area chart, where multiple data categories are stacked on top of each other. Each category is represented by a different colored area, showing the contribution of each category to the total value. Similar to the stacked bar/column charts, this chart type is useful for visualizing the contribution of different categories to the whole.

For example, we update our Running Total Sales visual to show each region’s contribution to the running total over time. By stacking the areas, it becomes clear how each region’s sales trend compares to the others. This chart is effective when we need to show the relationship between multiple data series and their cumulative impact on a metric.

100% Stacked Area Chart

The 100% Stacked Area Chart is similar to the stacked area chart but shows the relative percent contribution of each category to the total. Each area is normalized to fill 100% of the y-axis, making it easy to see the proportional contribution of each category over time. This type of area chart is ideal for when we need to compare the relative size of different categories.

For example, with a 100% Stacked Area chart we can update our Running Total Sales visual to show how each sales region’s contribution to our annual total sales has changed throughout the year. This provides a different way to visualize and understand how each sales region contributes to our total sales over time.

Each of these area chart types provides a unique way to visualize data, adding flexibility and depth to our analysis. Understanding when and how to use each type can greatly enhance our ability to communicate data insights effectively.


Customizing Area Charts

Customizing our area charts ensures they are informative as well as visually appealing, and that they effectively communicate our data insights. Let’s explore several customization options to enhance our area charts.

Fill Colors and Transparency

To keep our area charts visually consistent with the other visuals in our report, we can set the color of each series and adjust the transparency until the visual looks just right. For example, a common theme within our Sales report is to visualize our regions using a green color palette.

So, when creating the Running Sales stacked area chart above, we set the color for each sales region explicitly to ensure consistency.

Power BI provides two property groups to customize the line and fill colors of our area charts. The first is the Line color: here we can select each series we want to customize from the Series dropdown and apply the desired formatting. The other is the Shade area property group. By default, Match line color is toggled on; however, we can toggle this option off and set the area to a specific color.

Data Labels and Markers

Data labels and markers enhance the clarity of our area charts by displaying exact values, giving our viewers the information they need at a glance. To label our area charts, we can use data and series labels; for stacked area charts, we also have a Total labels property. Let’s explore each of these on the Running Total Sales stacked area chart.

We start by formatting the data labels, which will be formatted the same for all our regional sales series. We set the color to dark green, and then under Options we set the position to Inside center. This shows each region’s contribution to the total cumulative sales in the center of its shaded area.

Next, we toggle on the Total labels to show the cumulative total of our sales through time. To distinguish this value from the data labels, we set the background of the label to a darker green, transparency to 0%, and then set the color of our values to white.

Lastly, to better clarify what each area on our chart represents, we turn on the series labels. We position the series labels on the right side of the chart and format the series colors to correspond with the line/area color for each series.

In addition to using data labels, we can identify each data point using markers. We can also set a different marker shape for each data series to give our viewers further visual cues for differentiating the series. Here, we set the Asia sales region to circle markers, the Europe sales region to diamond markers, and the United States sales region to square markers.

Other Customizations

As with our line charts, there are many more customization options available than we explored here. Check out Part 1 of this series for details on customizing the x- and y-axes, legend, gridlines, tooltips, and more.


By customizing our area charts, we can create a compelling visual narrative that highlights the key trends in our data. To maximize clarity when a visual includes multiple data series (e.g. stacked area charts), it’s important to use clearly distinguished colors and to avoid displaying areas that excessively overlap. By effectively leveraging our customization options, we can enhance our visuals and ensure our data tells a compelling story.


Ribbon Chart

A ribbon chart is a unique and powerful visual that showcases rankings of different categories. These charts highlight the rank of categories both within and across different groupings. A ribbon chart is particularly effective when we want to understand the flow and change of ranks across groupings, such as the total sales of product categories across our different sales regions.

The customization options available for ribbon charts are similar to those we have already explored in the previous sections, so we won’t cover them again here. For example, in the above visual we leveraged total labels, data labels, and specific colors for our columns.

Ribbon charts can be a powerful tool for visualizing ranking and trends, providing valuable insights into the relative performance of different categories. By effectively utilizing ribbon charts in our reports, we can provide our viewers an informative and unique visual that drives deeper insights.


Advanced Techniques and Customizations

Enhancing our Line and Area charts with advanced techniques and customizations can help us elevate the clarity and impact of our visuals. Let’s explore some examples.

Plotting Year over Year Comparisons

Year-over-year comparisons can help improve our understanding of trends and annual patterns in our data. Using a line chart allows us to clearly visualize these patterns.

Here is the plot we will be creating to provide year-over-year insights to our viewers.

First, we ensure our data includes the proper time dimension; in this example, we use the total sales for each month. We start building the visual by adding a Total Sales measure to the y-axis of our line chart and our Month time dimension to the x-axis.

We can see now that our plot displays the Total Sales for each month. To separate and distinguish the monthly total sales for each year we add our Year time dimension to the chart’s Legend field.

The line chart now shows the monthly Total Sales for each year within our dataset. To enhance the clarity of this visual and highlight the current year we continue to format this visual.

To start, we use the Line properties for each series to improve the visual distinction between the previous years and the current year. We set the 2022 series to a light blue with a dotted line style, and the 2023 series to a slightly darker blue, also dotted. Lastly, we set 2024 to a dark blue, keep the line style solid, and increase the width to 4px.

Then we add line markers to give our viewers more precise information on each monthly value. For all of the data series we set the marker type to a circle. For the 2022 and 2023 series the marker size is 3px; to better highlight the 2024 values, we increase their size to 5px.

Lastly, to display the specific monthly values for 2024 only, we enable Data labels and then toggle off the Show for this series option for 2022 and 2023. To ensure the 2024 data labels stand out and are easy to read, we set the label background to the same color as the series line with transparency at 0%, and the Value color to white.

These formatting options give us our final visual.

By using this approach, we can quickly compare our year-over-year total sales while highlighting the data our viewers are most interested in.

Conditional Formatting Based on Data Thresholds

In the above year-over-year line chart we used line style and color to distinguish our data series and highlight data. Conditional formatting provides us another excellent way to draw attention to key data points. By using conditional formatting, we can automatically change the color of various chart components based on defined thresholds.

The goal with this plot is to highlight for our viewers when the current year’s average monthly sales are above or below the previous year’s average monthly sales. Here is the final visual.

To create this visual, we have two measures: the first calculates the monthly average sales for the current year, and the second calculates the monthly average sales for the previous year.

In addition to these two measures, we create three new measures to assist with labeling and formatting the specific data points.

Annual Comparison = [Average Sale Current Year] - [Average Sale Previous Year]

Annual Comparison CY < LY Label = 
IF([Annual Comparison] < 0, [Average Sale Current Year])

Annual Comparison CY > LY Label = 
IF([Annual Comparison] > 0, [Average Sale Current Year])

The first measure compares the current year’s sales to the previous year’s sales. We then use it to create the two label measures.

Once we have our measures, we start constructing our visual by adding a Line and Clustered Column chart to our report. We use this chart type because it allows us to add multiple series to the Line y-axis field.

We start by adding both of the average monthly sales measures to the Line y-axis data field of the visual and our MonthName time dimension field to the x-axis data field. We then format the lines in a similar fashion to the example above.

We then add the Annual Comparison CY < LY Label and Annual Comparison CY > LY Label measures to the visual. On this visual, the ordering of the data series is important. The order of the measures in the Line y-axis data field is Average Sale Current Year, Average Sale Previous Year, Annual Comparison CY < LY Label, and lastly Annual Comparison CY > LY Label.

Now that all of the series we need are added to the visual, we enhance it with customization and formatting.

Since the only lines we want on the visual are Average Sale Current Year and Average Sale Previous Year, we start by setting the Line width of the Annual Comparison measures to 0px and their color to the visual’s background color, so the dot for these series cannot be seen in the legend.

We then move to formatting the markers for the Annual Comparison series. For the Annual Comparison CY > LY Label series we set the marker size to 6px and color to green, and for the Annual Comparison CY < LY Label series we set the marker size to 6px and the color to red.

Lastly, for the data labels that we will apply to the Annual Comparison series, we add one more measure to show the percent difference between the previous year’s and current year’s average sales.

Annual Comparison Percent Difference = 
VAR _percentDiff = ([Average Sale Current Year] - [Average Sale Previous Year]) / (([Average Sale Current Year] + [Average Sale Previous Year]) / 2)
RETURN
IF(_percentDiff < 0, "▼ " & FORMAT(ABS(_percentDiff), "0.0%"), "▲ " & FORMAT(_percentDiff, "0.0%"))

We toggle on the Data labels property, and then toggle off Show for this series for the Average Sale Current Year and Average Sale Previous Year series.

Then, for the Annual Comparison CY < LY Label series, under the Options property settings we set Position (line) to Under, set the Value field to the Annual Comparison Percent Difference measure, set the value color to white, and set the label background to the same red as the marker with a transparency of 0%.

We format the Annual Comparison CY > LY Label series in a similar fashion, however, we set the Position (line) to Above and the background color to the same green we used for the marker.

Lastly, we rename the Annual Comparison measures on the visual and clear the series name so that, like the marker, the series name is not shown in the legend.

Using conditional formatting helps us make our visuals more informative by visually emphasizing data points that meet certain criteria.

Visualizing Thresholds with Area Charts

Visualizing thresholds or ranges with area charts is a useful and powerful way to compare actual performance against a benchmark or target. We can use this approach to compare our historic actual sales to the forecasted values and visualize the forecast projected into the future.

This is the visual we will be creating.

We start building this visual by adding an Area chart to our report. Then we add a time dimension hierarchy to the x-axis to plot the values for each year and month, and add our Actual Sales and Forecast measures to the plot.

We then format these series. For the Actual Sales series, we set the line color to green with a width of 4px, toggle off the Shade area setting, and set the markers to circles with a dark green color. For the Forecast series, we set the line color to gray with a width of 3px, toggle off the Shade area setting, and toggle off the markers.

Then we add our Forecast Upper Limit and Forecast Lower Limit measures to the visual. The order of the series is essential for the visual to display correctly: it should be Actual Sales, Forecast, Forecast Upper Limit, and lastly Forecast Lower Limit.

After adding the Upper and Lower Limit series, we format them to display just the way we want. For both series, we set the Line color to a light gray and toggle off the Show for this series option for both the Line and Markers properties. Then we move to the Shade area properties, and for the Forecast Lower Limit series we toggle off Match line color and set the shade area color to the same color as our visual background with the area transparency set to 0%.

Lastly, we ensure the visual is just right by setting the x-axis and y-axis titles and toggling off the built-in Legend. In place of the built-in legend, we construct a custom Legend to better identify each component on the visual.

Once all our customizations are complete, we have a visual packed with information that allows our report viewers to easily compare our historic actual sales to the forecasted values and view the forecasted sales projected into the future.


Wrapping Up

In this second part of the series on Power BI core visuals, we explored the versatility and power of Line and Area Charts along with their variations and combinations. Line Charts help us track changes over time with clarity, making them a perfect tool for trend analysis and performance monitoring. Area Charts add a layer of visual emphasis to cumulative data, allowing for a visual understanding of volumes in addition to trends. The hybrid charts, combining Line and Column Charts, offer a multifaceted view of our data, allowing us to provide our report viewers comprehensive insights.

By understanding the strengths and weaknesses of these charts, we can identify opportunities to leverage them to create more informative and visually appealing reports in Power BI.

Stay tuned for Part 3, where we will delve into Pie, Donut, and Treemap visuals. We will explore how they can help us provide a quick snapshot of proportional data, discuss their limitations, and see where they fit into our visualization toolkit.


Thank you for reading! Stay curious, and until next time, happy learning.

And, remember, as Albert Einstein once said, “Anyone who has never made a mistake has never tried anything new.” So, don’t be afraid of making mistakes, practice makes perfect. Continuously experiment, explore, and challenge yourself with real-world scenarios.

If this sparked your curiosity, keep that spark alive and check back frequently. Better yet, be sure not to miss a post by subscribing! With each new post comes an opportunity to learn something new.

Design Meets Data: Crafting Interactive Navigations in Power BI


An essential aspect of any multi-page report is an effective and intuitive way to navigate between the various report pages. A well-designed navigational element in our Power BI reports enhances our users’ experience and guides them to the data and visualizations they require.

This post outlines and explores how Power BI’s native tools and functionalities can be used to recreate the navigational experience built with Figma and Power BI in the previous post: Design Meets Data: A Guide to Building Interactive Power BI Report Navigation.

Looking for another approach to building report navigation that uses built-in Power BI tools? Visit Part 3 of this Power BI Navigation series.

Streamlined report navigation with built-in tools to achieve functional, maintainable, and engaging navigation in Power BI reports.

For those interested in implementing the navigation element presented in this post, there are 2-, 3-, 4-, and 5-page templates available for download, with more details at the end of the post.

Revisiting Interactive Navigation in Power BI

Welcome back to another Design Meets Data exploration focused on interactive report navigation in Power BI. In the first part, we dove into using Figma to design and develop a user-friendly report interface.

Now, it is time to shift our focus toward leveraging Power BI’s native arsenal of tools, primarily bookmarks, buttons, and tooltips, to achieve similar, if not enhanced, functionality.

Why go native? Utilizing Power BI’s built-in tools streamlines support and maintenance and reduces external complexity and dependencies. Plus, staying within a single platform makes it easier to manage and update our reports.

This post will highlight the nuances of Power BI’s navigation capabilities. It will demonstrate how to replicate the interactive navigation from Design Meets Data: A Guide to Building Interactive Power BI Report Navigation using tools available directly within Power BI. These tools will help simplify our report while maintaining an engaging and interactive navigational element.

Let’s get started!


Setting the Stage with Power BI Navigation

Before diving into the details, let’s step back with a quick refresher on the Power BI tools that we can leverage for crafting our report navigation. Power BI is designed to support complex reporting requirements with ease, thanks to features like bookmarks, buttons, and tooltips that can be intricately configured to guide our users through our data seamlessly.

Bookmarks

Bookmarks in Power BI save various states of a report page, allowing users to switch views or data contexts with a single click. We can use bookmarks to allow our users to toggle between different data filters or visual representations without losing context or having to navigate multiple pages.

For our navigational element, bookmarks will be key to creating the collapsing and expanding functionality. To create a bookmark, we get the report page looking just right, then add a bookmark to save the report state in the bookmark pane.

The new bookmark can now act as a restore point, bringing the user back to this specific view whenever it is selected. To keep our bookmarks organized, it is best to rename them with a descriptive name, generally including the report page and an indication of what the bookmark is used for (e.g., Page1-NavExpanded).

Buttons

Buttons take interactivity to the next level. We can use buttons to trigger various events, such as applying bookmarks, and they can also serve as navigation aids within the report. Buttons within our Power BI reports can be styled and configured to react dynamically to user interactions.

To create a button, we simply add the button object from the Insert ribbon onto the report canvas. Power BI offers a variety of button styles, such as a blank button for custom designs, or predefined icons for common actions like reset, back, or informational buttons.

Each button can be styled to match our report’s theme, including colors, text, and much more. Another key property to configure is the button action. Using this, we can define whether the button should direct our users to a different report page, switch the report context to a different bookmark, or another one of the many options available.

Tooltips

Tooltips in Power BI can provide simple text hints, but when properly utilized, they can provide additional insights or contextual data relevant to specific visuals without cluttering the canvas. This provides detail when required while keeping our reports clean and simple.

Power BI allows us to customize tooltips to show detailed information, including additional visuals. This can turn each tooltip into a tool to provide context or additional layers of data related to a report visual when a user hovers over the element.

By effectively using tooltips we transform user interaction from just viewing to an engaging, exploratory experience. This boosts the usability of our reports and ensures that users can make informed decisions based on the data view provided.


The Navigation Framework

Now that we have explored some details of the elements used to create our navigation, let’s dive into building the navigational framework. We will craft a minimalistic navigation on the left-hand side of our report, with the functionality to expand when requested by user interaction. This approach to our navigation is focused on making the navigation pane both compact and informative, ensuring that it does not overpower the content of the report.

In the Design Meets Data: A Guide to Building Interactive Power BI Report Navigation blog post the navigational element was built using Figma. Although Figma is a powerful and approachable design tool, in this guide, we will explore creating a similar navigation pane using native Power BI tools and elements. We will use Power BI’s shapes, buttons, and bookmarks to construct the framework and functionality.

The Navigation Pane Base Elements

We start creating the navigation pane by adding the base elements. In this compact and expandable design, this includes the background of the navigation pane, which will contain the page navigation and menu icons.

Collapsed Navigation Pane

The base of the navigation consists of three main components that we add to our Power BI report to start building our interactive navigational element.

The collapsed navigation pane starts with the shape of the pane itself. The color is set to theme color 1, 50% darker, of the Power BI theme. Using the theme color helps our navigation remain dynamic when changing Power BI themes.

The next base element is the menu icon, which expands and collapses our navigation pane. The button is configured to slightly darken when hovered over and darken further when pressed. Additionally, when the button is disabled, the icon color is set to the same color as the navigation pane and is used to contrast the current page indicator bar. This configuration is used for all buttons contained within the navigation pane (both the bookmark and page navigation buttons).

The last base element is the current page indicator. This is a lighter-colored (theme color 1, 60% lighter) rectangle tab that clearly indicates what page in the navigation pane is currently being viewed.

Here is the collapsed navigation pane containing the base elements.

Expanded Navigation Pane

The expanded navigation consists of the same base elements, with the addition of a close icon and a click shield that prevents the user from interacting with the report visuals when the navigation is expanded.

The additional elements of the expanded menu provide the user with multiple methods to collapse the navigation pane. The close (X) button is added as a flyout from the base navigation pane background, so it is easily identifiable.

When the navigation pane is expanded, we want to prevent users from interacting with the report visuals. To achieve this, we use a partially transparent rectangle to serve as a click shield. If the user clicks anywhere on the report page outside of the navigation pane, the navigation pane will collapse, returning the user to the collapsed report view.

Navigation Bookmarks

The last base element required for the interactive navigation is the set of bookmarks that transition between the collapsed and expanded views. This is done by creating two bookmarks to store each of the required report page views: Page1-Default-NavCollapsed and Page1-NavExpanded.

We can now build on these base elements and bring our navigation to life with Power BI buttons and interactive features.


Navigation Interactive Features

The interactive features in the navigation pane consist of two types of buttons: (1) bookmark buttons and (2) page navigation buttons.

Expanding and Collapsing the Navigation Pane

The previous section added the base elements of the navigation pane which included a menu icon on both the collapsed and expanded navigation panes, and a close button and click shield on the expanded navigation screen.

Building the interactive elements of the navigation starts by assigning actions to each of these bookmark buttons, allowing the user to expand and collapse the navigation pane seamlessly.

The action property for each of these buttons is set to a bookmark type, with the appropriate bookmark selected. For example, for the menu icon button on the collapsed menu, the bookmark selected corresponds to the expanded navigation bookmark. This way, when a user selects this button on the collapsed navigation, it expands, revealing the additional information provided on the expanded navigation pane.

Page Navigation Buttons

The last element to add to the report navigation is the report page navigation buttons.

Each report page button is a blank button configured and formatted to meet the report’s requirements. For this report, each page button contains a circular numbered icon to indicate the report page it navigates to. When the navigation is expanded, an additional text element displays the report page title.

At the end of this post, there are details on obtaining templates that implement this report navigational element. The templates are fully customizable; they come with the numbered icons and default page titles, which can simply be updated to match the aesthetic of any reporting need.


Wrapping Up: Elevating Your Power BI Reports with Interactive Navigation

As Power BI continues to evolve, integrating more engaging and interactive elements into our reports will become crucial for creating dynamic and user-centric reports. The transition from static to interactive reports empowers our users to explore data in a more meaningful and memorable way. By leveraging bookmarks, buttons, and tooltips, we can transform our reports from a simple presentation of data into engaging, intuitive, and powerful analytical tools.

For those eager to implement the navigational element outlined in this post, there are 2-, 3-, 4-, and 5-page templates available for download. Each template has all the functionality built in, requiring only updating the button icons, if necessary, to better align with your reporting needs.

The template package is available here!

You will get individual template files for a 2-, 3-, 4-, and 5-page report, provided in PBIX, PBIT, and PBIP formats (12 files in total)!


Thank you for reading! Stay curious, and until next time, happy learning.

And, remember, as Albert Einstein once said, “Anyone who has never made a mistake has never tried anything new.” So, don’t be afraid of making mistakes, practice makes perfect. Continuously experiment, explore, and challenge yourself with real-world scenarios.

If this sparked your curiosity, keep that spark alive and check back frequently. Better yet, be sure not to miss a post by subscribing! With each new post comes an opportunity to learn something new.

Dive into DAX: Decoding Data with Power BI Logical Functions


The missing piece to decoding our data and unlocking its full potential is often the strategic application of DAX Logical Functions. These functions are pivotal in dissecting complex datasets, applying business logic, and enabling a nuanced approach to data analysis that goes beyond surface-level insights. Better understanding DAX Logical Functions allows us to create more sophisticated data models that respond with agility to analytical queries, turning abstract numbers into actionable insights.

In this post we will explore this group of functions and how we can leverage them within Power BI. We will dive in and see how we can transform our data analysis from a mere task into an insightful journey, ensuring that every decision is informed, every strategy is data-driven, and every report illuminates a path to action.


The Logical Side of DAX: Unveiling Power BI’s Brain

Diving into the logical side of DAX is where everything begins to become clear. Logical functions are the brain behind Power BI’s decision-making. Just like we process information to decide between right and wrong, DAX logical functions sift through our data to determine truth values: true or false.

Functions such as IF, AND, OR, NOT, and TRUE/FALSE are the building blocks for creating dynamic reports. These functions allow us to set up conditions that our data must meet, enabling a level of interaction and decision-making that is both powerful and nuanced. Whether we are determining if sales targets were hit or filtering data based on specific criteria, logical functions are our go-to tools for making sense of the numbers.

For details on these functions and many others visit the DAX Function Reference documentation.

Learn more about: DAX Logical Functions

The logical functions in DAX can go far beyond the basics. The real power happens when we start combining these functions to reflect complex business logic. Each function plays its role, and when used in the right combination they let us implement complex logic scenarios.

For those eager to start experimenting, there is a Power BI report pre-loaded with the same data used in this post, ready for you! So don’t just read; follow along and get hands-on with DAX in Power BI. Get a copy of the sample data report here:

This dynamic repository is the perfect place to enhance your learning journey.


Understanding DAX Logical Functions: A Beginner’s Guide

When starting our journey with DAX logical functions we will begin to understand the unique role of each function within our DAX expressions. Among these functions, the IF function stands out as the decision-making cornerstone.

The IF function tests a condition, returning one result if the condition is TRUE, and another if FALSE. Here is its syntax.

IF(logical_test, value_if_true, value_if_false)

The logical_test parameter is any value or expression that can be evaluated to TRUE or FALSE. The value_if_true parameter is the value returned when logical_test is TRUE. The value_if_false parameter is optional and is the value returned when logical_test is FALSE; when it is omitted, BLANK is returned instead.

Let’s say we want to identify which sales have an amount that exceeds $5,000. To do this we can add a new calculated column to our Sales table with the following expression.

Sales Target Categorization = 
IF(
  Sales[Amount] > 5000, 
  "Above Target", 
  "Below Target"
)

This expression will evaluate each sale in our Sales table, labeling each sale as either “Above Target” or “Below Target” based on the Sales[Amount].

The beauty of starting our journey with IF lies in its simplicity and versatility. While we continue to explore logical functions, it won’t be long before we encounter TRUE/FALSE.

As we saw with the IF function, these values help guide our DAX expressions, but TRUE and FALSE are also DAX functions in their own right. These two functions are the DAX way of saying yes (TRUE) or no (FALSE), often used within other logical functions or conditions to express a clear binary choice.

These functions are as straightforward as they sound and do not require any parameters. When used with other functions or conditional expressions we typically use these to explicitly return TRUE or FALSE values.

For example, we can create another calculated column to check if a sale is a high-value sale with an amount greater than $9,000.

High Value Sale = 
IF(
  Sales[Amount] > 9000, 
  TRUE, 
  FALSE
)

This simple expression checks if the sales amount exceeds $9,000, marking each record as TRUE if so, or FALSE otherwise.

Together IF and TRUE/FALSE form the foundation of logical expressions in DAX, setting the stage for more complex decision-making analysis. Think of these functions as essential for our logical analysis, but just the beginning of what is possible.


The Gateway to DAX Logic: Exploring IF with AND, OR, and NOT

The IF function is much more than just making simple true or false distinctions; it helps us unlock the nuanced layers of our data, guiding us through the paths our analysis can take. By effectively leveraging this function we can craft detailed narratives from our datasets.

Suppose we are tasked with setting sales targets for each region. The goal is to base these targets on the percent change seen in the previous year. Depending on whether a region experienced growth or decline, the sales target for the current year is set accordingly.

Region Specific Sales Target = 
IF(
    HASONEVALUE(Regions[Region]),
    IF(
        [Percent Change(CY-1/CY-2)] < 0, 
        [Total Sales (CY-1)]*1.1, 
        [Total Sales (CY-1)]*1.2
    ),
    BLANK()
)

In this measure we make use of three other measures within our model. We calculate the total sales for the previous year (Total Sales (CY-1)), and the year before that (Total Sales (CY-2)). We then determine the percentage change between these two values.

If there is a decline (negative percent change), we set the current year’s sales target to be 10% higher than the previous year’s sales, indicating a more conservative goal. Conversely, if there was growth (positive percent change), we set the current year target 20% higher to keep the momentum going.
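
These three supporting measures are referenced but not defined in this section. A minimal sketch of how they might look, assuming a Sales[SalesDate] column and simple calendar-year logic (the model’s actual definitions may differ):

Total Sales (CY-1) = 
CALCULATE(
    SUM(Sales[Amount]),
    FILTER(ALL(Sales[SalesDate]), YEAR(Sales[SalesDate]) = YEAR(TODAY()) - 1)
)

Total Sales (CY-2) = 
CALCULATE(
    SUM(Sales[Amount]),
    FILTER(ALL(Sales[SalesDate]), YEAR(Sales[SalesDate]) = YEAR(TODAY()) - 2)
)

Percent Change(CY-1/CY-2) = 
DIVIDE([Total Sales (CY-1)] - [Total Sales (CY-2)], [Total Sales (CY-2)])

Using DIVIDE rather than the / operator returns BLANK instead of an error when the CY-2 sales are zero.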

As we dive deeper, combining IF with functions like AND, OR, and NOT we begin to see the true flexibility of these functions in DAX. These logical operators allow us to construct more intricate conditions, tailoring our analysis to very specific scenarios.

The operator functions are used to combine multiple conditions:

  • AND returns TRUE if all conditions are true
  • OR returns TRUE if any condition is true
  • NOT returns TRUE if the condition is false
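
One nuance worth noting before the example below: the AND and OR functions each accept exactly two arguments, while the equivalent DAX operators && and || can chain any number of conditions. Both forms below express the same filter condition:

-- Function form: limited to two conditions per call
AND(Sales[Amount] > 5000, Sales[RegionID] = 2)

-- Operator form: equivalent, and extends naturally beyond two conditions
Sales[Amount] > 5000 && Sales[RegionID] = 2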

Let’s craft a measure to determine which employees are eligible for a quarterly bonus. The criterion for eligibility is twofold: the employee must have made at least one sale in the current quarter, and their average sale amount during this period must exceed the overall average sale amount.

To implement this, we first need to calculate the average sales and compare each employee’s average sale against this benchmark. Additionally, we check if the employee has sales recorded in the current quarter to qualify for the bonus.

Employee Bonus Eligibility = 
VAR CurrentQuarterStart = DATE(YEAR(TODAY()), QUARTER(TODAY()) * 3 - 2, 1)
VAR CurrentQuarterEnd = EOMONTH(DATE(YEAR(TODAY()), QUARTER(TODAY()) * 3, 1), 0)
VAR OverallAverageSale = CALCULATE(AVERAGE(Sales[Amount]), ALL(Sales))
VAR EmployeeAverageSale = 
    CALCULATE(
        AVERAGE(Sales[Amount]),
        FILTER(Sales, Sales[SalesDate] >= CurrentQuarterStart && Sales[SalesDate] <= CurrentQuarterEnd)
    )
VAR HasSalesCurrentQuarter = 
    CALCULATE(
        COUNTROWS(Sales),
        FILTER(Sales, Sales[SalesDate] >= CurrentQuarterStart && Sales[SalesDate] <= CurrentQuarterEnd)
    ) > 0

RETURN
IF(
    AND(HasSalesCurrentQuarter, EmployeeAverageSale > OverallAverageSale),
    "Eligible for Bonus",
    "Not Eligible for Bonus"
)

In this measure we define the start and end dates of the current quarter, then we calculate the overall average sale across all data for comparison. We then determine each employee’s average sale amount and check if the employee has made any sales in the current quarter to qualify for evaluation.

If an employee has active sales and their average sale amount during the period is above the overall average, they are deemed “Eligible for Bonus”. Otherwise, they are “Not Eligible for Bonus”.

This example begins to explore how we can use IF in conjunction with AND to streamline business logic into actionable insights. These logical functions provide a robust framework for asking detailed questions about our data and receiving precise answers, allowing us to uncover the insights hidden within the numbers.


Beyond the Basics: Advanced Logical Functions in DAX

As we venture beyond the foundational logical functions we step into a world where DAX’s versatility shines, especially when dealing with complex data models in Power BI. More advanced logical functions such as SWITCH and COALESCE bring a level of clarity and efficiency that is hard to match with just basic IF statements.

SWITCH Function: Simplifying Complex Logic

The SWITCH function is a more powerful version of the IF function and is ideal for scenarios where we need to compare a single expression against multiple potential values and return one of multiple possible result expressions. This function helps us provide clarity by avoiding multiple nested IF statements. Here is its syntax.

SWITCH(expression, value, result[, value, result]...[, else])

The expression parameter is a DAX expression that returns a single scalar value and is evaluated multiple times depending on the context. Each value parameter is a constant value matched against the result of expression, and the corresponding result is any scalar expression to be evaluated when the match occurs. Finally, the else parameter is an expression evaluated if the result of expression does not match any of the value arguments.

Let’s explore. We have a scenario where we want to apply different discount rates to products based on their categories (Smartphone, Laptop, Tablet). We could achieve this by using the following expression for a new calculated column, which uses nested IFs.

Product Discount Rate (IF) = 
IF(
    Products[Product]="Smartphone", 0.10,
    IF(Products[Product]="Laptop", 0.15,
        IF(Products[Product]="Tablet", 0.20,
            0.05
        )
    )
)

Although this would achieve our goal, nested IF statements can make the logic of the calculated column hard to read, understand, and, most importantly, troubleshoot.

Now, let’s see how we can improve the readability and clarity by implementing SWITCH to replace the nested IF statements.

Product Discount Rate = 
SWITCH(
    Products[Product],
    "Smartphone", 0.10,
    "Laptop", 0.15,
    "Tablet", 0.20,
    0.05
)

The expression simplifies the mapping of each Product to its corresponding discount rate and provides a default rate for categories that are not explicitly listed.
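
A related pattern worth knowing: passing TRUE() as the first argument lets SWITCH evaluate arbitrary conditions in order and return the result of the first one that holds, which avoids nested IFs for range-based logic. A hypothetical tiering column using the Sales[Amount] column from earlier examples (the thresholds are illustrative only):

Sale Size Tier = 
SWITCH(
    TRUE(),
    Sales[Amount] >= 9000, "Large",   -- checked first
    Sales[Amount] >= 5000, "Medium",  -- checked only if not Large
    "Small"                           -- default when no condition matches
)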

COALESCE Function: Handling Blank or Null Values

The COALESCE function offers a straightforward way to deal with BLANK values within our data, returning the first non-blank value in a list of expressions. If all expressions evaluate to BLANK, then a BLANK value is returned. Its syntax is also straightforward.

COALESCE(expression, expression[, expression]...)

Here, expression can be any DAX expression that returns a scalar value. These expressions are evaluated in the order they are passed to the COALESCE function.

When reporting on our sales data, encountering blanks can sometimes communicate the wrong message. Using COALESCE we can address this by providing a more informative value when there are no associated sales.

Product Sales = COALESCE([Total Sales], "No Sales")

With this new measure, if our Total Sales measure returns a blank (for example, due to filters applied in the report), COALESCE ensures this is communicated with a value of “No Sales”. This approach can be beneficial for maintaining meaningful communication in our reports. It ensures that our viewers understand the lack of sales being reported, rather than interpreting a blank space as missing or erroneous data.
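
One caveat with this approach: returning the text “No Sales” makes an otherwise numeric measure return a mix of numbers and text, which can interfere with axes, sorting, and downstream calculations. Where the measure feeds charts or further math, a numeric fallback may be the safer sketch:

-- Numeric fallback keeps the measure usable in visuals and calculations
Product Sales (Numeric) = COALESCE([Total Sales], 0)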

These logical functions enrich our DAX toolkit, enabling more elegant solutions to complex problems. By efficiently managing multiple conditions and safeguarding against potential errors, SWITCH and COALESCE not only optimize our Power BI models but also enhance our ability to extract meaningful insights from our data.

With these functions, our journey into DAX’s logical capabilities becomes even more exciting, revealing the depth and breadth of analysis we can achieve. Let’s continue to unlock the potential within our data, leveraging these tools to craft insightful, dynamic reports.


Logical Comparisons and Conditions: Crafting Complex DAX Logic

Delving deeper into DAX, we encounter scenarios that demand a blend of logical comparisons and conditions. This complexity arises from weaving together multiple criteria to craft intricate logic that precisely targets our analytical goals.

We touched on logical operators briefly in a previous section; the AND, OR, and NOT functions are crucial for building complex logical structures. Let’s continue to dive deeper into these with some more hands-on and practical examples.

Multi-Condition Sales Analysis

We want to identify and count the sales transactions that meet specific criteria: sales above a threshold and within a particular region. To achieve this, we create a new measure using the AND operator to count the rows in our Sales table that meet our criteria.

High Value Sales in US (Count) = 
COUNTROWS(
    FILTER(
        Sales,
        AND(
            Sales[Amount] > 5500,
            RELATED(Regions[Region]) = "United States"
        )
    )
)

This measure filters our Sales table to sales with an amount greater than our threshold of $5,500 and a sales region of the United States.

Excluding Specific Conditions

We need to calculate total year-to-date sales while excluding sales from a particular region or below a certain amount. We can leverage the NOT function to achieve this.

Sales Excluding Asia and Low Values = 
CALCULATE(
    SUM(Sales[Amount]),
    AND(
        NOT(Sales[RegionID] = 3), // RegionID=3 (Asia)
        Sales[Amount] > 5500
    ),
    Sales[IsCurrentYear]=TRUE()
)

This measure calculates the sum of the sales amounts that are not within our Asia sales region and are above $5,500. Using NOT, we exclude sales from the Asia region, and we use the AND function to also impose the minimum sales amount threshold.

Special Incentive Qualifying Sales

Our goal is to identify sales transactions eligible for a special incentive based on multiple criteria: sales amount, region, employee involvement, and a temporal aspect of the sales data. Here is how we can achieve this.

Special Incentive Qualifying Sales = 
CALCULATE(
    SUM(Sales[Amount]),
    OR(
        Sales[Amount] > 7500,
        Sales[RegionID] = 2
    ),
    NOT(
        Sales[EmployeeID] = 4
    ),
    Sales[IsCurrentYear] = TRUE()
)

The OR function is used to include sales transactions that either exceed $7,500 or are made in Europe (RegionID = 2), and the NOT function excludes transactions made by a specific employee (EmployeeID = 4), a manager who is exempt from the incentive program. The final condition is that the sale occurred in the current year.

The new measure combines logical tests to filter our sales data, identifying the specific transactions that qualify for a special incentive under detailed conditions.

By leveraging DAX’s logical functions to construct complex conditional logic, we can precisely target specific segments of our data, uncover nuanced insights, and tailor our analysis to meet specific business needs. These examples showcase just the beginning of what is possible when we combine logical functions in creative ways, highlighting DAX’s robustness and flexibility in tackling intricate data challenges.


Wrapping Up: From Logic to Action

Our journey through the world of DAX logical functions underscores their transformative power within our data analysis. By harnessing IF, SWITCH, AND, OR, and more, we’ve seen how data can be sculpted into actionable insights, guiding strategic decisions with precision. To explore other DAX Logical Functions or get more details visit the DAX Function Reference.

Learn more about: DAX Logical Functions

Logical reasoning in data analysis is fundamental. It allows us to uncover hidden patterns and respond to business needs effectively, demonstrating that the true value of data lies in its interpretation and application. DAX logical functions are the keys to unlocking this potential, offering clarity and direction in our sea of numbers.

As we continue to delve deeper into DAX and Power BI, let the insights derived from logical functions inspire action and drive decision-making. To explore other functions groups that elevate our data analysis check out the Dive into DAX series, with each post comes the opportunity to enhance your data analytics and Power BI reports.

Explore the intricate landscape of DAX in Power BI, revealing the potential to enhance your data analytics with every post. 


Thank you for reading! Stay curious, and until next time, happy learning.

And, remember, as Albert Einstein once said, “Anyone who has never made a mistake has never tried anything new.” So, don’t be afraid of making mistakes, practice makes perfect. Continuously experiment and explore new DAX functions, and challenge yourself with real-world data scenarios.

If this sparked your curiosity, keep that spark alive and check back frequently. Better yet, be sure not to miss a post by subscribing! With each new post comes an opportunity to learn something new.