Dive into DAX: Simplify Data Models with Relationship Functions


The Basics of DAX Relationship Functions

DAX Relationship Functions are an essential part of our data modeling toolkit. These functions allow us to navigate the relationships connecting the tables in our data model, facilitating complex calculations and helping us derive the insights our data holds.

Relationships in our data models matter because they help maintain the integrity and consistency of our data. They connect different tables, enabling us to create insightful and dynamic reports. When creating our Power BI reports, understanding these relationships becomes crucial, since they dictate how data filters and aggregations are applied throughout our reports.

DAX Relationship Functions allow us to control and manipulate these relationships to suit our specific needs. Using these functions, we can perform in-depth calculations that involve multiple tables. They can be particularly useful in scenarios where we need to bring data from different sources into a single coherent view. Understanding and utilizing these functions can significantly elevate our data analysis.

For details on these functions visit the DAX Function Reference documentation.

Learn more about: DAX Relationship Functions

For those eager to start experimenting there is a Power BI report pre-loaded with the same data used in this post ready for you! So don’t just read, follow along and get hands-on with DAX in Power BI. Get a copy of the sample report here:

This dynamic repository is the perfect place to enhance your learning journey.


RELATED: Fetching Related Values

The RELATED function in DAX is designed to fetch a related value from another table. Its syntax is straightforward.

RELATED(column_name)

Here, column_name is the column from the related table that we want to retrieve. This function can be particularly useful in calculated columns when we need to access data from a lookup table in our calculations.

The RELATED function requires that a relationship exists between the two tables. Then the function can navigate the existing many-to-one relationship and fetch the specified column from the related table. In addition to an existing relationship, the RELATED function requires row context.

Our Sales table has a ProductID column, which is used to establish a relationship in our data model to the Products table. Let’s bring the Product field from the Products table into our Sales table as a new Product Category column.

Product Category = RELATED(Products[Product])

We can use this DAX formula to add a new calculated column to our Sales table showing the product category corresponding to each sales record. This can help make our sales data more informative and easier to analyze, as we can now see the product category directly in the sales table.

We can also use RELATED and the existing relationship between our Sales and Regions tables to filter our sales and create an explicit United States Sales measure. Let’s take a look at this measure.

United States Sales = 
SUMX(
    FILTER(
        Sales, 
        RELATED(Regions[Region]) = "United States"
    ), 
    Sales[Amount]
)

This formula is much more informative and clearer than filtering directly on the RegionID field contained within the Sales table. Using RELATED within our FILTER function like this makes our measure more readable, and anyone reviewing it can immediately identify what it is calculating.
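
For contrast, here is a hypothetical version of the same measure that filters on the RegionID key directly. The key value 2 is purely an assumption for illustration; the actual key for the United States depends on the data.

United States Sales (RegionID) = 
SUMX(
    FILTER(
        Sales, 
        Sales[RegionID] = 2  // 2 is an assumed surrogate key for the United States
    ), 
    Sales[Amount]
)

Reading this version, nothing tells us which region the key 2 represents without opening the Regions table, which is exactly the readability problem RELATED avoids.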

The RELATED function is a powerful tool for enhancing our data models by seamlessly integrating related information. This can help us create more detailed and comprehensive reports.


RELATEDTABLE: Navigating Related Tables

The RELATEDTABLE function in DAX allows us to navigate and retrieve a table related to the current row from another table. This function can be useful when we need to summarize or perform calculations on data from a related table based on the current context. Here is its syntax.

RELATEDTABLE(table_name)

Here, table_name is the name of an existing related table that we want to retrieve. The table_name parameter cannot be an expression.

Let’s consider a scenario where we want to calculate the total sales amount for each product using the RELATEDTABLE function. Here is how we can use it to create a new calculated column in our Products table.

Total Sales by Product = 
SUMX(
    RELATEDTABLE(Sales),
    Sales[Amount]
)

In the DAX expression, we sum the Amount column from the Sales table for each product. The RELATEDTABLE function fetches all the rows from the Sales table that are related to the current product row in the Products table, and SUMX sums the Amount column for these rows.
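
As a quick variation on the same idea, RELATEDTABLE pairs naturally with other table functions. A minimal sketch, assuming the same Products-Sales relationship, counts the related sales records for each product.

Sales Count by Product = 
COUNTROWS(RELATEDTABLE(Sales))  // number of Sales rows related to the current product row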

When we use RELATEDTABLE, we can navigate and perform calculations across related tables, enhancing our ability to analyze data in a more granular and insightful way.


USERELATIONSHIP: Activating Inactive Relationships

The USERELATIONSHIP function in DAX is designed to activate an inactive relationship between tables in a data model. This is useful when a table has multiple relationships with another table, and we need to switch between these relationships for different calculations. Here is its syntax.

USERELATIONSHIP(column1, column2)

Here, column1 is the name of an existing column and typically represents the many side of the relationship to be used, while column2 is the name of an existing column and typically represents the one side or lookup side of the relationship.

USERELATIONSHIP returns no value and can only be used in functions that take a filter as an argument (e.g. CALCULATE, TOTALYTD). The function uses existing relationships in the data model and cannot be used when row-level security is defined for the table in which the measure is included.

Let’s take a look at a scenario where we are interested in calculating the number of employees who have left the organization based on their end dates using the USERELATIONSHIP function.

The Employee table includes each employee’s StartDate and EndDate. Each of these columns is used to establish a relationship with the DateTable in the data model. The relationship with StartDate is set to active, while the relationship with EndDate is inactive.

We can use the following DAX formula to define our Employee Separations measure.

Employee Separations USERELATIONSHIP = 
CALCULATE(
    COUNT(Employee[EmployeeID]),
    USERELATIONSHIP(Employee[EndDate], DateTable[Date]),
    NOT(ISBLANK(Employee[EndDate]))
)

This measure calculates the number of employees who have left the organization based on their EndDate by activating the inactive relationship between Employee[EndDate] and DateTable[Date] and ensuring that it only counts employees who have an EndDate.

We can better understand the power of USERELATIONSHIP by comparing these results to the results of the same measure but this time without activating the inactive relationship.

Employee Separations No USERELATIONSHIP = 
CALCULATE(
    COUNT(Employee[EmployeeID]),
    NOT(ISBLANK(Employee[EndDate]))
)

In the No USERELATIONSHIP measure we try to calculate the number of employees who left the company based on EndDate. However, without activating the inactive relationship, the active StartDate relationship is used in the context of the calculation.

Of the 9 employees that have left the organization, we can see that for 2022 the No USERELATIONSHIP measure is counting the 8 employees that started in 2022 rather than the 3 that left in 2022.


CROSSFILTER: Controlling Cross-Filtering Behavior

The CROSSFILTER function in DAX helps us manage the direction of cross-filtering between two tables in our data model. With this function we specify whether the filtering direction is one-way, both ways, or none, providing control over how data flows between our tables. This becomes useful in complex models where bidirectional filtering can lead to unintended results. Here is its syntax.

CROSSFILTER(column1, column2, direction)

The parameters column1 and column2 are similar to the parameters of USERELATIONSHIP, where column1 is the name of an existing column and typically represents the many side of the relationship and column2 is a column name and typically represents the one side of the relationship.

The direction parameter specifies the cross-filter direction to be used and must be one of the following values.

  • None – no cross-filtering occurs along this relationship
  • Both – filters on either side filters the other side
  • OneWay – filters on the one side of a relationship filter the other side. This option cannot be used with a one-to-one relationship and is not recommended for many-to-many relationships.
  • OneWay_LeftFiltersRight – filters on the side of column1 filter the side of column2. This option cannot be used with a one-to-one or many-to-one relationship.
  • OneWay_RightFiltersLeft – filters on the side of column2 filter the side of column1. This option cannot be used with a one-to-one or many-to-one relationship.

The CROSSFILTER function returns no value and can only be used within functions that take a filter as an argument (e.g. CALCULATE, TOTALYTD). When we establish relationships in our data model we define the cross-filtering direction; when we use the CROSSFILTER function, it overrides this setting.

Let’s consider the scenario where we want to analyze the distinct products sold and the total sales amount by month and year. We start by creating a Distinct Product Code Count measure.

Distinct Product Code Count = 
DISTINCTCOUNT(Products[Product Code])

If we add this measure to a table visual, we will notice an issue: the count returns the total product code count, not the intended count of distinct products sold that month.

We see this because the relationship is one-to-many (Products-to-Sales) with a single filter direction (i.e. Products filters Sales). This default setup does not allow our Sales table to filter our Products table, leading to the unintended results.

Now, we could correct this by changing the cross-filtering direction property on the Product-Sales relationship. However, this would change how filters work for all data between these two tables, which may not be a desired or an acceptable outcome.

Another solution is to utilize the power of the CROSSFILTER function. We can use this function to change how the Products-Sales relationship works within a new measure.

Distinct Product Code Count Bidirectional = 
CALCULATE(
    [Distinct Product Code Count],
    CROSSFILTER(Sales[ProductID], Products[ProductID], Both)
)

We can add this new measure to our table and see we get the expected results. This measure gathers all the sales records in the current context (e.g. Jan 2022), then filters the Product table to only related products, and finally returns a distinct count of the products sold.

This measure and the Sales Amount can now be used to analyze our sales data with details on the number of different products sold each month.
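
The other direction options can be just as useful. As a sketch (not a measure from the sample report), setting the direction to None temporarily disables filter propagation along the relationship, which can return an all-products total even while a specific product is selected.

Sales Amount All Products = 
CALCULATE(
    SUM(Sales[Amount]),
    CROSSFILTER(Sales[ProductID], Products[ProductID], None)  // ignore filters arriving via this relationship
)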

By using CROSSFILTER, we maintain control over our data relationships, ensuring our reports reflect the precise insights we need without unintended data flows. This level of control is crucial for building robust and reliable Power BI models.


Wrapping Up

DAX relationship functions are powerful tools that significantly enhance our ability to manage and analyze data in Power BI. We have explored how these essential functions empower us to connect and manipulate data and relationships within our data model. By understanding and knowing when to leverage these functions we can create dynamic, accurate, and insightful reports. Here is a quick recap of the functions.

  • RELATED simplifies data retrieval by pulling in values from a related table, making our data more informative and easier to analyze.
  • RELATEDTABLE enables us to navigate and summarize related tables, providing deeper insights into our data.
  • USERELATIONSHIP gives us the flexibility to activate inactive relationships, allowing us to create more complex and context-specific calculations.
  • CROSSFILTER allows us to control the direction of cross-filtering between tables, ensuring our data flows precisely as needed.

To further explore and learn the details of these functions visit the DAX Relationship Function reference documentation.

Learn more about: DAX Relationship Functions

By adding these functions to our DAX toolkit, we enhance our ability to create flexible and robust data models that ensure our reports are visually appealing, deeply informative, and reliable.

To explore other function groups that elevate our data analysis, check out the Dive into DAX series; with each post comes the opportunity to enhance your data analytics and Power BI reports.

Explore the intricate landscape of DAX in Power BI, revealing the potential to enhance your data analytics with every post. 


Thank you for reading! Stay curious, and until next time, happy learning.

And, remember, as Albert Einstein once said, “Anyone who has never made a mistake has never tried anything new.” So, don’t be afraid of making mistakes, practice makes perfect. Continuously experiment and explore new DAX functions, and challenge yourself with real-world data scenarios.

If this sparked your curiosity, keep that spark alive and check back frequently. Better yet, be sure not to miss a post by subscribing! With each new post comes an opportunity to learn something new.

FP20 Analytics ZoomCharts Challenge 15: My Journey to the Top 5


Over the last two years I have ended each of my posts with two main messages: (1) stay curious and happy learning, and (2) continuously experiment, explore, and challenge yourself. However, at times it can be hard to identify open-ended opportunities to fulfill these.

One opportunity is participating in various data challenges. I recently participated in and was a top 5 finalist in the FP20 Analytics ZoomCharts Challenge 15. This data challenge was an HR data analysis project with a provided dataset to explore and a chance to expand your report development skills.

What I enjoyed about the challenge is that, along with the dataset, it provided a series of questions to guide the analysis and give the report direction and focus.

Here is the resulting report submitted to the challenge and discussed in this post. View, interact, and get the PBIX file at the link below.

The report was built with the HR Analysis dataset and includes ZoomCharts custom Drill Down PRO visuals for Power BI. 


About the Challenge

The FP20 Analytics challenge was in collaboration with ZoomCharts and provided an opportunity to explore custom Power BI ZoomCharts drill down visuals.

The requirements included developing a Power BI report with a default canvas size, a maximum of 2 pages, and at least two ZoomCharts Drill Down visuals.

The goal of the challenge was to identify trends within the dataset and develop a report that provides viewers the answers to the following questions.

  1. How diverse is the workforce in terms of gender, ethnicity, and age?
  2. Is there a correlation between pay levels, departments, and job titles?
  3. How about the geographic distribution of the workforce?
  4. What is the employee retention rate trend yearly?
  5. What is the employee retention rate in terms of gender, ethnicity, and age?
  6. Which business unit had the highest and lowest employee retention rate?
  7. Which business unit and department paid the most and least bonuses annually?
  8. What is the annual historical bonus trend? Can we show new hires some statistics?
  9. How about the pay equity based on gender, ethnicity, and age?
  10. What is the employee turnover rate (e.g., monthly, quarterly, annually) since 2017?

There are countless ways to develop a Power BI report to address these requirements. You can see all the FP20 Analytics ZoomCharts Challenge 15 report submissions here.

This post will provide an overview and some insight into my approach and the resulting report.


Understanding the Data

With any analysis project, before diving into creating the report, I started by exploring the underlying data. The challenge provided a single-table dataset, so I loaded the data into Power BI and used the Power Query editor’s column distribution, column profile, and column quality tools to build an understanding of the data.

Using these tools, I was able to identify missing values, data errors, and data types, and get a better sense of the distribution of the data. This initial familiarity informs the analysis, helps identify what data could answer the requirement questions, surfaces data gaps, and helps frame the right questions to create an effective report.

The dataset contained 16 columns and provided the following data on each employee.

  • Employee/Organizational characteristics
    • Employee ID, full name, job title, department, business unit, hire date, exit date
  • Demographic information
    • Age, gender, ethnicity
  • Salary information
    • Annual salary and bonus percentage
  • Location information
    • City, country, latitude, longitude

The dataset was already clean. No columns contained any errors that had to be addressed, and the only column that had empty/null values was exit date, which is expected. One thing I noted at this stage is that the Employee ID column did not provide a unique identifier for an employee.
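
As a quick sanity check, a throwaway DAX measure can confirm this by comparing the row count to the distinct ID count. The table name 'HR Data' is a placeholder assumption; the actual table name in the challenge dataset may differ.

Duplicate Employee ID Rows = 
COUNTROWS('HR Data') - DISTINCTCOUNT('HR Data'[Employee ID])
// 0 means Employee ID is unique; anything greater means duplicates exist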

Additionally, I used a temporary page in the Power BI report containing basic charts to visualize distributions and experiment with different types of visuals to see which ones best represent the data and help reach the report objectives. Another driver of this approach was to start experimenting with and understanding the different ZoomCharts visuals and their customizations.


Identifying Key Data Attributes

Once I had a basic understanding of the dataset, it was tempting to jump right into data manipulation and visualization. However, I find it helpful at this stage to pause and review the questions the report should answer.

During this review, my new understanding of the data allowed me to further refine these goals, which guided the selection of relevant data within the dataset.

I then broke the questions down into 3 main areas of focus and began to think about what data attributes within the dataset could be used and, possibly more importantly, about what was missing and how I could enrich the dataset to create a more effective report.

Workforce Diversity (Question #1 and #3)

To analyze the workforce diversity, the dataset provided a set of demographic information fields that aligned with these questions.

Salary & Bonus Structure (Questions #2, #7, #8, #9)

The next set of questions I focused on revolved around the salary and bonus structure of the organization. I identified I could use the demographic fields along with the salary information to provide insights.

Employee Retention & Turnover (Questions #4, #5, #6, and #10)

The dataset did not directly include the retention and turnover rates, so calculating these values required enriching the dataset. To do this I used the hire date and exit date. Once calculated, I was able to add organizational context to the analysis using the business unit, department, and job title attributes.


Dataset Enrichment

After identifying the key data attributes that can be used to answer the objectives of the report, it becomes clear that there are opportunities to enrich the dataset, both to make the visualizations more effective (e.g. age bins) and to address data gaps that require calculations (e.g. employee retention and turnover).

For this report I achieved this through the use of both calculated columns and measures.

Creating Calculated Columns

Calculated columns are a great tool to add new data based on existing information in the dataset. For this report I created 7 calculated columns, which were required because I wanted to use the calculated results on the axes of report visuals or as filter conditions in DAX queries. A few of these columns are sketched in DAX after the list below.

  • Age Bin: categorized the employee’s age based on their age decade (20s, 30s, 40s, 50s, or 60s). Here I used a calculated column rather than the built-in data group option to provide more flexibility and control over the bins.
  • Tenure (Years): while exploring salary across departments and demographic categories, I also wanted to include context for how long the employee has been with the organization as this might influence their salary and/or bonus.
  • Total Compensation: the dataset provided annual salary and bonus percent. The bonus percent was helpful when examining this attribute specifically; however, when analyzing the organization’s pay structure across groups, I found the overall pay (salary + bonus) to be more insightful and to provide the entire picture of the employee’s compensation.
  • Employee Status: the dataset included current and past employees. To ease analysis and provide the users the ability to filter on the employee’s status I included a calculated column to label the employee as active or inactive.
  • Abbreviation: the report required providing insight broken down by business unit, country, and department all of which could have long names and clutter the report. For each of these columns I included a calculated column providing a 3-letter abbreviation to be used in the report visuals.
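
To make these concrete, here are minimal DAX sketches of three of the columns above. The table and column names ('HR Data', Age, Exit Date, Annual Salary, Bonus %) are assumptions based on the dataset description, and the exact expressions in the submitted report may differ.

Age Bin = 
SWITCH(
    TRUE(),
    'HR Data'[Age] < 30, "20s",  // assumes ages fall between 20 and 69
    'HR Data'[Age] < 40, "30s",
    'HR Data'[Age] < 50, "40s",
    'HR Data'[Age] < 60, "50s",
    "60s"
)

Employee Status = 
IF(ISBLANK('HR Data'[Exit Date]), "Active", "Inactive")

Total Compensation = 
// assumes Bonus % is stored as a fraction (e.g. 0.05 for 5%)
'HR Data'[Annual Salary] * (1 + 'HR Data'[Bonus %])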

Defining Measures

In addition to calculated columns, the report included various calculated measures. These dynamic calculations are versatile and aid the interactive nature of the Power BI report.

For this report I categorized my measures into the following main categories.

  • Explicit Summaries: these measures are not strictly required. However, I prefer the use of explicit aggregation measures over implicit auto-aggregation measures on the visuals due to the increased flexibility and reusability.
    • Average Total Compensation
    • Average Bonus ($)
    • Average Bonus (%)
    • Highest Average Bonus % (Dept) Summary
    • Lowest Average Bonus % (Dept) Summary
    • Maximum Bonus %
    • Minimum Bonus %
    • Average Annual Salary
    • Maximum Annual Salary
    • Median Annual Salary
    • Minimum Annual Salary
    • Active Employee Count
      • Total count
      • Each ethnicity count
      • Male/Female count
    • Inactive Employee Count
  • Report Labels: these measures were used to add context and information for the user when interacting with drill down visuals. On the drill down visuals, when a user selects a data category or data point, the visual drills down and shows the next level of the data hierarchy. What is lost is which top-level category was selected, so these labels are used to provide that information.
    • Selected Age Bin
    • Selected Business Unit Retention
    • Selected Business Unit Turnover
    • Selected Dept Retention
    • Selected Dept Turnover
  • Retention & Turnover: 4 of the report objectives revolved around employee retention and turnover rates. The dataset only provided employee hire and exit dates, which were used to calculate these values (see the sketch after this list).
    • Cumulative Total Employee Count (used in retention rate)
    • Employee Separations (used in retention rate)
    • Employee Retention
    • Brazil Retention Rate
    • China Retention Rate
    • United States Retention Rate
    • Employee Turnover Rate
    • Brazil Turnover Rate
    • China Turnover Rate
    • United States Turnover Rate
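
To make the retention and turnover calculations concrete, here is a minimal sketch of the turnover building blocks, reusing the USERELATIONSHIP pattern from the Dive into DAX post above. The 'HR Data' table name, the inactive Exit Date relationship to a DateTable, and the simplified headcount logic are all assumptions; the measures in the submitted report may be more involved.

Employee Separations = 
CALCULATE(
    COUNTROWS('HR Data'),
    USERELATIONSHIP('HR Data'[Exit Date], DateTable[Date]),  // assumes an inactive Exit Date relationship
    NOT(ISBLANK('HR Data'[Exit Date]))
)

Cumulative Total Employee Count = 
// simplified: everyone hired on or before the end of the period in context
VAR PeriodEnd = MAX(DateTable[Date])
RETURN
    CALCULATE(
        COUNTROWS('HR Data'),
        REMOVEFILTERS(DateTable),
        'HR Data'[Hire Date] <= PeriodEnd
    )

Employee Turnover Rate = 
DIVIDE([Employee Separations], [Cumulative Total Employee Count])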

Report Development

After understanding the dataset, identifying key data attributes, and enriching the dataset I moved onto the report development.

Report Framework

From the requirements I knew the report would be 2 pages. The first focused on Workforce Diversity and Salary & Bonus Structure. The second focused on Employee Retention & Turnover.

I started the report with a two-page template that included all the functionality of an expandable navigational element. For details on how I created this, and where to find downloadable templates see my post below.

This navigation is a compact vertical navigation that can be expanded to provide the user with page titles, and it was highlighted as a strong point of the report in the Learn from the Best HR Reports: ZoomCharts TOP Picks webinar.

User-Centric Design: Next level reporting with interactive navigation for an enhanced user experience

Then I selected the icons used in the navigation and updated the report page titles on each page and within the expanded navigation.

Once the template was updated for the specifics of this report, I applied a custom theme to get the aesthetic just right. For more on creating custom themes and where to find downloadable themes, including the one used in this report (twilight-moorland-plum), see the following post.

Dive into the details of blending design and data with custom Power BI themes.

After updating the navigation template and implementing the report theme, I was set with a solid foundation to begin adding report visuals.

Demographic & Compensation Analysis

The first page of the report focused on two main objectives, the demographic distribution of the workforce and an in-depth analysis of the organizational compensation structure.

Demographic Distribution

The first objective was to provide the user insights into the diversity of the workforce in terms of gender, ethnicity, and age. This was a perfect fit for the Drill Down Combo PRO (Filter) by ZoomCharts visual. The visual displays the percentage of the workforce broken down by gender and displayed by employee age. Each age bin can then be drilled into to reveal additional insights into that age bin’s ethnicity makeup.

In addition to the core visual, I included a card visual displaying the Selected Age Bin measure to provide context when viewing an age bin’s ethnicity makeup.

Geographic Distribution

The other component of this analysis was objective #3, focused on the geographic distribution of the workforce. In my submitted report this comprised two elements: the first and primary element is the Drill Down Map PRO (Filter) by ZoomCharts visual; the second is a Drill Down Combo Bar PRO (Filter) by ZoomCharts visual.

The Map visual shows the ethnicity of the workforce as a percentage of the total workforce for each geographic location.

This visual had a noted limitation: the initial view of the map did not show all the available data. The inclusion of the country breakdown provided an effective means to filter to a specific country; however, it crowded the visual. Additionally, the ethnicity groups Asian and Black used the same colors used throughout the report for Male and Female, which could be a source of confusion. See the Feedback and Improvements section for the updates that visualize this data more effectively.

Organization Compensation Structure – Compensation Equity

The first component of the compensation structure analysis was to examine the median total compensation (salary + bonus) by departments, business units and job title. The second was to provide insights into compensation equity among the demographic groups of age, ethnicity, and gender.

I used the Drill Down Combo PRO (Filter) visual to analyze the median total compensation for each organizational department broken down by gender. Each department can be drilled into to extract insights about its business units and further into each job title. I also included the average tenure in years of the employees within each category to better understand the compensation structure of the organization.

This report section contained another Drill Down Combo PRO (Filter) visual to provide insights on the median total compensation by ethnicity and gender. These two visuals when used in tandem and leveraging cross-filtering can provide a nearly complete picture of the compensation structure between departments and equity across demographic groups.

When the two Median Total Compensation visuals are used along with the demographic and geographic distributions visuals a full understanding and in-depth insights can be extracted. The user can interact and cross-filter all of these visuals to tailor the insights to meet their specific needs.

Organization Compensation Structure – Departmental & Historic Bonus Analysis

The second component of the compensation structure analysis was to provide an analysis of departmental bonuses and historical bonus trends.

To provide detailed insights into the bonus trends I utilized a set of box and whisker plots to display granular details and card visuals to provide high-level aggregations. I will note that box and whisker plots may not be suitable in every scenario. However, for an audience that is familiar and comfortable interpreting these plots they are a great tool and were well suited for this analysis.

Workforce Retention & Turnover

The second page of the report focused on the analysis of employee retention and turnover. For this report the retention rate was calculated as the percentage of employees that remained with the organization during a specific evaluation period (e.g. annually) and the turnover rate is the rate at which employees left the organization expressed as a percentage of the total number of employees.

For this analysis, I thought it was key to provide the user a quick and easy way to flip between these metrics depending on their specific requirement. I did this by implementing a button experience at the top of the report, so the user can easily find and reference which metric they are viewing.

Another key aspect to enhance the clarity of the report is the visuals remain the same regardless of the metric being viewed. This eases the transition between the different views of the data.

Across the top of the report page is a set of Drill Down Combo Bar PRO (Filter) visuals to analyze the selected metric by department and business unit on the left and age, gender, and ethnicity in the right-side grouping.

Each of these visuals also use the threshold visual property to display the average across all categories. This provides a clear visual indicator of how a specific category is performing compared to the overall average (e.g. retention for the R&D Business Unit is slightly worse (87%) than the organizational average of 92%).

All of these visuals can be used to cross-filter each other to get highly detailed and granular insights when required.

In addition to examining the retention and turnover rate among organizational and demographic groups there was an objective to provide insight to the temporal trends of these metrics. The Drill Down Timeline PRO (Filter) visual was perfect for this.

This visual provides a long-term view of the retention and turnover rate trend while providing the user an intuitive and interactive way to zoom into specific time periods of interest.

Additional Features

Outside of the main objectives of the report outlined by the 10 specified questions, there were additional insights, features, and functionalities built into the report to enhance usability and the user experience.

On the Demographic & Compensation Analysis page this included a summary statistics button to display a high-level overlay of demographic and salary summaries.

On both pages of the report there was also a clear slicer button to clear all selected slicer values. However, this button did not reset the report to an initial state or reset data filters created by user interactions with the visuals. See the Feedback and Improvements section for details and the implemented fix.

Lastly, each page had a guided tutorial experience to inform new users about the report page and all the different features and functionalities the report and the visuals offered.

There are various other nuanced and detailed features of this report, too many to cover all of them here. But please check out and interact with the report here:

The report was built with the HR Analysis dataset and includes ZoomCharts custom Drill Down PRO visuals for Power BI. 


Feedback and Improvements

My submitted report was discussed in the Learn from the Best HR Reports: ZoomCharts TOP Pick webinar which provided excellent feedback on areas to improve the report.

You can view the webinar discussion below.

The first improvement was to address the sizing of the map. Initially, when the report loads, the map visual is too small to view all of the data it provides.

To address and correct this, the Employee Count by country visual was removed. This visual provided helpful summary information and an effective way to filter the map by country; however, the benefits of displaying all the data on the map outweighed the benefits of this visual.

Also mentioned in the discussion of the map was the limitation in colors. Initially the ethnicity groups Black and Asian used the same colors used to visualize gender, a potential source of confusion.

To address this, I extended the color palette to include two additional colors to better distinguish these groups. These groupings are also visible on the Workforce Retention & Turnover page. These visuals were also updated to ensure consistency across the report.

The next area of feedback was around the clear slicer button. As shown in the webinar, it only clears the selected slicer values, relying on Power BI’s built-in Clear all slicers action.

The functionality of the clear filter button on both pages was updated to reset the report context to a base state with no filters applied to the page. The size of the button was also increased to make it easier to identify for the user.

Another point of feedback was regarding the navigation icon tooltips. I did not adjust the report to address this. As shown in the webinar, hovering over the active page icon does not display a tooltip. I left the report this way because the navigational element indicates the current page and each report page has a title providing this information to the user.

However, on each page, if you hover over an icon for a different page, a tooltip should display, which addresses the main objective of this feedback. This functionality was correct on the Demographic & Compensation Analysis page but required correcting on the Workforce Retention & Turnover page to be consistent.

Lastly, there was feedback on the use of the box and whisker plot within the report. I agree the use of this visual depends heavily on the end user’s comfort with interpreting it and is not suitable in all cases. However, for this report I think these plots provide a helpful visualization of the bonus data, so they remained in the report.


Wrapping Up

Getting started with these types of data challenges can be intimidating. With Power BI and report development there is always more to learn and areas to improve, so there is no fixed skill level required to begin. The best way forward is to just start and show yourself patience as you learn and grow your skills.

For me, the main takeaways from participating in this challenge, and why I plan to participate in more moving forward, are:

  1. When learning a new skill, repetition and continued use are essential, and with report development the more reports you create, the better and more experienced you become. This challenge provided an excellent opportunity to use a unique dataset to create a report from scratch.
  2. Others are using the same data and creating their own reports to share. Viewing these reports to see how others solved the same data challenge can be extremely helpful in growing your skills and expanding how you approach the problem.
  3. Participating in the ZoomCharts challenge provided tailored feedback on the report submission, offering helpful insight into how others viewed my report and highlighting areas for improvement.
  4. Access to custom visuals. Through this challenge I was able to learn and work with ZoomCharts custom visuals. I really enjoyed learning these and adding that experience to my skillset. Find out more about these extremely useful visuals here.

Thank you for reading! Stay curious, and until next time, happy learning.

And, remember, as Albert Einstein once said, “Anyone who has never made a mistake has never tried anything new.” So, don’t be afraid of making mistakes, practice makes perfect. Continuously experiment, explore, and challenge yourself with real-world scenarios.

If this sparked your curiosity, keep that spark alive and check back frequently. Better yet, be sure not to miss a post by subscribing! With each new post comes an opportunity to learn something new.

Design Meets Data: Crafting Interactive Navigations in Power BI


An essential aspect of any multi-page report is an effective and intuitive way to navigate between the various report pages. A well-designed navigational element in our Power BI reports enhances our users’ experience and guides them to the data and visualizations they require.

This post outlines and explores how Power BI’s native tools and functionalities can be used to create a navigational experience similar to the one created in the previous post using Figma and Power BI: Design Meets Data: A Guide to Building Interactive Power BI Report Navigation.

Looking for another approach to building report navigation that uses built-in Power BI tools? Visit Part 3 of this Power BI Navigation series.

Streamlined report navigation with built-in tools to achieve functional, maintainable, and engaging navigation in Power BI reports.

For those interested in implementing the navigation element presented in this post, there are 2-, 3-, 4-, and 5-page templates available for download, with more details at the end of the post.

Revisiting Interactive Navigation in Power BI

Welcome back to another Design Meets Data exploration focused on interactive report navigation in Power BI. In the first part, we dove into using Figma to design and develop a user-friendly report interface.

Now, it is time to shift our focus towards leveraging Power BI’s native arsenal of tools, primarily bookmarks, buttons, and tool tips, to achieve similar, if not enhanced, functionalities.

Why go native? Utilizing Power BI’s built-in tools streamlines support and maintenance and provides a reduction in external complexities and dependencies. Plus, staying within a single platform makes it easier to manage and update our reports.

This post will highlight the nuances of Power BI’s navigation capabilities. It will demonstrate how to replicate the interactive navigation from Design Meets Data: A Guide to Building Interactive Power BI Report Navigation using tools available directly within Power BI. These tools will help simplify our report while maintaining an engaging and interactive navigational element.

Let’s get started!


Setting the Stage with Power BI Navigation

Before diving into the details, let’s step back with a quick refresher on the Power BI tools that we can leverage for crafting our report navigation. Power BI is designed to support complex reporting requirements with ease, thanks to features like bookmarks, buttons, and tooltips that can be intricately configured to guide our users through our data seamlessly.

Bookmarks

Bookmarks in Power BI save various states of a report page, allowing users to switch views or data contexts with a single click. We can use bookmarks to allow our users to toggle between different data filters or visual representations without losing context or having to navigate multiple pages.

For our navigational element, bookmarks will be key to creating the collapsing and expanding functionality. To create a bookmark, we get the report page looking just right, then add a bookmark to save the report state in the bookmark pane.

The new bookmark can now act as a restore point, bringing the user back to this specific view whenever it is selected. To keep our bookmarks organized it is best to rename them with a descriptive name, generally including the report page and an indication of what the bookmark is used for (e.g. Page1-NavExpanded).

Buttons

Buttons take interactivity to the next level. We can use buttons to trigger various events, such as applying bookmarks, and they can also serve as navigation aids within the report. Buttons within our Power BI reports can be styled and configured to react dynamically to user interactions.

To create a button, we simply add the button object from the Insert ribbon onto the report canvas. Power BI offers a variety of button styles, such as a blank button for custom designs, or predefined icons for common actions like reset, back, or informational buttons.

Each button can be styled to match our report’s theme, including colors, text, and much more. Another key property to configure is the button action. Using this, we can define whether the button should direct our users to a different report page, switch the report context to a different bookmark, or another one of the many options available.

Tooltips

Tooltips in Power BI can provide simple text hints, but when properly utilized, they can provide additional insights or contextual data relevant to specific visuals without cluttering the canvas. This provides detail when required while keeping our reports clean and simple.

Power BI allows us to customize tooltips to show detailed information, including additional visuals. This can turn each tooltip into a tool to provide context or additional layers of data related to a report visual when a user hovers over the element.

By effectively using tooltips we transform user interaction from just viewing to an engaging, exploratory experience. This boosts the usability of our reports and ensures that users can make informed decisions based on the data view provided.


The Navigation Framework

Now that we have explored some details of the elements used to create our navigation, let’s dive into building the navigational framework. We will craft a minimalistic navigation on the left-hand side of our report, with the functionality to expand when requested by user interaction. This approach to our navigation is focused on making the navigation pane both compact and informative, ensuring that it does not overpower the content of the report.

In the Design Meets Data: A Guide to Building Interactive Power BI Report Navigation blog post the navigational element was built using Figma. Although Figma is a powerful and approachable design tool, in this guide, we will explore creating a similar navigation pane using native Power BI tools and elements. We will use Power BI’s shapes, buttons, and bookmarks to construct the framework and functionality.

The Navigation Pane Base Elements

We will start by creating the navigation pane by adding the base elements. In this compact and expandable design, this includes the background of the navigation pane, which will contain the page navigation and menu icons.

Collapsed Navigation Pane

The base of the navigation consists of three main components that we add to our Power BI report to start building our interactive navigational element.

The collapsed navigation pane starts by adding the shape of the pane itself. The color is set to theme color 1, 50% darker, from the Power BI theme. Using a theme color helps our navigation remain dynamic when changing Power BI themes.

The next base element is the menu icon, which expands and collapses our navigation pane. The button is configured to slightly darken when hovered over and darken further when pressed. Additionally, when the button is disabled, the icon color is set to the same color as the navigation pane and is used to contrast the current page indicator bar. This configuration is used for all buttons contained within the navigation pane (both the bookmark and page navigation buttons).

The last base element is the current page indicator. This is a lighter-colored (theme color 1, 60% lighter) rectangle tab that clearly indicates what page in the navigation pane is currently being viewed.

Here is the collapsed navigation pane containing the base elements.

Expanded Navigation Pane

The expanded navigation consists of the same base elements, with the addition of a close icon, and a click shield to prevent the user from interacting with the report visuals when the navigation is expanded.

The additional elements of the expanded menu provide the user with multiple methods to collapse the navigation pane. The close (X) button is added as a flyout from the base navigation pane background, so it is easily identifiable.

When the navigation pane is expanded, we want to prevent users from interacting with the report visuals. To achieve this, we use a partially transparent rectangle to serve as a click shield. If the user clicks anywhere on the report page outside of the navigation pane, the navigation pane will collapse, returning the user to the collapsed report view.

Navigation Bookmarks

The last base element required for the interactive navigation is the set of bookmarks used to transition between the collapsed and expanded views. Two bookmarks store the required report page views, Page1-Default-NavCollapsed and Page1-NavExpanded.

We can now build on these base elements and bring our navigation to life with Power BI buttons and interactive features.


Navigation Interactive Features

The interactive features in the navigation pane consist of two types of buttons: (1) bookmark buttons and (2) page navigation buttons.

Expanding and Collapsing the Navigation Pane

The previous section added the base elements of the navigation pane which included a menu icon on both the collapsed and expanded navigation panes, and a close button and click shield on the expanded navigation screen.

Building the interactive elements of the navigation starts by assigning actions to each of these bookmark buttons, allowing the user to expand and collapse the navigation pane seamlessly.

The action property for each of these buttons is set to a bookmark type, with the appropriate bookmark selected. For example, for the menu icon button on the collapsed menu, the bookmark selected corresponds to the expanded navigation bookmark. This way, when a user selects this button on the collapsed navigation, it expands, revealing the additional information provided on the expanded navigation pane.

Page Navigation Buttons

The last element to add to the report navigation is the report page navigation buttons.

Each report page button is a blank button configured and formatted to meet the report’s requirements. For this report, each page button contains a circular numbered icon to indicate the report page it navigates to. When the navigation is expanded, an additional text element displays the report page title.

At the end of this post, there are details on obtaining templates that implement this report navigational element. The templates are fully customizable, so they will come with the numbered icons and default page titles, but these can simply be updated to match the aesthetic of any reporting needs.


Wrapping Up: Elevating Your Power BI Reports with Interactive Navigation

As Power BI continues to evolve, integrating more engaging and interactive elements into our reports will become crucial for creating dynamic and user-centric reports. The transition from static to interactive reports empowers our users to explore data in a more meaningful and memorable way. By leveraging bookmarks, buttons, and tooltips, we can transform our reports from a simple presentation of data into engaging, intuitive, and powerful analytical tools.

For those eager to implement the navigational element outlined in this post, there are 2-, 3-, 4-, and 5-page templates available for download. Each template has all the functionality built in, requiring only updating the button icons, if necessary, to better align with your reporting needs.

The template package is available here!

You will get individual template files for a 2-, 3-, 4-, and 5-page report provided in the PBIX, PBIT, and PBIP (12 total files) formats! 


Thank you for reading! Stay curious, and until next time, happy learning.

And, remember, as Albert Einstein once said, “Anyone who has never made a mistake has never tried anything new.” So, don’t be afraid of making mistakes, practice makes perfect. Continuously experiment, explore, and challenge yourself with real-world scenarios.

If this sparked your curiosity, keep that spark alive and check back frequently. Better yet, be sure not to miss a post by subscribing! With each new post comes an opportunity to learn something new.

Dive into DAX: Decoding Data with Power BI Logical Functions


The missing piece to decoding our data and unlocking its full potential is often the strategic application of DAX Logical Functions. These functions are pivotal in dissecting complex datasets, applying business logic, and enabling a nuanced approach to data analysis that goes beyond surface-level insights. Better understanding DAX Logical Functions allows us to create more sophisticated data models that respond with agility to analytical queries, turning abstract numbers into actionable insights.

In this post we will explore this group of functions and how we can leverage them within Power BI. We will dive in and see how we can transform our data analysis from a mere task into an insightful journey, ensuring that every decision is informed, every strategy is data-driven, and every report illuminates a path to action.


The Logical Side of DAX: Unveiling Power BI’s Brain

Diving into the logical side of DAX is where everything begins to become clear. Logical functions are the brain behind Power BI’s ability to make decisions. Just like we process information to decide between right and wrong, DAX logical functions sift through our data to determine truth values: true or false.

Functions such as IF, AND, OR, NOT, and TRUE/FALSE are the building blocks for creating dynamic reports. These functions allow us to set up conditions that our data must meet, enabling a level of interaction and decision-making that is both powerful and nuanced. Whether we are determining if sales targets were hit or filtering data based on specific criteria, logical functions are our go-to tools for making sense of the numbers.

For details on these functions and many others visit the DAX Function Reference documentation.

Learn more about: DAX Logical Functions

The logical functions in DAX go far beyond the basics. The real power emerges when we start combining these functions to reflect complex business logic. Each function plays its role, and when used in the right combination we can implement complex logic scenarios.

For those eager to start experimenting there is a Power BI report pre-loaded with the same data used in this post ready for you! So don’t just read, follow along and get hands-on with DAX in Power BI. Get a copy of the sample data report here:

This dynamic repository is the perfect place to enhance your learning journey.


Understanding DAX Logical Functions: A Beginner’s Guide

When starting our journey with DAX logical functions we will begin to understand the unique role of each function within our DAX expressions. Among these functions, the IF function stands out as the decision-making cornerstone.

The IF function tests a condition, returning one result if the condition is TRUE, and another if FALSE. Here is its syntax.

IF(logical_test, value_if_true, value_if_false)

The logical_test parameter is any value or expression that can be evaluated to TRUE or FALSE, value_if_true is the value returned if logical_test is TRUE, and the optional value_if_false is the value returned when logical_test is FALSE. When value_if_false is omitted, BLANK is returned for a FALSE result.

Let’s say we want to identify which sales have an amount that exceeds $5,000. To do this we can add a new calculated column to our Sales table with the following expression.

Sales Target Categorization = 
IF(
  Sales[Amount] > 5000, 
  "Above Target", 
  "Below Target"
)

This expression will evaluate each sale in our Sales table, labeling each sale as either “Above Target” or “Below Target” based on the Sales[Amount].

The beauty of starting our journey with IF lies in its simplicity and versatility. While we continue to explore logical functions, it won’t be long before we encounter TRUE/FALSE.

As we saw with the IF function, these values help guide our DAX expressions; they are also DAX functions in their own right. These two functions are the DAX way of saying yes (TRUE) or no (FALSE), often used within other logical functions or conditions to express a clear binary choice.

These functions are as straightforward as they sound and do not require any parameters. Within other functions or conditional expressions, we typically use them to explicitly return TRUE or FALSE values.

For example, we can create another calculated column to check if a sale is a high value sale with an amount greater than $9,000.

High Value Sale = 
IF(
  Sales[Amount] > 9000, 
  TRUE, 
  FALSE
)

This simple expression checks if the sales amount exceeds $9,000, marking each record as TRUE if so, or FALSE otherwise.

Together IF and TRUE/FALSE form the foundation of logical expressions in DAX, setting the stage for more complex decision-making analysis. Think of these functions as essential for our logical analysis, but just the beginning of what is possible.


The Gateway to DAX Logic: Exploring IF with AND, OR, and NOT

The IF function is much more than just making simple true or false distinctions; it helps us unlock the nuanced layers of our data, guiding us through the paths our analysis can take. By effectively leveraging this function we can craft detailed narratives from our datasets.

Suppose we are tasked with setting sales targets for each region. The goal is to base these targets on the percent change seen in the previous year. Depending on whether a region experienced growth or decline, the sales target for the current year is set accordingly.

Region Specific Sales Target = 
IF(
    HASONEVALUE(Regions[Region]),
    IF(
        [Percent Change(CY-1/CY-2)] < 0, 
        [Total Sales (CY-1)]*1.1, 
        [Total Sales (CY-1)]*1.2
    ),
    BLANK()
)

In this measure we make use of three other measures within our model. We calculate the total sales for the previous year (Total Sales (CY-1)), and the year before that (Total Sales (CY-2)). We then determine the percentage change between these two values.

If there is a decline (negative percent change), we set the current year’s sales target to be 10% higher than the previous year’s sales, indicating a more conservative goal. Conversely, if there was growth (positive percent change), we set the current year target 20% higher to keep the momentum going.

As we dive deeper and combine IF with functions like AND, OR, and NOT, we begin to see the true flexibility of these functions in DAX. These logical operators allow us to construct more intricate conditions, tailoring our analysis to very specific scenarios.

The operator functions are used to combine multiple conditions:

  • AND returns TRUE if all conditions are true
  • OR returns TRUE if any condition is true
  • NOT returns TRUE if the condition is false

Let’s craft a measure to determine which employees are eligible for a quarterly bonus. The criterion for eligibility is twofold: the employee must have made at least one sale in the current quarter, and their average sale amount during this period must exceed the overall average sale amount.

To implement this, we first need to calculate the average sales and compare each employee’s average sale against this benchmark. Additionally, we check if the employee has sales recorded in the current quarter to qualify for the bonus.

Employee Bonus Eligibility = 
VAR CurrentQuarterStart = DATE(YEAR(TODAY()), QUARTER(TODAY()) * 3 - 2, 1)
VAR CurrentQuarterEnd = EOMONTH(DATE(YEAR(TODAY()), QUARTER(TODAY()) * 3, 1), 0)
VAR OverallAverageSale = CALCULATE(AVERAGE(Sales[Amount]), ALL(Sales))
VAR EmployeeAverageSale = CALCULATE(AVERAGE(Sales[Amount]), FILTER(Sales, Sales[SalesDate] >= CurrentQuarterStart && Sales[SalesDate] <= CurrentQuarterEnd))
VAR HasSalesCurrentQuarter = CALCULATE(COUNTROWS(Sales), FILTER(Sales, Sales[SalesDate] >= CurrentQuarterStart && Sales[SalesDate] <= CurrentQuarterEnd)) > 0

RETURN
IF(
    AND(HasSalesCurrentQuarter, EmployeeAverageSale > OverallAverageSale),
    "Eligible for Bonus",
    "Not Eligible for Bonus"
)

In this measure we define the start and end dates of the current quarter, then we calculate the overall average sale across all data for comparison. We then determine each employee’s average sale amount and check if the employee has made any sales in the current quarter to qualify for evaluation.

If an employee has active sales and their average sale amount during the period is above the overall average, they are deemed “Eligible for Bonus”. Otherwise, they are “Not Eligible for Bonus”.

This example begins to explore how we can use IF in conjunction with AND to streamline business logic into actionable insights. These logical functions provide a robust framework for asking detailed questions about our data and receiving precise answers, allowing us to uncover the insights hidden within the numbers.


Beyond the Basics: Advanced Logical Functions in DAX

As we venture beyond the foundational logical functions we step into a world where DAX’s versatility shines, especially when dealing with complex data models in Power BI. More advanced logical functions such as SWITCH and COALESCE bring a level of clarity and efficiency that is hard to match with just basic IF statements.

SWITCH Function: Simplifying Complex Logic

The SWITCH function is a more powerful version of the IF function and is ideal for scenarios where we need to compare a single expression against multiple potential values and return one of multiple possible result expressions. This function helps us provide clarity by avoiding multiple nested IF statements. Here is its syntax.

SWITCH(expression, value, result[, value, result]...[, else])

The expression parameter is a DAX expression that returns a single scalar value and is evaluated multiple times depending on the context. The value parameter is a constant value to be matched against the result of expression, and result is any scalar expression to be evaluated if the result of expression matches the corresponding value. Finally, the else parameter is an expression to be evaluated if the result of expression does not match any of the value arguments.

Let’s explore. We have a scenario where we want to apply different discount rates to products based on their categories (Smartphone, Laptop, Tablet). We could achieve this by using the following expression for a new calculated column, which uses nested IFs.

Product Discount Rate (IF) = 
IF(
    Products[Product]="Smartphone", 0.10,
    IF(Products[Product]="Laptop", 0.15,
        IF(Products[Product]="Tablet", 0.20,
            0.05
        )
    )
)

Although this would achieve our goal, the use of nested IF statements can make the logic of the calculated column hard to read, hard to understand, and, most importantly, hard to troubleshoot.

Now, let’s see how we can improve the readability and clarity by implementing SWITCH to replace the nested IF statements.

Product Discount Rate = 
SWITCH(
    Products[Product],
    "Smartphone", 0.10,
    "Laptop", 0.15,
    "Tablet", 0.20,
    0.05
)

The expression simplifies the mapping of each Product to its corresponding discount rate and provides a default rate for categories that are not explicitly listed.
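SWITCH can also evaluate ranges rather than exact matches by passing TRUE() as its first argument, a pattern we will put to use later in this series. Here is a minimal sketch, assuming we want to bucket each sale by amount:

Sale Size = 
SWITCH(
    TRUE(),
    Sales[Amount] >= 7500, "Large",
    Sales[Amount] >= 5000, "Medium",
    "Small"
)

The conditions are evaluated in order, and the first one that returns true determines the result.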

COALESCE Function: Handling Blank or Null Values

The COALESCE function offers a straightforward way to deal with BLANK values within our data, returning the first non-blank value in a list of expressions. If all expressions evaluate to BLANK, then a BLANK value is returned. Its syntax is also straightforward.

COALESCE(expression, expression[, expression]...)

Here, expression can be any DAX expression that returns a scalar value. These expressions are evaluated in the order they are passed to the COALESCE function.

When reporting on our sales data, encountering blanks can sometimes communicate the wrong message. Using COALESCE we can address this by providing a more informative value when there are no associated sales.

Product Sales = COALESCE([Total Sales], "No Sales")

With this new measure, if our Total Sales measure returns a blank (for example, due to filters applied in the report), COALESCE ensures this is communicated with a value of “No Sales”. This approach can be beneficial for maintaining meaningful communication in our reports, ensuring that our viewers understand there are no sales being reported rather than interpreting a blank space as missing or erroneous data.
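One caution: returning the text “No Sales” means the measure no longer always returns a number, which can complicate any calculations built on top of it. When the result feeds other measures, a numeric fallback may be the safer sketch:

Product Sales (Numeric) = COALESCE([Total Sales], 0)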

These logical functions enrich our DAX toolkit, enabling more elegant solutions to complex problems. By efficiently managing multiple conditions and safeguarding against potential errors, SWITCH and COALESCE not only optimize our Power BI models but also enhance our ability to extract meaningful insights from our data.

With these functions, our journey into DAX’s logical capabilities becomes even more exciting, revealing the depth and breadth of analysis we can achieve. Let’s continue to unlock the potential within our data, leveraging these tools to craft insightful, dynamic reports.


Logical Comparisons and Conditions: Crafting Complex DAX Logic

Delving deeper into DAX, we encounter scenarios that demand a blend of logical comparisons and conditions. This complexity arises from weaving together multiple criteria to craft intricate logic that precisely targets our analytical goals.

We touched on logical operators briefly in a previous section; the AND, OR, and NOT functions are crucial for building complex logical structures. Let’s continue to dive deeper into these with some more hands-on, practical examples.

Multi-Condition Sales Analysis

We want to identify and count the number of sales transactions that meet specific criteria: sales above a threshold and within a particular region. To achieve this, we create a new measure using the AND operator to count the rows in our Sales table that meet our criteria.

High Value Sales in US (Count) = 
COUNTROWS(
    FILTER(
        Sales,
        AND(
            Sales[Amount] > 5500,
            RELATED(Regions[Region]) = "United States"
        )
    )
)

This measure filters our Sales table to transactions with an amount greater than our $5,500 threshold and a sales region of United States.
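Because AND accepts exactly two conditions, adding a third criterion would require nesting additional AND calls; the && operator expresses the same logic more compactly. Here is a sketch of an equivalent version of the measure above:

High Value Sales in US (Count) = 
COUNTROWS(
    FILTER(
        Sales,
        Sales[Amount] > 5500
            && RELATED(Regions[Region]) = "United States"
    )
)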

Excluding Specific Conditions

We need to calculate total year-to-date sales while excluding sales from a particular region or below a certain amount. We can leverage the NOT function to achieve this.

Sales Excluding Asia and Low Values = 
CALCULATE(
    SUM(Sales[Amount]),
    AND(
        NOT(Sales[RegionID] = 3), // RegionID=3 (Asia)
        Sales[Amount] > 5500
    ),
    Sales[IsCurrentYear]=TRUE()
)

This measure calculates the sum of the sales amounts that are not within our Asia sales region and are above $5,500. Using NOT we exclude sales from the Asia region, and we use the AND function to also impose the minimum sales amount threshold.

Special Incentive Qualifying Sales

Our goal is to identify sales transactions eligible for a special incentive based on multiple criteria: sales amount, region, employee involvement, and a temporal aspect of the sales data. Here is how we can achieve this.

Special Incentive Qualifying Sales = 
CALCULATE(
    SUM(Sales[Amount]),
    OR(
        Sales[Amount] > 7500,
        Sales[RegionID] = 2
    ),
    NOT(
        Sales[EmployeeID] = 4
    ),
    Sales[IsCurrentYear] = TRUE()
)

The OR function is used to include sales transactions that either exceed $7,500 or are made in Europe (RegionID = 2), and the NOT function excludes transactions made by one employee (EmployeeID = 4), who is a manager and exempt from the incentive program. The final condition is that the sale occurred in the current year.

The new measure combines logical tests to filter our sales data, identifying the specific transactions that qualify for a special incentive under detailed conditions.

By leveraging DAX’s logical functions to construct complex conditional logic, we can precisely target specific segments of our data, uncover nuanced insights, and tailor our analysis to meet specific business needs. These examples showcase just the beginning of what is possible when we combine logical functions in creative ways, highlighting DAX’s robustness and flexibility in tackling intricate data challenges.


Wrapping Up: From Logic to Action

Our journey through the world of DAX logical functions underscores their transformative power within our data analysis. By harnessing IF, SWITCH, AND, OR, and more, we’ve seen how data can be sculpted into actionable insights, guiding strategic decisions with precision. To explore other DAX Logical Functions or get more details visit the DAX Function Reference.

Learn more about: DAX Logical Functions

Logical reasoning in data analysis is fundamental. It allows us to uncover hidden patterns and respond to business needs effectively, demonstrating that the true value of data lies in its interpretation and application. DAX logical functions are the keys to unlocking this potential, offering clarity and direction in our sea of numbers.

As we continue to delve deeper into DAX and Power BI, let the insights derived from logical functions inspire action and drive decision-making. To explore other functions groups that elevate our data analysis check out the Dive into DAX series, with each post comes the opportunity to enhance your data analytics and Power BI reports.

Explore the intricate landscape of DAX in Power BI, revealing the potential to enhance your data analytics with every post. 


Thank you for reading! Stay curious, and until next time, happy learning.

And, remember, as Albert Einstein once said, “Anyone who has never made a mistake has never tried anything new.” So, don’t be afraid of making mistakes, practice makes perfect. Continuously experiment and explore new DAX functions, and challenge yourself with real-world data scenarios.

If this sparked your curiosity, keep that spark alive and check back frequently. Better yet, be sure not to miss a post by subscribing! With each new post comes an opportunity to learn something new.

Dive into DAX: Elevated Insights with Information Functions


In data analytics, effectively navigating, understanding, and interpreting our data can be the difference between data-driven insights and being lost in a sea of data. In this post we will focus on a subset of DAX functions that help us explore our data – the Information Functions.

These functions are our key to unlocking deeper insights, ensuring data integrity, and enhancing the interactivity of our reports. Let’s embark on a journey to not only elevate our understanding of Power BI and DAX but also to harness the full potential of our data.

The Role of Information Functions in DAX

Information functions play a crucial role in DAX. They are like the detectives of our Power BI data analysis, helping us identify data types, understand relationships, handle errors, and much more. Whether we are checking for blank values, confirming a data type, or handling various data scenarios, information functions are our go-to tools.

Diving deep into information functions goes beyond today’s problems and helps prepare us for tomorrow’s challenges. Mastering these functions enables us to clean and transform our data efficiently, making our analytics more reliable and our insights more accurate. It empowers us to build robust models that stand the test of time and data volatility.

In our exploration through the world of DAX Information Functions, we will explore how these functions work, why they are important, and how we can use them to unlock the full potential of our data. Ready to dive in?

For those of you eager to start experimenting there is a Power BI report pre-loaded with the same data used in this post ready for you. So don’t just read, follow along and get hands-on with DAX in Power BI. Get a copy of the sample data file here:

This dynamic repository is the perfect place to enhance your learning journey.


The Heartbeat of Our Data: Understanding ISBLANK and ISEMPTY

In data analysis, knowing when our data is missing or blank is key. The ISBLANK and ISEMPTY functions in DAX help us identify missing or incomplete data.

ISBLANK is our go-to tool for identifying voids within our data. We can use this function to locate missing data and handle it, either to report on it or to manage it so it does not unintentionally impact our analysis. The syntax is straightforward.

ISBLANK(value)

Here, value is the value or expression we want to check.

Let’s use this function to create a new calculated column to check if our sales amount is blank. This calculated column can then be used to flag and identify these records with missing data, and ensure they are accounted for in our analysis.

MissingSalesAmount = ISBLANK(Sales[Amount])

We can now use this calculated column to provide a bit more context to card visuals showing our total sales for each of our products.

In this example we can see the total sales for each product category. However, if we only show the total sales value it could lead our viewers to believe that the TV category has no sales. By adding a count of TV Sales records that have a blank sales amount we can inform our viewers this is not the case and that there is an underlying data issue.

While ISBLANK focuses on the individual values, ISEMPTY takes a step back to consider the table as a whole. We can use this function to check whether the table we pass to this DAX expression contains data. Its syntax is also simple.

ISEMPTY(table_expression)

The table_expression parameter can be a table reference or a DAX expression that returns a table. ISEMPTY will return a value of true if the table has no rows, otherwise it returns false.

Let’s use ISEMPTY in combination with ISBLANK to check whether any of our sales records have a missing or blank EmployeeId. Then we can use this information to display the count of records (if any) and the total sales value of these records.
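The measure itself is not spelled out here, so as a minimal sketch, the check on the Sales[EmployeeID] column might be defined like this:

Sales with No EmployeeId = 
ISEMPTY(
    FILTER(Sales, ISBLANK(Sales[EmployeeID]))
)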

In this example we see that Sales with No EmployeeId returns a value of false, indicating that the table produced by filtering our sales records to those with a blank EmployeeId is not empty and contains records. We can then build on this check to display the count of these records and their total sales.

These two functions are the tools to use when investigating our datasets for potentially incomplete data. ISBLANK can help us identify rows that need attention or consideration in our analysis, and ISEMPTY helps validate whether a subset of data exists in our datasets. To use them effectively, remember their difference: ISBLANK checks whether an individual value is missing, while ISEMPTY checks whether a table contains any rows.


The Data Type Detectives: ISTEXT, ISNONTEXT, ISLOGICAL and ISNUMBER

In the world of data, not every value is as it seems, and that is where this set of helpful DAX functions comes into play. These functions help identify the data types of the values we use within our analysis.

All of these functions follow the same syntax.

ISTEXT(value)
ISNONTEXT(value)
ISLOGICAL(value)
ISNUMBER(value)

Each function checks the value and returns true or false. We will use a series of values to explore how each of these functions work. To do this we will create a new table which uses each of these functions to check the values: TRUE(), 1.5, “String”, “1.5”, and BLANK(). Here is how we can do it.

TestValues = 
VAR _logical = TRUE()
VAR _number = 1.5
VAR _text = "String"
VAR _stringNumber = "1.5"

VAR _testTable =
{
   ("ISLOGICAL", ISLOGICAL(_logical), ISLOGICAL(_number), ISLOGICAL(_text), ISLOGICAL(_stringNumber), ISLOGICAL(BLANK())),
   ("ISNUMBER", ISNUMBER(_logical), ISNUMBER(_number), ISNUMBER(_text), ISNUMBER(_stringNumber), ISNUMBER(BLANK())),
   ("ISTEXT", ISTEXT(_logical), ISTEXT(_number), ISTEXT(_text), ISTEXT(_stringNumber), ISTEXT(BLANK())),
   ("ISNONTEXT", ISNONTEXT(_logical), ISNONTEXT(_number), ISNONTEXT(_text), ISNONTEXT(_stringNumber), ISNONTEXT(BLANK()))
}

RETURN
SELECTCOLUMNS(
   _testTable,
   "Function", [Value1],
   "Test Value: TRUE", [Value2],
   "Test Value: 1.5", [Value3],
   "Test Value: String", [Value4],
   "Test Value: '1.5'", [Value5],
   "Test Value: BLANK", [Value6]
)

And then we can add a table visual to our report to see the results and better understand how each function treats our test values.

The power of these functions lies in their simplicity, the clarity they can bring to our data preparation process, and their ability to be used in combination with other expressions to handle complex data scenarios.

DAX offers a number of other functions similar to the ones explored here, such as ISEVEN, ISERROR, and ISAFTER. Visit the DAX reference guide for all the details.

Learn more about: DAX Information Functions

A common issue we can run into in our analysis is assuming data types based on a value’s appearance or context, leading to errors in our analysis. Mistakenly performing a numerical operation on a text field that appears numeric can easily throw our results into disarray. Using these functions early in our process to understand our data paves the way for clean, error-free data processing.
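For instance, before aggregating a text column that merely looks numeric, we might convert it defensively. This is a hypothetical sketch; the Imports table and its RawAmount text column are assumed and not part of this post’s sample data:

Clean Amount = 
IF(
    ISERROR(VALUE(Imports[RawAmount])), // VALUE raises an error when the text is not numeric
    BLANK(),
    VALUE(Imports[RawAmount])
)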


The Art of Data Discovery: CONTAINS & CONTAINSSTRING

When diving into our data, pinpointing the information we need can be a daunting task. This is where DAX steps in and provides us with CONTAINS and CONTAINSSTRING. These functions help us uncover the specifics hidden within our data.

CONTAINS can help us identify whether a table contains a row that matches our criteria. Its syntax is as follows.

CONTAINS(table, columnName, value[, columnName, value]...)

The table parameter can be any DAX expression that returns a table of data, columnName is the name of an existing column, and value is any DAX expression that returns a scalar value that we are searching for in columnName.

CONTAINS will return a value of true if each specified value can be found in the corresponding columnName (i.e. columnName contains value), otherwise it will return false.

In our previous ISBLANK example we created a Blank Count measure to help us identify how many sales records for our product categories are missing sales amounts.

Blank Count = 
COUNTROWS(
   FILTER(Sales, Sales[MissingSalesAmount] = TRUE())
)

Now, if we are interested in knowing only whether there are missing sales amounts, we could update this expression to return true when COUNTROWS returns a value greater than 0; however, this is where CONTAINS lets us create a more effective measure.

Missing Sales Amount = 
CONTAINS(Sales, Sales[MissingSalesAmount], TRUE())

CONTAINS can be a helpful tool; however, it is essential to distinguish when it is the best tool for the task and when an alternative might offer a more streamlined approach. Depending on the need, alternatives to consider include the IN operator, or the FILTER and TREATAS functions.
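When we simply need to test whether a column’s value appears in a short list, the IN operator can be the clearer choice. A small sketch as a calculated column (the specific RegionIDs here are just for illustration):

Is Priority Region = Sales[RegionID] IN {1, 2}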

For example, CONTAINS can be used to establish a virtual relationship between our data model tables, but functions such as TREATAS may provide better efficiency and clarity. For details on this function and its use check out this post for an in-depth dive into DAX Table Manipulation Functions.

Discover how to Reshape, Manipulate, and Transform your data into dynamic insights.

For searches based on textual content, CONTAINSSTRING is our go-to tool. It specializes in revealing rows where text columns contain specific substrings. The syntax is straightforward.

CONTAINSSTRING(within_text, find_text)

The within_text parameter is the text we want to search within for the text passed to the find_text parameter. This function will return a value of true if find_text is found, otherwise it will return false.

We can use CONTAINSSTRING to dissect our Product Code and enrich our dataset by adding a calculated column containing a user-friendly color value of the product. Here is how.

Color = SWITCH(
    TRUE(),
    CONTAINSSTRING(Products[Product Code], "BK"), "Black",
    CONTAINSSTRING(Products[Product Code], "rd"), "Red",
    CONTAINSSTRING(Products[Product Code], "GR"), "Gray",
    CONTAINSSTRING(Products[Product Code], "SL"), "Silver",
    CONTAINSSTRING(Products[Product Code], "BL"), "Blue"
)

This new calculated column provides the color of each product, which we can use in a slicer, or we can visualize our total sales by product color.

CONTAINSSTRING is case-insensitive, as shown in the red color statement above. When we require case sensitivity, CONTAINSSTRINGEXACT provides this functionality.
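As a quick sketch, the case-sensitive variant shares the same syntax and would match the uppercase code “BK” but not “bk”:

Is Black (Exact) = CONTAINSSTRINGEXACT(Products[Product Code], "BK")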

CONTAINSSTRING just begins to scratch the surface of the DAX functions available for in-depth textual analysis; to continue exploring, visit this post that focuses solely on DAX Text Functions.

Stringing Along with DAX: Dive Deep into Text Expressions

By leveraging CONTAINS and CONTAINSSTRING – alongside understanding when to employ their alternatives – we are equipped with the tools required for precise data discovery within our data analysis.


Deciphering Data Relationships: ISFILTERED & ISCROSSFILTERED

Understanding the dynamics of data relationships is critical for effective analysis. In the world of DAX, there are two functions we commonly turn to that guide us through the relationship network of our datasets: ISFILTERED and ISCROSSFILTERED. These functions provide insights into how filters are applied within our data model, offering a deeper understanding of the context in which our data operates.

ISFILTERED offers a window into the filtering status of a table or column, allowing us to determine whether a filter has been applied directly to that table or column. This insight is valuable for creating responsive measures that adjust based on user selections or filter context. The syntax is as follows.

ISFILTERED(tableName_or_columnName)

A column or table is filtered directly when a filter is applied to any column of tableName or specifically to columnName.

Let’s create a dynamic measure that leverages ISFILTERED and reacts to user selections. In our data model we have a measure that calculates the product sales percentage of our overall total sales. This measure is defined by the following expression.

Product Percentage of All Sales = 
VAR _filteredSales = [Total Sales]
VAR _allSales = CALCULATE([Total Sales], ALL(Products[Product]))

RETURN
DIVIDE(_filteredSales, _allSales, 0)

We can see this measure displays the percentage of sales for the selected product. However, when no product is selected it displays 100%. Although this is expected and correct, we would rather not display the percentage calculation when there is no product selected in our slicer.

This is a perfect case to leverage ISFILTERED: we first check if our Products[Product] column is filtered; if so, we display the calculation, and if not, we display “N/A”. We will update the measure’s definition to the following.

Product Percentage of All Sales = 
VAR _filteredSales = [Total Sales]
VAR _allSales = CALCULATE([Total Sales], ALL(Products[Product]))

RETURN
IF(
    ISFILTERED(Products[Product]),
    //The Product column is directly filtered
    DIVIDE(_filteredSales, _allSales, 0),
    //The Product column is not directly filtered
    "N/A"
)

And here are the updated results: when no product is selected in the slicer the measure displays “N/A”, and when the user selects a product, the measure displays the calculated percentage.

While ISFILTERED focuses on direct filter application, understanding the impact of cross-filtering, or how filters on one table affect another related table, is just as essential in our analysis. ISCROSSFILTERED goes beyond ISFILTERED and helps us identify if a table or column has been filtered directly or indirectly. Here’s its syntax.

ISCROSSFILTERED(tableName_or_columnName)

ISCROSSFILTERED will return a value of true when the specified table or column is cross-filtered. The table or column is cross-filtered when a filter is applied to columnName, any column of tableName, or any column of a related table.

Let’s explore ISCROSSFILTERED and how it differs from ISFILTERED with a new measure similar to the one we just created. We define the new measure as follows.

Product Percentage of All Sales CROSSFILTER = 
VAR _filteredSales = [Total Sales]
VAR _allSales = CALCULATE([Total Sales], ALL(Sales))

RETURN
IF(
    ISCROSSFILTERED(Sales),
    //Sales table is cross-filtered
    DIVIDE(_filteredSales, _allSales, 0),
    //The Sales table is not cross-filtered
    "N/A"
)

In this measure we utilize ISCROSSFILTERED to check if our Sales table is cross-filtered; if it is, we calculate the proportion of filtered sales to all sales, otherwise the expression returns “N/A”. With this measure we can gain a nuanced view of product performance within the broader sales context.

When our Sales table is only filtered by our Product slicer, we see that the ISFILTERED measure and the ISCROSSFILTERED measure return the same value (below on the left). This is because, as before, the Products[Product] column is directly filtered by our Product slicer, so the ISFILTERED measure carries out the calculation and returns the percentage.

But also, since our data model has a relationship between our Products and Sales tables, the selection of a product in the slicer indirectly filters, or cross-filters, our Sales table, leading to our CROSSFILTER measure returning the same value.

We start to see the difference between these functions when we incorporate other slicers, such as our region slicer. In the middle image, we can see that if no product is selected our Product Performance card displays “N/A”, because our Products[Product] column is not being directly filtered.

However, the Sales Performance card that uses our CROSSFILTER measure dynamically updates to display the percentage of sales associated with the selected region. Again, this is because our data model has a relationship between our Regions and Sales tables, so the selection of a region cross-filters our sales data.

Lastly, we can see both measures in action when a Product and Region are selected (below on the right).

Using ISFILTERED, we can create reports that dynamically adjust to user interactions, offering tailored insights based on specific filters. ISCROSSFILTERED takes this a step further by allowing us to understand and leverage the nuances of cross-filtering impacts within our data, enabling an even more sophisticated analytical approach.

Applying these two functions within our reports allows us to enhance data model interactivity and depth of analysis. This helps us ensure our reports respond intelligently to user interactions and reflect the complex interdependencies within our data.


Wrapping Up

Compared to other function groups, DAX Information Functions may be overlooked; however, these functions can hold the key to unlocking insights, providing a deeper understanding of our data’s structure, quality, and context. Effectively leveraging these functions can elevate our data analysis, and integrating them with other DAX function categories can lead to the creation of dynamic and insightful solutions.

As we continue to explore the synergies between different DAX functions, we pave the way for innovative solutions that can transform raw data into meaningful stories and actionable insights. For more details on Information Functions and other DAX Functions visit the DAX Reference documentation.

Learn more about: DAX Functions


Thank you for reading! Stay curious, and until next time, happy learning.

And, remember, as Albert Einstein once said, “Anyone who has never made a mistake has never tried anything new.” So, don’t be afraid of making mistakes, practice makes perfect. Continuously experiment and explore new DAX functions, and challenge yourself with real-world data scenarios.

If this sparked your curiosity, keep that spark alive and check back frequently. Better yet, be sure not to miss a post by subscribing! With each new post comes an opportunity to learn something new.