It is worth paying attention to DAX MEASURE calculation performance and the resulting user experience. Consider when it is smarter to use Power Query, SQL, or a DAX CALCULATED COLUMN and make a given data edit during the data load, instead of running real-time DAX MEASURE calculations separately for every user. In addition, SQL and Power Query are relatively static and their syntax is comparatively easy to write, whereas DAX allows more dynamic end results but is likely to be harder to maintain or reverse engineer in the future.
Typically, calculations and data editing should be done before the DAX MEASURE calculation layer if some of the following apply; the more data there is, the less likely a DAX MEASURE is a good solution:
- The calculation is not dynamic, and it typically operates on the most granular level of the data
- The calculation formula is complex and would take a lot of memory & time with DAX
- A DAX MEASURE can also be used only for the final calculation step. For example, with a division it is possible to calculate the numerator and the denominator separately without DAX MEASURES, and then perform only the DIVIDE with a DAX MEASURE.
- Calculations are more like documented and tested processes with multiple steps and joins. Here even DAX CALCULATED COLUMNS and calculated tables start to be tricky compared to SQL in the sense of documentation and joining of the data.
- Static & transparent end results with documentation & testing are of high importance. For example, this could be the case with selected Activity Based Costing calculations that drive the profitability and pricing of a company. In addition, here even DAX CALCULATED COLUMNS can be challenging compared to SQL in the sense of documentation, testing and relationships.
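The division example above can be sketched in DAX. This is a minimal illustration only; the Sales table and its column names are hypothetical assumptions, not from the original text:

```dax
-- Numerator and denominator are simple aggregations over columns
-- that were already computed during the data load (Power Query / SQL)
Total Sales := SUM ( Sales[SalesAmount] )
Total Quantity := SUM ( Sales[Quantity] )

-- The MEASURE performs only the final, cheap division step;
-- DIVIDE returns BLANK on division by zero instead of an error
Average Price := DIVIDE ( [Total Sales], [Total Quantity] )
```

The heavy lifting stays in the data load, and the measure layer is left with a single aggregation and a division that remain fast regardless of how many users query the model.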
DAX CALCULATED COLUMN vs MEASURE is covered really well in this video by SQLBI: https://www.youtube.com/watch?v=ePPi1LLX0sA or in this blog post: https://www.bluegranite.com/blog/understanding-the-differences-between-calculated-columns-measures-in-power-bi. Key highlights from the blog:
With calculated columns, data in the column is stored in the xVelocity in-memory database, meaning that a calculation is made before the model is ever queried by the user. From a performance perspective, calculated columns are positive given that there is a smaller virtual memory requirement as the user interacts with the report. The downside is that calculated columns take up more storage in your database. Calculated columns can give users other advantages as well, such as the ability to view values in the column, or use of the calculated column in a slicer or in a measure to filter data under certain criteria.
In general, measures are used to calculate aggregates, such as the sum or average of a column. Measures are calculated at the time of your query, which means that they aren’t stored in your database, but use processing power to execute a query at the time of your request. Because measures are not stored in memory, they are generally faster, although it is important to recognize the tradeoff between utilizing in-memory storage or processing power in an instance where either a measure or a calculated column could be used.
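The difference described in the two paragraphs above can be illustrated with a short DAX sketch (the Sales table and its columns are hypothetical):

```dax
-- CALCULATED COLUMN: evaluated row by row at refresh time and stored
-- in the in-memory model; usable in slicers and as a filter
Margin = Sales[SalesAmount] - Sales[Cost]

-- MEASURE: evaluated at query time in the current filter context;
-- nothing is stored in the model, processing happens on each request
Total Margin := SUM ( Sales[SalesAmount] ) - SUM ( Sales[Cost] )
```

The column trades storage for query-time work; the measure trades query-time processing for storage, which is exactly the tradeoff the blog describes.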
It is also important to understand the cost and resource usage of context transition. This happens when using the CALCULATE function (see for example https://www.sqlbi.com/articles/understanding-context-transition/). The real cost comes with filtering on tables with millions of rows. Even more resources, and especially memory, are used if there is a row-level evaluation, as with the SUMX function. You do not want to use SUMX over millions of rows; instead, find ways to do the calculation first in static pieces. In the end, a MEASURE can combine these static pieces with simple calculation operations.
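The advice above can be sketched as a before/after comparison in DAX (again assuming a hypothetical Sales table with illustrative column names):

```dax
-- Avoid: SUMX iterates the table row by row at query time,
-- which is expensive over millions of rows
Total Sales Slow := SUMX ( Sales, Sales[Quantity] * Sales[UnitPrice] )

-- Prefer: compute the static piece Sales[LineAmount] = Quantity *
-- UnitPrice during the data load (Power Query, SQL, or a CALCULATED
-- COLUMN), so the MEASURE is reduced to a simple, fast aggregation
Total Sales Fast := SUM ( Sales[LineAmount] )
```

The row-level multiplication is a static calculation at the most granular level of the data, so by the criteria listed earlier it belongs in the data load rather than in the measure.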