In this series of blogs, I am looking at the areas FP&A departments must address to add value in this technology-driven age. In this blog, I consider the applications that can be developed using the latest database technologies.
The Holy Grail of any FP&A department is to help an organisation achieve its overall objectives. A key part of this involves:
- Determining the relationships that affect current performance. For example, what drives sales, or how materials and the associated personnel affect production costs.
- Understanding how these relationships change over time - is there a pattern, and are they getting better or worse?
- Forecasting what these relationships may produce in the future based on current understanding.
- Modelling what changes could be made to those relationships to improve performance. For example, would we exceed our financial goals if we outsourced part of customer services, and what other objectives would then be affected? (A minimal what-if sketch follows this list.)
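To make that last point concrete, here is a minimal what-if sketch in Python. All of the figures, the profit goal and the outsourcing saving rate are made up purely for illustration.

```python
# A minimal what-if sketch with illustrative (made-up) figures: how would
# outsourcing part of customer services affect an operating-profit goal?

def operating_profit(revenue, cost_of_sales, customer_service_cost):
    return revenue - cost_of_sales - customer_service_cost

baseline = operating_profit(revenue=10_000_000,
                            cost_of_sales=6_500_000,
                            customer_service_cost=1_200_000)

# Scenario: outsource 40% of customer services at a 25% lower unit cost.
outsourced_share = 0.40
saving_rate = 0.25
scenario_cs_cost = 1_200_000 * (1 - outsourced_share * saving_rate)
scenario = operating_profit(10_000_000, 6_500_000, scenario_cs_cost)

profit_goal = 2_400_000
print(f"Baseline profit: {baseline:,.0f} (goal met: {baseline >= profit_goal})")
print(f"Scenario profit: {scenario:,.0f} (goal met: {scenario >= profit_goal})")
```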
In the past, the analytic tools used to support the above were based on some form of relational or multidimensional technology. But as the breadth and volume of data and the speed of business increase, these technologies struggle to cope.
Part of the reason is the internal design of these database types. Most have a pre-defined structure where data items such as volume, sales, production costs, and market size are held as individual records. Calculations, such as correlating activities with outcomes, must then work across individual records, because each variable tends to be stored as a row.
Unfortunately, this design is inefficient for these calculations as each record needs to be identified and loaded into memory. When looking at relationships between variables, this can run into hundreds of thousands or even millions of individual records being read and processed.
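To illustrate the point, here is a toy sketch (with made-up figures) of what that row-oriented scan looks like in code. In a column store, each variable would already sit as one contiguous array, so only the columns involved in the correlation would need to be read; the arithmetic itself is the same either way.

```python
import numpy as np

# Toy "row store" of 36 monthly records, each record holding every field.
# Correlating two variables means scanning every record and extracting the
# two fields of interest from each one.
records = [
    {"month": m, "marketing_spend": 100 + 5 * m + (m % 7), "sales": 1_000 + 48 * m}
    for m in range(36)
]
spend = [r["marketing_spend"] for r in records]   # one full pass over all records
sales = [r["sales"] for r in records]             # and another full pass

# A column store would hold marketing_spend and sales as contiguous arrays,
# so only those two columns are read before the same calculation runs.
corr = np.corrcoef(spend, sales)[0, 1]
print(f"Correlation of marketing spend with sales: {corr:.3f}")
```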
There are ways of getting around this in relational technology, but the computing resources required are high, which effectively limits both the number and the volume of data items that can be correlated. As mentioned in my last blog on the emergence of NoSQL databases, these limitations can be overcome because data and metadata are stored differently from the standard relational model. Four types of NoSQL database are available (Key-value, Document, Column and Graph), each designed for a specific kind of analysis.
In looking at the analyses mentioned at the start of this blog, let’s consider how a Column type database has the potential to transform the services offered by Financial Planning & Analysis (FP&A).
Determining relationships
Correlation is a statistical method for assessing the dependency between data items. Column databases store data in a way that is ideal for this type of analysis, as each variable to be correlated is stored as a column. This reduces the number of individual data elements that need to be read; hence more variables can be correlated, and in a fraction of the time it would take if the data were stored in a standard relational format. It allows more possible combinations to be assessed and relationships identified.
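A minimal sketch of what this looks like once the columns have been pulled into an analysis environment. The column names are hypothetical and the figures are randomly generated; the point is that every variable is a column, so all pairwise correlations can be computed in one pass.

```python
import numpy as np
import pandas as pd

# Assume the column store has been queried into a DataFrame where every
# variable of interest is its own column (simulated daily figures here).
rng = np.random.default_rng(0)
n_days = 365
df = pd.DataFrame({
    "sales": rng.normal(100_000, 8_000, n_days),
    "marketing_spend": rng.normal(12_000, 1_500, n_days),
    "discount_rate": rng.uniform(0.00, 0.15, n_days),
    "material_cost": rng.normal(40_000, 3_000, n_days),
    "headcount": rng.integers(180, 220, n_days),
})

# Correlate every column against every other column; strong pairs are
# candidates for the drivers of performance.
corr_matrix = df.corr()
print(corr_matrix.round(2))
```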
Relationship trends
Because more correlations can be performed (some column databases profess to be 100,000 times faster than their relational equivalents), they can be produced at much smaller time increments. For example, a faster database structure would allow correlations to be assessed daily rather than at the end of each month. The results can then be used to plot trends that are far more relevant. This type of analysis could not be done on a spreadsheet or in most current-day analytic systems because of the sheer volume of data involved. Interestingly, it is how systems that monitor online trading highlight anomalies that may be fraudulent.
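Here is a sketch of such a relationship trend, assuming daily figures are available (the values below are simulated). A 30-day rolling correlation shows whether the link between two variables is strengthening or fading over time.

```python
import numpy as np
import pandas as pd

# Simulated daily marketing spend and sales for one year.
rng = np.random.default_rng(1)
dates = pd.date_range("2023-01-01", periods=365, freq="D")
spend = pd.Series(rng.normal(12_000, 1_500, 365), index=dates)
sales = spend * 8 + pd.Series(rng.normal(0, 20_000, 365), index=dates)

# 30-day rolling correlation: the trend of the relationship itself.
rolling_corr = spend.rolling(window=30).corr(sales)
print(rolling_corr.dropna().tail())   # plot this series to see the trend
```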
Forecasting the impact of relationships
Once we have a defined set of relationships, the next step is to use them to predict future performance. Traditionally, this requires them to be 're-coded' as a set of rules in a model that is then used to calculate future results. What is needed is a way to transfer these relationships to the model automatically or, better still, to have the model itself defined by the analytic system. This is the basis of Machine Learning: systems that can automatically learn and improve their performance based on data. Currently, I have not seen anyone do this within the realm of an FP&A model, but it is only a matter of time.
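As a hypothetical illustration of that direction of travel (not something taken from an existing FP&A product), a model can learn the relationship from history rather than having it hand-coded as planning rules, and then project a future period. The driver names and data below are invented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Simulated history: 48 months of drivers and the sales they produced.
rng = np.random.default_rng(2)
n_months = 48
marketing = rng.normal(12_000, 1_500, n_months)
headcount = rng.integers(180, 220, n_months)
sales = 6.5 * marketing + 310 * headcount + rng.normal(0, 15_000, n_months)

# The model learns the relationship instead of it being re-coded as rules.
X = np.column_stack([marketing, headcount])
model = LinearRegression().fit(X, sales)

# Forecast the next month under planned driver values.
planned = np.array([[13_500, 205]])
print(f"Learned coefficients: {model.coef_.round(1)}")
print(f"Forecast sales: {model.predict(planned)[0]:,.0f}")
```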
What’s next?
In many case studies, 'Predictive Analytics' is used to predict future events, such as machine failures, in real time: data is collected and future outcomes are modelled continuously. I believe that these abilities will soon become common within FP&A.
To make this a reality, there must be a big increase in the amount and frequency of the data being analysed, which is then used to produce and refine sophisticated models of how the organisation works. The models would still need input from management regarding the objectives they want to achieve, and to override any model-generated assumption they feel is incorrect. Still, they pave the way to continuous planning based on the latest, up-to-date evidence.
There are a couple of things to remember, though:
- These newer technologies are relatively immature, and there are very few case studies of how they have transformed the role of FP&A as described above.
- They are technically complex for the average accountant, so FP&A has to acquire the necessary skills or partner with IT to understand their potential.
But despite the above, there is no doubt that these and other analytical technologies will become mainstream. The key question is whether your company will be the one that benefits, or the one that has to play catch-up with competitors who have worked out how to make them work.
This article was first published in Unit4/Prevero.