By Michael Coveney, Head of Research at FP&A Trends Group
In this series of blogs, I am looking at the areas FP&A departments must address to add value in this technology-driven age. In this blog, I consider the applications that can be developed using the latest database technologies.
The Holy Grail of any FP&A department is to help an organisation achieve its overall objectives. A key part of this involves:
In the past, the analytic tools used to support the above were based on some form of relational or multidimensional technology. But as the breadth and volume of data grow and the speed of business increases, these technologies struggle to cope.
Part of the reason is the internal design of these database types. Most have a pre-defined structure in which data items such as volume, sales, production costs and market size are held as individual records. Because variables tend to be stored as rows, calculations such as correlating activities with outcomes must take place between individual records.
Unfortunately, this design is inefficient for these calculations as each record needs to be identified and loaded into memory. When looking at relationships between variables, this can run into hundreds of thousands or even millions of individual records being read and processed.
There are ways of getting around this in relational technology, but the computing resources required are high, which effectively limits both the number of data items that can be correlated and their volume. As mentioned in my last blog on the emergence of NoSQL databases, these limitations can be overcome because data and metadata are stored differently from the standard relational model. Four types of NoSQL database are available (Key-value, Document, Column and Graph), each designed for a specific kind of analysis.
In looking at the analyses mentioned at the start of this blog, let’s consider how a Column type database has the potential to transform the services offered by Financial Planning & Analysis (FP&A).
Correlation is a statistical method for assessing the dependency between data items. Column databases store data in a way that is ideal for this type of analysis, as each variable to be correlated is stored as a column. This reduces the number of individual data elements that need to be read; hence more variables can be correlated, and in a fraction of the time it would take if the data were stored in a standard relational format. It allows more possible combinations to be assessed and more relationships to be identified.
Because more correlations can be performed (some column databases claim to be 100,000 times faster than their relational equivalents), they can be produced at a much finer time increment: for example, daily rather than at the end of each month. The results can then be used to plot trends that are far more relevant. This type of analysis could not be done in a spreadsheet or in most current-day analytic systems because of the sheer volume of data involved. Interestingly, this is how systems that monitor online trading highlight anomalies that may be fraudulent.
Once we have a defined set of relationships, the next step is to use them to predict future performance. Traditionally this requires them to be ‘re-coded’ as a set of rules in a model that is then used to calculate future results. What is needed is a way to transfer these relationships to the model automatically or, better still, have the model itself defined by the analytic system. This is the basis of Machine Learning — systems that can automatically learn and improve their performance based on data. Currently, I’ve not seen anyone do this within the realm of an FP&A model, but it is only a matter of time.
In many case studies, 'Predictive Analytics' is used to anticipate future events, such as machine failures, in real time: the systems collect data and model future outcomes continuously. I believe these capabilities will soon become common within FP&A.
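As a rough sketch of the principle (not of any particular product), a rolling monitor can score each new observation against the recent baseline and flag outliers the moment they arrive, the same pattern used to spot anomalous trades:

```python
from collections import deque
import statistics

def make_monitor(window: int = 30, threshold: float = 3.0):
    """Flag observations that sit far outside the recent baseline."""
    history = deque(maxlen=window)  # only the most recent values are kept

    def observe(value: float) -> bool:
        anomalous = False
        if len(history) >= 5:  # wait for a minimal baseline first
            mean = statistics.fmean(history)
            sd = statistics.stdev(history)
            # Flag values more than `threshold` standard deviations out.
            anomalous = sd > 0 and abs(value - mean) > threshold * sd
        history.append(value)
        return anomalous

    return observe

# Invented feed of daily figures with one obvious outlier.
monitor = make_monitor(window=10, threshold=3.0)
readings = [100, 102, 99, 101, 100, 103, 98, 500, 101]
flags = [monitor(x) for x in readings]
print(flags)
```

The same loop works whether the feed arrives monthly, daily or by the second; only the window and threshold change.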
Making this a reality requires a big increase in both the amount of data being analysed and the frequency of analysis, with the results used to produce and refine sophisticated models of how the organisation works. The models would still need input from management on the objectives they want to achieve, and managers would need to be able to override any model-generated assumption they feel is incorrect. Still, these capabilities pave the way to continuous planning based on the latest, up-to-date evidence.
There are a couple of things to remember, though:
But despite the above, there is no doubt that these and other analytical technologies will become mainstream. The key question is whether your company will be the one that benefits, or the one playing catch-up to competitors who have worked out how to make them work.
This article was first published in Unit4/Prevero.