A treasure trove of data

In the second post in our three-part series entitled ‘The only way is excellence – How IT and Operations can achieve it together’, Carl Bones (Specialist in Architecture) takes a look at the treasure trove of data offered by IT – a wealth of ways to achieve operational excellence.

Give me a lever long enough, and I shall move the world.

Have you ever been looking for something, only to find that what you were looking for was right under your nose?

This thought occurred to me when I started thinking of ways in which we can use IT operational data to support operational excellence.

In my first blog in this three-part series, entitled ‘The only way is excellence: IT and Operations together’, I talked about the ways in which we could ‘design in’ support for agile operational improvements, allowing operational managers to make better use of existing IT data.
IT has a treasure trove of data that can be used or ‘leveraged’ for operational excellence, but many in Operations haven’t yet realised this.

One of the reasons for this is that IT holds enormously valuable data, but not always in a format or language that Operations can use – and Operations may not even be aware that it exists. The data held by IT could be a valuable resource for Operations, improving feedback mechanisms and providing future analysis for the business.

We can access this rich seam of data through two sources: one commonly known as ‘instrumentation’, the other being ‘audit trail’ information. Our first source, instrumentation, comprises monitoring data, logging data and trace data, and is primarily used by IT for its own internal processes.

Monitoring data is generated by developers and is managed and controlled by service management teams through system monitoring platforms. These platforms come with extensive reporting capabilities and data warehouses for trend analysis. However, such insights are rarely shared with the business – often only in the ‘post-mortem’ of an operational incident or for capacity predictions.

Moving down a level in the technology, we have ‘logging data’, which is closely related to (and often the same as) monitoring data. The difference is that it includes extra events not deemed important enough to raise monitoring alerts, such as interim stages, non-critical processes and progress-to-completion information.

Trace data is our last port of call for instrumentation data. This is a lower-level form of information and less accessible than logging and monitoring data, but it can still prove useful.
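To make the distinction a little more concrete, here is a minimal sketch (in Python, with invented event names rather than anything from a real platform) of how the same batch process might surface at each of the three levels:

```python
import logging

# Illustration only: the same payment-batch process seen at three levels of instrumentation.
logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("payment_batch")

# Monitoring level: events important enough to raise an alert on the monitoring platform.
log.critical("payment batch 4711 failed after 3 retries")   # would page the service team

# Logging level: interim stages and progress-to-completion information; no alert is raised.
log.info("payment batch 4711 started, 12,000 items queued")
log.info("payment batch 4711 at 50% completion")

# Trace level: low-level detail, usually only switched on when investigating a problem.
log.debug("item 8842: validate -> enrich -> post, 37 ms")
```

All three levels are typically being produced already; the question is who outside IT ever gets to see them.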

Our second source of useful data is audit trail information, which contains similar information to trace data but can provide a better alternative source of insight.

Audit trail data is not IT operational data, but it can be used for far more than forensics. Audit trails are often confined to reports that satisfy auditors and regulators, yet we can find real value in this data if we focus on the ‘norm’ rather than merely looking for exceptions.
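As a rough illustration of what ‘focusing on the norm’ could look like, the sketch below (Python with pandas; the column names and figures are invented) profiles how long cases typically spend between the steps recorded in an audit trail, rather than only hunting for exceptions:

```python
import pandas as pd

# Invented audit trail extract: which case reached which step, and when.
audit = pd.DataFrame({
    "case_id": ["A1", "A1", "A1", "A2", "A2", "A2"],
    "step":    ["received", "approved", "settled", "received", "approved", "settled"],
    "timestamp": pd.to_datetime([
        "2015-06-01 09:00", "2015-06-01 09:40", "2015-06-01 15:00",
        "2015-06-01 10:00", "2015-06-02 09:10", "2015-06-02 11:30",
    ]),
})

# Focus on the norm: how long does a case typically wait before reaching each step?
audit = audit.sort_values(["case_id", "timestamp"])
audit["wait"] = audit.groupby("case_id")["timestamp"].diff()
print(audit.groupby("step")["wait"].median())
# In this toy data, the approval step is where most of the elapsed time accumulates.
```

The auditors only care whether anything irregular happened; Operations should care that approval is routinely the slowest step.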

The data we can obtain from instrumentation and audit trail sources will be hugely beneficial in improving feedback mechanisms and future analysis for Operations.

So how do we bring these disparate sources of data together? Big Data analysis tools provide a highly effective way of correlating this type of information.

The potential for operational improvement is one area that has not been fully recognised by vendors operating in this space. The main focus has tended to be on fraud detection or the use of real-time analytics to discover inefficiencies.

Big Data analysis tools such as Lavastorm allow us to capture and analyse vast amounts of data of varying shapes and sizes. We can use this data to develop strategies that will result in significant business improvements, including the restructuring of processes by identifying and removing artificial or outdated constraints.
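This is not Lavastorm’s own interface, but as a generic sketch of the kind of correlation involved (pandas again, with hypothetical field names), we can join instrumentation events to audit trail records on a shared identifier and see where incidents and delays concentrate:

```python
import pandas as pd

# Hypothetical extracts: instrumentation events and audit trail entries sharing a case_id.
events = pd.DataFrame({
    "case_id": ["A1", "A2", "A2", "A3"],
    "event":   ["retry", "retry", "timeout", "retry"],
})
audit = pd.DataFrame({
    "case_id":        ["A1", "A2", "A3", "A4"],
    "channel":        ["branch", "online", "online", "branch"],
    "duration_hours": [6.0, 25.5, 4.0, 3.5],
})

# Count instrumentation incidents (retries, timeouts, ...) per case...
incidents = events.groupby("case_id").size().reset_index(name="incidents")

# ...then correlate with the audit trail: which channels attract the most incidents,
# and do the same channels also take longest end to end?
combined = audit.merge(incidents, on="case_id", how="left").fillna({"incidents": 0})
print(combined.groupby("channel").agg(
    cases=("case_id", "count"),
    incidents=("incidents", "sum"),
    avg_duration_hours=("duration_hours", "mean"),
))
```

The tooling matters less than the join: once IT’s instrumentation and the business’s audit trails share an identifier, ‘where does the time go?’ becomes a question we can answer with data rather than anecdote.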

We can analyse the data to identify specific problems within processes rather than relying on guesswork. Ultimately, better data analysis allows us to improve the prioritisation of key processes and to identify areas for cost saving – a major theme across the banking world right now!

To achieve this goal we need to think more creatively about the kind of data we may already have but do not use, or may not even be aware of. By using this data to provide insight and analysis, we could very well (as Archimedes said) have “a lever… to move the world”, or at least save a few hundred thousand dollars.

____________________________

This blog first appeared on Finextra.

____________________________
Photo: Torkilt Retveld under CC licence at Flickr
