“You can’t manage what you can’t measure” – Designing IT for Operational Excellence
Like many of the areas we work in, proactive design remains a moving target. Carl Bones (Specialist in Architecture) examines recent IT and Operations trends. His intriguing article is the first of a three-part series entitled ‘The only way is excellence – How IT and Operations can achieve it together’.
The old management adage that “you can’t manage what you can’t measure” certainly strikes a chord with those tasked with improving efficiency in both Operations and Technology. While some genuinely important things, such as morale and goodwill, cannot be so easily quantified, in the field of Operational Excellence within the financial world there is a great deal that can.
In this blog I explore some of the ways the IT department can help the business’s operational managers make savings and achieve a rapid return on investment (ROI): by helping them measure their processes and costs, and by executing on change.
In this first blog of a three-part series I will be analysing proactive design. In particular I will be focusing on the question: when we are defining new functional architectures, are we challenging the requirements to include feedback loops, prioritisation and adequate management information systems?
An example of such a feedback mechanism came up recently when I was helping a client re-architect their client reporting solution. One principle vital to the approach was to move from a ‘push report’ model to a self-service ‘pull data’ model, on the basis that this would be more cost-effective for the client and also far more flexible to deploy. We still offered the old method for legacy reasons, but there was a concerted effort to inform users of the benefits of the new feature and to migrate them over to it.
My question to the client was: “how do we know if this is actually being taken up?” There were no user stories covering who was consuming their data, or when and how, and no future targets had been specified. Knowing what you want in advance is always tricky, of course, but the approach taken was simply to log usage information (who, when, what and how) for future analysis and correlation with volumes, fees, operational incidents and other operational factors. Those lagging behind on take-up could then be encouraged in various ways to adopt the new method, thereby making savings for our client. Small amounts of highly valuable feedback data like these provide essential insight into improving any process.
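To make the idea concrete, here is a minimal sketch of what such who/when/what/how logging and a simple take-up metric might look like. The function and field names (`log_usage`, `adoption_rate`, the ‘pull-data’ channel label) are illustrative assumptions, not the client’s actual implementation.

```python
from datetime import datetime, timezone

def log_usage(records, user, dataset, channel):
    """Append one structured usage record: who, when, what and how.
    'channel' distinguishes the new self-service route from the legacy one,
    e.g. "pull-data" (new) vs "push-report" (legacy) -- labels assumed here."""
    records.append({
        "who": user,
        "when": datetime.now(timezone.utc).isoformat(),
        "what": dataset,
        "how": channel,
    })

def adoption_rate(records):
    """Share of accesses made via the new self-service 'pull data' channel,
    the kind of take-up figure the client had no way of answering."""
    if not records:
        return 0.0
    pulls = sum(1 for r in records if r["how"] == "pull-data")
    return pulls / len(records)
```

With records like these in hand, the same data can later be correlated with volumes, fees and incidents, and users still on the legacy channel can be identified and encouraged to migrate.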
What other design approaches are there?
Most workflow solutions already allow the capture of key metrics, so why not suggest a use case such as “as an operations manager, I want a view of the key process steps in order to improve the process”? Those involved in user experience (UX) highlight the value of measuring the use of particular interface features, especially in early releases, in order to know whether the anticipated business value is actually being delivered.
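As a rough sketch of the operations-manager use case above, the step events that most workflow engines already emit could be summarised into per-step averages. The event shape and the function name `average_step_durations` are assumptions for illustration, not the API of any particular workflow product.

```python
from collections import defaultdict
from statistics import mean

def average_step_durations(events):
    """Summarise (process_id, step, seconds) events into a per-step
    average duration, giving an operations manager a simple view of
    where time is being spent in the process."""
    by_step = defaultdict(list)
    for _process_id, step, seconds in events:
        by_step[step].append(seconds)
    return {step: mean(durations) for step, durations in by_step.items()}
```

A view like this is often enough to show which step to attack first when trying to improve the process.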
If we are really forward-thinking, can we accept that not every trade, order or payment is equal? ‘Value’ here can mean dollars and cents, time criticality, customer goodwill, or whatever the business can settle upon and we can codify.
Have we designed into a solution the ability to measure and prioritise the highest value and, almost as importantly, is this under the operations manager’s control? The vision of an operational ‘mixing desk’ has long been a pipe dream for management, but could soon be possible.
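One way to picture the ‘mixing desk’ is as a set of manager-tunable weights blended into a single value score per work item. Everything below (the dimension names, `value_score`, `prioritise`) is a hypothetical sketch of the idea, not a description of any real system.

```python
def value_score(item, weights):
    """Blend an item's value dimensions (notional, urgency, goodwill, ...)
    using weights under the operations manager's control -- the 'sliders'
    on the operational mixing desk."""
    return sum(weights.get(dim, 0.0) * v for dim, v in item["value"].items())

def prioritise(items, weights):
    """Order work items highest blended value first."""
    return sorted(items, key=lambda i: value_score(i, weights), reverse=True)
```

Because the weights sit outside the code, the manager can re-balance priorities (say, from notional size towards time criticality at a cut-off) without an IT change.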
Once we have ‘designed in’ both measurement and prioritisation, we have unlocked a wealth of possibilities for the Operations area to improve its processes incrementally, in an agile way. In this way, IT and Operations can work even more closely together to achieve the required (yet often elusive) standards of Operational Excellence.
Photo: Torkilt Retveld under CC licence at Flickr