Productivity Growth in the Age of Digitalisation

Our assets are established, operated and maintained by people, and their productivity is both the single greatest barrier to realising maximum value from this investment and the greatest opportunity to increase the value derived from it.

Productivity is enhanced by both the knowledge and the attitude of the people: knowledge of how to deliver the best outcomes in their role, and knowledge of asset performance, condition and risk. Attitude is a function of culture, and it is inspired by continual learning, by common-sense processes that lead to a calm working life, and by effective systems that give people the information they need to deliver a great result.

Digitalisation is the operational environment that provides detailed measurement and derived calculations to all personnel, improving their situational awareness so they understand both the overall integrated operation and its discrete assets, and how their work has improved or degraded performance, cost and risk across the organisation.

Aligning information, good engineering practices and a sound reliability business process lifted the OEE of a fleet of five draglines in a coal mine by over 15% across two years of sustained work by site engineers who had been trained in how to maximise their effectiveness.

As an investment, digitalisation must be aligned with a profound understanding of the people: their processes, their skills and what they require to deliver their roles. This understanding must apply at all levels of the organisation if the investment is to deliver a return.

In recent work for a power company, analysis based on machine learning techniques, covering asset age, condition and past maintenance history, identified over 26,000 individual issues across the network. Flooding this number of work orders into the maintenance department over 3-5 years would effectively block meaningful work through churn, poor planning and overwhelmed resources.

Organisational redesign is being implemented to streamline the end-to-end process, from identifying issues in the assets, through design and planning, to scheduling teams to undertake work and change out equipment. The redesign stops multiple people getting in each other’s way, removes the frustration of blocked decisions and absent feedback, and prevents what happens when good people are not aligned: the slow deterioration of assets over many years simply because the right work was never commissioned.

Manual consolidation of the work in the power company would take an experienced asset engineer over two months, and even then the result would be a confusing mass of work requests. The algorithms in an artificial intelligence approach took around two weeks to develop and less than two seconds to apply. As new data came to hand, the specification of work could be updated immediately, so the volatility that comes from incomplete knowledge, or from the simple need to cross-check, can be matched by a fast turnaround in packaging the work.

This capability is now ready for the generation of annual works plans. In the case described, the 26,000 problems were consolidated into 1,000 work packs, estimated to deliver productivity improvements of around 20%, which have been costed into the budget. These are further consolidated into 100 projects, so we now talk about renewing feeders with multiple assets rather than working on thousands of individual poles and switches. Project managers can oversee a project approach to asset renewal, managing multiple teams much as they would on a large capital development project.
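The two-level consolidation described here (individual issues into work packs, work packs into renewal projects) can be sketched as a simple grouping exercise. This is a minimal illustration only: the asset IDs, feeder names and grouping keys below are invented, and the real rules would use geography, outage windows and budget constraints.

```python
from collections import defaultdict

# Hypothetical issue records; in practice ~26,000 rows from the
# machine learning analysis. All names here are illustrative.
issues = [
    {"asset": "pole-0017", "feeder": "FDR-12", "defect": "rot at base"},
    {"asset": "switch-0042", "feeder": "FDR-12", "defect": "contact wear"},
    {"asset": "pole-0188", "feeder": "FDR-07", "defect": "cracked crossarm"},
]

# Step 1: consolidate individual issues into one work pack per feeder,
# so a crew addresses a feeder's defects in a single visit.
work_packs = defaultdict(list)
for issue in issues:
    work_packs[issue["feeder"]].append(issue)

# Step 2: roll work packs up into renewal projects. Here the project
# key is a trivial placeholder (the feeder prefix); a real grouping
# would consider region, outage windows and budget.
projects = defaultdict(list)
for feeder, pack in work_packs.items():
    region = feeder.split("-")[0]
    projects[region].append((feeder, pack))

print(len(issues), "issues ->", len(work_packs), "work packs ->",
      len(projects), "projects")
```

The same pattern, applied with richer grouping rules, is what turns 26,000 problems into roughly 1,000 work packs and 100 projects.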

Work across multiple sectors (principally power, water, mining and manufacturing) has yielded several key lessons:

  1. The data we handle will unlock productivity only if we understand how people deliver asset management, from top-level works planning through to detailed planning and scheduling of individual jobs for field crews
  2. In the asset management space, we can manage all our data using tools which handle gigabytes of data; these are large data sets but not yet Big Data, which starts at around 6 terabytes, beyond which SQL tools are no longer practical
  3. We need tools which utilise machine learning to get fast answers: processing the maintenance history of a large mine and determining the failure modes across all assets now takes less than 15 minutes for over 300,000 work orders. Similar results hold for condition monitoring data, which can exceed 2,000,000 records over a reasonable span of time
  4. The organisation must be designed to use this information, and people must be taught their roles, from strategic plans that extend and renew existing systems down to crews who deliver work on the ground. The business processes need to encompass annual works plans, monthly work control, weekly dispatch and the optimal use of labour
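The failure-mode determination in point 3 can be illustrated with a minimal keyword classifier over work order free text. Production systems use trained models rather than hand-written keyword lists; the failure modes, keywords and sample orders below are assumptions made purely for illustration.

```python
# Minimal sketch: assign a failure mode to each maintenance work order
# by keyword matching on its free-text description. The keyword map
# and sample orders are illustrative, not a real site's data.
FAILURE_MODE_KEYWORDS = {
    "bearing failure": ["bearing", "seized"],
    "seal leak": ["leak", "seal"],
    "electrical fault": ["trip", "overload", "short"],
}

def classify(description: str) -> str:
    """Return the first failure mode whose keywords match, else 'unclassified'."""
    text = description.lower()
    for mode, keywords in FAILURE_MODE_KEYWORDS.items():
        if any(k in text for k in keywords):
            return mode
    return "unclassified"

orders = [
    "Replace seized conveyor drive bearing",
    "Hydraulic seal leak on boom cylinder",
    "Motor tripped on overload during start",
]
print([classify(o) for o in orders])
# -> ['bearing failure', 'seal leak', 'electrical fault']
```

Even this crude approach scales linearly with order count, which is why hundreds of thousands of work orders can be processed in minutes.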

Considering the future: the world of smart meters has now arrived, giving us further insights with which to risk-profile our assets and forecast future performance. But we do not yet have the maths to understand this data effectively. At 5-minute sampling intervals we can barely stay within the range of our existing SQL tools, and the best this will offer is risk profiling and, in some cases, the life consumption of key assets.
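The scale pressure from 5-minute sampling is easy to quantify with back-of-envelope arithmetic. The meter count and retention period below are assumptions chosen only to show the order of magnitude involved:

```python
# Back-of-envelope row counts for smart meter data at 5-minute sampling.
samples_per_day = 24 * 60 // 5     # 288 readings per meter per day
meters = 500_000                   # hypothetical network size
years = 3                          # hypothetical retention period

rows = samples_per_day * meters * 365 * years
print(f"{rows:,} rows")            # on the order of 10^11 rows
```

Row counts in the hundreds of billions are what push a network-wide meter history to the edge of conventional SQL tooling.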

In a recent exercise with high-frequency SCADA data we learned that a 20 MW nuclear reactor was being starved of cooling water flow in hot summer months because air conditioning loads were drawing from the same circuit. This finding could readily be addressed to prevent partial loss of load on a few extremely hot days.

The exercise taught us that more work is needed on data normalisation, and that we must dig deeper to find the compelling story. The maths to process this kind of data for asset management decision making is still being developed, but the age-old problem remains: to tell a compelling story about the plant that engineers can then act on.

In another exercise, for a transmission company, we had to process real-time data from the protection systems at millisecond sampling intervals over a 4-second time frame. This work told us that in each fault case the protection system responded correctly. Our challenge now is to forecast future faults from this kind of data. For these applications we must learn to live with either sampling Big Data within our existing data environments or investing in the large data ecosystems that require specialist tools and skills.
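The per-event volume of millisecond-sampled protection data can likewise be sketched; the channel count per fault recorder is an assumption for illustration:

```python
# Samples captured per protection event at 1 kHz over a 4-second window.
sampling_hz = 1_000     # one sample per millisecond
window_s = 4            # 4-second capture window per fault
channels = 64           # hypothetical analogue + digital channel count

samples_per_event = sampling_hz * window_s * channels
print(samples_per_event)   # 256000 samples per recorded fault
```

A few thousand fault records at this density already approaches the limits of conventional tooling, which is why the choice between sampling and specialist Big Data ecosystems arises.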

The mathematicians can pull down and process the data for us, but as engineers we are now working on new techniques to use this data frugally and effectively. The prize is the intelligent system that speaks to us at very high sampling rates and tells us what is the right work to invest in for the future.