By Russell Riding, Melbourne Water Team Leader, Automation Delivery, Service Delivery – Asset Management Services
Winneke treatment plant, located in the hills north-east of Melbourne, was commissioned in the early 1980s to provide a treated water source to meet the growing needs of Melbourne’s developing northern and western suburbs. The plant sources water from Sugarloaf Reservoir, whose water is harvested from the Yarra River and Maroondah Reservoir in the east. Once treated, the high-quality water is gravity-fed via a 2.1m-diameter transfer main into the transfer system network, where it’s used to supplement other supplies. This source of water is especially useful during years of low rainfall and can relieve stress on Melbourne’s main storages and catchments.
The source water for the treatment process is pumped from the Sugarloaf Reservoir directly into the head of the plant via the reservoir pumping station. The station was commissioned with three large TKL horizontal split casing centrifugal pumps coupled to 3.2MW high-voltage motors operated via slip-recovery high-voltage drives. In 2010, the 3.2MW motors and drives were replaced with 1.6MW motors and new technology, including variable speed drives (VSDs).
There were two drivers for this upgrade. Firstly, the motors and drives were at end of life, and secondly, the original design was not optimal for providing efficient pumping when the Sugarloaf Reservoir was near full.
As part of the upgrade, an additional three 375kW TKL horizontal split casing pumps were also installed. These were to provide more flexible operation of the system and greater efficiency when Sugarloaf Reservoir levels were high.
Collecting data for more efficient pumping
From 2010, Melbourne Water began to analyse data collected from its supervisory control and data acquisition (SCADA) system to measure the efficiency of the pumps. In 2016, the results showed that the new 1.6MW motors and VSDs were more efficient than those they had replaced. However, it was thought that improvements in pump selection and speed, and utilising all six pump sets could yield even greater efficiencies.
In late 2016, Melbourne Water started on a journey of automation and data analytics, incorporating machine learning (ML) and artificial intelligence (AI) into its operations. One of the first AI projects was targeted at the operation of Sugarloaf Reservoir pump station. The aim of the project was to automate the selection of pumps and the speed of each to optimise energy consumption. An added advantage of the project was the automatic bumpless transfer of failed pumps to standby pumps during plant operation. While this type of automation is common, it had never been implemented at this site.
Initially, options for the AI solution were provided by external vendors. These were found to be costly, with largely unknown success rates. They were also difficult to integrate into the pump station’s existing programmable logic controller (PLC). Many of the solutions were “black boxes” with the intellectual property retained by the vendor. Vendor solutions presented the additional problem that the operation of their solution could not be guaranteed at all times, with vendors often unable to provide support 24 hours a day, seven days a week. It was these factors that led Melbourne Water to consider implementing its own AI solution.
In early 2017, a senior data analyst extracted the data collected over the previous seven years from Melbourne Water’s OSIsoft PI historian using the Python programming language and other software tools. The data was arranged in an array with variables comprising the Sugarloaf Reservoir level (suction head), plant throughput (flow rate), pump availability and pump speed. From this array it was a relatively easy task to use Python to search the data for the most efficient combination of running pumps and speeds to deliver the required flow rate at the current reservoir level. The output of the Python program is kWh per megalitre, calculated on every flow setpoint change initiated by the operations team and whenever any of the other variables, such as reservoir level or pump availability, changes.
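A minimal sketch of this historical search, assuming hypothetical record fields and pump names (the article does not publish its data schema or tolerances):

```python
def best_combination(records, target_flow_mld, reservoir_level_m,
                     available_pumps, flow_tol=2.0, level_tol=0.5):
    """Return the historical pump/speed combination with the lowest
    specific energy (kWh per megalitre) for the requested operating
    point. Only combinations whose pumps are all currently available
    qualify; tolerances define a match on flow and reservoir level."""
    candidates = [
        r for r in records
        if abs(r["flow_mld"] - target_flow_mld) <= flow_tol
        and abs(r["level_m"] - reservoir_level_m) <= level_tol
        and set(r["pumps"]) <= available_pumps
    ]
    if not candidates:
        return None  # no historical match at this operating point
    # Specific energy: power draw (kW) over daily flow (ML/d), as kWh/ML.
    return min(candidates, key=lambda r: r["power_kw"] * 24 / r["flow_mld"])
```

Marking a pump unavailable simply removes every historical combination that used it, so the same call also yields the next most efficient line-up after a fault.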
Integrating the custom solution with existing systems
The greatest challenges of the project were the integration of the Python solution into the existing PLC and the change management aspects for the operational teams involved. The solution for the PLC integration ended up being quite simple: a freely available Python library allows values to be read from and written to the PLC processor memory area directly. This was used for the main transfer of data between the two systems.
The PLC writes real-time data into a set of memory registers for the Python program to read. These include all values required for the program to operate, such as the flow rate setpoint, pump availability and reservoir level. Once the Python program assesses that a change is required, it writes values into a separate set of registers in the same area for the PLC to act on. These include the pumps required to run and the speed at which they are to run.
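The register exchange can be sketched with a plain dict standing in for the shared memory area (the article does not name the PLC-access library it uses, and these register names are illustrative):

```python
def optimiser_step(shared, select):
    """One optimiser cycle: read the registers the PLC wrote, ask the
    selector for a pump line-up, then write the commands back for the
    PLC to act on. `select` is any function returning run flags and
    speed commands for the six pumps."""
    run_flags, speeds = select(
        shared["flow_setpoint_mld"],   # operations-set flow setpoint
        shared["level_m"],             # Sugarloaf Reservoir level
        shared["pump_available"],      # per-pump availability from the PLC
    )
    shared["pump_run_cmd"] = run_flags
    shared["pump_speed_pct"] = speeds
```

In the real system the dict reads and writes would be replaced by library calls against the PLC's memory registers; the cycle itself is unchanged.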
The PLC continues to provide the base control of the pumps and does not accept erroneous values from Python. Any pump failures or faults are dealt with by the PLC: pumps are shut down on critical alarms, and warnings are provided via SCADA for non-critical events. If a pump is shut down by the PLC due to a fault, the updated pump status is written to PLC memory, where the Python program reads it and selects another pump (the next most efficient) for operation. This cycle continues while the control system mode is set to “optimise”.
The health of the connection between the Python optimiser and the PLC is monitored via a counter maintained by the Python system. If the PLC detects that the counter is not increasing, it raises an alarm on SCADA and reverts the station to normal automatic control. The entire Python optimiser runs on a dedicated industrial PC connected only to the local control system network.
Access from outside the site is not possible, providing a high degree of protection against cyberattack.
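The heartbeat check on the optimiser counter can be sketched as follows (the stale-scan limit and names are illustrative, not taken from the site's configuration):

```python
class Watchdog:
    """PLC-side check on the optimiser's heartbeat counter. If the
    counter stops increasing for `limit` consecutive scans, the scan
    returns False, signalling the PLC to raise a SCADA alarm and
    revert the station to normal automatic control."""

    def __init__(self, limit=3):
        self.limit = limit
        self.last = None
        self.stale_scans = 0

    def scan(self, counter):
        if self.last is not None and counter == self.last:
            self.stale_scans += 1   # optimiser may have stalled
        else:
            self.stale_scans = 0    # counter moved: link is healthy
        self.last = counter
        return self.stale_scans < self.limit
```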
Making operations more efficient
Since the installation was completed in early 2018, the optimiser has reduced power consumption for the pump station by approximately 20 per cent, saving $200,000 per annum. This saving has continued into 2019 and monitoring continues via a dashboard created in the OSIsoft PI historian. The monitoring provides near real-time trends of the totalised power versus totalised flow for the year, compared with the best and worst examples recorded since 2010 for the same values. Further analysis shows that the greatest benefits are achieved by running multiple pumps when reservoir levels are high and fewer pumps as levels drop. This contrasts with the traditional approach of using just one pump.
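The dashboard comparison amounts to tracking cumulative specific energy over the year; a minimal sketch, assuming hypothetical totalised power (kWh) and flow (ML) samples:

```python
def specific_energy_kwh_per_ml(power_kwh_totals, flow_ml_totals):
    """Cumulative specific energy (kWh/ML) series, suitable for
    plotting the year to date against the best and worst historical
    years. Each input is a sequence of per-interval totals."""
    cum_power = cum_flow = 0.0
    series = []
    for power, flow in zip(power_kwh_totals, flow_ml_totals):
        cum_power += power
        cum_flow += flow
        series.append(cum_power / cum_flow if cum_flow else 0.0)
    return series
```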
When first implemented there was some scepticism that running five pumps could be more efficient than operating one pump to deliver the same volume. However, those who understand pump curves appreciate that efficiency is determined by, among other things, the design of the impeller. In this case, the original three pumps were designed to be efficient at low suction pressures, on the belief that the reservoir would be low most of the time. This turned out not to be the case, and the three new pumps were selected for their ability to operate efficiently at higher reservoir levels. Overall, it was the combination of these pumps with an assessment of optimum flow rates and suction pressures that delivered the most efficient method of operation.
Melbourne Water continues to roll out other AI solutions across its business to help it improve the efficiency of its operations.