
Quantifying the operational process capacity of wastewater treatment plants

The process capacity of a wastewater treatment facility is a fundamental, though often poorly defined, property. The issue is critical to owners and designers who need to decide when and how to augment a plant to meet increasing load and effluent quality demands.


by Graham Gloag

Principal Discipline Engineer, Brisbane

22 August 2017

A wastewater treatment plant’s actual capacity is a complex function of physical constraints (e.g., tank volumes and equipment capacities), influent characteristics, operational factors (e.g., sludge age and recycle rates) and license constraints. 

The actual operating capacity is inevitably different from the original design capacity because of design and construction margins and variance between design and actual influent quality. Integrating these factors to quantify the actual capacity is a challenging problem and to date, a rational approach has been lacking.

In the past, the capacities of wastewater treatment facilities have been defined in terms of equivalent population (EP), hydraulic load, or mass load of carbon (BOD/COD). While these designations are useful for planning purposes, they don't consider the design assumptions, influent variability, operating conditions, or license standards.

Many of the process parameters that define a plant's capacity are inherently variable and generally unpredictable. Some parameters, such as flowrate and COD concentration, may follow relatively predictable diurnal patterns but appear random from day to day, and may experience considerable short-term fluctuations owing to factors such as rainfall.

To accommodate the uncertainty, designers have typically resorted to various heuristic rules which, if followed, will ensure the plant works under all but the most unusual circumstances.  While this approach may provide a high degree of robustness, considerable capital is invested and maintained for an extremely rare event.  

While it is relatively easy to determine the capacity impact of a single parameter, making a succinct statement becomes complex when multiple parameters are considered or the parameters are variable. In this instance, the capacity of the plant can't be isolated from its performance. What's more, by using the common heuristic rules to account for uncertainty, it is inevitable that the actual capacity of a plant will be significantly different from its nominal or design capacity.

Below is a discussion of a deterministic method for establishing the capacity of a plant in relation to its performance and the variability it experiences.

Failure Frequency Analysis (FFA)

For most plants, two distinct capacities can be identified:

  1. Hydraulic capacity: the plant's ability to pass a given flowrate. It is determined by the head loss through the various treatment stages and can be defined relatively simply and well.
  2. Process capacity: the load at which the plant will just begin to fail its license. This accounts for the inherent variability of the influent and the process (it is convenient to express this capacity as the 50th-percentile flowrate at the maximum load).

The deterministic method uses the plant's historical database to define daily average influent conditions, the operating regime and process behavioral characteristics such as sludge settleability. The installed infrastructure is then challenged day by day with a dynamic model driven by the entire historical data set, indicating how the plant will respond.

The approach allows the impact of catchment growth on the plant to be assessed while retaining the inherent variability of the historical data.  The output is an expected failure frequency (i.e., the percent of days a particular parameter would be out of specification).  

To undertake a Failure Frequency Analysis (FFA), the simulation is run over the period for which measured data are available, typically three to four years. For each day, the model predicts the conditions in all major process units. At the end of each day, the clarifier's capacity is determined from a steady-state mass flux analysis, based on the measured sludge settleability for that day, the influent flowrate, and the Mixed Liquor Suspended Solids (MLSS) concentration.
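The daily clarifier check can be sketched as a simple steady-state mass flux (state point) calculation. The snippet below is illustrative only: the Vesilind settling parameters (`V0`, `K`), flows, area and MLSS are assumed values standing in for the measured daily settleability data, and the concentration range scanned for the limiting flux is arbitrary.

```python
import numpy as np

# Assumed Vesilind settling parameters; in practice these would be
# derived from the measured daily sludge settleability (e.g. SSVI).
V0, K = 6.0, 0.35  # m/h, m3/kg

def settling_velocity(x):
    """Zone settling velocity (m/h) at solids concentration x (kg/m3)."""
    return V0 * np.exp(-K * x)

def clarifier_check(q_in, q_ras, area, mlss):
    """Steady-state mass flux check for one simulated day.

    q_in, q_ras : influent and return activated sludge flows (m3/h)
    area        : total clarifier surface area (m2)
    mlss        : bioreactor MLSS (kg/m3)
    Returns flags for the two classic failure modes.
    """
    overflow_rate = q_in / area                  # m/h
    underflow_rate = q_ras / area                # m/h
    applied_flux = (q_in + q_ras) * mlss / area  # kg/m2/h

    # Thickening limit: minimum total flux (gravity settling plus bulk
    # underflow transport) over concentrations at or above the MLSS.
    x = np.linspace(mlss, 15.0, 500)
    total_flux = x * settling_velocity(x) + x * underflow_rate
    g_limit = total_flux.min()

    return {
        # Clarification failure: overflow rate exceeds the settling
        # velocity at the feed (MLSS) concentration.
        "clarification_failure": overflow_rate > settling_velocity(mlss),
        # Thickening failure: applied solids flux exceeds the limiting flux.
        "thickening_failure": applied_flux > g_limit,
    }
```

For example, `clarifier_check(2000, 1000, 1200, 3.5)` returns both flags `False`, i.e. a safe day at those assumed conditions.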

Because the assessment is a steady-state determination, a "failure" is recorded immediately when the safe operating conditions are exceeded. If a clarifier failure condition is recorded, the analysis is repeated with sufficient flow bypassed around the bioreactor/clarifier so that the safe clarifier operating conditions are just met; this represents the maximum "safe" flow of the plant under the current operating conditions.
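The search for the maximum "safe" flow lends itself to a simple bisection. The sketch below is generic: `is_safe` stands in for whatever steady-state clarifier check the model applies, and the convergence tolerance is an assumed value.

```python
def max_safe_flow(is_safe, q_demand, q_min=0.0, tol=1.0):
    """Bisection search for the largest flow the clarifier can safely
    pass; any excess (q_demand - q_safe) is bypassed.

    is_safe  : callable taking a flow (m3/h) and returning True if the
               steady-state clarifier check passes at that flow.
    q_demand : the flow arriving at the plant that day (m3/h).
    Returns (safe flow, bypass flow).
    """
    if is_safe(q_demand):
        return q_demand, 0.0  # no bypass needed on this day

    lo, hi = q_min, q_demand  # is_safe(lo) is assumed True
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_safe(mid):
            lo = mid  # still safe: push the lower bound up
        else:
            hi = mid  # failed: pull the upper bound down
    return lo, q_demand - lo
```

With a toy check that fails above 1,500 m3/h, a 2,400 m3/h day resolves to roughly 1,500 m3/h treated and 900 m3/h bypassed.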

Effluent quality is also estimated, including the impact of bypass events, if any. Plant failure is assessed based on compliance with the plant license's quality requirements for suspended solids, nitrogen and phosphorus, including rolling percentiles and daily maxima. The plant's daily aeration capacity can also be determined, with short-term (e.g., hourly or two-hourly) values estimated using diurnal peaking factors.
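A compliance check against rolling percentiles and daily maxima might look like the following sketch. The 365-day window, the parameter names and the limit values are illustrative assumptions, not a real license.

```python
import pandas as pd

def license_failures(effluent, limits):
    """Flag days on which predicted effluent quality breaches the license.

    effluent : DataFrame of daily predicted concentrations (mg/L),
               one column per parameter (e.g. 'TSS', 'TN', 'TP').
    limits   : dict mapping parameter -> (rolling 50th-percentile limit,
               daily maximum) -- illustrative values only.
    Returns a boolean DataFrame, True on out-of-specification days.
    """
    fails = pd.DataFrame(index=effluent.index)
    for param, (p50_limit, daily_max) in limits.items():
        # Rolling annual median (50th percentile); require a minimum
        # history before the percentile test applies.
        rolling_p50 = effluent[param].rolling(365, min_periods=30).median()
        fails[param] = (rolling_p50 > p50_limit) | (effluent[param] > daily_max)
    return fails
```

The per-day flags feed directly into the failure-frequency count: the percent of `True` days per parameter is the expected failure frequency.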

The use of a steady-state clarifier assessment is considered justified because the simulated conditions represent a daily time step. This means the clarifier may experience short-term periods of failure (typically thickening failure) that are not registered at a daily time step. This is typical of many plants, which experience short-term thickening failure during the daily peak without detrimental effect.

Determining the mode of failure helps to debottleneck the plant. In these cases, minor upgrades can be undertaken to improve capacity or performance at lower cost than would otherwise be possible.

Influent Data

A feature of the deterministic method is the use of a plant's historical data. However, daily influent composition data are rarely available and consequently must typically be inferred from weekly data in a statistically valid manner.

A goodness-of-fit statistical test (Press et al., 1989) can be used to determine whether the measured data are normally or, more commonly, log-normally distributed. Once the distribution is known, it can be used to randomly select data to "fill in" the blanks while maintaining the mean, standard deviation and range of the measured data set.
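As a sketch of this gap-filling step, the snippet below uses a Kolmogorov-Smirnov statistic (a stand-in for the chi-square goodness-of-fit test of Press et al.) to choose between normal and log-normal fits, then samples the better fit to replace missing days.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def fill_missing_lognormal(daily):
    """Fill gaps (NaNs) in a sparse daily series by sampling from a
    distribution fitted to the measured values, so the filled series
    preserves the mean and spread of the measured data.
    """
    measured = daily[~np.isnan(daily)]

    # Fit both candidate distributions and keep the better KS statistic.
    norm_ks = stats.kstest(measured, "norm", args=stats.norm.fit(measured)).statistic
    shape, loc, scale = stats.lognorm.fit(measured, floc=0)
    logn_ks = stats.kstest(measured, "lognorm", args=(shape, loc, scale)).statistic

    if logn_ks < norm_ks:
        dist = stats.lognorm(shape, loc, scale)
    else:
        dist = stats.norm(*stats.norm.fit(measured))

    # Draw random values for the missing days only.
    filled = daily.copy()
    gaps = np.isnan(daily)
    filled[gaps] = dist.rvs(gaps.sum(), random_state=rng)
    return filled
```

Any distribution-selection test with a comparable accept/reject criterion could be substituted; the essential point is that the fill values come from the fitted measured-data distribution rather than interpolation.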

The COD mass load distribution is used as the basis, with the distributions of other parameters determined from the distribution of their ratios to the COD.  This ensures consistency as the validity of important ratios such as COD:TKN and COD:TP are maintained.  
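The ratio-based generation of the other parameters can be sketched as follows. Resampling the measured ratios is one simple way to preserve the COD:TKN and COD:TP relationships; the parameter names and ratio values in the usage example are assumed.

```python
import numpy as np

rng = np.random.default_rng(3)

def correlated_loads(cod_load, ratio_samples):
    """Generate daily mass loads for other parameters from the COD load.

    cod_load      : filled daily COD mass load series (kg/d)
    ratio_samples : dict mapping parameter name -> array of measured
                    ratios to COD (e.g. TKN:COD, TP:COD)
    Each parameter is the COD load times a ratio resampled from the
    measured ratio population, so important ratios stay consistent.
    """
    out = {"COD": cod_load}
    for name, ratios in ratio_samples.items():
        drawn = rng.choice(ratios, size=cod_load.size)  # resample measured ratios
        out[name] = cod_load * drawn
    return out
```

Because each day's TKN and TP are derived from that day's COD, a high-COD day automatically carries proportionally high nutrient loads, which is the consistency the method requires.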

Application Examples

The failure frequency approach has been applied with good accuracy for several water authorities in Queensland, Australia.

To obtain accurate predictions, it is essential to calibrate the model to the measured data. Given that the model is calibrated to several years of data, it is unrealistic to expect high accuracy on every day. However, it is essential for the predicted output to show trends, ranges and averages similar to the measured data sets.

Once calibrated, the influent mass loads are multiplied by a constant to reflect catchment growth, while maintaining the inherent load variability. The model is then re-run for the entire data period with the number and type of failures recorded each day.  This information can then be plotted on a simple failure frequency plot for each parameter under consideration. 
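The growth-scaling loop can be sketched as follows; `run_model` is a stand-in for the calibrated dynamic plant model, and the capacity threshold in the usage example is an arbitrary illustration.

```python
import numpy as np

def failure_frequency_curve(run_model, base_loads, growth_factors):
    """Build a failure-frequency curve over catchment growth scenarios.

    run_model      : callable returning a boolean per-day failure array
                     for a given daily load series (stand-in for the
                     calibrated dynamic model).
    base_loads     : historical daily mass load series (kg/d).
    growth_factors : multipliers representing catchment growth.
    Returns a dict mapping growth factor -> percent of days failing.
    """
    curve = {}
    for g in growth_factors:
        # Scaling by a constant preserves the inherent daily variability.
        failures = run_model(base_loads * g)
        curve[g] = 100.0 * np.mean(failures)  # % of days out of spec
    return curve
```

With a toy model that fails whenever the load exceeds 1,200 kg/d, four historical days of [900, 1000, 1100, 1300] give 25% failure at current loads and 100% at 1.5x growth, i.e. two points on the failure frequency plot.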

A central feature of the procedure is the ability to predict bypass events (i.e., events that would generate a clarifier failure condition). During a bypass, the model searches for the maximum flow the clarifier can safely accommodate under its current operating conditions and then bypasses any excess flow.  

Since the bypass typically receives minimal treatment, license failure conditions are often associated with bypass events. A good measure of the procedure's validity is its ability to predict the frequency and size of bypass events. Unfortunately, few plants actively record the magnitude of bypass events; however, event durations are available for one plant and can be compared against the predicted daily bypass flowrate.

A sound method for analyzing failure frequency?

A fundamental feature of the method is its ability to combine physical constraints, influent characteristics, operational factors, and license compliance in a simple rational way.  These factors are critical to both owners and designers who need to decide when and how to augment a plant to meet increasing load and effluent quality demands.

The method provides a means of combining highly variable, disparate historical data into a simple failure frequency curve, while at the same time identifying bottlenecks which may lead to process failure. The frequency with which combinations of adverse conditions occur is based on the historical record rather than arbitrary choice.

The technique enables a plant's operating performance and capacity to be enhanced by tuning to achieve optimal process configuration, optimal load distribution between parallel trains and optimal control of bypassing during peak flow events. It also facilitates low-cost capacity increases through minor investment in debottlenecking. This approach has the potential to defer major capital expenditure through increased knowledge of the plant's expected behavior and is considered to have wide application.
