Implementation in the Daily Routine of Industries

In cooperation with BASF SE, we demonstrate the benefit of this approach for modeling, simulation, and optimization in chemical production plants. The method is implemented in a user-friendly way for the daily routine of industry.

Grey Box Model for Complete Process Optimization

The virtualization of chemical manufacturing plants in a model and the subsequent model-based optimization are key steps towards innovation, as well as towards efficiency and quality improvements. The success of this approach crucially depends on the reliability of the models. In a bilateral cooperation project, ITWM and BASF SE are developing hybrid-modeling methods that integrate physical-chemical know-how ("white") with data-driven approaches ("black"). The methods are implemented in the BASF flowsheet simulator so that they are available for the everyday work of the process engineers.

A typical chemical production process includes a chemical reactor for material transformations; the reactor output is fed into a purification process, for example, distillation. To model this process in a flowsheet simulator, not only is knowledge of the chemical reactions required, but also the thermodynamics describing the distillation must be known.

The situation where knowledge of the stoichiometries and reaction constants is incomplete is quite typical in industrial practice, whereas the distillation processes are well known. Besides this physical White Box knowledge, historical process data is available for a variety of measured operating points.

 

First Step: Short-Cut Model

The goal of the project is to generate information from the process data that can be used to close the gaps in the physical models. To this end, the first step is to replace the reactor with a simplified short-cut model, which contains – together with the purification model – all existing physical equations. The reconciliation performed using this model enables predictions that are as close as possible to the measurements observed at the real plant.
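To illustrate the idea of a short-cut model, the following sketch reduces a reactor to a single lumped reaction characterized only by a conversion parameter. The reaction, flows, and conversion value are hypothetical, chosen purely for illustration:

```python
import numpy as np

def shortcut_reactor(feed, conversion, stoichiometry):
    """Short-cut reactor model: one lumped reaction, parameterized by
    the fractional conversion of the key component (index 0).

    feed          : molar flows of the components [mol/s]
    stoichiometry : stoichiometric coefficients (negative = consumed)
    """
    # extent of reaction implied by the conversion of the key component
    extent = conversion * feed[0] / abs(stoichiometry[0])
    return feed + extent * np.asarray(stoichiometry, dtype=float)

# hypothetical reaction A + B -> C with 80 % conversion of A
feed = np.array([10.0, 12.0, 0.0])            # mol/s of A, B, C
out = shortcut_reactor(feed, 0.8, [-1, -1, 1])
```

The single conversion parameter stands in for the unknown stoichiometries and reaction constants; the purification units downstream keep their full physical equations.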

© ITWM

Schematic shows a flowsheet with a reaction unit for material transformation and two distillation columns for purification; the circles symbolize measuring points (T: Temperature reading, F: Flow measurement, Q: Measured cooling capacity)

A reconciliation consists of minimizing a sum of squares of the differences between model predictions and observed measurement points. Each term is weighted with the inverse variance of the measurement point. Since the variances are often vague and the adjustments to the various measurement variables are conflicting, this step includes not just one, but a set of reconciliation problems with optional user interaction. The result of this step is reliable soft sensor data about the inputs and outputs of the reactor.
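A minimal sketch of such a variance-weighted reconciliation, with hypothetical numbers: two flow measurements around a unit are reconciled subject to the mass balance F_in = F_out, and the more precise measurement pulls the result harder:

```python
import numpy as np
from scipy.optimize import least_squares

# hypothetical measurements: feed F_in and product F_out [kg/h],
# which the mass balance F_in = F_out says should be equal
y_meas = np.array([10.3, 9.6])        # measured flows
sigma  = np.array([0.2, 0.4])         # measurement standard deviations

def residuals_balanced(f):
    # enforce the balance by using one common flow variable f[0];
    # each residual is weighted with the inverse standard deviation,
    # i.e. each squared term with the inverse variance
    x = np.array([f[0], f[0]])
    return (x - y_meas) / sigma

sol = least_squares(residuals_balanced, x0=[10.0])
f_rec = sol.x[0]   # reconciled flow, closer to the precise measurement
```

In this toy case the solution is the inverse-variance weighted mean of the two measurements (about 10.16 kg/h); in the real setting the balance equations come from the full short-cut plus purification model.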

 

Second Step: Identification of a Model

The second step consists of the identification of a model for the insufficiently modeled apparatus – in this case, the reactor – on the basis of the soft sensor data. Various methods are available, for example, regression methods with predefined functions, but also artificial neural networks with backpropagation training. Quantitative statements about the confidence intervals and prediction errors are possible using statistical methods. In addition, the parameters for which only unreliable estimates exist can be separated from those that are identifiable with high accuracy.
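A sketch of the regression variant with predefined functions, using hypothetical soft-sensor data (inlet temperature vs. conversion): the parameter covariance matrix yields standard errors, which is one way to separate reliably identified parameters from unreliable ones:

```python
import numpy as np

# hypothetical soft-sensor data: reactor inlet temperature [K] vs. conversion
T = np.array([340.0, 350.0, 360.0, 370.0, 380.0])
X = np.array([0.52, 0.61, 0.68, 0.77, 0.83])

# regression with predefined functions: conversion ~ a + b*T
A = np.vstack([np.ones_like(T), T]).T
coef, res, *_ = np.linalg.lstsq(A, X, rcond=None)

# parameter covariance -> standard errors; a large standard error
# relative to the estimate flags a poorly identifiable parameter
dof = len(X) - A.shape[1]
s2 = res[0] / dof                        # residual variance
cov = s2 * np.linalg.inv(A.T @ A)
stderr = np.sqrt(np.diag(cov))
```

The same covariance-based reasoning carries over, approximately, to nonlinear models such as neural networks, though there the analysis is more involved.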

 

Third Step: Generating the Total Model

In a third step, the data-driven model from step 2 is inserted into the flowsheet to generate a complete model of the process. This is by no means a trivial step for several reasons: Besides ensuring the solvability of the whole system, the extrapolability for a complete process optimization must also be assured.
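The composition can be pictured as follows: a fitted reactor surrogate and a white-box purification model are chained into one total model, which is then solved numerically. Both submodels and all numbers here are hypothetical placeholders:

```python
import numpy as np
from scipy.optimize import brentq

def reactor_surrogate(T):
    """Data-driven submodel (hypothetical fit): conversion vs. temperature."""
    return 1.0 - np.exp(-0.01 * (T - 300.0))

def column_recovery(conversion):
    """White-box purification submodel (sketch): product recovery."""
    return 0.95 * conversion

def total_model(T):
    # grey-box total model: black-box reactor + white-box column
    return column_recovery(reactor_surrogate(T))

# solvability check: find the temperature giving 50 % overall yield,
# bracketing the search inside the surrogate's training range so the
# data-driven part is not extrapolated
T_star = brentq(lambda T: total_model(T) - 0.5, 300.0, 400.0)
```

Restricting the solver to the surrogate's training range is one simple guard for extrapolability; in practice, validity regions of the data-driven submodel must be handled more carefully.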

This is carried out using optimization processes that not only account for the multi-criteria nature of the problem, but are also able to deal with uncertainties. These methods include robust and stochastic optimization.

Workflow
© ITWM/iStockphoto

Learning from data: typical workflow with modeling, simulation, and optimization, using data for models that are close to reality.

Robust Optimization: Best Possible Design for the Worst Possible Scenario

The generally continuous uncertainty of the model parameters is described by a discrete selection of scenarios. The scenarios are selected to represent the uncertainties as well as possible; strategies from statistical experimental design are available, as are randomized approaches.
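One common stratified strategy for such a scenario selection is Latin hypercube sampling, sketched here for two hypothetical uncertain parameters with assumed bounds:

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n_scenarios, lower, upper):
    """Latin hypercube sampling: each parameter range is split into
    n_scenarios strata, and each stratum is hit exactly once."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    d = len(lower)
    u = np.empty((n_scenarios, d))
    for j in range(d):
        # one random point per stratum, strata in random order
        u[:, j] = (rng.permutation(n_scenarios) + rng.random(n_scenarios)) / n_scenarios
    return lower + u * (upper - lower)

# hypothetical uncertain parameters: rate constant and activation energy
scenarios = latin_hypercube(8, lower=[0.8, 45.0], upper=[1.2, 55.0])
```

Compared to purely random sampling, the stratification spreads a small scenario budget more evenly over the uncertainty range.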

The impact of these scenarios on the target function is calculated and quantified by means of sensitivity measures. The aim of robust optimization is the best possible design for the worst possible scenario. This optimization strategy is performed on a multi-criteria basis, taking into account the many competing objectives. Furthermore, the above-mentioned sensitivity measures can be defined as target functions – in addition to those already provided – and minimized (for minimized sensitivity to uncertainties) or maximized (for maximized sensitivity, for example, when experimental design is important). In this way, it is possible to study the cost of a more or less sensitive process design relative to other business targets.
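The min-max principle behind "best possible design for the worst possible scenario" can be sketched in a few lines, here for a single design variable and a hypothetical cost function:

```python
from scipy.optimize import minimize_scalar

# hypothetical cost of a design x under an uncertain parameter p
def cost(x, p):
    return (x - p) ** 2 + 0.1 * x

scenarios = [0.8, 1.0, 1.3]            # discrete uncertainty scenarios

def worst_case(x):
    # cost under the worst possible scenario for design x
    return max(cost(x, p) for p in scenarios)

# best possible design for the worst possible scenario (min-max)
res = minimize_scalar(worst_case, bounds=(0.0, 2.0), method="bounded")
x_robust = res.x
```

The robust design ends up between the extreme scenarios, balancing the cost against both; in the multi-criteria setting described above, several such worst-case objectives are traded off against each other.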

Practical experience shows that, in many cases, a relatively small adjustment to the process design is sufficient to achieve a significant improvement in robustness. If the uncertainty in the model's predictions is still too great after building the Grey Box model, a model-based, multi-criteria experimental design is developed in which the information gained from the experiment is maximized while meeting other business targets to the fullest extent possible.

An important next step is the fine-tuning of the data-based methods to facilitate the integration of constraints such as balance equations. This affects, for example, the topology of the artificial neural network being used. It is also interesting to see to what extent the White Box environment can reduce the confidence intervals that result from the data-driven model identification. This is especially important for process optimization under uncertain model parameters.