Document 3
GraphML Project Group, 2017. GraphML specification [WWW Document]. URL
http://graphml.graphdrawing.org/specification/dtd.html (accessed 2.10.22).
Since the structure of the graphs in GNNs is learned by aggregating the neighbors,
it is important that the graphs have no missing links. As mentioned before, this is
not always the case in the database from chapter 2.2 due to inconsistencies in
real P&ID drawings and their DEXPI exports. For this reason, a new dataset of
laboratory plants as well as industrial distillation plants is used. The dataset
contains 13 P&ID graphs with a sufficiently high density of cross-links to
represent the original P&IDs well. These 13 P&ID graphs with 2020 nodes and 2283
edges are used to train the GNNs in the following. Within the 13 P&IDs, there are
a total of 47 different equipment classes. In this context, a feasibility analysis
must be performed to investigate the potential of classifying P&ID equipment by
GNNs and to identify further challenges. Since different components in a P&ID can
fulfill the same process engineering function, and since the use of meaningful
functionalities at the existing positions is to be examined for a consistency
check, it is recommended to divide the components within the P&ID into meaningful
classes. This has the additional advantage that even single components can be used
as training data influencing the classification. Hence, the 47 different
components in the training dataset are sorted into 9 superordinate classes, which
are shown in Table 1 below. The number of components per class is also shown in
relation to the training data.
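The grouping of the 47 fine-grained equipment classes into superordinate classes can be sketched as a simple lookup table. The class names below are illustrative placeholders, not the paper's exact taxonomy:

```python
# Sketch: map fine-grained DEXPI equipment classes onto superordinate classes.
# The class names are illustrative assumptions, not the paper's exact mapping.
SUPERCLASS_MAP = {
    "GateValve": "Piping equipment",
    "CheckValve": "Check valves",
    "PlateHeatExchanger": "Heat exchangers",
    "DistillationColumn": "Separation units",
}

def to_superclass(dexpi_class: str) -> str:
    """Return the superordinate class for a fine-grained equipment class."""
    return SUPERCLASS_MAP.get(dexpi_class, "Other")
```

With such a table, every labeled component contributes training signal to its superordinate class, even if its fine-grained class occurs only once in the dataset.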
Fig. 2. Structure of the Python DEXPI-2-graph converter
Volume 4, September 2022, 100038
Recurrent Neural Networks
https://doi.org/10.1016/j.dche.2022.100038
In the second step, the structure of the node sequences is learned with the aid of
recurrent neural networks (RNNs). For this purpose, four different approaches are
used: a simple RNN (Chen, 2021), a bidirectional RNN (BRNN) (Schuster and Paliwal,
1997), gated recurrent units (GRU) (Cho et al., 2014), and long short-term memory
(LSTM) (Hochreiter and Schmidhuber, 1997). An RNN, as shown in Fig. 5, has an
input layer, a hidden layer, and an output layer. The RNN represents a replication
of a neural network with the same dimensions. For each replication, a state is
transferred to the next layer. In this way it is possible to learn local
relationships in the sequences. However, RNNs have a disadvantage that should not
be underestimated: training the weights with more distant information is
difficult, since the errors explode or vanish during backpropagation (Chen, 2021).
To get around this, there are further possibilities such as the use of BRNNs,
which take future information into account in addition to previous information to
increase the accuracy (Schuster and Paliwal, 1997). Another option is the use of
GRUs or LSTMs. Both consist of single cells and use cell states and gates to
decide which information is processed and which is forgotten. This gives the
network the behavior of a short-term memory. GRUs consist of a reset and an update
gate (Chen, 2021; Cho et al., 2014). An LSTM uses an input, an output, and a
forget gate. The gates control the information flow and decide which information
is necessary to make a prediction. A kind of short-term memory is created, which
also gives the network its name (Chen, 2021; Hochreiter and Schmidhuber, 1997).
Both the GRU and the LSTM are used in state-of-the-art deep learning applications.
GRUs have fewer tensor operations, which leads to faster training. For this
reason, both are used for modeling in this paper to investigate which of them
leads to better results for predicting P&ID equipment based on sequential data.
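The gating mechanism of a GRU described above can be illustrated with a minimal NumPy sketch of a single cell step. This is a simplified illustration without bias terms, not the keras implementation used in this work:

```python
import numpy as np

def gru_cell_step(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: the update gate z and reset gate r control how much of
    the previous state h_prev is kept versus overwritten (bias terms omitted)."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h_prev)                # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)                # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))    # candidate state
    return (1 - z) * h_prev + z * h_tilde            # gated interpolation
```

The gated interpolation in the last line is what lets the GRU carry information over several sequence steps without the gradient vanishing as quickly as in a simple RNN.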
Proteus XML, 2017. Proteus schema for P&ID exchange [WWW Document]. URL
https://github.com/ProteusXML/proteusxml (accessed 5.18.22).
When parsing the DEXPI files, it becomes apparent that the level of detail in the
P&ID description varies greatly depending on the user. Thus, different numbers of
attributes in the XML files are filled in. At the same time, the use of the
attributes leaves some room for interpretation, such that synonymous information
is mapped to different attributes. This requires a certain degree of robustness,
which has been considered in the DEXPI-2-graph implementation: several attributes
(e.g. design temperature, pressure, material, …) are deliberately searched until
the desired information for the respective node is found.
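This attribute-probing strategy can be sketched with Python's standard xml.etree.ElementTree. The element and attribute names below are illustrative assumptions, not the exact Proteus/DEXPI schema:

```python
import xml.etree.ElementTree as ET

# Candidate attribute names that users have mapped the same information to.
# These names are assumptions for illustration, not the real DEXPI vocabulary.
CANDIDATES = ["DesignTemperature", "designTemperature", "DesignTemp"]

def find_attribute(node: ET.Element, candidates=CANDIDATES):
    """Probe GenericAttribute children until one of the candidate names
    matches; return its value, or None if the information is missing."""
    for attr in node.iter("GenericAttribute"):
        if attr.get("Name") in candidates:
            return attr.get("Value")
    return None
```

Returning None instead of raising keeps the converter robust against sparsely filled exports: the graph node is still created, just without that attribute.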
As the last investigated variant, the graph attention approach (8) is applied,
where additionally a weight factor is learned for each neighboring node.
As mentioned before, the information from the P&ID is interpreted in the form of a
graph. This makes it possible to store the relationships between components and the
topology in an unambiguous and machine-interpretable way. However, to learn the
graph structure as a whole and to solve tasks such as node classification, edge
classification, or link prediction, machine learning methods of graph analysis are
required that can deal with non-Euclidean data structures such as graphs. The
modeling of graph structures is particularly interesting in the field of P&ID
engineering. By learning connections (e.g. piping, signal lines, …) or components
(e.g. valves, equipment, …) based on their neighborhood with the help of AI, it
will be possible in the future to perform consistency checks in P&IDs and detect
errors in them. This could reduce the amount of time for drawing P&IDs, which will
shorten the time for developing a plant's documentation. To achieve this goal,
Graph Neural Networks (GNNs) can be used for modeling, which have become
increasingly important in recent years (Zhou et al., 2020). A GNN is based on a
message passing algorithm that aggregates arbitrary information from the
neighborhood of a node, thereby convolving the graph (Hamilton, 2020). In general,
the message passing of a GNN is analogous to the Weisfeiler-Lehman algorithm for
testing the isomorphism of two graphs (Weisfeiler and Leman, 1968), which was
introduced in 1968 and in which information is aggregated from the neighborhood of
each node.
Results – GNN node classification
The workflow of the node classification is shown in Fig. 8. First, all nodes of all
P&ID graphs in the dataset are divided into a training dataset (80%) and a test
dataset (20%) using a mask. The neural network is then provided with information
about the topology of the graph as well as attributes of the nodes and edges, e.g.
equipment class, connection type, etc. From this information, the network generates
an embedding for each node and the predicted node class. This is compared with the
real node class, and the error is reduced via backpropagation. After the training
is finished, the trained network can be used for node classification of unseen
data (nodes).
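The 80/20 node masking described above can be sketched as follows; a minimal NumPy illustration, not the implementation used in this work:

```python
import numpy as np

def train_test_masks(num_nodes: int, train_frac: float = 0.8, seed: int = 0):
    """Random boolean masks splitting the graph nodes into a training set
    (train_frac of the nodes) and a disjoint test set (the rest)."""
    rng = np.random.default_rng(seed)
    train_mask = np.zeros(num_nodes, dtype=bool)
    n_train = int(train_frac * num_nodes)
    train_mask[rng.choice(num_nodes, size=n_train, replace=False)] = True
    return train_mask, ~train_mask
```

Because the masks index nodes rather than graphs, the GNN still sees the full topology during training; only the loss is restricted to the training nodes.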
DEXPI is a machine-readable P&ID exchange format under development by the DEXPI
Initiative. The initiative consists of owner operators, engineering, procurement &
construction companies, software vendors, and research institutions. The latest
data model and the associated DEXPI specification 1.3 (Theißen and Wiedau, 2021)
were published in 2021. Within the specification, different international standards
for the description of engineering-relevant data for P&IDs are combined (e.g. ISO
15926 (International Organization for Standardization, 2013), ISO 10628
(International Organization for Standardization, 2012a), IEC 62424 (International
Electrotechnical Commission, 2016), ISO 10209 (International Organization for
Standardization, 2012b)). In particular, these include plant breakdown structures,
instrumentation, properties of equipment and components, and piping topology. The
DEXPI information model is already offered by some manufacturers and is
exchangeable via a Proteus XML schema (Proteus XML, 2017). At the same time, DEXPI
provides the possibility to be used as a platform for digital plant data in the
process industry (Wiedau et al., 2019), which can significantly reduce the
development time of chemical and biotechnological production plants. Additionally,
interoperability increases due to the continuous integration of DEXPI into
existing engineering software (Fillinger et al., 2017). The uniform and
machine-readable format as well as the increasing acceptance of the DEXPI format
in the process industry improve the potential for the application in the field of
data science and allow the application of artificial intelligence (Wiedau et al.,
2021).
This research work was supported within the KEEN project (grant number: 01MK20014S)
and has been funded by the Federal Ministry of Economic Affairs and Climate Action
(BMWK).
Fig. 1. Use cases of artificial intelligence to accelerate and improve the
synthesis of…
Table 1
Fig. 3. P&ID topology representing GraphML structure used for further training
Results – node prediction
In the following, several P&IDs in the standardized DEXPI format are used as
training data, which were exported using the program PlantEngineer from the
software vendor X-Visual Technologies GmbH and converted to graphs in GraphML
format (GraphML Project Group, 2017) according to chapter 2.1. In total, 35 P&ID
graphs from third parties (laboratory and industrial plants) with 1641 nodes and
1410 edges are used. The dataset contains 92 different equipment classes (valves,
pumps, vessels, instrumentation, etc.) based on the DEXPI specification (Theißen
and Wiedau, 2021) and three different classes of edges (pipes, signal lines,
process connection lines). The ratio of nodes to edges shows that, as expected for
P&IDs, these are very linear graphs with rather low connectivity. On closer
inspection, there are usually many single nodes along a pipeline (e.g. valves,
vessels, pumps, heat exchangers, measuring points, etc.), which results in a kind
of dead end. Additionally, some P&IDs show inconsistencies in their drawn
structures, which in some cases lead to isolated nodes or several smaller graphs.
However, these inconsistencies were deliberately included in the dataset, as the
data is intended to represent the current state of machine-readable P&IDs in the
process industry to obtain representative results. The influence of the
inconsistencies on the results is examined in more detail in chapter 4.
Conclusion & outlook
Preprocessing – DEXPI-2-graph
Fig. 8. Workflow of learning P&ID components using GNNs.
The accuracy is calculated by dividing the sum of true positives (TP) and true
negatives (TN) by the sum of true positives (TP), true negatives (TN), false
positives (FP), and false negatives (FN). The computations were done on an Intel®
Xeon® W-2155 (3.31 GHz) CPU in combination with 128 GB RAM. The results are shown
below in Fig. 6, including the training accuracy and the validation accuracy. In
addition, the accuracy₅ indicates the correctness with which the real output of
the validation dataset is predicted when the five most probable outputs are
returned. This score is of particular interest to investigate whether the trained
models are suitable for a suggestion system that can be used, for example, in a
drop-down menu to speed up the drawing process of P&IDs. Furthermore, both the
calculated loss and the training and validation accuracy over 60 epochs, as well
as the training time needed to calculate the 60 epochs, are given.
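The accuracy score over the five most probable outputs can be illustrated as a top-k accuracy on the predicted class probabilities. A minimal NumPy sketch, assuming probs holds one probability row per sample:

```python
import numpy as np

def top_k_accuracy(probs, labels, k=5):
    """Fraction of samples whose true label is among the k most probable
    predicted classes (probs: shape [n_samples, n_classes])."""
    top_k = np.argsort(probs, axis=1)[:, -k:]        # indices of k largest probs
    hits = (top_k == labels[:, None]).any(axis=1)    # true label among them?
    return hits.mean()
```

With k = 1 this reduces to the ordinary prediction accuracy; with k = 5 it corresponds to the drop-down suggestion scenario described above.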
The model is programmed as a recursive function, since the function CONCAT for
concatenating the embeddings calls itself to compute the message m during the
iteration over all neighboring nodes.
AI-based processing of P&IDs
Fig. 9. Results of the node classification in a P&ID graph via recursive GNN
grouped by the applied aggregation functions.
The results for all models show deviations from one another. The training accuracy
for all models is between 75.4% (sum-MLP) and 87.3% (sum), while the accuracy for
the test data varies between 74.5% (sum-MLP) and 81.5% (sum). It is striking that
the gap between the test and training accuracies for the sum aggregation as well
as the attention aggregation, at about 6 percentage points, is larger than for the
remaining aggregation functions. Additionally, the results show that simpler
aggregation algorithms such as sum and arithmetic mean achieve higher training
accuracies than the more complex aggregations using attention, set pooling, or
sum-MLP. It is hypothesized that this is because all neighborhood information is
equally important in predicting the component class. For this reason, learning the
individual P&ID components works particularly well when the neighborhood
information is aggregated with the same weight, i.e., treated as equally
important. This is especially true for the sum or the mean.
Declaration of Competing Interest
Fig. 5. Workflow of an RNN-based model for predicting subsequent equipment in P&IDs
Jonas Oeing, Wolfgang Welscher, Niclas Krink, Lars Jansen, Fabian Henke, Norbert
Kockmann
Fig. 6. Results of the training of following P&ID equipment with different RNN
models.
Workflow – node prediction
The first step is sampling, during which the graph with its networked structure of
nodes and edges is transformed into linear input data. These input data consist of
lists of contiguous nodes, which represent the interconnected graph in linear
form. In the sampling process, all possible turns based on the number of outgoing
edges are made at branches to obtain a reliable representation of all node
interconnections via random walks (Grover and Leskovec, 2016). The sampling is
performed with the function randomBiasedWalk, which is part of the Python library
StellarGraph (package: stellargraph.data.BiasedRandomWalk / version: v1.0.0rc1)
(StellarGraph, 2020). The biased random walk requires four input parameters. The
number of walks defines how many walks are generated from each node in the graph.
The walk length specifies how many nodes are considered per walk. Important
special features of the biased random walk are the return hyperparameter p and the
in-out hyperparameter q, which guide the walk. Thus, 1/p defines the probability
of reversing the sampling direction during the random walk, while 1/q describes
the probability of discovering new nodes in the graph. In this way, the depth of
the search can be specifically controlled (Grover and Leskovec, 2016). Since the
generated samples should represent a clean and linear section of the plant
topology, the parameters must be chosen such that the random walk jumps back as
rarely as possible and continuously explores new paths. In this respect, previous
investigations have shown that convincing results can be achieved with values of
p = 1000 and q = 1. Smaller values of p lead to an undesired probability of
sampling against the flow direction. The sequential samples represent the actual
training data for AI modeling and have a previously defined length l. They are
divided in such a way that the first l-1 entries represent the input sequence x,
while the entry at position l is the corresponding output y. The dataset used in
this work is composed of a total of 4923 sequences, each consisting of six nodes.
For validation, 20% of the dataset is randomly retained as a test set. The
remaining 80% is used to train the RNNs.
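The effect of the hyperparameters p and q can be reproduced with a small standalone walk generator in the spirit of Grover and Leskovec (2016). This is a simplified sketch, not the StellarGraph implementation used in this work:

```python
import random

def biased_walk(adj, start, length, p=1000.0, q=1.0, seed=0):
    """node2vec-style biased walk: weight 1/p for returning to the previous
    node, 1 for moving to a neighbor of it, 1/q for exploring a new node."""
    rng = random.Random(seed)
    walk, prev = [start], None
    while len(walk) < length:
        cur = walk[-1]
        nbrs = adj[cur]
        if not nbrs:
            break                                  # dead end in the graph
        weights = []
        for n in nbrs:
            if n == prev:
                weights.append(1.0 / p)            # jump back: rare for large p
            elif prev is not None and n in adj[prev]:
                weights.append(1.0)                # stay near the previous node
            else:
                weights.append(1.0 / q)            # discover a new node
        prev = cur
        walk.append(rng.choices(nbrs, weights=weights)[0])
    return walk
```

On a linear pipeline-like graph, p = 1000 makes reversing the direction almost impossible, so the walk traces a clean section of the topology, exactly the behavior described above.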
The sequential node prediction can be divided into three parts and its workflow is
shown in Fig. 5.
Introduction
The recursive GNN is based on the GraphSAGE algorithm presented in chapter 4. The
number of layers is k = 3. For the activation function, the ReLU function
(Manaswi, 2018) and a subsequent normalization are applied. To achieve the best
possible prediction accuracy, different state-of-the-art aggregation functions are
used and compared against each other. Basic variants in this respect are the
calculation of a sum (4) or an arithmetic mean (5) (Grabisch et al., 2009).
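The two basic aggregation variants can be written out as follows. This is a reconstructed sketch of equations (4) and (5), assuming the usual message-passing notation (h_v: embedding of neighbor v, N(u): neighborhood of node u):

```latex
m_{N(u)} = \sum_{v \in N(u)} h_v \qquad (4)
m_{N(u)} = \frac{1}{\lvert N(u) \rvert} \sum_{v \in N(u)} h_v \qquad (5)
```

Both are permutation-invariant in the neighbors, which is the property required of any aggregation used in message passing.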
and CONCAT concatenates the individually calculated aggregations of each node.
To check how well the classification can be performed for the different classes,
the confusion matrix of the model with sum aggregation using the test data set is
also considered, see Fig. 10. The columns in the matrix describe the predicted
classes, while the rows represent the real classes. Consequently, the main diagonal
displays the number of correctly classified components (TP).
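A confusion matrix of this form (rows for the real classes, columns for the predicted classes) can be computed with a few lines of NumPy; a minimal sketch, not the evaluation code used in this work:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Count matrix: rows = real classes, columns = predicted classes,
    so the main diagonal holds the correctly classified samples (TP)."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm
```

Reading along a row shows which classes a real component is confused with; reading along a column shows which real classes end up behind one prediction.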
Workflow – GNN node classification
It appears that the classes have an uneven distribution, which is unavoidable since
this data set is a representative cross-section of all components in a process
plant. In the context of this work, it is important to investigate to what extent
the unequal distribution of training data will affect the results of the
classification.
© 2022 The Authors. Published by Elsevier Ltd on behalf of Institution of Chemical
Engineers (IChemE).
International Organization for Standardization, 2013. ISO 15926-2 - industrial
automation systems and integration – integration of life-cycle data for process
plants including oil and gas production facilities – part 2: data model. Beuth
Verlag, Geneva.
The results show that RNNs are generally able to learn patterns in sequences from
P&ID graphs. It is noticeable that the SimpleRNN provides the best results with a
validation accuracy of 78.36%. When the equipment is among the five most likely
predictions, even 95.2% accuracy is achieved. The BRNN reaches an accuracy of
94.39% when predicting the five most suitable equipment types. The LSTM and GRU
have slightly lower accuracy, suggesting that the vanishing gradient does not
significantly affect the training for the short sequences involved. At the same
time, it should be noted that training the GRU took less than one-third of the
time of the SimpleRNN model. Given the small amount of data, this is not a
decisive factor in the current setting. However, should the models be trained on
large datasets or continuously in the future, it is recommended to pay more
attention to this aspect, as the use of GRUs or LSTMs can save time and resources
(Strubell et al., 2019), which should be considered with respect to sustainable
process development.
In the following, the different RNN models are trained with the P&ID graphs
generated in chapter 2.2 according to the presented workflow. The implementation
is done in Python using the keras library (Chollet, 2020). The "Adam" optimizer
(Kingma and Ba, 2014) is used for all trainings, and the loss is calculated with
the "categorical cross entropy" (Murphy, 2012). The prediction accuracy, the ratio
of correctly classified samples to all samples, is used as the evaluation metric.
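The metric and loss named here can be written out explicitly. This is a minimal NumPy sketch of the standard definitions, not the keras internals:

```python
import numpy as np

def accuracy(tp, tn, fp, fn):
    """Accuracy = (TP + TN) / (TP + TN + FP + FN)."""
    return (tp + tn) / (tp + tn + fp + fn)

def categorical_cross_entropy(y_true_onehot, y_pred_probs, eps=1e-12):
    """Mean categorical cross-entropy over a batch of one-hot labels;
    probabilities are clipped to avoid log(0)."""
    y = np.clip(y_pred_probs, eps, 1.0)
    return -np.mean(np.sum(y_true_onehot * np.log(y), axis=1))
```

A perfect prediction yields a cross-entropy of zero, and the loss grows without bound as the probability assigned to the true class approaches zero, which is what drives the backpropagation during training.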
Here, h_u^(k) stands for the embedding of a node u at iteration step k. UPDATE and
AGGREGATE are arbitrary differentiable functions, where the aggregation of the
neighborhood N(u) of node u represents the actual "message" m. The parameter k
defines the number of iterations over which the message passing proceeds and thus
represents the number of hidden layers of the GNN. Since the aggregation of the
neighborhood information must be independent of the order, it is important that
AGGREGATE is a permutation-invariant function. Based on the embeddings at each
iteration step k, a final embedding for each node u can subsequently be determined
using a final layer (Hamilton, 2020).
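A single message-passing iteration with a permutation-invariant sum aggregation can be sketched as follows. This is a simplified GraphSAGE-like illustration; the weight matrices and tanh update are assumptions for the sketch, not the paper's exact model:

```python
import numpy as np

def message_passing_step(h, adj, W_self, W_neigh):
    """One GNN iteration: AGGREGATE neighbor embeddings with a
    permutation-invariant sum, then UPDATE each node embedding."""
    h_new = {}
    for u, nbrs in adj.items():
        m = np.sum([h[v] for v in nbrs], axis=0)       # message from N(u)
        h_new[u] = np.tanh(W_self @ h[u] + W_neigh @ m)  # UPDATE
    return h_new
```

Applying the function k times corresponds to k hidden layers: after k steps, each node embedding has absorbed information from its k-hop neighborhood, just as in the vessel and heat-exchanger example of Fig. 7.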
International Electrotechnical Commission, 2016. IEC 62424, Representation of
process control engineering – requests in P&I diagrams and data exchange between
P&ID tools and PCE-CAE tools. International Electrotechnical Commission, Geneva.
To better understand the modeling of plant topology by message passing GNNs, an
example is given in Fig. 7 that relates the aggregation of neighborhood
information to a snippet of a P&ID. The example shows the aggregation by a
two-layer neural network. Since the plant topology is to be learned, we focus in
the following on the equipment information, such as the class of each component in
the P&ID. Thus, in a first step (k = 1), inferences can be made about the vessel
based on the information from the valve and the heat exchanger. In a second step
(k = 2), the information of a valve and a temperature sensor can be aggregated for
the embedding of the heat exchanger, while the valve's embedding is influenced by
the connected drive and flow control.