Leonid Sokolinsky
Mikhail Zymbler (Eds.)

Communications in Computer and Information Science 1618

Parallel Computational
Technologies
16th International Conference, PCT 2022
Dubna, Russia, March 29–31, 2022
Revised Selected Papers
Communications in Computer and Information Science 1618

Editorial Board Members


Joaquim Filipe
Polytechnic Institute of Setúbal, Setúbal, Portugal
Ashish Ghosh
Indian Statistical Institute, Kolkata, India
Raquel Oliveira Prates
Federal University of Minas Gerais (UFMG), Belo Horizonte, Brazil
Lizhu Zhou
Tsinghua University, Beijing, China
More information about this series at https://link.springer.com/bookseries/7899
Leonid Sokolinsky · Mikhail Zymbler (Eds.)

Parallel Computational
Technologies
16th International Conference, PCT 2022
Dubna, Russia, March 29–31, 2022
Revised Selected Papers
Editors

Leonid Sokolinsky
South Ural State University
Chelyabinsk, Russia

Mikhail Zymbler
South Ural State University
Chelyabinsk, Russia

ISSN 1865-0929 ISSN 1865-0937 (electronic)


Communications in Computer and Information Science
ISBN 978-3-031-11622-3 ISBN 978-3-031-11623-0 (eBook)
https://doi.org/10.1007/978-3-031-11623-0

© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2022
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the
material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now
known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book are
believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors
give a warranty, expressed or implied, with respect to the material contained herein or for any errors or
omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in
published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

This volume contains a selection of the papers presented at the 16th International
Scientific Conference on Parallel Computational Technologies, PCT 2022. The PCT
2022 conference was held in Dubna, Russia, during March 29–31, 2022.
The PCT series of conferences aims at providing an opportunity to report and
discuss the results achieved by leading research groups in solving practical issues
using supercomputer and neural network technologies. The scope of the PCT series
of conferences includes all aspects of the application of cloud, supercomputer, and
neural network technologies in science and technology such as applications, hardware
and software, specialized languages, and packages.
The PCT series is organized by the Supercomputing Consortium of Russian
Universities and the Ministry of Science and Higher Education of the Russian
Federation. Originating in 2007 at the South Ural State University (Chelyabinsk,
Russia), the PCT series of conferences has now become one of the most prestigious
Russian scientific meetings on parallel programming, high-performance computing,
and machine learning. PCT 2022 in Dubna continued the series after Chelyabinsk
(2007), St. Petersburg (2008), Nizhny Novgorod (2009), Ufa (2010), Moscow (2011),
Novosibirsk (2012), Chelyabinsk (2013), Rostov-on-Don (2014), Ekaterinburg (2015),
Arkhangelsk (2016), Kazan (2017), Rostov-on-Don (2018), Kaliningrad (2019), Perm
(2020), and Volgograd (2021).
Each paper submitted to the conference was scrupulously evaluated by three
reviewers based on relevance to the conference topics, scientific and practical contri-
bution, experimental evaluation of the results, and presentation quality. The Program
Committee of PCT selected the 22 best papers to be included in this CCIS proceedings
volume.
We would like to thank the respected PCT 2022 platinum sponsors, namely Intel, RSC
Group, and Karma Group, and the conference partner, Special Technological Center, for
their continued financial support of the PCT series of conferences.
We would like to express our gratitude to every individual who contributed to
the success of PCT 2022. Special thanks to the Program Committee members and
the external reviewers for evaluating papers submitted to the conference. Thanks
also to the Organizing Committee members and all the colleagues involved in the
conference organization from the Joint Institute for Nuclear Research, the South Ural
State University (national research university), and Moscow State University. We thank
the participants of PCT 2022 for sharing their research and presenting their achievements
as well.
Finally, we thank Springer for publishing the proceedings of PCT 2022 in the
Communications in Computer and Information Science series.

June 2022

Leonid Sokolinsky
Mikhail Zymbler
Organization

The 16th International Scientific Conference on Parallel Computational Technologies


(PCT 2022) was organized by the Supercomputing Consortium of Russian Universities
and the Ministry of Science and Higher Education of the Russian Federation.

Steering Committee
Berdyshev, V. I. Krasovskii Institute of Mathematics and
Mechanics, UrB RAS, Russia
Ershov, Yu. L. United Scientific Council on Mathematics and
Informatics, Russia
Minkin, V. I. South Federal University, Russia
Moiseev, E. I. Moscow State University, Russia
Savin, G. I. Joint Supercomputer Center, RAS, Russia
Sadovnichiy, V. A. Moscow State University, Russia
Chetverushkin, B. N. Keldysh Institute of Applied Mathematics, RAS,
Russia
Shokin, Yu. I. Institute of Computational Technologies, RAS,
Russia

Program Committee
Dongarra, J. (Co-chair) University of Tennessee, USA
Sokolinsky, L. B. (Co-chair) South Ural State University, Russia
Voevodin, Vl. V. (Co-chair) Moscow State University, Russia
Zymbler, M. L. (Academic Secretary) South Ural State University, Russia
Ablameyko, S. V. Belarusian State University, Belarus
Afanasiev, A. P. Institute for Systems Analysis, RAS, Russia
Akimova, E. N. Krasovskii Institute of Mathematics and
Mechanics, UrB RAS, Russia
Andrzejak, A. Heidelberg University, Germany
Balaji, P. Argonne National Laboratory, USA
Boldyrev, Yu. Ya. St. Petersburg Polytechnic University, Russia
Carretero, J. Carlos III University of Madrid, Spain
Gazizov, R. K. Ufa State Aviation Technical University, Russia
Glinsky, B. M. Institute of Computational Mathematics and
Mathematical Geophysics, SB RAS, Russia
Goryachev, V. D. Tver State Technical University, Russia
Il’in, V. P. Institute of Computational Mathematics and
Mathematical Geophysics, SB RAS, Russia
Kobayashi, H. Tohoku University, Japan
Kunkel, J. University of Hamburg, Germany
Kumar, S. Rudrapur, India
Labarta, J. Barcelona Supercomputing Center, Spain
Lastovetsky, A. University College Dublin, Ireland
Likhoded, N. A. Belarusian State University, Belarus
Ludwig, T. German Climate Computing Center, Germany
Lykosov, V. N. Institute of Numerical Mathematics, RAS, Russia
Mallmann, D. Julich Supercomputing Centre, Germany
Malyshkin, V. E. Institute of Computational Mathematics and
Mathematical Geophysics, SB RAS, Russia
Michalewicz, M. A*STAR Computational Resource Centre,
Singapore
Modorsky, V. Ya. Perm Polytechnic University, Russia
Pan, C. S. Cloudflare, UK
Prodan, R. Alpen-Adria-Universität Klagenfurt, Austria
Radchenko, G. I. Silicon Austria Labs, Austria
Shamakina, A. V. HLRS High-Performance Computing Center
Stuttgart, Germany
Shumyatsky, P. University of Brasilia, Brazil
Sithole, H. Centre for High Performance Computing,
South Africa
Starchenko, A. V. Tomsk State University, Russia
Sterling, T. Indiana University, USA
Sukhinov, A. I. Don State Technical University, Russia
Taufer, M. University of Delaware, USA
Tchernykh, A. CICESE Research Center, Mexico
Turlapov, V. E. Lobachevsky State University of Nizhny
Novgorod, Russia
Wyrzykowski, R. Czestochowa University of Technology, Poland
Yakobovskiy, M. V. Keldysh Institute of Applied Mathematics, RAS,
Russia
Yamazaki, Y. Federal University of Pelotas, Brazil

Organizing Committee
Koren’kov, V. V. (Chair) Joint Institute for Nuclear Research, Russia
Podgaynyi, D. V. (Deputy Chair) Joint Institute for Nuclear Research, Russia
Derenovskaya, O. Yu. (Secretary) Joint Institute for Nuclear Research, Russia
Antonov, A. S. Moscow State University, Russia
Antonova, A. P. Moscow State University, Russia
Busa, J. Joint Institute for Nuclear Research, Russia
Goglachev, A. I. South Ural State University, Russia
Kraeva, Ya. A. South Ural State University, Russia
Nikitenko, D. A. Moscow State University, Russia
Saktaganov, N. Joint Institute for Nuclear Research, Russia
Sidorov, I. Yu. Moscow State University, Russia
Sobolev, S. I. Moscow State University, Russia
Sokolov, I. A. Joint Institute for Nuclear Research, Russia
Stankus, D. B. Joint Institute for Nuclear Research, Russia
Torosyan, A. G. Joint Institute for Nuclear Research, Russia
Voevodin, Vad. V. Moscow State University, Russia
Voytishina, E. N. Joint Institute for Nuclear Research, Russia
Vorontsov, A. S. Joint Institute for Nuclear Research, Russia
Zaikina, A. G. Joint Institute for Nuclear Research, Russia
Zymbler, M. L. South Ural State University, Russia
Contents

High Performance Architectures, Tools and Technologies

VGL Rating: A Novel Benchmarking Suite for Modern Supercomputing
Architectures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
   Ilya Afanasyev and Sviatoslav Krymskii

HPC TaskMaster – Task Efficiency Monitoring System
for the Supercomputer Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
   Pavel Kostenetskiy, Artemiy Shamsutdinov, Roman Chulkevich,
   Vyacheslav Kozyrev, and Dmitriy Antonov

Constructing an Expert System for Solving Astrophysical Problems Based
on the Ontological Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
   Anna Sapetina, Igor Kulikov, Galina Zagorulko, and Boris Glinskiy

HPC Resources of South Ural State University . . . . . . . . . . . . . . . . . . 43
   Natalya Dolganina, Elena Ivanova, Roman Bilenko,
   and Alexander Rekachinsky

Parallel Numerical Algorithms

Comparative Analysis of Parallel Methods for Solving SLAEs
in Three-Dimensional Initial-Boundary Value Problems . . . . . . . . . . . . . 59
   V. S. Gladkikh, V. P. Ilin, and M. S. Pekhterev

Optimization of the Computational Process for Solving Grid Equations
on a Heterogeneous Computing System . . . . . . . . . . . . . . . . . . . . . . 73
   Alexander Sukhinov, Vladimir Litvinov, Alexander Chistyakov,
   Alla Nikitina, Natalia Gracheva, and Nelli Rudenko

Parallel Methods for Solving Saddle Type Systems . . . . . . . . . . . . . . . . 85
   V. P. Il’in and D. I. Kozlov

Compact LRnLA Algorithms for Flux-Based Numerical Schemes . . . . . . . . 99
   Andrey Zakirov, Boris Korneev, Anastasia Perepelkina,
   and Vadim Levchenko

Analysis of Block Stokes-Algebraic Multigrid Preconditioners on GPU
Implementations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
   N. M. Evstigneev

Implementation of the Algebraic Multigrid Solver Designed for Graphics
Processing Units Based on the AMGCL Framework . . . . . . . . . . . . . . . 131
   O. I. Ryabkov

Measuring the Effectiveness of SAT-Based Guess-and-Determine Attacks
in Algebraic Cryptanalysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
   Andrey Gladush, Irina Gribanova, Viktor Kondratiev, Artem Pavlenko,
   and Alexander Semenov

Tuning of a Matrix-Matrix Multiplication Algorithm for Several GPUs
Connected by Fast Communication Links . . . . . . . . . . . . . . . . . . . . . 158
   Yea Rem Choi, Vsevolod Nikolskiy, and Vladimir Stegailov

Visualizing Multidimensional Linear Programming Problems . . . . . . . . . . 172
   Nikolay A. Olkhovsky and Leonid B. Sokolinsky

Supercomputer Simulation

Quantum-Chemical Calculations of the Enthalpy of Formation of Some
Tetrazine Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
   Vadim Volokhov, Elena Amosova, Alexander Volokhov, David Lempert,
   Vladimir Parakhin, and Tatiana Zyubina

A New Approach to the Supercomputer Simulation of Carbon Burning
Sub-grid Physics in Ia Type Supernovae Explosion . . . . . . . . . . . . . . . . 210
   Igor Kulikov, Igor Chernykh, Dmitry Karavaev, Vladimir Prigarin,
   Anna Sapetina, Ivan Ulyanichev, and Oleg Zavyalov

Parallel Simulations of Dynamic Interaction Between Train Pantographs
and an Overhead Catenary Line . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
   Evgeny Kudryashov and Natalia Melnikova

Construction of a Parallel Algorithm for the Numerical Modeling of Coke
Sediments Burning from the Spherical Catalyst Grain . . . . . . . . . . . . . . 248
   Olga Yazovtseva, Olga Grishaeva, Irek Gubaydullin,
   and Elizaveta Peskova

MPI-Based PFEM-2 Method Solver for Convection-Dominated CFD
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
   Andrey Popov and Ilia Marchevsky

Modeling of Two-Phase Fluid Flow Processes in a Fractured-Porous Type
Reservoir Using Parallel Computations . . . . . . . . . . . . . . . . . . . . . . . 276
   Ravil Uzyanbaev, Yuliya Bobreneva, Yury Poveshchenko,
   Viktoriia Podryga, and Sergey Polyakov

Kinetic Modeling of Isobutane Alkylation with Mixed C4 Olefins
and Sulfuric Acid as a Catalyst Using the Asynchronous Global
Optimization Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
   Irek Gubaydullin, Leniza Enikeeva, Konstantin Barkalov, Ilya Lebedev,
   and Dmitry Silenko

Simulation of Nonstationary Thermal Fields in Permafrost Using
Multicore Processors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
   Elena N. Akimova and Vladimir E. Misilov

High-Performance Calculations for Modeling the Propagation
of Allergenic Plant Pollen in an Atmospheric Boundary Layer . . . . . . . . . 319
   Olga Medveditsyna, Sergey Rychkov, and Anatoly Shatrov

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335


High Performance Architectures, Tools
and Technologies
VGL Rating: A Novel Benchmarking
Suite for Modern Supercomputing
Architectures

Ilya Afanasyev1,2(B) and Sviatoslav Krymskii1

1 Research Computing Center, Lomonosov Moscow State University,
Moscow 119234, Russia
afanasiev [email protected]
2 Moscow Center of Fundamental and Applied Mathematics, Moscow 119991, Russia
Abstract. This paper presents a novel project aimed to rank mod-
ern supercomputing architectures. The proposed rating is based on an
architecture-independent Vector Graph Library (VGL) framework. The
native integration with VGL greatly simplifies the process of ranking new
supercomputing architectures due to the fact that VGL provides a con-
venient API for developing graph algorithms on a large variety of super-
computing architectures. Unlike existing projects (such as Graph500),
the proposed rating is based on a larger number of graph algorithms and
input graphs with fundamentally different characteristics, which makes
it significantly more representative when certain architectures have to be
compared for a specific real-world problem. Moreover, the proposed flex-
ible software architecture of our rating allows one to easily supplement
the rating with new graph algorithms and input data, if necessary.

Keywords: Graph Algorithms · Graph Framework · Benchmarking ·
Rating systems · Graph500 · NVIDIA GPU

1 Introduction
The ranking of modern supercomputing systems and computational platforms
is an important problem of modern computer science. With a large variety of
architectures that exist and are widely used nowadays, it is crucial to under-
stand which systems are capable of solving a specific real-world problem faster,
frequently taking into account the properties of input data.
There exist multiple projects such as Top500 [14], Graph500 [15], HPCG [13],
Algo500 [5], which are aimed to rank the performance of supercomputing systems
based on algorithms used in different fields of application. The purpose of this
study is to develop a ranking of modern shared memory systems that is more
representative than existing systems. Our research extends the approach of using
graph algorithms to rank modern supercomputing architectures using a family
of graph algorithms, which is important due to the fact that graph algorithms
are used in a wide range of applications: solution of infrastructure and biological
problems, analysis of social and web networks, etc.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
L. Sokolinsky and M. Zymbler (Eds.): PCT 2022, CCIS 1618, pp. 3–16, 2022.
https://doi.org/10.1007/978-3-031-11623-0_1

We have developed a novel rating system named the VGL Rating project (currently
available at vgl-rating.parallel.ru), which is designed to achieve two main goals:
(1) broaden the group of graph algorithms and input data used in the benchmarking
core to make the rating more representative and (2) make the benchmarking and
submission process as simple as running a single script on the target architecture
(with all the required actions being automated).
Thus, our project has the following advantages over existing solutions such
as Graph500 or Algo500. Firstly, our rating takes into account a larger group
of graph algorithms with drastically different characteristics, as well as input
data from various fields of application, which makes it more representative than
existing counterparts. Secondly, its native integration with the VGL framework
greatly simplifies the process of benchmarking a new architecture. Previously,
when submitting to a rating project, such as Graph500 or Algo500, the user had
to perform several relatively complex steps: (1) develop optimized imple-
mentations of a specific graph algorithm, (2) obtain input data, (3) run the
implementation and measure performance metrics correctly and (4) fill multiple
forms to get into the rating list.
On the contrary, our rating makes the benchmarking process as simple as
running a single script that automatically performs the described steps. This is
achieved by the native integration of the rating with the VGL framework, which
provides highly optimized implementations for a large variety of modern CPUs
and GPUs. Thus, the development of an optimized implementation is covered
by VGL developers and hardware vendors, who are allowed to extend VGL on
their platforms, while all the remaining steps (downloading input data, compil-
ing optimized implementations, submitting performance results) are performed
automatically by a convenient script provided in VGL.
The rating system described in this paper is currently intended for shared
memory architectures. For this reason, at the current stage of the project, it
cannot be considered as a full replacement of Graph500. However, the VGL rat-
ing can easily be extended for clusters and systems containing multiple NVIDIA
GPUs (DGX) or vector engines (Aurora8) due to ongoing updates of the VGL
framework, which currently enables distributed graph processing using MPI as
a beta version [3].

2 Related Work

At the moment of this writing, many solutions aimed to benchmark and conse-
quently rank supercomputing systems exist. Examples of such solutions include
the Top500 [14], Graph500 [15], Green500 [7] lists, the Algo500 [5] project based
on Algowiki [17], the HPCG [13] benchmark, and some others. Typically, these
solutions are based on applying a specific frequently used algorithm and its
implementation, such as solving SLE, doing SPMV, etc., and using some per-
formance metrics to rank various supercomputing systems. Such a variety of
existing ratings is explained by the fact that different algorithms used as a bench-
marking core stress different parts of the supercomputing hardware (for example,
Graph500 stresses the memory subsystem). At the same time, some approaches, such as
Algo500, are more general, since they are capable of benchmarking supercom-
puting systems based on any algorithm described in the Algowiki project.
The project most closely related to ours is the Graph500 rating, which uses the
implementations of the Shortest Paths and Breadth-First Search (BFS) graph
algorithms launched on RMAT [6] graphs of different scales as a benchmarking
core. However, this rating, in our opinion, has the following drawbacks:

– Only 2 graph algorithms are used. At the same time, there are many other
graph algorithms with different properties, which usually demonstrate a dras-
tically different performance on different evaluated architectures and result
in significantly different ratings based on these algorithms;
– Similarly, using only one type of synthetic input graphs leads to the same
problem, i.e. architectures can potentially be ranked drastically differently
when some other graph is used;
– Graph500 provides only generic MPI and OpenMP implementations, forcing
users to develop their own highly optimized implementations;
– Graph500 targets large supercomputing systems, while it is also interesting
to compare single-node (and single-GPU) systems.

Thus, we decided to build our own rating system on top of the VGL frame-
work. This rating is mostly aimed to benchmark single-node systems, at the
same time using graphs and algorithms with different properties, which allows
creating a more general and balanced rating.
As a benchmarking core we use our own VGL framework. There exist poten-
tially other CPU-based or GPU-based graph-processing systems, such as Gun-
rock [18], cuSha [9], Ligra [16], etc. However, as we will show in the following
sections, VGL suites these purposes better since it is architecture-independent
and supports a large variety of modern architectures.

3 Proposed Benchmarking Method


When developing a novel rating designed to rank systems based on the perfor-
mance of graph algorithm implementations, we had to decide on three main features
of the developed rating:

1. which graph algorithms should be used as the basis of the rating;
2. which input graphs should be used as the basis of the rating;
3. which mathematical model should be used to create a rating based on the
selected graph algorithms and input graphs.

The next three subsections describe each of these three features in detail.

3.1 Selecting Graph Algorithms

Firstly, we conducted a detailed study of the characteristic properties of a wide
set of graph algorithms, which was aimed at identifying fundamentally different
graph algorithms with fundamentally different computational characteristics. In
the course of this study, a number of basic mathematical properties of graph
algorithms (complexity, computing power, structure of information graphs), as
well as the properties of typical programs that implement these algorithms, were
examined.
Based on the analysis, we decided to form our rating on top of the following
graph algorithms:
Breadth First Search (BFS) is an algorithm designed to find the shortest
paths from one vertex of an unweighted graph to other vertexes. In a parallel
version, the algorithm traverses the graph by “layers”, starting from the initial
layer, which consists of the source vertex. The Breadth First Search algorithm is
the basis for many other graph algorithms, such as searching for connected and
strongly connected components, transitive closure, etc. This algorithm differs
from others in its “sparsity”: on each iteration, BFS typically processes only a
certain subset of graph vertexes (which can be rather small for the currently
processed “layer”).
Page Rank (PR) is a graph algorithm that is applied to a collection of
hyperlinked documents and assigns to each of them some numerical value mea-
suring its “importance”. This algorithm can be applied not only to web pages,
but also to any set of objects interconnected by reciprocal links, for example, to
any graph. This algorithm differs from others in that the processing of each ver-
tex requires loading information from multiple indirectly accessed arrays, thus
causing a larger latency compared to other algorithms (BFS, SSSP), which indi-
rectly access only a single array.
HITS (Hyperlink-Induced Topic Search) is a graph algorithm designed
to find Internet pages that match the user’s request based on the information
contained in a hyperlink. The idea of the algorithm relies on the assumption
that hyperlinks encode a significant number of hidden authoritative pages (an
authoritative page is a page that corresponds to the user’s request and has a
greater proportion among documents of a given topic, i.e. a larger number of
pages linking to this page). The HITS algorithm is similar to the Page Rank
algorithm. They both use the link relationship in web graphs to determine the
importance of pages. However, unlike Page Rank, HITS only works with small
subgraphs of a large web graph. This algorithm differs from others as it changes
the traversal direction (visiting either incoming or outgoing edges) twice on each
iteration.
The Bellman-Ford algorithm of shortest paths in a graph from a
source node (Shortest Paths from a Single Source, SSSP) is a graph
algorithm that finds the shortest paths from the starting node of the graph to
all the others. This algorithm goes through all the edges of the graph at most
|V| − 1 times and tries to improve the value of the shortest paths. This algorithm
differs from others due to (1) its larger computational complexity and (2) the
fact that it processes weighted graphs.
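To make the contrast with the other algorithms concrete, a minimal sequential Bellman-Ford sketch in Python is shown below; it is an illustration only (the VGL implementations are parallel and architecture-specific) and highlights the repeated edge-relaxation structure behind the larger computational complexity.

import math

def bellman_ford(num_vertices, edges, source):
    # edges: list of (u, v, weight) tuples; vertices are numbered 0..num_vertices-1
    dist = [math.inf] * num_vertices
    dist[source] = 0.0
    # at most |V| - 1 passes over all edges, relaxing each edge
    for _ in range(num_vertices - 1):
        updated = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                updated = True
        if not updated:  # stop early if no distance improved during this pass
            break
    return dist

# example: shortest paths from vertex 0 in a small weighted graph
print(bellman_ford(4, [(0, 1, 5.0), (1, 2, 1.0), (0, 2, 10.0), (2, 3, 2.0)], 0))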
The main comparative characteristics of the algorithms (including those
already mentioned during the algorithm description) are provided in Table 1.
An analysis of these characteristics allows us to conclude that we selected a repre-
sentative set of graph algorithms. Additional experiments we conducted demon-
strated that the performance of graph algorithms solving other problems, includ-
ing Maximum Flow, Strongly Connected Components, and Coloring, quite closely
resembles one or several algorithms that we used as the basis of our benchmark.
Thus, these four algorithms form the basis for a representative rating reflecting
the features of a wide range of graph algorithms. However, our implementation
makes it easy to extend the set of algorithms used in the rating (as will be shown
in a later section), in case we need to add another drastically different algorithm
in the future.

Table 1. Main comparative characteristics of the algorithms used in the rating basis.

                                              BFS      SSSP           PR           HITS
Sequential complexity                         O(|E|)   O(|V| * |E|)   O(|E| * N)   O(|E| * N)
Parallel complexity                           O(d)     O(|V|)         O(N)         O(N)
Computing power                               1        |V|            N            N
Working with sparse vertex lists              yes      no             no           no
Working with inbound and outbound edges
at the same time                              yes      no             no           yes
Working with weighted graphs                  no       yes            no           no
Necessity to use atomic operations            no       no             yes          yes
Necessity to check convergence                no       no             yes          no
Fixed number of iterations                    no       no             yes          yes

3.2 Selecting Input Graphs


Secondly, we had to decide which input graphs should form the basis of the
rating. Unlike existing ratings (e.g. Graph500), which use a single type of input
graphs, we used a wide set of real-world and synthetic graphs of different size,
the main characteristics of which are provided in Table 2.

Table 2. Graphs used in the project. Each graph is described by the triple Name
(Number of Vertexes, Number of Edges). The columns and rows of the table correspond
to the different categories and sizes of these graphs.

Tiny:   Social: YouTube friendships (1.13 mln, 3 mln); Infrastructure: Texas (1.38 mln,
        1.9 mln); Internet: Stanford (282k, 2.3 mln); Rating: Netflix (498k, 1B);
        Synthetic: RMAT (2^18, 2^23)

Small:  Social: LiveJournal links (5.2 mln, 49 mln); Infrastructure: Western USA
        (6.2 mln, 15 mln); Internet: Zhishi (7.83 mln, 66 mln); Rating: Amazon ratings
        (3.4 mln, 5.8 mln); Synthetic: RMAT (2^22, 2^27)

Medium: Social: none; Infrastructure: Central USA (14 mln, 34 mln); Internet: UK domain
        (18.5 mln, 262 mln); Rating: none; Synthetic: RMAT (2^24, 2^29)

Large:  Social: Twitter (41.6 mln, 1.5B); Infrastructure: Full USA (24 mln, 57.7 mln);
        Internet: Web trackers (40.4 mln, 140 mln); Rating: Amazon (31 mln, 82.6 mln);
        Synthetic: RMAT (2^25, 2^30)

Table 2 demonstrates the graphs from four important application fields
(social, infrastructure, internet, rankings), which differ in:

1. Number of vertexes (from 133 thousand to 105 million).
2. Number of edges (from 144 thousand to 3.3 billion).
3. Maximum number of edges outgoing from one vertex (from 9 to 20.7 million).
4. Average number of edges outgoing from one vertex (from 2.13 to 496).
5. Size of the largest connectivity component (from 75 to 104 million).
6. Diameter (from 4 to 8000).
7. Number of cycles in the graph (from 0 to 5.6 million).

Based on the data presented, we can conclude that the developed rating uses
graphs of different classes, whose parameters significantly impact the performance
of graph algorithm implementations on different architectures.

3.3 Principles Used to Form the Rating


The rating has a large number of parameters that allow one to specify weights
for the category of graphs (social, infrastructure, Internet, rating, synthetic),
algorithms (Page Rank, HITS, Shortest Paths, BFS), graph size (tiny, small,
medium, large). By default, all parameters have the same weight (equal to 0.5)
and, therefore, the same contribution to the final result, however, the user can
select a weight for each parameter in order to give preference to one or another
parameter. Each selected weight (from 0.0 to 1.0) indicates how strongly a par-
ticular parameter should affect the generated rating.
Two approaches to the formation of the rating were implemented. First, for
both approaches, all algorithms are launched on all graphs. Further, for each
pair {graph, algorithm}:

1. The performance values are sorted among all architectures; after sorting, each
architecture receives a sequential number, i.e. an index in the sorted array. For
each architecture, a value equal to the difference of the number of architec-
tures and the sequential number of the architecture, multiplied by the weight,
is added to the final rating.
2. The maximum performance value among all architectures is found, and all
results are divided by this value (normalization is performed). After that, for
each architecture, the normalized values multiplied by weights are added to
the final rating value.

Let us denote the set of graph types (social, infrastructure, etc.) used as the
basis of the rating as I, the set of graphs as G, the set of used graph algorithms as
J, the set of graph scales as K, the set of tested architectures as A. In addition,
let us denote the weights of these sets (which are specified by users) as x_i, x_j,
and x_k.
In the first implementation, we first fix a graph g ∈ G and a graph
algorithm j ∈ J, and for all a ∈ A we obtain an array of performance values
M_{gja} corresponding to the triple {g, j, a}. Then we sort these values, and each
architecture gets the value p_{gja} corresponding to the index of the value M_{gja} in
the sorted array. Let N be the number of architectures; then the final rating is
formed according to the following equation:

R_a \,(\text{rating of architecture } a) = \sum_{\forall g \in G,\ \forall j \in J} (N - p_{gja}) \cdot x_i \cdot x_j ,

In the second implementation, the rating is formed according to the
following equation:

R_a \,(\text{rating of architecture } a) = \sum_{\forall i \in I,\ \forall j \in J,\ \forall k \in K} \frac{M_{ijk}^{a}}{\max_{\forall t \in A} M_{ijk}^{t}} \cdot x_i \cdot x_j \cdot x_k ,

where M_{ijk}^{a} is the performance of the implementation of the graph algorithm j
on graphs of type i and size k on the architecture a.
The essence of the first approach is that the architecture that is often better
in efficiency on fixed graphs and algorithms will have a higher rating; while the
second approach calculates the sum of normalized efficiency values. The problem
of the second approach may be that one architecture works much better than
others on a small number of fixed graphs and algorithms, while it is slightly
worse on all others. In this case, this architecture can be ranked higher than
others, although it shows less efficiency on most graphs.
In the second approach, the normalization by the maximum performance
value obtained among all architectures on a certain combination of input param-
eters i, j, k, t is required to avoid situations when performance differences on
different sets of input data are drastically different due to some properties of
input data. For example, the shortest paths algorithm on a road graph performs
many more iterations compared to social graphs (due to their different diame-
ters). Without this normalization, the performance input of road graphs will be
much lower compared to social ones, which should not be the case.
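The two approaches can also be illustrated with a short Python sketch. This is a simplified model rather than the actual VGL Rating server code: the layout of the perf dictionary and the helper functions category() and size(), which map a graph to its category and size class, are assumptions made for the example.

def rating_positional(perf, w_cat, w_alg, category):
    # perf[(graph, algo)][arch] -> measured performance (e.g., TEPS); higher is better
    rating = {}
    for (graph, algo), by_arch in perf.items():
        ranked = sorted(by_arch, key=by_arch.get, reverse=True)  # best architecture first
        n = len(ranked)
        weight = w_cat[category(graph)] * w_alg[algo]
        for position, arch in enumerate(ranked):
            rating[arch] = rating.get(arch, 0.0) + (n - position) * weight
    return rating

def rating_normalized(perf, w_cat, w_alg, w_size, category, size):
    rating = {}
    for (graph, algo), by_arch in perf.items():
        best = max(by_arch.values())  # normalize by the best result for this pair
        weight = w_cat[category(graph)] * w_alg[algo] * w_size[size(graph)]
        for arch, value in by_arch.items():
            rating[arch] = rating.get(arch, 0.0) + (value / best) * weight
    return rating

In this sketch, the positional variant rewards an architecture for how often it outperforms the others, while the normalized variant accumulates relative efficiency, mirroring the two formulas above.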

4 Using VGL as a Benchmarking Core


As mentioned in the introduction, we decided to use the architecture-
independent graph-processing framework VGL [2,4] as a benchmarking core;
it currently supports many modern supercomputing architectures: NVIDIA
GPUs [10], NEC SX-Aurora TSUBASA vector engines [1], A64FX with HBM
memory, as well as multicore CPUs of different models and vendors (Intel Xeon,
Intel KNL, Arm Kunpeng, AMD EPYC, etc.). The architectural independence of
VGL [2] is achieved by using the same data and computation abstractions for all
supported architectures (in terms of interfaces, but not implementations). The
use of optimized implementations with generic interfaces is achieved by means
of C++ object oriented programming, inheritance and templates.
Using VGL as a benchmarking core requires the development of implemen-
tations of the graph algorithms selected in Sect. 3.1 on the basis of VGL com-
putational and data abstractions. Thus, the developed high-performance imple-
mentations of the PR, SSSP, BFS, HITS graph algorithms based on the VGL
API will be able to operate on different VGL-supported architectures with the
specification of proper compilation flags.
Scripts to perform the automatic downloading of input data, the compilation
of the required algorithms for the target architecture, as well as the submission
of performance measurements and performance results, were added to the VGL
build. These modifications will be described in the following section in detail.

5 Developed Benchmarking System

During the course of this project, the VGL framework was extended with a
set of interfaces for automatically collecting performance data. These interfaces
execute the selected graph algorithms (PR, BFS, SSSP, HITS) on specified input
data on VGL-supported architectures. Afterwards, the interfaces automatically
send the results to the rating server, which in turn creates a rating based on the
ranking method described in Sect. 3.

The general scheme of the developed benchmarking system is illustrated in
Fig. 1.

Fig. 1. Scheme of the developed benchmarking system: the client side is implemented
via VGL scripts and interfaces, while the server part is responsible for data storage
and rating visualization.

As shown in Fig. 1, the developed system has client and server parts.
Client Part: On the architecture being benchmarked, the user launches a
Python script provided inside VGL, which downloads graphs from the Konect
collection [12], converts them into the internal VGL format, launches four graph
algorithms and then collects performance data in TEPS [15]. Afterwards, the
performance data is uploaded to the rating server.

Server Part: The server executes two scripts: the first is responsible for receiv-
ing data from the client and storing the received data in MongoDB [8], while the
second is responsible for calculating the rating based on user-specified parame-
ters and visualizing it as an HTML page.
Next, we will describe these two parts in detail, following the process of
benchmarking a specific architecture chosen by the user, submitting the obtained
benchmarking results and processing these results by the rating system.
First of all, the user launches the Python script submit.py on the client side,
which automatically performs the following actions.
At the beginning, the type of the architecture is determined: presence of
GPUs, vector engines of the SX-Aurora TSUBASA system [11], vendor, type and
generation of the CPU, etc. Depending on the obtained values, the evaluated
graph algorithm implementations are compiled according to the obtained infor-
mation (using specific compilers, optimizations flags, etc.). To achieve this, we
implemented in VGL a fairly large database of recommended compilation and
optimization settings for many widely used supercomputing architectures.
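As a purely illustrative sketch of this step (the detection logic and the settings below are hypothetical and do not reproduce the actual submit.py code or the real VGL recommendation database), the platform can be detected and the corresponding build settings selected roughly as follows:

import shutil

# illustrative recommendation database: platform key -> compiler settings
BUILD_SETTINGS = {
    "nvidia_gpu": {"compiler": "nvcc", "flags": "-O3"},
    "intel_cpu":  {"compiler": "icpc", "flags": "-O3 -xHost -fopenmp"},
    "generic":    {"compiler": "g++",  "flags": "-O3 -fopenmp"},
}

def detect_platform():
    # assume an NVIDIA GPU is present if the nvidia-smi utility is installed
    if shutil.which("nvidia-smi"):
        return "nvidia_gpu"
    try:
        with open("/proc/cpuinfo") as f:
            if "genuineintel" in f.read().lower():
                return "intel_cpu"
    except OSError:
        pass
    return "generic"

settings = BUILD_SETTINGS[detect_platform()]
print("Building benchmarks with", settings["compiler"], settings["flags"])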
Afterwards, all graphs needed for testing are downloaded from the Konect
collection, and synthetic graphs are generated using random graph generators
implemented in the VGL framework.
After downloading, all graphs are divided into groups by the categories
defined in Sect. 3.2. When generating a rating, the user will be able to spec-
ify influence weights for each of the groups.
The user can provide additional parameters to the submit.py script to launch
graph algorithms on specific subsets of input graphs: Tiny, Tiny + Small, Tiny
+ Small + Medium, Tiny + Small + Medium + Large (in other words, a gradual
increase in the graphs used). These modes allow one to accelerate the benchmark-
ing process, as well as to solve the problem when certain large graphs cannot be
stored in the memory of the evaluated architecture, which can be the case for
NVIDIA GPUs or personal computers where the memory is limited by around
16–32 GB. It is important to emphasize that if some graph is not used for test-
ing, the obtained rating of the benchmarked architecture will be lower, as the
performance obtained on these graphs is treated as equal to zero.
Once downloaded, the graphs are converted to an edge list format and stored
on the disk as binary files. Then, the optimized routines of the VGL framework
are used to load and convert these graphs into a specific optimized representation,
namely, CSR, VectorCSR [4], segmented or clusterized CSR [19], etc. Using
the optional parameters of the submit.py script, the user can select a specific
graph storage format for the evaluated architecture, which they think would be
more suitable. By default, VGL also provides a recommendation database indicating which
format should be used for a specific architecture (similarly to compilation and
optimization options).
Afterwards, all four algorithms are executed on all converted graphs, the
performance data is collected and saved as an array of dictionaries. Finally, this
performance data is packed and sent to the rating server. An offline export of
the performance data is implemented as an option. This is necessary in the case
when the benchmarked system does not have access to the Internet, which is a
frequent situation for supercomputer nodes. In both cases, the generated array
of dictionaries containing the performance results is converted to a stream of
bytes using the pickle library and sent by the client to the server using the
socket library, where the received stream of bytes is converted back to a Python
dictionary.
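The exchange itself can be sketched in a few lines of Python; the host name, port number and record layout below are illustrative and do not describe the actual protocol of the rating server.

import pickle
import socket

def send_results(results, host="vgl-rating.parallel.ru", port=9000):
    # results: an array (list) of dictionaries with performance records
    payload = pickle.dumps(results)
    with socket.create_connection((host, port)) as sock:
        sock.sendall(payload)
        sock.shutdown(socket.SHUT_WR)  # tell the server that the transmission is complete

def receive_results(connection):
    # server side: read the whole byte stream and restore the Python objects
    chunks = []
    while True:
        chunk = connection.recv(4096)
        if not chunk:
            break
        chunks.append(chunk)
    return pickle.loads(b"".join(chunks))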
The rating server processes the received performance data in the following
way.

Fig. 2. Structure of the data saved in the Mongo database.

The received data is saved to the Mongo database in the format shown in
Fig. 2. We decided to use a non-relational (NoSQL) Mongo database due to
the fact that in MongoDB, each collection object can contain different fields,
while in SQL databases, tables have a strongly typed schema. In our project,
it allows us to add new information, e.g., new graphs, algorithms, or types of
the evaluated system, during the development of the project, while remaining
backward compatible with older data.
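For example, a single performance record could be stored with pymongo roughly as shown below; the database, collection and field names are illustrative, since the exact document structure is given only in Fig. 2.

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
results = client["vgl_rating"]["results"]  # database and collection names are illustrative

record = {
    "architecture": "NVIDIA GPU V100",
    "algorithm": "bfs",
    "graph": {"name": "twitter", "category": "social", "size": "large"},
    "performance_teps": 2.1e9,
}
results.insert_one(record)  # schema-less insert: future submissions may add new fields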
After the data is saved, a specific method for calculating the rating (described
in Sect. 3) is used. The developed system is very flexible, and additional rating
formulas can be easily provided. In the future, we plan to support data visual-
ization based on various rating formulas according to the user’s choice.
A web page (vgl-rating.parallel.ru) written in HTML, CSS, and JavaScript is used
to visualize the results. The interaction between the Python scripts and the web
pages is implemented using the Flask web framework.

6 Using the Developed Benchmarking System to Rank Modern Supercomputing Platforms

Table 3. Rating results used to make observations. Each cell after the name of the
architecture provides its rating value, which shows how often the given architecture is
better than the others.

Rating     Overall rating                  BFS algorithm                   LARGE size
position
1          NEC SX-Aurora TSUBASA, 37.4     NEC SX-Aurora TSUBASA, 32       NVIDIA GPU V100, 36.25
2          NVIDIA GPU V100, 31.23          Intel Xeon 6140, 22.75          NVIDIA GPU P100, 31
3          NVIDIA GPU P100, 19.86          NVIDIA GPU V100, 16.25          Intel Xeon 6140, 25.5
4          Intel Xeon 6140, 12             Kunpeng 920, 16                 Kunpeng 920, 22
5          Kunpeng 920, 8.81               NVIDIA GPU P100, 15             NEC SX-Aurora TSUBASA, 19
6          Intel Xeon 6240, 5.45           Intel Xeon 6126, 10.75          Intel Xeon 6240, 7.75
7          Intel Xeon 6126, 4.35           Intel Xeon 6240, 9.25           Intel Xeon 6126, 7.5

Based on the developed rating, the following modern supercomputer architec-
tures were ranked:

1. Vector processors NEC SX-Aurora TSUBASA
2. Graphics accelerators NVIDIA (P100, V100, etc.)
3. Central processing units Intel Xeon (Skylake, Cascade Lake)
4. Central processing units A64FX
5. Central processing units ARM Kunpeng.

The following observations were made according to the results provided in
Table 3:

1. Kunpeng 920 works faster on infrastructure graphs than Intel Xeon 6140, but
slower on all the others.
2. NVIDIA GPUs process social graphs faster than NEC SX-Aurora TSUBASA
v1.0, and all the other graph types slower.
3. NVIDIA GPUs work faster with the PR, SSSP algorithms than NEC SX-
Aurora TSUBASA v1.0 and slower with the other algorithms.
4. Intel Xeon 6140 is faster on the BFS algorithm than NVIDIA GPU P100 and
V100 and slower on the other algorithms.
5. Kunpeng 920 is faster on BFS than Intel Xeon 6140 and slower on all the
others.
6. NVIDIA GPUs are faster on Large graphs than NEC SX-Aurora TSUBASA
v1.0 and slower on the other graph sizes.
7. Kunpeng 920 and Intel Xeon 6140 are faster on Large graphs than NEC
SX-Aurora TSUBASA v1.0 and slower on the other graph sizes.

7 Conclusion
In this paper, we proposed a novel rating system that evaluates the performance
of target architectures based on the performance of multiple graph algorithms:
PR, SSSP, BFS and HITS. At the same time, our rating uses different types of
input graphs: infrastructure, social, rating, synthetic, which in aggregate makes
the proposed rating more representative than its existing counterparts.
The proposed rating system is implemented on top of the architecture-
independent VGL framework, which makes the benchmarking and submission
process as simple as running a single script provided in VGL.
Information about our rating is currently available on the vgl-rating.parallel.ru
website. In addition, everyone can easily contribute to the VGL frame-
work, freely available at vgl.parallel.ru, by implementing support for new archi-
tectures. We strongly believe that the proposed rating will be frequently used to
compare modern supercomputing architectures, gradually turning into a larger
project.

Acknowledgements. The reported study presented in all sections, excluding Sect. 5,
was funded by RFBR and JSPS according to research project No. 21-57-50002 and
Grant number JPJSBP120214801. The work presented in Sect. 5 was supported by the
Russian Ministry of Science and Higher Education, agreement No. 075-15-2019-1621.

References
1. NEC SX-Aurora TSUBASA C/C++ compiler user’s guide. https://www.hpc.nec/
documents/sdk/pdfs/g2af01e-C++UsersGuide-016.pdf. Accessed 12 May 2020
2. Afanasyev, I.V.: Developing an architecture-independent graph framework for
modern vector processors and NVIDIA GPUs. Supercomput. Front. Innov. 7(4),
49–61 (2021). https://doi.org/10.14529/jsfi200404
3. Afanasyev, I.V., Voevodin, V.V., Komatsu, K., Kobayashi, H.: Distributed graph
algorithms for multiple vector engines of NEC SX-aurora TSUBASA systems.
Supercomput. Front. Innov. 8(2), 95–113 (2021)

4. Afanasyev, I.V., Voevodin, V.V., Komatsu, K., Kobayashi, H.: VGL: a high-
performance graph processing framework for the NEC SX-Aurora TSUBASA vec-
tor architecture. J. Supercomput. 77(8), 8694–8715 (2021). https://doi.org/10.
1007/s11227-020-03564-9
5. Antonov, A., Nikitenko, D., Voevodin, V.V.: Algo500-a new approach to the joint
analysis of algorithms and computers. Lobachevskii J. Math. 41(8), 1435–1443
(2020)
6. Chakrabarti, D., Zhan, Y., Faloutsos, C.: R-MAT: a recursive model for graph min-
ing. In: Proceedings of the 2004 SIAM International Conference on Data Mining,
pp. 442–446. SIAM (2004). https://doi.org/10.1137/1.9781611972740.43
7. Feng, W.C., Cameron, K.: The green500 list: encouraging sustainable supercom-
puting. Computer 40(12), 50–55 (2007)
8. Győrödi, C., Győrödi, R., Pecherle, G., Olah, A.: A comparative study: MongoDB
vs. MySQL. In: 2015 13th International Conference on Engineering of Modern
Electric Systems (EMES), pp. 1–6. IEEE (2015)
9. Khorasani, F., Vora, K., Gupta, R., Bhuyan, L.N.: CuSha: vertex-centric graph
processing on GPUs. In: Proceedings of the 23rd International Symposium on
High-Performance Parallel and Distributed Computing, pp. 239–252 (2014)
10. Kirk, D., et al.: Nvidia CUDA software and GPU parallel computing architecture.
In: ISMM, vol. 7, pp. 103–104 (2007)
11. Komatsu, K., Watanabe, O., Musa, A., et al.: Performance evaluation of a vector
supercomputer SX-Aurora TSUBASA. In: Proceedings of the International Confer-
ence for High Performance Computing, Networking, Storage, and Analysis, Dallas,
TX, USA, 11–16 November 2018, SC 2018, pp. 54:1–54:12. IEEE (2018). https://
doi.org/10.1109/SC.2018.00057
12. Kunegis, J.: Konect: the koblenz network collection. In: Proceedings of the 22nd
International Conference on World Wide Web, pp. 1343–1350 (2013)
13. Marjanović, V., Gracia, J., Glass, C.W.: Performance modeling of the HPCG
benchmark. In: Jarvis, S.A., Wright, S.A., Hammond, S.D. (eds.) PMBS 2014.
LNCS, vol. 8966, pp. 172–192. Springer, Cham (2015). https://doi.org/10.1007/
978-3-319-17248-4 9
14. Meuer, H.W.: The top500 project. looking back over 15 years of supercomputing
experience (2008)
15. Murphy, R.C., Wheeler, K.B., Barrett, B.W., Ang, J.A.: Introducing the graph
500. Cray Users Group (CUG) 19, 45–74 (2010)
16. Shun, J., Blelloch, G.E.: Ligra: a lightweight graph processing framework for shared
memory. In: ACM SIGPLAN Notices, vol. 48, pp. 135–146. ACM (2013)
17. Voevodin, V., Antonov, A., Dongarra, J.: AlgoWiki: an open encyclopedia of par-
allel algorithmic features. Supercomput. Front. Innov. 2(1), 4–18 (2015)
18. Wang, Y., Davidson, A., Pan, Y., et al.: Gunrock: a high-performance graph pro-
cessing library on the GPU. In: Proceedings of the 21st ACM SIGPLAN Sympo-
sium on Principles and Practice of Parallel Programming, pp. 1–12. ACM (2016).
https://doi.org/10.1145/2851141.2851145
19. Zhang, Y., Kiriansky, V., Mendis, C., Zaharia, M., Amarasinghe, S.P.: Optimizing
cache performance for graph analytics. arXiv abs/1608.01362 (2016)
HPC TaskMaster – Task Efficiency
Monitoring System
for the Supercomputer Center

Pavel Kostenetskiy(B), Artemiy Shamsutdinov, Roman Chulkevich,
Vyacheslav Kozyrev, and Dmitriy Antonov

HSE University, 11, Pokrovsky boulevard, Moscow 109028, Russia
[email protected]

Abstract. This paper is devoted to the monitoring system HPC Task-
Master developed at the HSE University for the cHARISMa cluster.
This system automatically evaluates the efficiency of performing tasks
of HPC cluster users and identifies inefficient tasks, thereby significantly
saving the expensive machine time. In addition, users can view reports
on completing their tasks, along with inferences about their work and
interactive graphs. Particular attention in this paper is paid to determin-
ing the effectiveness of the task – the system allows the administrator
to personally configure the criteria for evaluating the effectiveness of the
task without the need for changes in the source code. The system is
developed using open-source software and is publicly available for use on
other clusters.

Keywords: HPC cluster · efficiency · monitoring

1 Introduction

A task efficiency monitoring system is essential for detecting incorrectly started
calculations that lead to the inefficient use of cluster resources. This
paper describes a new task performance monitoring system, HPC TaskMaster,
developed at the HSE University for the cHARISMa (Computer of HSE for
Artificial Intelligence and Supercomputer Modeling) cluster.
The developed system allows users to view reports on the performance of their
tasks together with interactive execution schedules and automatically identify
tasks that worked inefficiently. Having access to the results of the analysis, users
can run their tasks more efficiently in the future, which will significantly save
the machine time of the cluster.
In addition, the system will allow the administrators of the cluster to collect
statistics about user tasks, which was previously unavailable.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
L. Sokolinsky and M. Zymbler (Eds.): PCT 2022, CCIS 1618, pp. 17–29, 2022.
https://doi.org/10.1007/978-3-031-11623-0_2

The most common examples of the inefficient usage of cluster resources are:
– allocation of insufficient or excessive resources for a task;
– running a non-parallel task on multiple CPU cores or GPUs;
– allocation of the compute node capacity without starting calculations.
The following requirements were defined for the design of the task perfor-
mance monitoring system.
1. The system should collect the following data for each task:
– utilization of specific CPU cores allocated for the task;
– utilization of GPUs allocated for the task;
– GPU memory utilization;
– GPU power consumption;
– utilization of RAM created by the task;
– file system usage.
2. The system must analyze the collected data and use it to determine whether
the task worked effectively.
3. The system must provide users with access to the list of completed tasks and
reports on their completion using a web application.
The rest of this paper is organized as follows. A comparison of different
monitoring systems is carried out in Sect. 2. In Sect. 3, the architecture of the
system is described. The detection of inefficient user tasks is considered in Sect. 4.
User statistics are provided in Sect. 5. Finally, Sect. 6 shows the conclusions of
this work.

2 Related Work
The key feature of the HSE cluster is how it allocates resources for user tasks.
Instead of allocating the entire compute node for one task, the user is given a
certain number of processor cores and GPUs. As a result, several dozen tasks can
be performed on the compute node at once, thus optimizing cluster resources.
Due to this feature, ready-made solutions for monitoring system resources, such
as Nagios and Zabbix, are not suitable for this cluster. cHARISMa already has
a monitoring system of its own [4]; however, it is designed to display only global usage across the whole cluster and its nodes.
Since one of HSE University's goals is to provide cluster users with a secure system integrated into the university environment, a new monitoring system was built from open-source monitoring tools. Chan [3], Wegrzynek [11], Kychkin [6], and Safonov [10] describe how a combination of programs such as Telegraf, InfluxDB and Grafana allows one to quickly set up and run a cluster resource monitoring system. In [2,3], it is also shown how the Slurm plugin acct_gather makes it possible to collect metrics for Slurm tasks, which is precisely the data required for a task efficiency monitoring system. Since all these programs, except Telegraf, are already installed on cHARISMa, this approach can be used to monitor tasks on the cluster.

The LIKWID Monitoring Stack [9], a task monitoring system that uses InfluxDB, Grafana and the built-in LIKWID tools for monitoring tasks on the cluster, is also noteworthy. For each task, a dashboard is created from ready-made JSON templates, which allows creating personalized graphs for each task. The disadvantages of using the LIKWID Monitoring Stack on the HSE cluster include the need to use the LIKWID tools for the system to operate and the lack of a web interface beyond Grafana, which makes the system inconvenient to use on a cluster with a large number of users and tasks.
In addition to monitoring cluster resources, the system must analyze the
effectiveness of user tasks. A well-known system for creating reports on the
effectiveness of tasks is JobDigest [7,8]. It analyzes the collected integral values
and, based on them, applies a tag to the task describing the property of the
task (for example, “low GPU utilization”). Although using tags is convenient
for searching and filtering tasks, it is not always possible to provide an overall
picture of the effectiveness of the task using tags alone.
Summarizing the above, we can conclude that there is no ready-made task monitoring system that fits the individual characteristics of the cHARISMa cluster and can be integrated into the HSE University environment. It is therefore necessary to develop our own software system for evaluating the effectiveness of tasks, one that can be flexibly configured for specific types of user tasks, delimit access for cluster users, and take into account the correspondence of tasks to registered scientific and educational projects. As the basis of the system, it is worth using the open-source software Telegraf, InfluxDB and Grafana.

3 System Architecture
This section describes the monitoring infrastructure of the HPC TaskMaster
system, shown in Fig. 1.

Fig. 1. Diagram of the system components



The Slurm task scheduler is used to run tasks on the cluster. The main data of Slurm tasks is stored in a MySQL relational database by the Slurm database daemon (slurmdbd), and the task metrics are written to the InfluxDB time series database using the acct_gather plugin. This plugin collects memory and file system usage (read/write) for each task.
The required metrics on the utilization of the specific CPU cores and GPUs allocated to a task are collected with the Telegraf daemon, which has built-in plugins for these metrics. Thus, knowing the CPU and GPU IDs assigned to the task, the system can collect metrics for exactly these components and, therefore, distinguish the utilization of different tasks on one node. Additional metrics are collected using plugins developed in Python.
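As an illustration of what such a custom plugin might look like, the sketch below emits per-GPU metrics in InfluxDB line protocol so that Telegraf's exec input plugin can pick them up; the measurement name, tags and fields are illustrative assumptions, not the actual plugin running on cHARISMa.

#!/usr/bin/env python3
# Hypothetical Telegraf "exec" input script: one line-protocol record per GPU.
import socket
import subprocess

def main() -> None:
    host = socket.gethostname()
    query = ("nvidia-smi --query-gpu=index,utilization.gpu,memory.used,power.draw "
             "--format=csv,noheader,nounits")
    out = subprocess.run(query.split(), capture_output=True, text=True, check=True).stdout
    for row in out.strip().splitlines():
        idx, util, mem_mb, power_w = [v.strip() for v in row.split(",")]
        # Telegraf attaches the timestamp itself when it ingests the line.
        print(f"task_gpu,host={host},gpu={idx} "
              f"utilization={util},memory_used={mem_mb},power={power_w}")

if __name__ == "__main__":
    main()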
The collected metrics are stored in the InfluxDB database. InfluxDB was cho-
sen as a time-series database because of Telegraf support and Slurm acct gather
plugin support, which allows one to store all the required metrics in one database.
Grafana is used as the tool for visualizing graphs on the cHARISMa cluster. Grafana provides extensive options for configuring and formatting charts and also supports creating them through its API. This API makes it possible to automate the creation of graphs for each task. New graphs for each task are created from JSON templates: based on the available data about the task, graphs are automatically built in Grafana when the user requests them. The created graphs are displayed on the system's website using an iframe, where the user can interactively view them for the period of task execution. The system creates graphs for both completed and running tasks, so the user can observe the work of his task in real time.
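A minimal sketch of such an API call is given below; the Grafana URL, token and template fields are placeholders rather than the production configuration.

import json
import requests

GRAFANA_URL = "https://grafana.example.org"   # placeholder address
API_TOKEN = "..."                             # placeholder service-account token

def create_task_dashboard(task_id: int, template_path: str) -> str:
    """Create (or overwrite) a per-task dashboard from a JSON template and
    return its URL, which the website then embeds in an iframe."""
    with open(template_path) as f:
        dashboard = json.load(f)
    dashboard["title"] = f"Task {task_id}"    # substitute task-specific values
    dashboard["uid"] = f"task-{task_id}"
    resp = requests.post(
        f"{GRAFANA_URL}/api/dashboards/db",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"dashboard": dashboard, "overwrite": True},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["url"]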
The advantage of using a combination of Telegraf, InfluxDB and Grafana is
the ability to install and configure these tools on any cluster. Moreover, these
tools make the monitoring system quite flexible – additional data for the system
can be collected using the built-in plugins of Telegraf or developed ones.
It is important to note that the HPC TaskMaster system has a negligible impact on the performance of compute nodes: the installed Telegraf daemon uses only 0.03% of the overall CPU performance. Besides Telegraf, the other source of load from monitoring is InfluxDB. Installed on the head node, InfluxDB uses an average of 5 GB of storage per month. To free up storage, a retention policy compresses metrics older than 6 months.
The HPC TaskMaster system is developed on Django, a Python web frame-
work that has a large number of available packages and a wide range of tools for
developing web applications, which allows one to develop a monitoring system
using Telegraf, InfluxDB and Grafana. In addition, Django has a built-in admin-
istration panel through which the administrator can configure the monitoring
system himself without making changes to the source code of the program.
The task performance monitoring system works according to the following principles (a simplified sketch of one monitoring cycle is given after the list):

– metrics are collected on each compute node by Telegraf and stored in the InfluxDB database on the head node; metrics from the acct_gather plugin are also stored in InfluxDB;
– the system updates its local MySQL database by comparing its tasks with those from the Slurm database;
– while a task is running, its aggregated metrics are collected from the InfluxDB database at a configurable interval;
– when the task is completed, its aggregated metrics are collected for the last time;
– the collected aggregated metrics are analyzed by the system, and an inference about the efficiency of the task is generated.
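The sketch below illustrates one such monitoring cycle in Python; the database accessors and field names are simplified placeholders, not the production schema.

import statistics

def aggregate(series):
    """Aggregated metrics of one time series (cf. Sect. 4.2)."""
    return {
        "min": min(series),
        "max": max(series),
        "avg": statistics.mean(series),
        "median": statistics.median(series),
        "std": statistics.pstdev(series),
    }

def monitoring_cycle(slurm_tasks, influx_series, local_db):
    """One cycle: sync tasks from the Slurm DB, then refresh their metrics.
    slurm_tasks   -- task records read from the Slurm MySQL database (placeholder);
    influx_series -- per-task time series fetched from InfluxDB (placeholder);
    local_db      -- the system's own MySQL mirror, here a plain dict."""
    for task in slurm_tasks:
        local_db[task["id"]] = task                 # update the local database
        series = influx_series.get(task["id"], [])
        if series:
            task["aggregates"] = aggregate(series)  # periodic collection
        if task.get("finished"):
            task["needs_analysis"] = True           # final collection, then inference step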

4 Detecting Inefficient Tasks


The user interacts with the HSE high-performance computing cluster [4] by launching tasks through the Slurm workload manager. A task is a set of user processes for which the workload manager allocates computing resources (compute nodes, CPUs, GPUs, etc.). Each launch of the user's program generates a new task, which is recorded in the database and analyzed.
Here we define task efficiency as the usage of the allocated resources above a certain threshold.

4.1 Collected Data


HPC TaskMaster collects two types of data about running tasks on the HPC
cluster:
1) parameters characterizing the running task;
2) metrics that characterize the execution of the task.

Parameters. Table 1 shows the task parameters and their type.


Metrics. Table 2 shows the metrics collected during the execution of the task. Each metric forms a time series θ_i; Θ = {θ_i} denotes the set of all time series of the task. The frequency of collecting metrics can be adjusted and is selected so as to obtain sufficiently detailed information about the task without overloading the system with data collection and storage.

4.2 Data Processing

Aggregated Metrics. To simplify the analysis, aggregated metrics Λ^k = (λ^k_1, …, λ^k_m) are calculated for each time series [5]. They include the minimum, maximum, average, median and standard deviation. In addition to them, the tuple Λ^k includes the average load of each node and the combined average load of the nodes.

Table 1. Parameters of the task

№   Parameter                  Type
1   ID                         Integer
2   Task name                  String
3   Status                     String
4   Launch command             String
5   Type of compute nodes      String
6   Number of compute nodes    Integer
7   Number of CPU cores        Integer
8   Number of GPUs             Integer
9   Exit code                  Integer
10  User ID                    Integer
11  Project ID                 Integer
12  Start date and time        Date
13  End date and time          Date

Table 2. Collected metrics and collection frequency

№   Metric                          Frequency, seconds   Units of measurement
1   CPU cores usage by the user     10                   percent
2   CPU cores usage by the system   10                   percent
3   GPU usage                       10                   percent
4   RAM usage                       10                   kilobyte
5   GPU memory usage                10                   kilobyte
7   GPU power consumption           10                   watt
8   File system access              60                   megabyte

Tags. Since the task parameters are a heterogeneous set of data (integers, strings, dates), a system of tags, i.e., "labels" indicating the type of task, execution time and other properties of the task, is introduced to simplify their analysis. Table 3 contains the list of tags currently available in the system; additional tags can be developed and implemented.
The tuple T^k = (τ^k_1, …, τ^k_n) is assigned to the task with ID k, where n is the number of tags in the system. The element τ^k_i corresponds to the i-th tag and takes the value 1 if all conditions of the tag are met and the tag is assigned to the task, and 0 otherwise.
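To make the tag mechanism concrete, the sketch below encodes the conditions of Table 3 as simple predicates over the task parameters; the predicates are illustrative assumptions, not the exact production rules.

TAG_PREDICATES = [
    ("Jupyter-notebook task",
     lambda p: "jupyter" in p["launch_command"].lower()),
    ("LAMMPS task",
     lambda p: "lmp" in p["launch_command"].lower()),
    ("VASP task",
     lambda p: "vasp" in p["launch_command"].lower()),
    ("Allocation of resources for calculations",
     lambda p: p["launch_command"].startswith("salloc")),
    ("The task lasted less than a minute",
     lambda p: (p["end"] - p["start"]).total_seconds() < 60),
    ("The task was completed with an error",
     lambda p: p["exit_code"] != 0),
]

def assign_tags(params):
    """Return the tuple T^k of 0/1 tag values for a task."""
    return tuple(int(predicate(params)) for _, predicate in TAG_PREDICATES)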
Indicators. To determine whether a task is working inefficiently, it is necessary to evaluate the utilization of the components involved in the task. To do this, the concept of a problem indicator is introduced.

Table 3. List of tags

№   Tag                                          Type
1   Jupyter-notebook task                        String
2   LAMMPS task                                  String
3   VASP task                                    String
4   Allocation of resources for calculations     String
5   The task lasted less than a minute           String
6   The task was completed with an error         String

Indicators, dimensionless values inversely proportional to the values of the metrics, are used to evaluate the utilization of the components involved in the task.
An indicator takes a value from 0 (full use of the allocated resources) to 1 (no use). For example, the value of the indicator l^k_j is calculated from the aggregated metric λ^k_j ∈ Λ^k using formula (1):

    l^k_j = 1 − (λ^k_j − a_j) / (b_j − a_j),    l^k_j ∈ [0, 1],    (1)

where a_j and b_j are administrator-defined parameters giving the minimum and maximum possible values of the j-th element of the aggregated metrics.
The indicators are placed in the tuple of indicators L^k = (l^k_1, …, l^k_m).
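A minimal sketch of formula (1) in code is given below; the bounds a and b are illustrative administrator-defined values, and the example assumes bounds of 0 and 100 percent for GPU utilization.

def indicator(value, a, b):
    """Formula (1): map an aggregated metric into [0, 1]; 0 = fully used, 1 = unused."""
    l = 1.0 - (value - a) / (b - a)
    return min(1.0, max(0.0, l))   # clamp in case the metric leaves [a, b]

# Average GPU utilization of 71.54 % (Table 6) with assumed bounds a = 0, b = 100
# gives an indicator of about 0.285, matching row 248 of Table 8.
gpu_indicator = indicator(71.54, a=0.0, b=100.0)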
The list of currently available indicators is presented in Table 4. Additional
indicators can be developed and implemented into the system. The number of
indicators for a specific task depends on the number of cores, compute nodes
and GPUs used.

Table 4. List of indicators

№ Indicators
1 Low average CPU usage
2 Low average CPU core usage
3 Low average GPU usage
4 Low GPU memory usage
5 The task was completed with an error

4.3 Inferences

To help users interpret the results, the system has a set of inferences Φ = {φ_i}. Inferences are the results of the analysis of a task.

Different requirements on tag and indicator values are set for each inference. An inference is assigned to the task when all of its conditions are met; several inferences can correspond to one task at once.
Denote the union of the tuples of indicators L^k and tags T^k as

    N^k = (l^k_1, …, l^k_m, τ^k_1, …, τ^k_n).    (2)

Let Ω_i be the set of conditions of the inference φ_i on the elements of the tuple N^k. Then we can match a set C^k to each task:

    C^k = {φ_i ∈ Φ : ∏_{ω ∈ Ω_i} 1_ω(N^k) = 1},    (3)

where 1_ω is the indicator function equal to 1 if the condition ω ∈ Ω_i is met. In other words, the set C^k contains the inferences assigned to the task.
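The sketch below shows how formula (3) can be evaluated in code; the rule definitions are illustrative stand-ins for the administrator-configured conditions, not the system's actual rule set.

def conditions_met(n, conditions):
    """Product of indicator functions over Ω_i: true only if every condition holds."""
    return all(condition(n) for condition in conditions)

def assign_inferences(n, rules):
    """Formula (3): return C^k, the set of inferences whose conditions all hold."""
    return {name for name, conditions in rules.items() if conditions_met(n, conditions)}

# Illustrative rules over N^k, represented here as {"l": [...], "tau": [...]}.
RULES = {
    "Successful task": [
        lambda n: all(v <= 0.5 for v in n["l"]),
        lambda n: n["tau"][4] == 0 and n["tau"][5] == 0,
    ],
    "Task completed with an error": [
        lambda n: n["tau"][5] == 1,   # tag "The task was completed with an error"
    ],
}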

4.4 Example

Let us consider a computational task performed on the cHARISMa supercom-


puter using 176 cores and 16 NVIDIA Tesla V100 GPU accelerators on 4 compute
nodes. Table 5 shows the parameters of the task.

Table 5. Parameters of the task

№ Parameter Value
1 ID 405408
2 Task name SimpleRun
3 Status Successful
4 Exit code 0
5 Launch command sbatch run_task.sh
6 User ID 2000
7 Project ID 32
8 Start date and time November 11, 2021 10:13:28
9 End date and time November 12, 2021 13:19:09
10 Type of compute nodes type a
11 Number of compute nodes 4
12 Number of CPU cores 176
13 Number of GPUs 16

The aggregated metrics across all compute nodes for the example task are
shown in Table 6.

Table 6. Aggregated metrics by node

№ Metrics Value
1 Avg. load of cores on comp. node cn-001 99.36
2 Avg. load of cores on comp. node cn-002 99.11
3 Avg. load of cores on comp. node cn-003 99.15
4 Avg. load of cores on comp. node cn-004 99.51
5 Avg. load of comp. nodes 99.28
7 Avg. utilization of GPUs on comp. node cn-001 71.62
8 Avg. utilization of GPUs on comp. node cn-002 71.6
9 Avg. utilization of GPUs on comp. node cn-003 71.15
10 Avg. utilization of GPUs on comp. node cn-004 71.8
11 Avg. utilization of GPUs 71.54

Table 7 shows the aggregated metrics of the time series for compute node
cn-001. Data for compute nodes cn-002, cn-003, cn-004 are not shown to save
space.

Table 7. Aggregated metrics of compute node cn-001

Node cn-001                                Min    Avg       Max
CPU usage by the system
1    Core 1                                0      0.12      11.4
…
44   Core 44                               0      0.13      7
CPU usage by the user
45   Core 1                                0      98.9      100
…
88   Core 44                               0      99.8      100
89   Average usage of cores on the node           99.36
GPU usage №:0
90   Utilization                           0      71.62     99
91   Memory usage, MB                      0      7095.3    8780
92   Power consumption, Watt               66     128.9     156.1
…
GPU usage №:3
99   Utilization                           0      71.61     99
100  Memory usage, MB                      0      7095.3    8780
101  Power consumption, Watt               66     129       155.9
102  RAM usage, MB                         0.35   128.29    715.44
File system access, GB
103  Read                                  0      141389.39 288706.41
104  Write                                 0      1302.76   2753.31

Tags of the Task


Based on the parameters of the task from Table 5 and the tags from Table 3, no tag will be assigned to task 405408, since it completed without an error and is not a launch of one of the software packages. Therefore, the tuple of task tags has the form T^405408 = (0, 0, 0, 0, 0, 0).
Indicators of the Task
Based on the data from Tables 6, 7, the system calculates the values of the
indicators shown in Table 8.

Table 8. List of indicators

№    Indicator                                    Value
Compute node cn-001
1    Core 1                                       0.011
…
44   Core 44                                      0.002
207  GPU №:0 utilization                          0.284
…
210  GPU №:3 utilization                          0.284
223  GPU №:0 memory usage                         0.778
…
226  GPU №:3 memory usage                         0.778
…
Compute node cn-004
205  Core 1                                       0.011
…
206  Core 40                                      0.002
207  GPU №:0 utilization                          0.279
…
208  GPU №:4 utilization                          0.28
209  GPU №:0 memory usage                         0.779
…
210  GPU №:3 memory usage                         0.778
Summary
239  Avg. load of cores on node cn-001            0.006
240  Avg. load of cores on node cn-002            0.009
241  Avg. load of cores on node cn-003            0.008
242  Avg. load of cores on node cn-004            0.005
243  Avg. load of nodes                           0.007
244  Avg. utilization of GPUs on node cn-001      0.284
245  Avg. utilization of GPUs on node cn-002      0.284
246  Avg. utilization of GPUs on node cn-003      0.289
247  Avg. utilization of GPUs on node cn-004      0.282
248  Avg. utilization of GPUs                     0.285

Inferences of the Task
After the previous steps, we get the tuple N^405408 = (l_1, …, l_202, τ_1, …, τ_6). As an example, let us consider the inferences presented in Table 9.

Table 9. Inferences

φ_i  Inference                       Conditions                                    Condition met
1    Successful task                 l_i ≤ 0.5, i = 1, …, 248                      Yes
                                     τ_i = 0, i = 5, 6                             Yes
2    Task completed with an error    τ_5 = 1                                       No
3    Inefficient CPU usage           l_i > 0.5, i = 1, …, 206, 239, …, 243         No
4    GPU is not used                 l_i ≤ 0.5, i = 1, …, 206, 211, …, 215         No
                                     l_i > 0.8, i = 207, …, 238, 244, …, 248       No

Based on the tuple N^405408, the system will associate the set C^405408 = {φ_1} with task 405408, since the task is executed without errors and all resources are used.
An example of the task report with an inference of inefficient salloc usage is
shown in Fig. 2.

Fig. 2. Task report

5 User Statistics
System administrators have access to inference statistics for each cluster user
for a selected period of time. An example of statistics is shown in Fig. 3. Using

this pie chart, administrators can understand which types of tasks are causing difficulties for the user. Once the problem that the user has encountered is determined, the user can be offered a personal consultation to solve it.

Fig. 3. Graphs of the utilization of computing resources by the task

Statistics of the most active users of the cluster with the lowest percentage
of effective tasks are compiled monthly; personal consultations are held on the
basis of the statistics. By tracking trends in user efficiency from month to month, we can assess how much the HPC TaskMaster system increases the efficiency of using cluster resources.

6 Conclusions
The developed task performance monitoring system, HPC TaskMaster, is a powerful tool that provides all the necessary information about tasks (general information, aggregated metrics, graphs and inferences) in one place. The system helps users identify problems both in existing scientific applications and in applications of their own development, thereby simplifying work with the cluster and allowing them to perform scientific calculations faster and more efficiently in the future.
HPC TaskMaster is constantly evolving and improving. Among the future
directions for development are:
– monitoring the effectiveness of individual categories of applications using
machine learning tools;

– adding new types of indicators and tags to generate new inferences;
– smart recognition of the type of running application;
– development of a module that notifies users when they launch inefficient tasks.
HPC TaskMaster is available to all cluster users of cHARISMa via the personal
account of the supercomputer complex. HPC TaskMaster is also available for
public use [1], and any suggestions for improving the project are greatly appre-
ciated.
The research was performed using the cHARISMa HPC cluster of the HSE
University [4].

References
1. Open Source/HPC TaskMaster GitLab. https://git.hpc.hse.ru/open-source/hpc-
taskmaster
2. Slurm Workload Manager – acct_gather.conf. https://slurm.schedmd.com/acct_gather.conf.html
3. Chan, N.: A resource utilization analytics platform using grafana and telegraf
for the Savio supercluster. In: ACM International Conference Proceeding Series.
Association for Computing Machinery (2019). https://doi.org/10.1145/3332186.
3333053
4. Kostenetskiy, P.S., Chulkevich, R.A., Kozyrev, V.I.: HPC resources of the higher
school of economics. J. Phys. Conf. Ser. 1740, 012050 (2021). https://doi.org/10.
1088/1742-6596/1740/1/012050
5. Kraeva, Y., Zymbler, M.: Scalable algorithm for subsequence similarity search in
very large time series data on cluster of phi KNL. In: Manolopoulos, Y., Stupnikov,
S. (eds.) DAMDID/RCDL 2018. CCIS, vol. 1003, pp. 149–164. Springer, Cham
(2019). https://doi.org/10.1007/978-3-030-23584-0 9
6. Kychkin, A., Deryabin, A., Vikentyeva, O., Shestakova, L.: Architecture of com-
pressor equipment monitoring and control cyber-physical system based on influx-
data platform. In: 2019 International Conference on Industrial Engineering,
Applications and Manufacturing, ICIEAM 2019 (2019). https://doi.org/10.1109/
ICIEAM.2019.8742963
7. Nikitenko, D., et al.: JobDigest - detailed system monitoring-based supercomputer
application behavior analysis. In: Voevodin, V., Sobolev, S. (eds.) Supercomputing.
Communications in Computer and Information Science, vol. 793, pp. 516–529.
Springer, Cham (2017). https://doi.org/10.1007/978-3-319-71255-0 42
8. Nikitenko, D.A., Voevodin, V.V., Zhumatiy, S.A.: Deep analysis of job state
statistics on Lomonosov-2 supercomputer. Supercomput. Front. Innov. 5(2), 4–10
(2018). https://doi.org/10.14529/jsfi180201
9. Röhl, T., Eitzinger, J., Hager, G., Wellein, G.: LIKWID monitoring stack: a flexible framework enabling job specific performance monitoring for the masses (2017). https://doi.org/10.1109/CLUSTER.2017.115
10. Safonov, A., Kostenetskiy, P., Borodulin, K., Melekhin, F.: A monitoring system
for supercomputers of SUSU. In: Proceedings of Russian Supercomputing Days
International Conference, vol. 1482, pp. 662–666. CEUR-WS (2015)
11. Wegrzynek, A., Vino, G.: The evolution of the ALICE O2 monitoring system. In: EPJ Web of Conferences, vol. 245 (2020). https://doi.org/10.1051/epjconf/202024501042
Constructing an Expert System for Solving Astrophysical Problems Based on the Ontological Approach

Anna Sapetina1, Igor Kulikov1, Galina Zagorulko2, and Boris Glinskiy1(B)

1 Institute of Computational Mathematics and Mathematical Geophysics SB RAS, Novosibirsk, Russia
[email protected], [email protected]
2 A.P. Ershov Institute of Informatics Systems SB RAS, Novosibirsk, Russia
[email protected]

Abstract. The current state of numerical methods for mathematical physics and of supercomputer systems confronts the researcher with the complicated task of choosing the numerical methods and the multicore computer architecture that solve a problem efficiently, in a reasonable time and with the required accuracy. We are developing an intelligent support system for solving mathematical physics problems on supercomputers. The system includes a knowledge base and an expert system based on the ontological representation of numerical methods, computing architectures, and the inference rules that connect them. This paper discusses in detail the issues related to the formation of inference rules for solving astrophysical problems. The formalization of these rules is described, and their application to constructing a scheme for solving a problem according to the user's specification is shown. An example of solving the problem of modeling the evolution of spiral instability in a protostellar disk on the basis of the proposed approach is given.

Keywords: Intelligent decision support · Inference engine · Inference rules · Astrophysics · Compute-intensive problems

1 Introduction
Modern astrophysics studies the physical processes of the Universe, the evolution
of astronomical objects and their interaction. Mathematical models of evolving
astronomical objects and their mutual influence are constructed on the basis
of the observed information taking into account the gravitational and magnetic
fields. It should be noted that mathematical modeling is the primary theoretical
method for studying astrophysical processes. It becomes necessary to solve numerous classes of problems associated with the study of the structure, dynamics


and evolution of stellar systems, the Sun and stars, with the study of variable
stars, multiple stellar systems and the physics of the interstellar medium.
A large number of parallel codes have been developed for solving astrophysical problems. We distinguish the following groups of codes: codes based on Smoothed Particle Hydrodynamics [1–3] and grid codes [4–6], including codes using adaptive [7–9] and moving [10–12] meshes. Each implemented numerical method and code focuses on a certain type of problem and is often limited to classical supercomputer architectures. There are also codes adapted to graphics accelerators [13–15] and Intel Xeon Phi accelerators [16]. However, using any of these codes to solve a specific astrophysical problem requires significant modification. Currently, there are no universal systems for generating astrophysical codes. Nevertheless, attempts to create such systems exist, for example, at the University of Costa Rica [17] on the basis of the EXCALC package. An intelligent system for generating astrophysical codes has not yet been created, although there are attempts to develop such a system, including ones based on the ontological approach [18] and its practical application [19].
In [20,21], we presented the concept of intelligent support for solving
compute-intensive problems of mathematical physics using ontology. Let us
briefly list the main blocks of the proposed system and their purpose (Fig. 1).
The main block of the system is a knowledge base, which includes the ontology of numerical methods and parallel algorithms, the ontology of parallel architectures and technologies, and inference rules. Based on these ontologies, an information-analytical web resource is built; it allows the user to study the objects included in the knowledge base, view the connections between them, and add new objects to the base. The next block is an expert system, to whose input the user submits the specification of the problem to be solved. Based on this information, the inference engine builds a scheme for solving the problem using ontology objects from the knowledge base and inference rules formulated by experts. When the solution scheme is determined, the next step is to build a parallel program for solving the problem. In this step, modules from the software library are used. If there is no suitable module, the user has to develop it himself. Thus, a parallel code is generated taking into account the computational algorithm and the architecture of the selected computing system. The system also includes a simulation block, which allows one to determine the optimal number of computing cores for solving the problem.
To work with ontological models, inference engines are used; they make it possible to check the correctness of the ontology, operating with the names of classes, properties and entities. They can also be used to derive information that is not explicitly contained in the ontology, on the basis of inference rules. There are several inference engines, the most famous of which are Pellet, HermiT and FaCT++. These inference engines are installed as plugins for the Protégé ontology editor [22].
The goal of this work is to develop a crucial component for solving astro-
physical problems using the ontological approach: the assignment of a group
of inference rules that determine the choice of a numerical method, computing

system architecture and parallel programming technology from the knowledge base. It should be noted that making these decisions remains with the user solving the problem. Therefore, by a system of intelligent support we mean only such support for decision-making as consists in providing the user, in the best possible way, with the information required to make a conscious choice, and in predicting the consequences of particular options for solving the problem. In this paper, we consider inference rules for solving astrophysical problems with an intelligent support system.

Fig. 1. Main blocks of the intelligent support system for solving compute-intensive problems of mathematical physics.

2 Scheme Construction and Rules Formalization for Solving Compute-Intensive Astrophysics Problems
The general approach to constructing an ontology for intelligent support of solv-
ing compute-intensive problems of mathematical physics is described in detail in
[20,21,23]. In [23], the upper level of the ontology for solving compute-intensive
problems of cosmic plasma hydrodynamics is shown with templates for describ-
ing objects of the main ontology classes. The base objects of each class are listed.

In [20], an example of choosing a chain of objects from the main classes of such
an ontology to solve an astrophysical problem associated with the collision of
galaxies is given. This work does not consider in detail how this chain should be
built to solve the problem, including the questions of setting a group of rules,
on the basis of which the numerical method and architecture of the computing
system are selected from those available in the ontology.
In [24], we considered a conceptual model for constructing a scheme for solv-
ing a mathematical physics problem based on the ontology approach (Fig. 2).
The main blocks for the specification of the problem (user interface), the main
blocks of the solution scheme, as well as the groups of rules that must be set for
the automatic construction of the scheme are highlighted. These are groups of
rules determining a system of equations, a numerical method, the implementa-
tion of a parallel algorithm, the properties of this algorithm, parallel computing
architectures and technologies. Essentially, these are user decision points where
intelligent support is needed to select the optimal solution, including from the
point of view of parallel implementation. Therefore, for each subject area, it is
necessary to develop a set of such rules that will allow the user to avoid mistakes
when developing a parallel algorithm and a program for solving his problem.

Fig. 2. Scheme of relationships between the main blocks of the user interface (high-
lighted in blue), problem-solution scheme blocks (highlighted in yellow), and rule groups
(highlighted in green). (Color figure online)

Let us consider these issues in more detail in relation to the solution of


astrophysical problems. Figure 3 demonstrates the main ontology objects at each
point of choice, which can be used to solve astrophysical problems.

Fig. 3. Basic ontology objects for constructing a scheme of an astrophysical problem solution.

Let us formulate the rules for solving astrophysical problems in a language familiar to an expert in the mathematical modeling of astrophysical processes; a simplified sketch of how such rules can be applied is given after the lists below.
Rules for Determining Physical and Mathematical Models
1. The hydrodynamics model is used by default.
2. If there is a magnetic field, then the magnetic hydrodynamics model is used.
3. If there are velocities of the order of the speed of light, then relativistic hydro-
dynamics is used.
4. If it is important to take into account the composition of astrophysical objects,
then chemical dynamics is added.
5. If the velocities of gravitational interaction are of the order of hydrodynamic
ones, then gravity is added.
6. If radiation or a special composition of the gas is taken into account, then a
special equation of state is constructed.

Rules for Determining Discretization (Grid)


1. The regular grid is used by default.
2. If a collapse-based process is modeled, then nested grids are used.
3. If the collapse process is multiple, then adaptive grids are used.
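To illustrate how these rule groups could be applied to a problem specification, the sketch below encodes them as plain Python functions; the specification fields are assumptions made for illustration, and the actual system expresses such rules in the ontology and applies them with an inference engine rather than hand-written code.

def select_models(spec):
    """Rules for determining physical and mathematical models."""
    models = ["hydrodynamics"]                         # rule 1: default model
    if spec.get("magnetic_field"):
        models.append("magnetohydrodynamics")          # rule 2
    if spec.get("velocities_near_speed_of_light"):
        models.append("relativistic hydrodynamics")    # rule 3
    if spec.get("composition_matters"):
        models.append("chemical dynamics")             # rule 4
    if spec.get("gravitational_velocities_comparable"):
        models.append("gravity")                       # rule 5
    if spec.get("radiation") or spec.get("special_gas_composition"):
        models.append("special equation of state")     # rule 6
    return models

def select_grid(spec):
    """Rules for determining discretization (grid)."""
    if spec.get("multiple_collapse"):
        return "adaptive grids"    # rule 3
    if spec.get("collapse"):
        return "nested grids"      # rule 2
    return "regular grid"          # rule 1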
Another random document with
no related content on Scribd:
puis pour voir les choses comme elles sont. Voici ce qui me semble
bien s’être passé.
« La production des ouvrages d’imagination, en France, a
presque décuplé depuis un demi-siècle. Les critiques, je vous l’ai dit,
se sont trouvés submergés. Ils n’ont plus, même matériellement, le
temps de tout lire ; il y a eu, de leur part, une sorte de demi-carence,
involontaire. Les membres des jurys littéraires, en décernant une
demi-douzaine de prix chaque année, opèrent une espèce de triage.
Ils lisent, les pauvres diables, ils lisent même « à l’œil », si j’ose
m’exprimer avec cette vulgarité. Et de la sorte ils signalent les
ouvrages qu’ils couronnent, non seulement au public, mais aux
critiques. Ceux-ci ont beau protester contre les prix littéraires, ils
sont bien obligés de rendre compte à leurs lecteurs d’un livre dont
ceux-ci leur demandent, naturellement : « Le prix, selon vous, a-t-il
été bien, ou mal donné ? »
— Il y a donc du bon dans cette coutume nouvelle ?
— Sans doute, mais non sans mélange. Auparavant c’était les
lecteurs eux-mêmes, sous la direction des critiques, qui faisaient
librement leur choix, par une sorte de suffrage universel. Aujourd’hui,
nous n’en sommes plus qu’au suffrage à deux degrés, avec un
scrutin aristocratique à la base, et un vote populaire qui n’existe que
pour ratifier. Car la puissance d’achat du public est limitée. Lorsque,
dans l’année, le lecteur s’est procuré chez le libraire une dizaine de
volumes, il y a des chances pour qu’il s’en tienne là. Il en résulte que
tout ouvrage qui ne bénéficie pas d’un prix littéraire risque fort de
tomber dans l’oubli — ou les boîtes des quais, ce qui est à peu près
la même chose.
— L’expérience paraît prouver, en effet, qu’il en est ainsi.
— De plus, cette institution des prix littéraires, si elle a pour effet,
dans une certaine mesure, de moraliser les écrivains des
générations antérieures, qui décernent la récompense, pourrait bien
démoraliser les candidats, c’est-à-dire toute la jeune littérature.
— Comment cela ?
— Les jurés sont obligés de lire les ouvrages de ces débutants,
ou quasi-débutants. Cela ne leur est pas sans fruit : ils sortent ainsi
de leur coquille, ils entrent en contact avec des tendances nouvelles,
des conceptions d’art qui ne sont pas les leurs. Je ne dis point qu’ils
ne le fissent pas auparavant ; mais ils le font ainsi plus souvent, et
d’une attention plus éveillée.
« Pour ceux, par contre, qui prétendent à leurs suffrages, ces
concours ne vont pas sans inconvénients. Ils les accoutument à des
démarches un peu trop souples, à des sollicitations, en un mot à
l’intrigue. Je suis persuadé qu’ils s’exagèrent l’influence de ces petits
moyens. Ce qui m’a presque toujours frappé, c’est la générosité,
l’impartialité des débats dans ces jurys littéraires, le soin touchant
que mettent les jurés à peser le mérite des œuvres. Ils commencent
d’ordinaire par accorder des voix de sympathie ou d’amitié à
quelques candidats. Mais ensuite la véritable discussion commence.
Elle est souvent fort vive ; elle demeure rigoureusement probe.
« Mais rien n’a pu empêcher le candidat de se dire : « Me liront-
ils ?… Ils en reçoivent tant ! Je ferais bien d’aller les voir ! Et aussi de
leur écrire ! Et aussi de leur faire écrire, par telle personne qui passe
pour avoir de l’influence auprès de celui-ci ou de celui-là. » Ce
médiocre souci, l’emploi de ces petites ficelles, n’est pas pour
rehausser les caractères. Ce sera là, selon moi, un des principaux
reproches qu’on pourra faire aux prix littéraires, tant qu’ils dureront.
— Tant qu’ils dureront ?
— Il en est un certain nombre qui sont assurés de vivre. Le
premier en date, d’abord, qui est le prix Goncourt ; celui que
l’Académie a fondé, à l’imitation et en concurrence du prix Goncourt,
un ou deux encore. Mais d’autres sont des entreprises de publicité.
Leur existence est fonction de la prospérité de la firme qui les
inventa, et du succès que le genre romanesque obtient en ce
moment. Ils ne seront pas éternels.
— Des entreprises de publicité ?
— Pamphile, elles sont fort légitimes ! Mais il ne saurait y avoir de
doute sur cette origine commerciale. Il n’en était pas du tout ainsi de
leur aïeul, le prix Goncourt. Celui-ci a eu pour père deux écrivains,
prosateurs et romanciers, qui tenaient leur profession pour la
première du monde, et à un moment où la morale publique, plus
chatouilleuse que de nos jours, mettait aisément certaines œuvres à
l’index. Ils ont voulu manifester contre cette attitude, où ils voyaient
du pharisaïsme, élever en dignité l’artiste libre, dédaigneux des
conventions, en face des Béotiens. La petite compagnie qu’ils ont
formée, désignant par leur testament ses premiers membres, est
composée d’écrivains de valeur, et sans nulle attache officielle ou
mercantile. De là le légitime accueil que fit le public à cette
fondation. Observez qu’il n’en résulta pas tout d’abord, pour les
ouvrages couronnés, un succès de librairie. Les « prix Goncourt » du
début n’ont pas connu de gros tirages. Ce n’est qu’à la longue que
ceux qui lisent constatèrent que les juges du « prix Goncourt »
d’ordinaire ne se trompaient pas dans leur choix, et leur signalaient
des œuvres intéressantes.
« A compter de cet instant, les éditeurs s’efforcèrent d’avoir « leur
poulain » pour le prix Goncourt. Ce fut la première phase. Dans la
seconde, ils songèrent à fonder ou à susciter la création d’autres
prix, pour le motif que c’est là le genre de publicité qui « paie » le
plus sûrement.
« Cela durera donc tant que ce genre de publicité paiera.
— C’est-à-dire ?…
— C’est-à-dire tant que ces prix ne seront pas trop nombreux
pour se faire mutuellement concurrence, ce qui se produit déjà. Et
tant que nous ne passerons pas, comme je le disais l’autre jour, de
la période des vaches grasses à celle des vaches maigres.
« … Mais, je ne saurais trop le répéter, je plains les poètes. C’est
eux surtout qui auraient besoin d’un secours extérieur, de l’appui
social : un romancier de talent peut espérer aujourd’hui vivre de sa
plume. Les poètes ne peuvent s’adresser, de notre temps, qu’à
quelques rares délicats. En mettant les choses au mieux, il leur faut
attendre beaucoup plus longtemps que les romanciers l’instant où
quelques paillettes d’or se mêleront pour eux à l’eau claire
d’Hippocrène. Pour la plupart, ces paillettes ne tombent jamais dans
leur sébile. Si le fier Moréas n’avait eu quelques petites rentes, il
serait mort de faim…
« Il y a bien quelques petits prix pour les poètes, mais si
dérisoires !… D’ailleurs il me paraît que cette institution des prix
annuels, justement par ce qu’elle a souvent de trop commercial, ne
remplit pas son objet. Un prix qui serait donné tous les cinq ans
seulement à un jeune auteur, et qui assurerait à celui-ci, pour cinq
ou dix ans, une somme suffisante pour qu’il pût travailler avec
indépendance, rendrait à l’art de bien plus grands services. Mais
quel est le mécène qui nous le donnera ? »
CHAPITRE XVII

L’ÉCRIVAIN ET L’ARGENT

Pamphile, peut-être avec le désir malin de m’embarrasser un


peu, m’apporte trois ouvrages récemment parus. Le premier est une
idylle très chaste, de la sonorité un peu grêle et charmante d’un
verre de pur et mince cristal frappé d’une cuiller d’argent, composée,
avec une ingéniosité alexandrine, par un conteur adroit et lettré qui,
étant donné le sujet et le milieu — que du reste il connaissait fort
bien — avait décidé avec intelligence que c’était de la sorte qu’il le
devait traiter, et non autrement. Tout le monde, malgré la concision
de cette analyse, aura reconnu Maria Chapdelaine.
Le second a été fabriqué en série, dirait-on, et selon les vieilles
recettes naturalistes. Il contient des pages d’autant plus scabreuses
qu’il est écrit sans art, et par surcroît avec des prétentions à instituer
quelque chose comme une nouvelle morale sexuelle. Cette manie
de mêler la leçon de morale à l’indécence n’est pas nouvelle : elle
date du XVIIIe siècle et a continué de sévir durant tout le cours du
XIXe siècle. Elle n’est pas pour cela plus agréable. Je ne désignerai
pas plus clairement ce roman, qui a eu un grand succès de librairie,
non seulement en France mais à l’étranger, où il est tenu pour
essentiellement français et parisien.
Le troisième est une œuvre excellente, d’un de nos plus grands
et plus parfaits artistes.
Les deux premiers se vantent, sur leurs couvertures, d’avoir
atteint le trois centième mille. Le dernier n’a obtenu l’attention que de
quelques milliers de lecteurs.
« Est-ce juste ? me demande Pamphile.
— Je ne vous dirai pas maintenant si c’est juste. Mais je vous
demande tout de suite ce que ça prouve, et si ça prouve quoi que ce
soit ? »
Ce fut au tour de Pamphile d’être embarrassé.
« Ce n’est pas une raison, poursuivis-je, parce qu’on moud un
morceau de musique sur l’orgue de Barbarie, pour que ce morceau
soit vulgaire et sans valeur. En Allemagne, presque tous les orgues
de Barbarie jouent la Marche nuptiale de Lohengrin, durant qu’un
singe habillé en soldat anglais fait des grimaces sur le dessus de
l’instrument. Ça n’empêche pas la Marche nuptiale d’être une belle
chose. Il y a de belles choses qui peuvent être populaires — et il
importe même qu’il y en ait — et d’autres qui ne sont faites que pour
un public restreint. Elles n’en valent, les unes et les autres, ni plus ni
moins.
— D’autre part, ce n’est pas non plus une raison, parce qu’on
joue un morceau sur l’orgue de Barbarie, pour qu’il ait du mérite !
— Votre observation est juste. Mais vous devriez ajouter que si
une musique n’est comprise que par deux ou trois cents amateurs,
ce n’est pas non plus une preuve suffisante que l’auteur a du
génie… Stendhal n’a connu la gloire qu’après sa mort, soit, et c’est
regrettable pour le goût de ses contemporains. Mais Obermann n’a
eu, du vivant de Senancour, qu’une poignée de lecteurs, et pas
davantage ensuite : de quoi il ne faut ni s’étonner ni se scandaliser,
car Obermann n’est, après tout, qu’une intéressante curiosité
littéraire.
— Pourtant, il faut bien qu’un écrivain vive de son travail et que,
dans l’état actuel de notre société, sa valeur soit appréciée, comme
les autres valeurs sociales, en argent ?
— Je n’en vois pas du tout la nécessité absolue. Que feriez-vous
alors des poètes, qui sont malgré tout, n’est-ce pas, l’honneur le plus
pur de toute littérature ? Il est assez rare pourtant qu’un poète vive
de son œuvre. Ni Baudelaire, ni Leconte de Lisle, ni Heredia n’y sont
parvenus. Encore que la tendance actuelle de notre civilisation soit
de tout commercialiser, elle ne saurait commercialiser le poète et il
n’est pas désirable qu’elle y puisse arriver. Par-dessus tout, le poète
doit se plaire à lui-même, et négliger tout le reste. Il doit servir son
dieu, et même ne pas songer à vivre de l’autel. Il en est qui en
meurent… Avez-vous entendu parler d’un certain Deubel, qui avait
du talent, et dont M. Léon Bocquet a rapporté la belle et triste
histoire ?… Je ne parle pas de Rimbaud, enfant terrible et de génie,
mais Ardennais vigoureux et réalisateur, qui mourut, je m’en assure,
convaincu de détenir, comme chef de factorerie, dans la société, un
rang très supérieur à celui que lui conférait la gloire d’avoir écrit le
Bateau ivre.
— Pourtant, il faut qu’ils vivent, puisqu’ils sont le plus grand
honneur des Lettres.
— Il le faut !… Mais le traitement que leur accorde la société est
demeuré exactement ce qu’il était il y a trois siècles. Il y a trois
siècles, le poète était entretenu, protégé, par un grand seigneur. A
cette heure il l’est, ou devrait l’être, par la société, par l’État. Je
redoute pour lui le zèle égoïste ou imprudent des fonctionnaires et
des politiciens qui font la chasse aux sinécures. Il en faut quelques-
unes, dans une communauté bien policée, pour les poètes et les
travailleurs désintéressés ; de même que des bureaux de tabac pour
les veuves pauvres d’officiers supérieurs.
« Et cela nous ramène, pour l’écrivain pauvre, au début de sa
carrière, à la nécessité de cette « profession seconde » dont nous
parlions l’autre jour. Car, après tout, quand il compose son premier
poème ou sa première prose, il ignore absolument si ce qu’il écrit est
digne d’être écrit ; et l’État ne peut ni ne doit accorder de sinécures à
tous ceux qui tiennent une plume avant que leurs pairs ou leurs
anciens les aient désignés à son attention.
« Toutefois, Pamphile, il n’est nullement interdit de vivre de ce
léger outil, d’en tirer du profit en même temps que de l’honneur, et
même de bénéficier de ces gros tirages qui attirent la considération
des gens sérieux. Ceci même du point de vue social : car, du
moment que les gens sérieux regardent d’un œil favorable les
personnes qui savent, par leur industrie, se créer d’importants
revenus, cette considération finit par s’étendre, en quelque mesure,
à la corporation tout entière. Tous les ingénieurs ni tous les
architectes ne sont riches ; mais il suffit que quelques-uns le soient
devenus pour que la profession d’ingénieur ou d’architecte soit
définitivement « classée ».
— On a donc le droit, en somme, si l’on entre dans la carrière
des Lettres, de ne point négliger les bénéfices matériels qu’elle peut
réserver ?
— Certes ! Il existe même, aujourd’hui, des groupements, des
syndicats qui s’occupent, avec discernement et autorité, de ces
questions commerciales, établissent des formules qui déterminent le
minimum des avantages auxquels ils ont droit, examinent les projets
de traités, défendent avec bonheur les intérêts professionnels.
« Mais, Pamphile, pourtant, n’oubliez pas une chose : c’est qu’il
serait funeste, à la fois pour vous et pour le bon renom des Lettres,
d’entrer dans cette carrière comme vous entreriez dans toute autre,
avec le seul souci d’en tirer, le plus vite possible, le plus gros
rendement matériel et « monnayable » qu’il se pourra. Elle est en
cela différente de beaucoup d’autres. Le premier but qu’on doit s’y
donner n’est pas de gagner de l’argent, mais de se plaire à soi-
même.
« Se plaire à soi-même avant de plaire aux autres et de songer à
un bénéfice quelconque ! Tout écrivain qui débute en se disant : « Je
vais composer tel livre en vue d’un grand succès de lecture, et par
conséquent d’argent », est sûr de faire une œuvre médiocre, de
devenir un fabricant, non pas un artiste, d’être justement oublié
après sa mort, et souvent même, de son vivant, de se voir négligé.
Combien n’en ai-je pas vus qui ont souffert de cet abandon du
public ; même après un premier succès qu’ils n’avaient pas cherché,
mais qui avait été trop retentissant pour des qualités trop vulgaires.
Ils ont penché du côté de leur faiblesse secrète et ils en acquittent le
prix, après l’avoir prématurément touché. On entend dire d’eux :
« C’est Un Tel qui a tiré le bouquet de son feu d’artifice le premier. »
Ils tombent dans la triste et un peu ridicule catégorie de ceux qui ont,
comme on dit, un bel avenir derrière eux.
« Voyez-vous, Pamphile, il est un mot de l’Évangile que nous
devons, nous autres gens de lettres, garder tout spécialement en
mémoire : « Cherchez d’abord le Royaume de Dieu et sa justice, et
tout le reste vous sera donné par surcroît. » Cherchons d’abord la
perfection, selon notre personnalité, et tout le reste viendra, sans
que nous l’ayons désiré. »
CHAPITRE XVIII

LE MARIAGE DE L’ÉCRIVAIN.
L’ÉCRIVAINE

« Dois-je me marier ? dit Pamphile.


— Mon cher ami, c’est une question que déjà posait Panurge à
l’oracle de la bouteille Bacbuc, qui ne lui répondit point. Permettez
que j’en fasse autant.
— Voilà bien les plaisanteries de votre génération ! Je ne vous
demande pas, comme Panurge, si je serai trompé. Ce que je
voudrais savoir est s’il convient à un homme de lettres de se marier.
— Pourquoi pas, Pamphile, pourquoi pas ?… Il apparaît que c’est
aujourd’hui la mode dans la corporation.
— Encore une plaisanterie !
— Non pas… Mais vous concevez que, en pareille matière, je ne
puis me placer que sur le terrain de l’observation. Or il semble bien
que, pour les gens de lettres contemporains, le mariage devienne la
règle, le célibat l’exception.
— La belle affaire ! Comme pour tout le monde !
— Comme pour tout le monde, en effet. Ce que j’entends
seulement signifier est que, il y a trois quarts de siècle, le célibat
était, chez les écrivains, un peu plus fréquent qu’aujourd’hui. Si
Hugo, si Balzac même, vers la fin de sa vie, furent mariés, ni
Stendhal, ni Musset, ni Flaubert, ni les deux Goncourt ne
convolèrent en justes noces. Et nous pourrions, en cherchant un
peu, découvrir pas mal d’autres exemples de cette répugnance à se
soumettre au lien conjugal. Il n’en va plus tout à fait de la sorte à
cette heure.
— En voyez-vous une raison ?
— On pourrait peut-être la découvrir dans le fait que l’écrivain —
ou l’artiste en général — est beaucoup moins laissé hors de la
société qu’il y a deux ou trois générations. Celle-ci, par un réflexe de
défense que j’ai déjà signalé au début de ces conversations, tend à
le reprendre, à se l’annexer. En d’autres termes, il s’embourgeoise…
L’opinion des familles, sur la carrière littéraire depuis trente ou
quarante ans, a beaucoup changé. La liberté que vous laisse
madame votre mère de l’embrasser en est une preuve ; et il me
souvient qu’au contraire, il y a un demi-siècle environ, un professeur,
dans un lycée de Paris, ayant dit à l’un de ses élèves qu’il semblait
avoir des dispositions pour écrire, les parents de cet élève s’en
allèrent plaindre au proviseur… Au fond du différend qui sépara le
général Aupick de son beau-fils Baudelaire, et qui rendit l’existence
matérielle du poète si misérable, on croit bien distinguer cette
méfiance des classes moyennes et supérieures de cette époque à
l’égard d’une profession encore non classée. Il n’en est plus de
même aujourd’hui.
« Mariez-vous donc quand vous voudrez, Pamphile, si le cœur
vous en dit. Autrement, ce ne serait pas la peine…
« Ce qu’on est convenu d’appeler « le monde » existe encore, au
moins comme façade. Si donc le genre de vie de l’écrivain devient
mondain, une femme lui devient indispensable. C’est elle qui reçoit,
c’est elle aussi qui sert d’ambassadrice. De là cette modification, qui
se généralise, dans la vie privée des gens de lettres. Il faut au moins
qu’ils soient divorcés. Le divorce, dans la profession, est assez bien
porté.
— Un homme de lettres peut-il épouser une femme de lettres ?
— Je connais de telles unions qui furent et demeurent heureuses
et brillantes. Pourtant je ne les saurais recommander. Non
seulement c’est faire entrer sans prudence dans l’association un
élément dangereux de rivalité — que doit-il arriver si le public
reconnaît à la femme plus de talent qu’au mari, ou inversement ? —
mais encore, même entre égaux de mérite, il n’est pas commun
qu’on ait la même conception de l’œuvre d’art, et il peut en résulter
des débats pénibles, ou de silencieux jugements qui ne le sont pas
moins. Je vois fort bien un médecin épouser une avocate, un
ingénieur une femme de lettres : la diversité même des professions
suscite l’intérêt, et des enseignements. Je n’aurais pas la même
confiance dans le mariage d’un avocat et d’une avocate, d’un
docteur et d’une doctoresse en médecine. Pourtant, tout cela est
question d’espèce, et il est, je vous le répète, des exceptions
favorables.
— Puisque nous parlons de femmes de lettres, poursuivit
Pamphile, il me souvient d’avoir lu à ce sujet, dans l’Avenir de
l’Intelligence de M. Charles Maurras, des pages fort remarquables,
mais assez méchantes. L’auteur ne s’occupait que des plus
légitimement illustres parmi nos contemporaines. Il leur
reconnaissait beaucoup de talent ; il louait même ce talent avec
force et subtilité ; il le discernait, il le faisait briller. Mais il ajoutait —
car telle est sa thèse — que ce succès grandissant des femmes
dans tels romans d’un lyrisme subjectif, et dans la poésie, marquait
un aboutissement inévitable du romantisme qui, dans l’œuvre d’art,
a donné le pas, sur l’intelligence, à la sensibilité — constatation qui,
de la part de M. Maurras, n’est pas un compliment.
— Il peut bien y avoir un grain de vérité là-dedans ! Il est certain
que, de façon générale, les femmes se trouvent plus à leur aise
dans le domaine de la sensibilité et de l’instinct que dans celui de la
raison. Il n’est guère douteux non plus que le romantisme a fait,
dans l’œuvre d’art, une part plus grande à la sensibilité que les
époques antérieures. Ce qui, du reste, est loin d’être un malheur !
Etre sensible n’empêche pas, ou ne devrait pas empêcher, d’être
intelligent !
« Toutefois, M. Charles Maurras aurait écrit quelque chose de
plus exact — mais qui aurait moins étonné — en se contentant de
discerner que, s’il y a un peu plus de romancières et de poétesses
qu’auparavant, exploitant la même veine romantique, en somme,
que leurs émules masculins, bien qu’autrement, c’est pour ce simple
motif que les mœurs sociales reconnaissent à la femme une
indépendance de plus en plus grande. Elle en profite, et voilà tout !
Elle en profite pour se peindre telle qu’elle se voit et se sent, et cela
s’appelle alors de la littérature — mais aussi pour s’essayer, et non
sans bonheur, dans tous les autres genres d’activité intellectuelle. Il
y a au moins autant d’avocates et de doctoresses que de femmes de
lettres ; et, dans la science de la médecine et du droit, je ne sache
pas qu’il faille plus de sensibilité que d’intelligence. On en peut
conclure que, même si notre temps était anti-romantique et
insensible, il ne posséderait pas moins « d’écrivaines ».
« Car il s’agit là surtout d’un fait social nouveau, qui est
l’affranchissement progressif de la femme. Encore ne faut-il pas
exagérer l’intensité du phénomène. Entrez au Palais et dites-moi
combien vous comptez d’avocats pour une avocate ? Prenez un
annuaire, et dites-moi combien vous comptez de docteurs en
médecine pour une doctoresse ? Maintenant, faites une dernière
expérience, allez à une assemblée générale de la Société des gens
de lettres, et déterminez la proportion des femmes et celle des
hommes. Elle n’est pas de dix pour cent.
« Il est possible, il est même probable, que cette proportion soit
destinée à s’accroître, dans toutes les professions libérales, à
mesure que l’enseignement donné aux jeunes filles se rapprochera,
jusqu’à s’y confondre, de celui qu’on dispense aux jeunes gens. Et,
sous l’influence de cet enseignement identique, on verra — on voit
déjà — diminuer la différence entre la mentalité féminine et la
mentalité masculine, entre l’art féminin et l’art masculin.
— On la verra diminuer, mais non pas disparaître.
— Évidemment, Pamphile, évidemment ! Un homme ne saurait
être une femme, ni une femme un homme : et ceci, n’est-ce pas, est
fort heureux ! »
CHAPITRE XIX

SALONS LITTÉRAIRES

Jadis les écrivains allaient au café ; ils y faisaient leurs débuts ; ils
y vivaient ; parfois ils y mouraient, ou peu s’en faut. Le grand Moréas
aura peut-être été le dernier à mener intrépidement, et jusqu’à
l’hôpital, cette existence indépendante et bohème. Elle avait ses
avantages, assurant à l’esprit une liberté qu’ailleurs il ne saurait
retrouver aussi entière. Elle avait ses inconvénients, dont l’un, et non
des moindres, était de séparer presque complètement les gens de
lettres des femmes — du moins des femmes qui ne fréquentent pas
les cafés, et c’est le plus grand nombre. Un autre de ces
inconvénients est qu’on ne saurait guère aller au café, et y séjourner,
sans boire. La littérature d’alors buvait donc, et non sans excès… La
Faculté, de nos jours, constate qu’il existe « un alcoolisme des gens
du monde » à base de porto et de cocktails. Il y avait, à cette époque
aujourd’hui préhistorique, un alcoolisme des littérateurs, à base
d’absinthe et d’autres breuvages violents et populaires.
Nul ne saura jamais pourquoi les peintres vont encore au café,
tandis que les gens de lettres l’abandonnent. Il se peut que ce soit
parce qu’il subsiste, dans la peinture, plus de fantaisie et d’esprit
révolutionnaire, si l’on entend ce dernier terme au sens d’une sorte
de répugnance à s’incliner devant un minimum de conventions
mondaines et aussi d’un goût déterminé pour les discussions
théoriques. Les discussions théoriques ne peuvent guère avoir lieu
qu’au café, et entre hommes, ou du moins en présence de dames
qui ne sont là que pour attendre patiemment que leur ami finisse par
estimer qu’il est temps de s’aller coucher.
Le café, pour la littérature, surtout pour la très jeune littérature, a
été remplacé par le bar-dancing, plus coûteux, et où l’on rencontre
des dames également plus coûteuses, bien que d’un niveau social
analogue à celui des personnes qui accompagnaient autrefois leurs
seigneurs et maîtres à la brasserie ; mais surtout par les salons.
There are at present very few "literary" salons in the proper sense of the word, that is, salons in which one man of letters, or several, hold the floor and lead the conversation. But there are many more than before in which young men of letters are received on an equal footing with people of society or of considerable fortune. This comes, as has been said, from the tendency of the ruling and conservative classes to annex literature to themselves as a force. There the young men of letters make women friends, neither more nor less reliable than those whom their predecessors used to take to the café, but differing from them in social rank, in manner of living and, in certain nuances, in their way of viewing the problems of love. They have, moreover, by reason of their familiarity with society and of their position, more authority; they insist on not being left entirely out of the conversation, even when it is a conversation "of ideas," which, at a pinch, can happen.
From this evolution of manners it follows that the literature of former days, the literature of the café, had an excessive tendency to masculinize itself, and that the literature of today shows, in the opposite direction, a propensity to feminize itself, while proclaiming itself, in a certain fashion, antifeminine. It is better bred, and more gallant; it is less romantic, less oratorical, more witty, light and psychological; it seeks other kinds of superiority, and it also admits other kinds of mediocrity. It must not be thought that the literary cafés had no snobbery of their own: theirs was the snobbery of violence, of truculent coarseness and, towards the end, of an anarchic individualism. The more or less literary salons of our day have theirs, dictated by a few more or less youthful reviews which claim to express the finest of the fine, to possess a taste that is not that of the common herd (the snobbery of boredom, as M. Henri Béraud has said, with harshness and without sufficient nuance), and the snobbery of decent opinions, not in morals, where people are most indulgent, but in politics.
The café was readily libertarian; the salon is conservative, though in a platonic and ineffectual way. It cannot, indeed, go very far: for it receives not only men of letters and people of society, but politicians of the parties in power, who are also, for the mistress of the house, figures "to be shown off." Often, moreover, material interests, "business" interests, have something to do with it. One always has some small favour to ask of a politician! Besides, it is generally agreed that he thinks less badly than one would have supposed, that at bottom "he is one of us." One keeps the vague hope of winning him over entirely. The error is excusable: in Paris, and in a Parisian milieu, the politician talks as people talk in Paris; he has no wish to make enemies. Once his back is turned, he goes back to thinking of his provincial electors, who for their part do not think like the habitués of that Parisian salon. He knows what must be said, and what must be left unsaid. In the end it is not his electors whom he will betray; but the salon will not hold it against him for long, because, in spite of everything, one must "have" him.
The salon exerts no real influence on literature; it does not lead it, it points out no direction to it, for the reason that little thought is given to literature there, and that conversations "of ideas" are rare there nowadays. For the rest, apart from the writers of the fashionable little coteries of whom I was speaking a moment ago, it is content to welcome the writers whom public favour has singled out by large printings, or whom certain reviews have singled out by their publicity; it does not make reputations. It has, however, this advantage: it provides a meeting-place for men of letters who until then knew one another only through their works, or not at all. It can also be of service to a candidacy for the Académie.
Pamphile, who is only a neophyte, says little there, except to the women, in which he is quite right; and with them he does not talk literature. But that does not prevent him from having eyes and ears. He listens attentively and knows how to observe; he usually comes away with reflections that amuse me. I am not in the least surprised (such illusions belong to his age) that he is disappointed to find that many authors do not resemble their works. Belphégor, so ardent and so incisive in his writings, appears to him in the shape of a small fair-haired man, timid and gentle as an Eliacin who would care only to split hairs instead of reciting the lessons of the high priest Joad. He is astonished that Vergis, who published the two finest lyrical and romantic novels of the end of Romanticism, will no longer hear of anything but Buddhist philosophy; that Paulus, who shows so much wit in his books and in the theatre, commonly pours out jokes that would do no credit even to the Argus of the café du commerce of a small provincial town, yet which are received as marks of exceptional originality.
Finally Pamphile has discovered Lépide, whose success in this salon and in several others remains a mystery to him. Lépide is dull, even grey, tedious, and never says anything on any subject that deserves to be remembered. One would think him born for diplomacy rather than for literature. But it is to literature that he applies his diplomacy. He writes; he composes works; but his works, tedious enough in themselves, always have the further fault of recalling those of some predecessor. His style is pure but without character, a transparent and insipid water. Nothing of it can be retained. Yet there he is, and the place accorded him is a distinguished one, like his person, stamped with that truly worldly elegance which consists in presenting nothing remarkable whatever. No one doubts that he is destined for the most brilliant future.
Pamphile, somewhat shocked, asks me the reason for it.
"There is none," I tell him. "There are simply, in literature, salon reputations, just as thirty years ago there were café reputations, every bit as little deserved. They are not the same ones, that is all. The café loved the loud-mouthed and took their noisy vulgarity for originality. Society loves people who are self-effacing, discreet and obliging. It adopts them; it will oblige no one to read their books, for that is not within its power; but it can push them all the way to the Académie."
"Then Lépide will be a member of the Académie?"
"Why not? He is good company. That is a merit, and one cannot say no indefinitely to an amiable man whom one meets everywhere one goes, and about whom there is nothing to be said, either for good or for ill. Once dead, he will be as though he had never existed. His last reader, and perhaps his first, will be the man who replaces him under the Coupole. The poor fellow will have trouble acquitting himself of the task; but he will manage it if, in a discreet way, he contrives to suggest that there are writers whose influence is personal and does not come from their works."
CHAPTER XX

THE WRITER AND THE ACADÉMIE

Pamphile and I watch Théodore enter a salon.


Théodore casts his eyes in every direction; he catches sight of what he has come to find. The hunting is indeed too good, the game too plentiful: there are two members of the Académie Française present.
Perhaps his first impulse was to congratulate himself on it: Théodore is a candidate for the seat left vacant in that illustrious company by the death of the lamented Fillon-Laporte, historian of the French navy. Could he not chase these two hares at once, and pay court at a single stroke to these two influential electors? But on reflection here he stands hesitant, disconcerted by this abundance of riches: the two Immortals are not reputed to belong, at the Académie, to the same party. Will he not alienate the one by showing too much deference and admiration for the other? At last he makes up his mind: a few words to the first, a longer conversation with the second. The second, whom the conversation doubtless does not amuse beyond measure, decides to take his leave. Théodore then breathes again and draws nearer to the one he had somewhat neglected. Then he looks at his watch: with a taxi he will have time to hurry to another gathering, where he expects to meet yet another elector.
Pamphile has watched this manoeuvring with great interest.
"Do these worldly campaigns," he asks me, "have a decisive effect? Does the influence of salons and of connections play an important part in the Académie's ballots?"
"It can happen, Pamphile. But the contrary is not without precedent either. In this matter elections to the Académie are like all other elections, in which the candidate who triumphs is sometimes the one whom nobody knew: at least, if the electors think no good of him, they wish him no harm. Nobody thinks of voting against him; and that is half the victory assured. Antipathies are born more often of personal contacts that went badly than of the reading of a man's works."
"One would be hard put," Pamphile remarked with disdain, "to read Théodore's. He is not a man of letters at all. He was a diplomat, a politician, an administrator, and never wrote anything but reports. My good wishes go to his rival, who is a novelist."
"That novelist is indeed a distinguished writer. But I see with regret, Pamphile, that you are falling into the common error of believing that the Académie should open its doors only to men of letters. Ever since it has existed, it has never ceased to be a kind of club, which takes care to recruit itself, by a sort of sampling, from among the eminent figures of the ruling classes. It has always contained prelates, scientists, great noblemen, ministers and warriors (at certain periods, warriors who had fought no war, but that is of no importance), and not only poets, historians, dramatists, tellers of tales and philosophers."
"… A kind of summary, then, a sampling, as you say, of the upper reaches of French society."
"That is it."
"In that case the sampling is incomplete. I can see in it three marshals, two ecclesiastics, a fair number of politicians. But not one of those captains of finance or industry, not one of those great railway directors who are
