1. Introductory Remarks
Parameterized Complexity has developed strongly, and there is still a long way to go, as evidenced by the directions of the papers in this Special Issue.
The high-quality research in the field has been recognized by funding agencies (ERC, NSF, many national research councils), by research fellowships (EATCS, ACM, Marie Curie, Lise Meitner and others), by memberships in Royal Societies, the Academia Europaea, and Academies of Science in various countries, and by prizes and awards (Humboldt Research Prize, Witold Lipski Prize, Krill Prize, NSF CAREER and others). Foundational papers have appeared in a broad sweep of areas, including AI, Access Control, Business, Bioinformatics, Computational Geometry, Computational Social Choice, Cognitive Science, Computational Medicine, Machine Learning, Phylogeny, Psychology, Operations Research and Scheduling, with many Best Paper Awards. About 8% of papers in top theoretical computer science conferences are related to Parameterized Complexity.
Since 2015, the annual Parameterized Algorithms and Computational Experiments Challenge (PACE) has served to deepen the relationship between parameterized algorithms and practice, to help bridge the divide between the theory of algorithm design and analysis and the practice of algorithm engineering, and to inspire new theoretical developments.
Books in the field include Parameterized Complexity in the Polynomial Hierarchy: Extending Parameterized Complexity Theory to Higher Levels of the Hierarchy (de Haan 2020). Two new books were published in 2019: Cognition and Intractability: A Guide to Classical and Parameterized Complexity Analysis (van Rooij et al. 2019) and Kernelization: Theory of Parameterized Preprocessing (Fomin et al. 2019). Many open problems remain that were proposed in Parameterized Complexity (Downey and Fellows 1999), Fundamentals of Parameterized Complexity (Downey and Fellows 2013), Invitation to Fixed-Parameter Algorithms (Niedermeier 2006), Parameterized Complexity Theory (Flum and Grohe 2006), and Parameterized Algorithms (Cygan et al. 2015). The wiki helps to keep the community informed, and archives the Parameterized Complexity Newsletter (www.wikidot.com).
The articles in this Special Issue represent fresh and ambitious research in new directions in this area.
We would like to thank all the authors for the excellent surveys and papers in this Special Issue. We thank all the reviewers; experts take reviewing very seriously, and their detailed help is valued by the authors. We thank Jones Zhang and the staff at the Algorithms Editorial Office, who worked closely with us to make this Special Issue a smooth and enjoyable experience. As an Open Access journal, Algorithms publishes quickly, a boon to the rapid dissemination of the ground-breaking research you will find in this Special Issue.
2. New Frontiers
Several frontiers in Parameterized Complexity, some new and some with well-established roots but still in their infancy, are represented in this collection. All of these frontiers go “hand in hand” with the overarching goal of increasing the scope and impact of Parameterized Complexity. To this end, first and foremost, parameterized algorithms should be made practical.
This, of course, means that specific methods to enhance performance in practice are to be developed. Novel approaches to do so, with an emphasis on treewidth, the most well-studied width measure in the field, are given by Slavchev et al. and by Bannach and Berndt. Further, basic machinery in Parameterized Complexity should be exported to work when the input is not a graph. Indeed, most problems in practice are not about graphs, yet some of them can be modeled by graphs. For example, Integer Linear Programming (ILP) is a ubiquitous language for formulating problems that is widely used in practice, and Ganian and Ordyniak survey how it can be modeled by parameterized graph problems. Additionally, Lin et al. do so for the kidney exchange problem. Taking a broader perspective, Bulteau and Weller survey new challenges for parameterized complexity in bioinformatics, where problems often concern entities that are not “just” graphs or are not graphs at all.
To design practical parameterized algorithms, additional considerations are often required. Specifically, in practice, it may not be clear how to determine what the “best” solution is. Therefore, we might wish to compute several solutions. Creignou et al. present results for the enumeration of solutions with FPT delay, and Baste et al. highlight diversity as a measure of the quality of a set of solutions. In addition to uncertainty with respect to the best solution, the input itself might be uncertain (in a different sense). This issue is discussed by Narayanaswamy and Vijayaragunathan. Further, the paradigm of parameterized complexity by itself might not be enough, for example, when no efficient fixed-parameter algorithm, or no fixed-parameter algorithm at all, is likely to exist for some problems. In this regard, a research direction that has received growing attention in recent years is that of parameterized approximation, surveyed by Feldmann et al., and a specific result is given by Li. Another framework to handle this is offered by SAT-solvers, which are often employed in practice, as surveyed by de Haan and Szeider.
Beyond Fixed-Parameter Tractability: New Paradigms. In recent years, the paradigm of Parameterized Complexity has been “combined” with other algorithm design paradigms, including approximation algorithms, streaming algorithms, dynamic algorithms and distributed algorithms. The last three focus on the design of parameterized algorithms that can be used under additional constraints/features of the input and the computational entities involved, such as a lack of space or the necessity to efficiently deal with small changes in the input that occur over time. Approximation, like Parameterized Complexity, is a framework to deal with computationally hard problems. Here, we compromise on the “quality” of the solution, but still demand that the algorithm run in polynomial time. By compromising on both quality and running time, we can compromise on each of them by just a little bit rather than on one of them by a lot. For example, we can achieve a constant-factor approximate solution in FPT time for a problem that is W[1]-hard and unlikely to admit any constant-factor approximation algorithm in polynomial time, and hence enjoy the best of both worlds. In this collection, Feldmann et al. survey both positive algorithmic results and hardness results in Parameterized Approximation. Further, Li presents a result in the context of circuits, showing negative results regarding the approximability of Dominating Set by para-AC circuits.
Another framework to deal with problems that are computationally hard even when FPT time is allowed is to employ SAT solvers, which are often used in practice and are very effective. Set in the framework of Parameterized Complexity, this can mean that the objective is to design FPT-time reductions to SAT. Of course, this makes sense only for problems that are presumably not in NP, but belong to higher levels of the Polynomial Hierarchy. In this way, we harness the power of both Parameterized Complexity and SAT solvers, aiming to develop practical algorithms for very hard problems. In this collection, a comprehensive compendium by de Haan and Szeider discusses this framework and presents a long list of relevant problems that are known to be reducible in FPT time to SAT.
The “Best” Solution and the “Correct” Input. Motivated by both theoretical and practical considerations, the study of enumeration and counting problems is of broad interest. Indeed, in many scenarios that arise in practice, we might not be interested in computing a single solution, but rather in several (and possibly even all) solutions to a given problem, or just in counting how many solutions there are. For example, in many problems that arise in bioinformatics, it is not clear what the “best” solution is, since different solutions can represent different biological explanations (e.g., describe different evolutionary events), and it is not clear which is the correct one. In some cases, but not all, further experiments can then be used to sieve the offered solutions. Therefore, perhaps the t “best” (or some t) solutions are sought. This yields different questions in enumeration, where we wish to list all solutions, some t best solutions, or each of the solutions one by one with modest delay. Clearly, the last option is the least restrictive if we enumerate the solutions from “best” to “worst”. The study of enumeration in the framework of Parameterized Complexity is relatively young. Here, when interested in the last option, we want the delay to be FPT time. Problems that admit an enumeration algorithm with FPT delay are said to belong to the class DelayFPT. Creignou et al. study this setting, and present corresponding algorithms for modification problems on graphs and beyond.
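To make the notion of FPT delay concrete, consider the following minimal sketch (our own illustration, not taken from the paper): it enumerates vertex covers of size at most k by the classic branching on an uncovered edge. Since the search tree has depth at most k and at most 2^k leaves, the time elapsing between two consecutive outputs is bounded by a function of k times a polynomial, i.e., the delay is FPT. All names in the code are ours.

```python
def enum_covers(edges, k, cover=frozenset(), excluded=frozenset()):
    """Yield vertex covers of size at most k; every minimal such cover
    is output exactly once (some non-minimal ones may appear too)."""
    uncovered = [e for e in edges if not (set(e) & cover)]
    if not uncovered:
        yield cover
        return
    if k == 0:
        return
    u, v = uncovered[0]
    if u not in excluded:
        # Branch 1: u joins the cover.
        yield from enum_covers(edges, k - 1, cover | {u}, excluded)
    if v not in excluded:
        # Branch 2: u is excluded, so v must join the cover; excluding
        # u makes the two branches disjoint, so nothing is output twice.
        yield from enum_covers(edges, k - 1, cover | {v}, excluded | {u})

# On a triangle with k = 2, the three minimal covers are enumerated.
for c in enum_covers([("a", "b"), ("b", "c"), ("a", "c")], 2):
    print(sorted(c))
```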
When seeking some collection, say, of size t, of solutions, we may wish them to be “diverse”. Indeed, we may not have a reliable measure for what the t “best” solutions are, or perhaps we want to represent the entire solution space. In both cases, we may want the t output solutions to be, in some sense, as “diverse” as possible. There are various measures for the diversity of a collection. In particular, the work of Baste et al. makes use of the sum of the “distances” between every pair of output solutions. Specifically, they present parameterized algorithms to output a collection of t solutions for the d-Hitting Set problem (that is, hitting sets whose size is at most some input parameter k) whose diversity measure is at least some input parameter p, where the distance between two solutions is the size of their symmetric difference.
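As a small illustration of this measure (with solutions represented as plain vertex sets, a choice of ours), the diversity of a collection can be computed directly from its pairwise symmetric differences:

```python
from itertools import combinations

def diversity(solutions):
    """solutions: a list of sets; returns the sum, over all pairs,
    of the symmetric-difference distance |A ^ B|."""
    return sum(len(a ^ b) for a, b in combinations(solutions, 2))

# Example: three hitting sets of size at most k = 2.
print(diversity([{1, 2}, {1, 3}, {2, 4}]))  # 2 + 2 + 4 = 8
```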
Besides being uncertain about which solution we seek, we might be uncertain about the input itself. In particular, for a graph problem, we might be uncertain about which edges should exist—for example, in protein–protein-interaction networks, where vertices represent proteins and edges represent interactions between them, noisy and error-prone experiments to determine the interactions lead to uncertainty. Narayanaswamy and Vijayaragunathan survey different models of uncertainty as well as problems corresponding to them, some of which are motivated by biological and social networks. In addition, they zoom into the problem of finding the maximal core of a graph under uncertainty in two models, and present polynomial-time and FPT algorithms as well as a W[1]-hardness result.
Width Measures: New Perspectives. This collection also contains two articles that address practical issues, both focused on treewidth and tree decompositions, but from two different perspectives. One concerns the computation of treewidth itself, and the other concerns the solution of graph problems parameterized by treewidth (where a corresponding tree decomposition is given as input). Roughly speaking, treewidth is a measure of how close a graph is to a tree, where a tree decomposition is a structure that witnesses it, and this measure is among the most well-studied structural parameters in Parameterized Complexity. Its computation was also posed several times as a challenge in Parameterized Algorithms and Computational Experiments (PACE), an annual competition to design the fastest (in practice) algorithms for the parameterized problems posed as challenges in that year. On the one hand, the article by Slavchev, Masliankova and Kelk presents a new approach based on Machine Learning to obtain a faster (in practice) algorithm for the computation of the treewidth of a given graph. Specifically, it considers known algorithms to compute treewidth, and presents an approach that automatically learns which input characteristics make which of these algorithms perform best. On the other hand, the article by Bannach and Berndt considers algorithms based on dynamic programming on tree decompositions. It is noteworthy that almost all existing algorithms based on treewidth make use of dynamic programming, though this is sometimes coupled with other modern techniques. Specifically, Bannach and Berndt focus on the practicality of these algorithms, and also present software for this purpose.
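To give a flavor of this dynamic-programming pattern, the following sketch (our own, with an illustrative graph and decomposition, and not the software of Bannach and Berndt) solves Maximum-Weight Independent Set bottom-up over a rooted tree decomposition; each table entry records the best solution whose intersection with the current bag is a prescribed subset:

```python
from itertools import combinations

def mwis_treedec(adj, weight, bags, children, root):
    """For each bag, table[S] is the best weight of an independent set I
    in the part of the graph covered by the subtree below the bag, with
    I restricted to the bag equal to S."""
    def independent(S):
        return all(b not in adj[a] for a, b in combinations(S, 2))

    def solve(v):
        ctabs = [(c, solve(c)) for c in children.get(v, ())]
        table = {}
        for r in range(len(bags[v]) + 1):
            for S in map(frozenset, combinations(sorted(bags[v]), r)):
                if not independent(S):
                    continue
                total = sum(weight[u] for u in S)
                for c, ctab in ctabs:
                    # Combine with child entries that agree with S on the
                    # shared vertices, subtracting double-counted weight.
                    total += max(val - sum(weight[u] for u in Sc & bags[v])
                                 for Sc, val in ctab.items()
                                 if Sc & bags[v] == S & bags[c])
                table[S] = total
        return table

    return max(solve(root).values())

# Example: the path a-b-c-d, with a decomposition of width 1.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
weight = {"a": 2, "b": 3, "c": 3, "d": 2}
bags = {0: frozenset("ab"), 1: frozenset("bc"), 2: frozenset("cd")}
children = {0: [1], 1: [2]}
print(mwis_treedec(adj, weight, bags, children, 0))  # 5, e.g., {a, c}
```

The running time is exponential only in the largest bag size, which is exactly why small treewidth makes such algorithms fast.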
While many problems that need to be solved in practice do not concern graphs, some of them can be represented by using graphs, thereby giving rise to the use of structural graph parameters such as treewidth. Ganian and Ordyniak present a survey that takes this view with respect to Integer Linear Programming (ILP), a well-known, very general problem that encompasses many basic problems in computer science. Specifically, they consider a standard graphical representation from the study of constraint satisfaction problems that describes the interactions between the variables and constraints (namely, which variable is present, with a nonzero coefficient, in which constraint). They survey recent developments in this regard, considering width parameters such as treewidth, treedepth and cliquewidth.
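As a concrete illustration (the function name is ours), one such representation can be built directly from the constraint matrix; here is a sketch of the so-called primal graph, in which two variables are adjacent whenever they appear together, with nonzero coefficients, in some constraint:

```python
from itertools import combinations

def primal_graph(A):
    """A: constraint matrix as a list of rows; returns the edge set of
    the primal graph on variable indices 0..n-1."""
    edges = set()
    for row in A:
        support = [j for j, coef in enumerate(row) if coef != 0]
        edges.update(combinations(support, 2))
    return edges

# Constraints x0 + 2*x1 <= 3 and x1 - x2 <= 0: variables 0 and 1
# interact, as do 1 and 2, so the primal graph is the path 0-1-2.
print(primal_graph([[1, 2, 0], [0, 1, -1]]))  # {(0, 1), (1, 2)}
```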
Exporting Fixed-Parameter Tractability to Various Application Domains. Many computational problems that arise in Bioinformatics are NP-hard and require dealing with inputs of a very large size which are, nevertheless, structured, and hence naturally fit the framework of Parameterized Complexity. Indeed, Bioinformatics has been a main application domain for this framework for two decades. Bulteau and Weller present an up-to-date survey of several topics in Bioinformatics from the viewpoint of Parameterized Complexity, with an emphasis on problems solved in practice and on current questions, and, by providing concrete open problems, aim to advance further research in this regard. The survey gives an overview of various central topics: genome comparison and completion, genome assembly and sequence analysis, haplotyping, and phylogenetics. For example, one direction for research in the first topic concerns distances between genomes, particularly rearrangement distances. Here, given two genomes, the objective is to decide whether one can be transformed into the other using at most k operations of a specific kind, motivated by biological considerations. This aims to reflect the evolutionary events that have occurred since their last common ancestor.
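As a toy illustration of such a distance (our own sketch, not an algorithm from the survey), one can decide whether at most k reversals suffice by exhaustively branching over all reversals to depth k; this runs in XP time in k, whereas the algorithms treated in the survey are, of course, far more refined:

```python
def within_k_reversals(s, t, k):
    """Decide whether genome s (a tuple of genes) can be transformed
    into genome t using at most k reversals of contiguous blocks."""
    if s == t:
        return True
    if k == 0:
        return False
    n = len(s)
    for i in range(n):
        for j in range(i + 2, n + 1):
            # Reverse the block s[i:j] and recurse with budget k - 1.
            flipped = s[:i] + s[i:j][::-1] + s[j:]
            if within_k_reversals(flipped, t, k - 1):
                return True
    return False

# (1 3 2 4) becomes (1 2 3 4) by reversing the middle block.
print(within_k_reversals((1, 3, 2, 4), (1, 2, 3, 4), 1))  # True
```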
An article by Lin et al. considers a specific problem, called the Kidney Exchange problem, which aims to effectively utilize living-donor kidneys. This is modeled as a (di)graph problem, where the objective is to find a maximum-weight packing of vertex-disjoint cycles and chains, where the length of the cycles is upper-bounded by a given constant. This represents, in some sense, barter exchange. For example, a cycle of length 2 corresponds to the scenario where a patient–donor pair donates a kidney to some other pair and receives a compatible kidney from that pair in exchange.
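A toy sketch (ours, not the authors' algorithm) of the cycle part of this model may help: given a compatibility digraph with arc weights, it finds a maximum-weight set of vertex-disjoint directed cycles of length at most L by brute force, with chains omitted for brevity:

```python
from itertools import permutations

def max_cycle_packing(nodes, arcs, weight, L):
    # Enumerate candidate cycles of length 2..L (one rotation each).
    cycles = []
    for r in range(2, L + 1):
        for tour in permutations(sorted(nodes), r):
            if tour[0] == min(tour) and all(
                    (tour[i], tour[(i + 1) % r]) in arcs for i in range(r)):
                w = sum(weight[tour[i], tour[(i + 1) % r]] for i in range(r))
                cycles.append((frozenset(tour), w))
    # Choose a max-weight vertex-disjoint subfamily by simple branching.
    def best(i, used):
        if i == len(cycles):
            return 0
        verts, w = cycles[i]
        value = best(i + 1, used)   # skip cycle i
        if not (verts & used):      # or take it, if disjoint from 'used'
            value = max(value, w + best(i + 1, used | verts))
        return value
    return best(0, frozenset())

# Two patient-donor pairs that can swap kidneys (a 2-cycle), plus a
# 3-cycle among three other pairs; each transplant has weight 1.
arcs = {("a", "b"): 1, ("b", "a"): 1,
        ("c", "d"): 1, ("d", "e"): 1, ("e", "c"): 1}
print(max_cycle_packing("abcde", arcs, arcs, 3))  # 2 + 3 = 5
```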