Book contents
- Frontmatter
- Contents
- Contributors
- Introduction
- Part 1 Graphical Structure
- Part 2 Language Restrictions
- Part 3 Algorithms and their Analysis
- 6 Tree-Reweighted Message Passing
- 7 Tractable Optimization in Machine Learning
- 8 Approximation Algorithms
- 9 Kernelization Methods for Fixed-Parameter Tractability
- Part 4 Tractability in Some Specific Areas
- Part 5 Heuristics
8 - Approximation Algorithms
from Part 3 - Algorithms and their Analysis
Published online by Cambridge University Press: 05 February 2014
Summary
Optimization problems are often hard to solve exactly. However, solutions that are only nearly optimal are often good enough in practical applications, and approximation algorithms can find such solutions efficiently for many interesting problems. Deep theoretical results additionally help us understand which problems are approximable. This chapter gives an overview of existing approximation techniques, organized into five broad categories: greedy algorithms, linear programming relaxations, semi-definite programming relaxations, metric embeddings, and special techniques. It concludes with an overview of the main inapproximability results.
Introduction
NP-hard optimization problems are ubiquitous, and unless P = NP, we cannot expect algorithms that find optimal solutions on all instances in polynomial time. This intractability forces us to relax at least one of these three requirements: optimality, generality over all instances, or polynomial running time. Approximation algorithms relax the optimality requirement, and aim to do so by as small an amount as possible. We shall concern ourselves with discrete optimization problems, where the goal is to find, among the set of feasible solutions, one that minimizes (or maximizes) the value of the objective function. Usually, the space of feasible solutions is defined implicitly, e.g., the set of cuts in a graph on n vertices. The objective function associates a real value with each feasible solution; this usually has a succinct representation as well, e.g., the number of edges in the cut. We measure the performance of an approximation algorithm on a given instance by the ratio of the value of the solution output by the algorithm to that of the optimal solution.
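To make the approximation ratio concrete, here is a minimal sketch (an illustration of the greedy category, not code from the chapter) of the classic 2-approximation for minimum vertex cover: repeatedly pick an uncovered edge and add both of its endpoints to the cover. Since any cover, including the optimal one, must contain at least one endpoint of each picked edge, the output is at most twice the optimum, i.e., the approximation ratio is at most 2.

```python
def greedy_vertex_cover(edges):
    """Return a vertex cover of size at most twice the minimum.

    Classic greedy (maximal matching) 2-approximation: scan the
    edges, and whenever an edge is still uncovered, add both of
    its endpoints to the cover.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            # Edge (u, v) is uncovered: take both endpoints.
            cover.add(u)
            cover.add(v)
    return cover

# Worked example: a path on 4 vertices, 0-1-2-3.
# The optimal cover is {1, 2}, of size 2.
edges = [(0, 1), (1, 2), (2, 3)]
cover = greedy_vertex_cover(edges)
print(cover)       # {0, 1, 2, 3}
print(len(cover))  # 4 = 2 * optimum, so the ratio here is exactly 2
```

On this instance the algorithm's ratio of output value to optimal value is 4/2 = 2, matching the worst-case guarantee.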
Tractability: Practical Approaches to Hard Problems, pp. 231-259. Publisher: Cambridge University Press. Print publication year: 2014.