Error-Correction on Non-Standard Communication Channels
Error-Correction on Non-Standard Communication Channels
Edward A. Ratzer
9781447614739
ABSTRACT
Many communication systems are poorly modelled by the standard channels assumed in the information theory literature, such as the binary symmetric channel or the additive white Gaussian noise channel. Real systems suffer from additional problems including time-varying noise, cross-talk, synchronization errors and latency constraints. In this thesis, low-density parity-check codes and codes related to them are applied to non-standard channels.
First, we look at time-varying noise modelled by a Markov channel. A low-density parity-check code decoder is modified to give an improvement of over 1dB.
Secondly, novel codes based on low-density parity-check codes are introduced which produce transmissions with Pr(bit = 1) ≠ Pr(bit = 0). These non-linear codes are shown to be good candidates for multi-user channels with crosstalk, such as optical channels.
Thirdly, a channel with synchronization errors is modelled by random uncorrelated insertion or deletion events at unknown positions. Marker codes formed from low-density parity-check codewords with regular markers inserted within them are studied. It is shown that a marker code with iterative decoding has performance close to the bounds on the channel capacity, significantly outperforming other known codes.
Finally, coding for a system with latency constraints is studied. For example, if a telemetry system involves a slow channel some error correction is often needed quickly whilst the code should be able to correct remaining errors later. A new code is formed from the intersection of a convolutional code with a high rate low-density parity-check code. The convolutional code has good early decoding performance and the high rate low-density parity-check code efficiently cleans up remaining errors after receiving the entire block. Simulations of the block code show a gain of 1.5dB over a standard NASA code.
ACKNOWLEDGEMENTS
My interest in error correction started in my last undergraduate year in Cambridge while attending David MacKay's course Information Theory, Pattern Recognition and Neural Networks and being supervised by him for my MSci project. For the last three years David MacKay has been a PhD supervisor always full of ideas and encouragement.
EPSRC and Schlumberger have generously funded me for the duration of my studies. Their funding has allowed me to attend several conferences which have kept me up-to-date with the academic side of the field. Discussions with Astrium, Azea, IBM and Schlumberger have been invaluable in keeping in touch with industrial applications. Work in Appendix H was done while visiting the IBM Zurich Research Laboratory.
I would like to thank all the members of my research group in Cambridge: Seb Wills, Sanjoy Mahajan, David Ward, James Miskin, Phil Cowans, John Winn, John Barry and Hanna Wallach, for being ready to bounce ideas off and for looking over draft papers. The members of Cambridge URNU also deserve thanks for being willing volunteers for experimental work.
LIST OF PUBLICATIONS
Edward A. Ratzer. Convolutional code performance as a function of decoding delay. In 6th International Symposium on Communication Theory and Applications, July 2001.
Edward A. Ratzer. Low-density parity-check codes on Markov channels. In Second IMA Conference on Mathematics in Communications, December 2002.
Edward A. Ratzer and David J.C. MacKay. Sparse low-density parity-check codes for channels with cross-talk. In IEEE Information Theory Workshop, pages 127–130, April 2003.
Edward A. Ratzer. Intersected low-density parity-check and convolutional codes. In Postgraduate Research Conference in Electronics, Photonics, Communications and Software, April 2003.
Edward A. Ratzer. Sparse data blocks and multi-user channels. In International Symposium on Information Theory, July 2003.
Edward A. Ratzer. Intersected low-density parity-check and convolutional codes. In 7th International Symposium on Communication Theory and Applications, July 2003.
Edward A. Ratzer. Marker codes for channels with insertions and deletions. In 3rd International Symposium on Turbo Codes and Related Topics, September 2003.
Edward A. Ratzer. Marker codes for channels with insertions and deletions. Annals of Telecommunications, January 2005.
Table of Contents
Title Page
Copyright Page
ABSTRACT
ACKNOWLEDGEMENTS
LIST OF PUBLICATIONS
CHAPTER 1 - INTRODUCTION
CHAPTER 2 - LOW-DENSITY PARITY-CHECK CODES
CHAPTER 3 - MARKOV CHANNELS
CHAPTER 4 - MULTI-USER CHANNELS
CHAPTER 5 - SPARSE-DENSE CODES FOR MULTI-USER CHANNELS
CHAPTER 6 - SPARSE LDPC CODES FOR MULTI-USER CHANNELS
CHAPTER 7 - INSERTION-DELETION CHANNELS
CHAPTER 8 - CONVOLUTIONAL CODES
CHAPTER 9 - INTERSECTED LDPC AND CONVOLUTIONAL CODES
CHAPTER 10 - CONCLUSION
APPENDIX A - INFORMATION THEORY
APPENDIX B - SIMULATIONS
APPENDIX C - FINITE FIELD ARITHMETIC
APPENDIX D - LINEAR BLOCK CODES
APPENDIX E - CONVOLUTIONAL CODES
APPENDIX F - FACTOR GRAPHS
APPENDIX G - THE HAT PROBLEM
APPENDIX H - COMPLEXITY OF LDPC BELIEF PROPAGATION DECODING
APPENDIX I - EXIT CHARTS WITH A GAUSSIAN APPROXIMATION
BIBLIOGRAPHY
CHAPTER 1
INTRODUCTION
1.1 Forward Error Correction
We are all familiar with noise on the radio. If the noise level gets so high that we miss something, we cannot get back what we missed. This is a common problem in many areas of communication and has led to the development of forward error-correction, or channel coding. In forward error-correction we add extra redundancy to our original signal and then hope we can reconstruct the original transmission from this new signal when it is corrupted by noise. In 1948 Shannon [93] showed that there was a theoretical solution that would allow an arbitrarily small probability of error under certain constraints. This thesis will look at situations where these constraints do not hold or are not suitable.
We will look at the field of binary communication, where we transmit a sequence of 0s and 1s over a system like that illustrated in figure 1.1. For example, on a fibre-optic channel 1s could correspond to the presence of light and 0s to no light. This is one possible type of modulation; many modulation schemes exist for different types of channel [81]. The receiver observes the original sequence corrupted by noise. There are two main classes of receivers. Hard-decision receivers report a best guess of each transmitted symbol (for the fibre-optic channel this could be obtained from a threshold of the observed light level). Soft-decision receivers report analogue information, ideally Pr(received signal|transmitted bit = 1) and Pr(received signal|transmitted bit = 0). For the fibre-optic channel these could be obtained by observing the light intensity and using knowledge of the channel characteristics. In Appendix B.1 simple standard channel models that approximate some real systems are presented.
Figure 1.1: Schematic of a communication system
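To make the distinction between the two receiver classes concrete, the following sketch computes hard and soft decisions for an observed light intensity, assuming a Gaussian noise model around each nominal light level. The means, noise level and threshold here are illustrative assumptions, not parameters from the thesis.

```python
import math

def soft_decision(y, mu0=0.2, mu1=1.0, sigma=0.3):
    """Soft-decision receiver: return Pr(y|bit=0) and Pr(y|bit=1) for an
    observed intensity y, assuming Gaussian noise around each nominal
    light level (mu0 for no light, mu1 for light)."""
    def gauss(y, mu):
        return math.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
    return gauss(y, mu0), gauss(y, mu1)

def hard_decision(y, threshold=0.6):
    """Hard-decision receiver: keep only a thresholded best guess,
    discarding the analogue information."""
    return 1 if y > threshold else 0
```

A decoder given the soft pair can weigh how confident the receiver was; the hard decision throws that confidence away, which is why soft-decision decoding generally performs better.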
In forward error-correction we are interested in forming an error-correcting code: a set of allowed sequences, E. Redundancy for forward error-correction can then be obtained as not all sequences are allowed. We define a one-to-one mapping from input sequences s to transmitted sequences x ∈ E. In this thesis we will look at codes where there is a fixed ratio between the lengths of these sequences, R = length(s)/length(x). R is a measure of redundancy called the code rate and corresponds to the number of information bits carried per symbol transmitted on the channel.
There are two main classes of error-correcting codes. The first are block codes, for which length s and length x are fixed. Length x is the blocksize of the code, commonly given the symbol N. The second class are stream or convolutional codes, for which only R is fixed and there are no block boundaries. Commonly stream codes are implemented by an encoder which, for example, outputs 2 bits for every 1 bit of input. This would then define a stream code of R = 1/2.
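A toy stream encoder with R = 1/2 can be sketched as follows. The particular output rule, each input bit followed by its parity with the previous input bit, is purely illustrative and is not a code studied in the thesis; it only shows how an encoder fixes R without fixing a block boundary.

```python
def stream_encode(bits):
    """Toy rate-1/2 stream encoder: for every input bit, output the bit
    itself followed by its XOR with the previous input bit.  Two output
    bits per input bit gives R = 1/2, with no block boundaries."""
    prev = 0
    out = []
    for b in bits:
        out.extend([b, b ^ prev])
        prev = b
    return out
```

The encoder can run indefinitely on a stream of input bits, in contrast to a block code, which must accumulate a full block of length s input bits before emitting a codeword.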
1.2 Random codes
Shannon [93] showed that on a channel with stationary ergodic noise of a fixed power there exists a rate C called the channel capacity. For R < C, transmission at an arbitrarily small error probability is possible with a large random block code (meaning that the set of codewords is chosen at random). For R > C, arbitrarily reliable transmission is not possible.
To get a particular error probability with a random block code one needs to choose the blocksize N. To get a smaller error probability, N needs to be larger. For a code to be practical we would like an O(N) complexity encoder and decoder, so we can choose a larger blocksize without a penalty on the amount of computation per bit. Random block codes are not practical. One could imagine a simple encoder and decoder based on the set of codewords. Such an encoder and decoder are both O(N e^N) in space as they need to store all the codewords (there are O(e^N) codewords, each of length N). The maximum-likelihood decoder is O(N e^N) in time: the decoder looks through every codeword and evaluates the probability of the received signal having been produced by the channel given that codeword. More elaborate algorithms could be used, for example with decision trees, but no algorithm is known that is better than exponential [69].
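The brute-force scheme just described can be sketched directly. The codebook construction and sizes below are illustrative assumptions; on the binary symmetric channel with pf < 1/2, maximum-likelihood decoding reduces to picking the codeword at minimum Hamming distance from the received word.

```python
import random

def random_codebook(K, N, seed=0):
    """2**K random codewords of length N.  Storage is exponential in K:
    this table is exactly the cost that makes random codes impractical."""
    rng = random.Random(seed)
    return [tuple(rng.randint(0, 1) for _ in range(N)) for _ in range(2 ** K)]

def ml_decode_bsc(received, codebook):
    """Maximum-likelihood decoding on a binary symmetric channel with
    pf < 1/2: the most likely codeword is the one at minimum Hamming
    distance from the received word.  The search visits every codeword,
    so the time cost is O(N * |codebook|)."""
    def dist(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(codebook, key=lambda c: dist(received, c))
```

Doubling K doubles nothing useful about the decoder but squares the codebook size, which is the exponential wall the structured codes of the following chapters are designed to avoid.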
So instead more structured codes with tractable encoders and decoders were studied.
1.3 Algebraic coding theory
Algebraic coding theory mainly deals with noise from the binary symmetric channel (as defined in appendix B.1), which flips bits with probability pf. The simplest family of error-correcting codes for such a channel are the repetition codes RN. For the binary case RN consists of two codewords: the codeword of N 1s and the codeword of N 0s. The code has R = 1/N. To encode, we take one input bit and repeat it N times. On receiving an RN codeword corrupted by a binary symmetric channel, the maximum-likelihood decoder returns the bit that forms the majority of the received noisy codeword (if pf < 1/2) [71].
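The repetition code and its majority-vote maximum-likelihood decoder can be written in a few lines; this is a minimal sketch assuming pf < 1/2, with N = 5 as an arbitrary default.

```python
def repetition_encode(bit, N=5):
    """Encode one input bit as the RN codeword: the bit repeated N times.
    The code rate is R = 1/N."""
    return [bit] * N

def repetition_decode(noisy):
    """Maximum-likelihood decoding of RN on the binary symmetric channel
    with pf < 1/2: return the majority bit of the received word."""
    return 1 if sum(noisy) * 2 > len(noisy) else 0
```

With odd N the majority is always well defined; the decoder fails only when the channel flips more than half of the N transmitted copies.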
In [50] the idea of looking at error-correcting codes from the point of view of the minimum distance, dmin, was introduced. One can define a distance (the Hamming distance) between two codewords as the number of positions where the