A classical measure of the goodness of a code is its minimum Hamming distance. In fact, a large part of traditional coding theory is concerned with finding the fundamental trade-offs between three parameters: the length of the code, the number of codewords in the code, and the minimum distance of the code. It is well known that the minimum Hamming distance d of a code reflects its guaranteed error-correcting capability in the sense that any error pattern of weight at most ⌊(d-1)/2⌋ can be corrected.
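To make this concrete, here is a small brute-force sketch (ours, purely illustrative) that computes the minimum distance of a binary linear code from a generator matrix and the resulting guaranteed correction radius; the matrix below is a systematic generator of the familiar [7,4] Hamming code:

    import itertools

    # Generator matrix of the [7,4] binary Hamming code (systematic form),
    # chosen purely as a small, familiar example.
    G = [[1,0,0,0,0,1,1],
         [0,1,0,0,1,0,1],
         [0,0,1,0,1,1,0],
         [0,0,0,1,1,1,1]]

    def codewords(G):
        """Enumerate all codewords c = m*G (mod 2) of the linear code."""
        k = len(G)
        for m in itertools.product([0, 1], repeat=k):
            yield tuple(sum(mi * gi for mi, gi in zip(m, col)) % 2
                        for col in zip(*G))

    # By linearity, the minimum distance equals the minimum weight
    # of a nonzero codeword.
    d = min(sum(c) for c in codewords(G) if any(c))
    t = (d - 1) // 2
    print(d, t)   # prints "3 1": every single-bit error pattern is correctable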

Modern Coding Theory

However, most codes can, with high probability, correct error patterns of substantially higher weight. This insight is the cornerstone of modern coding theory, which attempts to capitalize on the full correction capability of a code. One of the most successful realizations of this phenomenon is found in binary low-density parity-check (LDPC) codes. These codes come equipped with an iterative message-passing decoding algorithm, performed at the receiver's end, that is extremely efficient and corrects, with high probability, many more error patterns than guaranteed by the minimum distance.
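Practical LDPC decoders pass soft messages along the edges of the Tanner graph (sum-product or min-sum decoding); as a minimal illustration of the same local, iterative flavor, here is a sketch of the simpler hard-decision bit-flipping decoder going back to Gallager (the function name and the toy parity-check matrix are our own choices, not a production implementation):

    def bit_flip_decode(H, y, max_iters=100):
        """Iterative hard-decision decoding: repeatedly flip the single bit
        that participates in the largest number of unsatisfied checks."""
        y = list(y)
        m, n = len(H), len(H[0])
        for _ in range(max_iters):
            # Evaluate every parity check on the current word.
            syndrome = [sum(H[j][i] * y[i] for i in range(n)) % 2
                        for j in range(m)]
            if not any(syndrome):
                return y                      # all checks satisfied: a codeword
            # For each bit, count the failing checks it is involved in.
            votes = [sum(syndrome[j] for j in range(m) if H[j][i])
                     for i in range(n)]
            y[votes.index(max(votes))] ^= 1   # flip the most suspicious bit
        return y                              # decoding failure after max_iters

    # Toy example: parity-check matrix of the [7,4] Hamming code;
    # a single flipped bit is recovered.
    H = [[0,0,0,1,1,1,1],
         [0,1,1,0,0,1,1],
         [1,0,1,0,1,0,1]]
    print(bit_flip_decode(H, [0,0,0,0,1,0,0]))   # -> [0, 0, 0, 0, 0, 0, 0]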

Pseudo-Codewords and the Fundamental Cone

We are thus left with the problem of finding new, mathematically precise concepts that can take over the role of the minimum Hamming distance for such high-performance codes. One such concept is the fundamental cone of a code.

As a binary linear code, an LDPC code C is defined by a parity-check matrix H. The strength of the iterative decoding algorithm, namely its low complexity, comes from the fact that the algorithm operates locally on a so-called Tanner graph representing the matrix H. However, this same fact is also the algorithm's fundamental weakness: because it acts locally, it cannot distinguish whether it is acting on the graph itself or on some finite unramified cover of the graph. This leads to the notion of pseudo-codewords, which arise from codewords in the codes corresponding to these covers and which compromise the decoder. Thus, to understand the performance of LDPC codes, we must understand the graph covers and the codes corresponding to them. This is tantamount to understanding a cone in R^n defined by inequalities arising from H, called the fundamental cone.
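Concretely, in the standard formulation the fundamental cone K(H) consists of all vectors ω in R^n such that ω_i ≥ 0 for every position i and, for every row j of H and every position i in the support of that row, ω_i is at most the sum of ω_{i'} over the other positions in the support of row j. Here is a small sketch of the resulting membership test (function name and example vectors are ours):

    def in_fundamental_cone(H, w, tol=1e-9):
        """Test w against the inequalities defining the fundamental cone K(H):
        (i)  w[i] >= 0 for every position i;
        (ii) for every row j and every i in the support of row j,
             w[i] <= sum of w[i'] over the other positions of that row."""
        if any(wi < -tol for wi in w):
            return False
        for row in H:
            support = [i for i, h in enumerate(row) if h]
            total = sum(w[i] for i in support)
            # w[i] <= total - w[i] is equivalent to 2*w[i] <= total.
            if any(2 * w[i] > total + tol for i in support):
                return False
        return True

    # Toy check with the Hamming-code parity-check matrix from above.
    H = [[0,0,0,1,1,1,1],
         [0,1,1,0,0,1,1],
         [1,0,1,0,1,0,1]]
    print(in_fundamental_cone(H, [1]*7))            # True: all-ones vector
    print(in_fundamental_cone(H, [2,0,0,0,0,0,0]))  # False: bit 0 exceeds the
                                                    # rest of its check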

About www.PseudoCodewords.info

  • Any suggestions are highly welcome. Please send an email to Pascal Vontobel [email: firstname dot lastname at ieee dot org]. Thank you!
  • This material is based upon work supported by the National Science Foundation under Grants No. CCF-0514869 and CCF-0514801. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF).