Compression is the process of reducing the size of an electronic file for storage or transmission. Once compressed, the file takes up less space, downloads or uploads faster, and in many cases, can be executed or read in less time.
Decompression is the reverse process, which restores the file to its original form.
Examples of compression formats and encoding:
JPEG (Joint Photographic Experts Group)
Uses lossy compression to discard visual detail that the eye is unlikely to notice
Usually ends with the .jpeg or .jpg extension
MPEG (Moving Picture Experts Group)
Refers to several standards for various types of files
Video compression is done by storing only the changes from one video frame to another rather than storing entire frames
ZIP/TAR
ZIP uses lossless compression to remove redundant data from a file without discarding any information; TAR is an archive format that bundles files together and is typically paired with a lossless compressor such as gzip
Extensions are .zip and .tar (or .tar.gz when compressed)
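A lossless round trip like the one ZIP performs can be sketched with Python's standard-library zlib module, which implements DEFLATE, the same algorithm ZIP archives typically use:

```python
import zlib

# Repetitive data compresses well under lossless (DEFLATE) compression.
data = b"the quick brown fox jumps over the lazy dog " * 50

compressed = zlib.compress(data)
restored = zlib.decompress(compressed)

# Lossless: the round trip reproduces the input exactly.
assert restored == data
print(f"{len(data)} bytes -> {len(compressed)} bytes")
```

Because no information is discarded, decompression always recovers the input byte for byte; the size reduction comes purely from eliminating redundancy.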
Communication is an important but complicated aspect of human interaction, and it can be difficult to pinpoint where or with whom an interaction begins and ends. Considering various communication models can therefore be helpful.
Understanding the various communication channels and how to use them effectively is critical for effective communication in a variety of contexts, including personal relationships, business interactions, and social settings.
Coding Theory
Coding theory is the study of how to protect information from errors and interference during transmission. Its goal is to create methods that ensure information is sent correctly and reliably. It began in the 1940s with Claude Shannon's work and has since led to the development of various error-correcting codes, such as Hamming codes and Reed-Solomon codes, which are used in modern communication systems. As digital communication and storage systems evolve, coding theory remains an important subject of research for ensuring that information is reliably and securely delivered and stored.
Tasks in Coding Theory
Error Detection is the process of determining whether transmitted data has been corrupted. It is a great aid in maintaining quality; without it, errors in a large digital system go unnoticed and the system becomes unmaintainable.
Error Correction is the process of not only detecting errors in transmitted messages but also reconstructing the original, error-free data. This ensures that messages can be recovered correctly even when the channel introduces errors.
Goals of Coding Theory
Improve the reliability of digital communication by devising methods that enable the receiver to decide whether there have been errors during the transmission.
If there are, to recover the original message where possible (error correction).
Source coding, also known as data compression, is the process of reducing the amount of data required to represent a piece of information. The goal of source coding is to minimize the number of bits used to represent the information without sacrificing too much of its quality. There are different techniques for source coding, but the most common ones include entropy coding (e.g., Huffman coding), dictionary coding (e.g., Lempel-Ziv-Welch), and transform coding (e.g., the discrete cosine transform).
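As a minimal sketch of entropy coding, the classic Huffman construction can be written with Python's heapq: repeatedly merge the two least frequent subtrees, prefixing their codewords with 0 and 1. The helper name below is illustrative, not from any particular library.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix-free code: frequent symbols get shorter codewords."""
    freq = Counter(text)
    if len(freq) == 1:                        # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie-breaker, {symbol: partial codeword}).
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)     # two least frequent subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
encoded = "".join(codes[ch] for ch in "abracadabra")
```

Here "a", the most frequent symbol, receives a one-bit codeword, and the encoded string needs only 23 bits, versus the 33 bits a fixed 3-bit-per-symbol code would require for the same input.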
Error control coding, on the other hand, is the process of adding redundancy to the data in order to detect and correct errors that may occur during transmission or storage. The goal of error control coding is to ensure that the receiver can recover the original data even if some bits are corrupted or lost during transmission. The two most common techniques for error control coding are block codes (e.g., Hamming codes, Reed-Solomon codes) and convolutional codes (e.g., Viterbi algorithm).
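A hedged sketch of one such block code, Hamming(7,4), which encodes 4 data bits with 3 parity bits and can correct any single-bit error (the function names here are illustrative):

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword (positions 1..7; parity at 1, 2, 4)."""
    c = [0] * 7
    c[2], c[4], c[5], c[6] = d                  # data bits at positions 3, 5, 6, 7
    c[0] = c[2] ^ c[4] ^ c[6]                   # parity over positions 1, 3, 5, 7
    c[1] = c[2] ^ c[5] ^ c[6]                   # parity over positions 2, 3, 6, 7
    c[3] = c[4] ^ c[5] ^ c[6]                   # parity over positions 4, 5, 6, 7
    return c

def hamming74_correct(received):
    """Recompute parity; the syndrome gives the 1-based position of a single error."""
    c = list(received)
    s = (c[0] ^ c[2] ^ c[4] ^ c[6]) \
        + 2 * (c[1] ^ c[2] ^ c[5] ^ c[6]) \
        + 4 * (c[3] ^ c[4] ^ c[5] ^ c[6])
    if s:                                       # non-zero syndrome: flip that bit back
        c[s - 1] ^= 1
    return c

codeword = hamming74_encode([1, 0, 1, 1])
corrupted = list(codeword)
corrupted[4] ^= 1                               # flip one bit in transit
fixed = hamming74_correct(corrupted)
```

The parity positions are chosen so that the three recomputed checks, read as a binary number, spell out exactly which of the seven positions was flipped.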
The processes of source coding and error control coding can be combined to achieve better performance. For example, the source data can be compressed using source coding techniques and then protected using error control coding techniques before transmission. At the receiver, the data can be first decoded using error control coding techniques and then decompressed using source coding techniques to recover the original information. This combination of techniques is used in many communication and storage systems, such as digital television, mobile phones, and computer networks.
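As a toy illustration of this layering, the sketch below compresses with zlib and then applies a simple 3x repetition code as the error-control stage (a stand-in for a real block code); the receiver undoes the stages in reverse order:

```python
import zlib
from collections import Counter

def protect(data: bytes) -> bytes:
    """Error-control stage: transmit three copies of every byte."""
    return bytes(b for b in data for _ in range(3))

def recover(received: bytes) -> bytes:
    """Majority vote over each triple corrects any single corrupted copy."""
    return bytes(
        Counter(received[i:i + 3]).most_common(1)[0][0]
        for i in range(0, len(received), 3)
    )

message = b"hello, coding theory! " * 10
sent = protect(zlib.compress(message))          # source coding, then error control

noisy = bytearray(sent)
noisy[7] ^= 0xFF                                # corrupt one byte in transit

received = zlib.decompress(recover(bytes(noisy)))   # undo in reverse order
```

The ordering matters: compression is applied first so the redundancy that remains in the channel is only the redundancy deliberately added for error control.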
Parity Check
A simple technique for error detection in data transmission. It involves adding an extra bit, called a parity bit, to the data being transmitted. The parity bit is chosen so that the total number of 1s in the transmitted data plus the parity bit is always even (or, by convention, always odd). If the parity of the received word does not match the agreed convention, the receiver knows that an error occurred during transmission; any odd number of bit flips is detected this way.
Example:
Consider the transmission of the 8-bit data word 11010101. We can add a parity bit such that the total number of 1s in the transmitted data and the parity bit are always even. We can set the parity bit to 1, since there are already an odd number of 1s in the data. The transmitted data word would be 110101011.
If an error occurs during transmission and one of the bits is flipped, the receiver can detect the error by checking the parity of the received data word. If the received word has an even number of 1s, the receiver assumes the transmission was correct. If it has an odd number of 1s, the receiver knows that an error occurred.
While parity check is a simple and easy-to-implement technique, it can only detect an odd number of bit errors. If an even number of bits are flipped, for example two errors, the parity is unchanged and the errors go undetected.
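The even-parity scheme from the example can be sketched in a few lines (the helper names are illustrative):

```python
def add_even_parity(bits):
    """Append a parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(word):
    """A received word is accepted only if its count of 1s is even."""
    return sum(word) % 2 == 0

data = [1, 1, 0, 1, 0, 1, 0, 1]        # the 8-bit word from the example
sent = add_even_parity(data)           # parity bit is 1: five 1s become six

flipped = list(sent)
flipped[3] ^= 1                        # single-bit error: parity becomes odd, detected

double = list(sent)
double[0] ^= 1
double[1] ^= 1                         # two-bit error: parity still even, missed
```

The last case demonstrates the limitation described above: two flips cancel out in the parity count, so the corrupted word passes the check.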
Hamming Distance
A measure of the difference between two binary words: the number of positions at which the two words differ. Hamming distance is commonly used in error control coding, particularly in the design of error-correcting codes.
Example:
Consider the two 8-bit data words 11010101 and 11110101. The Hamming distance between these two words is 1, since they differ in only one bit position (the third bit).
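Counting differing positions is a one-liner; a quick sketch confirming the example above:

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which two equal-length words differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

d = hamming_distance("11010101", "11110101")   # the two words from the example
```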
Error-correcting codes, such as the Hamming code, use the Hamming distance to detect and correct errors in the transmitted data. By adding redundancy bits to the data, an error-correcting code can be designed to detect and correct a certain number of errors. The number of errors that can be handled depends on the minimum Hamming distance of the code. For example, a code with a minimum distance of 3 can correct any single-bit error, while a code with a minimum distance of 4 can correct any single-bit error and additionally detect (but not correct) any two-bit error; correcting two-bit errors requires a minimum distance of 5.