CSE5313 Coding and information theory for data science (Lecture 17)
Shannon's Coding Theorem
Shannon's coding theorem: For a discrete memoryless channel with capacity $C$, every rate $R < C$ is achievable.
Computing Channel Capacity
$X$: channel input (per one channel use), $Y$: channel output (per one channel use).
Let the rate of the code be $R = \frac{\log_2 M}{n}$ (or $R = \frac{k}{n}$ if it is linear).
The Binary Erasure Channel (BEC): analog of BSC, but the bits are lost (not corrupted).
Let $\alpha$ be the fraction of erased bits.
Corollary: The capacity of the BEC is $C = 1 - \alpha$.
Proof
Suppose we denote $\pi = P(X = 1)$, and let $E$ be the indicator of an erasure (so $P(E = 1) = \alpha$).
So,
$$C = \max_{p(x)} I(X;Y) = \max_{p(x)} \big(H(Y) - H(Y \mid X)\big) = \max_{\pi} H(Y) - H(\alpha).$$
So, since $H(Y) = H(Y, E) = H(E) + H(Y \mid E) = H(\alpha) + (1-\alpha)H(\pi)$,
$$C = \max_{\pi} (1-\alpha)H(\pi) = 1 - \alpha, \quad \text{attained at } \pi = 1/2.$$
So the capacity of the BEC is $C = 1 - \alpha$.
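As a sanity check, here is a minimal numerical sketch (not from the lecture; the function name and parameter values are illustrative) that evaluates $I(X;Y)$ for the BEC over input distributions $\pi = P(X=1)$ and confirms the maximum is $1-\alpha$ at $\pi = 1/2$:

```python
import numpy as np

def bec_mutual_information(pi: float, alpha: float) -> float:
    """I(X;Y) in bits for a BEC with P(X=1)=pi and erasure probability alpha (0 < alpha < 1)."""
    # Output alphabet {0, erasure, 1}: distribution of Y.
    p_y = [(1 - pi) * (1 - alpha), alpha, pi * (1 - alpha)]
    h_y = -sum(p * np.log2(p) for p in p_y if p > 0)
    # Given X, Y is erased w.p. alpha and equals X otherwise, so H(Y|X) = H_b(alpha).
    h_y_given_x = -(alpha * np.log2(alpha) + (1 - alpha) * np.log2(1 - alpha))
    return h_y - h_y_given_x

alpha = 0.3
pis = np.linspace(0.001, 0.999, 999)
values = [bec_mutual_information(pi, alpha) for pi in pis]
best = pis[int(np.argmax(values))]
print(f"max I(X;Y) = {max(values):.4f} at pi = {best:.3f}   (1 - alpha = {1 - alpha:.4f})")
```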
General interpretation of capacity
Recall $C = \max_{p(x)} I(X;Y) = \max_{p(x)} \big(H(X) - H(X \mid Y)\big)$.
Edge cases:
- If $H(X \mid Y) = 0$, then the output $Y$ reveals all information about the input $X$.
  - A rate of $I(X;Y) = H(X)$ is possible (same as information compression).
- If $H(X \mid Y) = H(X)$, then $Y$ reveals no information about $X$.
  - A rate of $I(X;Y) = 0$: no information is transferred.
Compression is transmission without noise.
Side notes for Cryptography
Goal: Quantify the amount of information that is leaked to the eavesdropper.
- Let:
  - $M$ be the message, with some message distribution.
  - $C$ be the ciphertext, with the ciphertext distribution induced by the encryption.
- How much information is leaked about $M$ to the eavesdropper (who sees $C$)?
- Idea: One-time pad.
One-time pad
Suppose the Sender and Receiver agree on a shared secret key $K$, chosen uniformly at random from a finite group $\mathbb{G}$.
Let $C = M + K$, where $M \in \mathbb{G}$ is the message and the addition is the group operation.
Measure the information leaked to the eavesdropper (who sees $C$).
That is, compute $I(M;C)$. Claim: for the one-time pad, $I(M;C) = 0$.
Proof
Recall that $I(M;C) = H(C) - H(C \mid M)$.
So
$$H(C \mid M) = H(M + K \mid M) = H(K \mid M) = H(K).$$
$K$ is uniformly distributed over $\mathbb{G}$ and independent of $M$.
So $H(C \mid M) = H(K) = \log_2|\mathbb{G}|$.
Also $H(C) = \log_2|\mathbb{G}|$, because $C$ is uniformly distributed over $\mathbb{G}$:
- Notice $m + K$ is uniform over $\mathbb{G}$ for every $m$ (since $\mathbb{G}$ is a group, adding a fixed element is a bijection).
- For every $c \in \mathbb{G}$, $P(C = c) = \sum_{m} P(M = m)\, P(K = c - m) = \sum_{m} P(M = m)\, \frac{1}{|\mathbb{G}|} = \frac{1}{|\mathbb{G}|}$.
So $I(M;C) = H(C) - H(C \mid M) = \log_2|\mathbb{G}| - \log_2|\mathbb{G}| = 0$.
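A small sketch (mine, not from the lecture) that makes this concrete: take $\mathbb{G} = \mathbb{Z}_8$ with addition mod 8, pick an arbitrary non-uniform message distribution, build the exact joint distribution of $(M, C)$ under the one-time pad, and compute $I(M;C)$ directly. The names and numbers are illustrative choices.

```python
import numpy as np

def mutual_information(joint: np.ndarray) -> float:
    """I(M;C) in bits from a joint probability matrix joint[m, c]."""
    p_m = joint.sum(axis=1, keepdims=True)   # marginal of M (column vector)
    p_c = joint.sum(axis=0, keepdims=True)   # marginal of C (row vector)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (p_m @ p_c)[mask])))

G = 8                                                            # group Z_8 (illustrative)
p_msg = np.array([0.5, 0.2, 0.1, 0.1, 0.05, 0.03, 0.01, 0.01])   # arbitrary message distribution
joint = np.zeros((G, G))
for m in range(G):
    for k in range(G):                        # key K is uniform over the group
        joint[m, (m + k) % G] += p_msg[m] / G # ciphertext C = M + K
print(f"I(M;C) = {mutual_information(joint):.12f} bits")         # 0 up to float round-off
```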
Discussion of information-theoretic privacy
What does $I(M;C) = 0$ mean?
- No information is leaked to the eavesdropper.
- Regardless of the values of $M$ and $C$.
- Regardless of any computational power.
Information-theoretic privacy:
- Guarantees given in terms of mutual information.
- Remains private in the face of any computational power.
- Remains private forever.
Very strong form of privacy
The mutual information is an average metric for privacy; it gives no guarantee for any individual message.
The one-time pad is, so far, the only known perfect-privacy scheme.
The asymptotic equipartition property (AEP) and data compression
This section will help us understand the limits of data compression.
The asymptotic equipartition property
Idea: consider the space of all possible sequences produced by a random source, and focus on the “typical” ones.
- Asymptotic in the sense that many of the results focus on the regime of large source sequences.
- Fundamental to the concept of typical sets used in data compression.
- The analog of the law of large numbers (LLN) in information theory.
- Direct consequence of the weak law.
The law of large numbers
The average of outcomes obtained from a large number of trials is close to the expected value of the random variable.
For independent and identically distributed (i.i.d.) random variables $X_1, X_2, \ldots, X_n$ with expected value $\mu$, the sample average
$$\overline{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i$$
converges to the expected value $\mu$ as $n$ goes to infinity.
The weak law of large numbers
$\overline{X}_n$ converges in probability to the expected value $\mu$.
That is,
$$\lim_{n\to\infty} P\left(\left|\overline{X}_n - \mu\right| < \epsilon\right) = 1$$
for any positive $\epsilon$.
Intuitively, for any nonzero margin $\epsilon$, no matter how small, with a sufficiently large sample there will be a very high probability that the average of the observations is close to the expected value (within the margin).
The AEP
Let $X$ be an i.i.d. source that takes values in an alphabet $\mathcal{X}$ and produces a sequence of i.i.d. random variables $X_1, X_2, \ldots, X_n \sim p(x)$.
Let $p(x_1, x_2, \ldots, x_n)$ be the probability of observing the sequence $(x_1, x_2, \ldots, x_n)$.
Theorem (AEP): $-\frac{1}{n}\log_2 p(X_1, X_2, \ldots, X_n) \to H(X)$ in probability as $n \to \infty$.
Proof
The terms $\log_2 p(X_i)$ are also i.i.d. random variables.
So by the weak law of large numbers,
$$-\frac{1}{n}\log_2 p(X_1, \ldots, X_n) = -\frac{1}{n}\sum_{i=1}^{n}\log_2 p(X_i) \;\longrightarrow\; -E\big[\log_2 p(X)\big] = H(X) \quad \text{in probability}.$$
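A quick simulation sketch of the theorem (mine; the parameter choices are illustrative): for an i.i.d. Bernoulli(0.3) source, the values of $-\frac{1}{n}\log_2 p(X_1,\ldots,X_n)$ concentrate around $H(X) \approx 0.881$ as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.3                                                   # P(X = 1)
H = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))          # source entropy, ~0.881 bits

for n in (10, 100, 1000, 10000):
    ones = (rng.random((2000, n)) < p).sum(axis=1)        # 2000 i.i.d. sequences of length n
    # -1/n log2 p(x^n) for a Bernoulli(p) sequence containing `ones` ones
    norm_log_prob = -(ones * np.log2(p) + (n - ones) * np.log2(1 - p)) / n
    print(f"n={n:6d}  mean={norm_log_prob.mean():.4f}  std={norm_log_prob.std():.4f}  (H={H:.4f})")
```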
Typical sets
Definition of typical set
The typical set (denoted by $A_\epsilon^{(n)}$) with respect to $p(x)$ is the set of sequences $(x_1, \ldots, x_n) \in \mathcal{X}^n$ that satisfy
$$2^{-n(H(X)+\epsilon)} \le p(x_1, \ldots, x_n) \le 2^{-n(H(X)-\epsilon)}.$$
In other words, the typical set contains all $n$-length sequences with probability close to $2^{-nH(X)}$.
- This notion of typicality only concerns the probability of a sequence and not the actual sequence itself.
- It has great use in compression theory.
Properties of the typical set
- If $(x_1, \ldots, x_n) \in A_\epsilon^{(n)}$, then $H(X) - \epsilon \le -\frac{1}{n}\log_2 p(x_1, \ldots, x_n) \le H(X) + \epsilon$.
- $P\big(A_\epsilon^{(n)}\big) > 1 - \epsilon$ for sufficiently large $n$.
Sketch of proof
Use the AEP to show that $P\big(A_\epsilon^{(n)}\big) > 1 - \epsilon$ for sufficiently large $n$:
For any $\delta > 0$, there exists $n_0$ such that for all $n \ge n_0$,
$$P\left(\left|-\tfrac{1}{n}\log_2 p(X_1, \ldots, X_n) - H(X)\right| < \epsilon\right) > 1 - \delta;$$
taking $\delta = \epsilon$ gives the claim.
Two more properties follow from the definition (derivation below):
- $\big|A_\epsilon^{(n)}\big| \le 2^{n(H(X)+\epsilon)}$.
- $\big|A_\epsilon^{(n)}\big| \ge (1-\epsilon)\, 2^{n(H(X)-\epsilon)}$ for sufficiently large $n$.
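Both size bounds follow directly from the definition of $A_\epsilon^{(n)}$; a short version of the standard derivation:

$$1 = \sum_{x^n \in \mathcal{X}^n} p(x^n) \;\ge\; \sum_{x^n \in A_\epsilon^{(n)}} p(x^n) \;\ge\; \big|A_\epsilon^{(n)}\big|\, 2^{-n(H(X)+\epsilon)} \quad\Longrightarrow\quad \big|A_\epsilon^{(n)}\big| \le 2^{n(H(X)+\epsilon)},$$

$$1 - \epsilon < P\big(A_\epsilon^{(n)}\big) \;\le\; \big|A_\epsilon^{(n)}\big|\, 2^{-n(H(X)-\epsilon)} \quad\Longrightarrow\quad \big|A_\epsilon^{(n)}\big| \ge (1-\epsilon)\, 2^{n(H(X)-\epsilon)} \text{ for sufficiently large } n.$$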
Smallest probable set
The typical set is a fairly small set with most of the probability.
Q: Is it the smallest such set?
A: Not quite, but pretty close.
Notation: Let $X_1, \ldots, X_n$ be i.i.d. random variables drawn from $p(x)$. For some $\delta > 0$, we denote by $B_\delta^{(n)} \subseteq \mathcal{X}^n$ the smallest set such that $P\big(B_\delta^{(n)}\big) \ge 1 - \delta$.
Notation: We write $a_n \doteq 2^{nb}$ if $\lim_{n\to\infty} \frac{1}{n}\log_2 a_n = b$.
Theorem: $\big|B_\delta^{(n)}\big| \doteq \big|A_\epsilon^{(n)}\big| \doteq 2^{nH(X)}$.
Check book for detailed proof.
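The key inequality in that proof (a sketch of the standard argument, assuming $\delta + \epsilon < 1$ and $n$ large enough that $P\big((A_\epsilon^{(n)})^c\big) \le \epsilon$): any set $B_\delta^{(n)}$ of probability at least $1 - \delta$ must intersect $A_\epsilon^{(n)}$ in a set of probability at least $1 - \delta - \epsilon$, and every typical sequence has probability at most $2^{-n(H(X)-\epsilon)}$, so

$$1 - \delta - \epsilon \;\le\; P\big(B_\delta^{(n)} \cap A_\epsilon^{(n)}\big) \;\le\; \big|B_\delta^{(n)}\big|\, 2^{-n(H(X)-\epsilon)} \quad\Longrightarrow\quad \big|B_\delta^{(n)}\big| \ge (1 - \delta - \epsilon)\, 2^{n(H(X)-\epsilon)},$$

which matches the size of the typical set to first order in the exponent.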
What is the difference between $A_\epsilon^{(n)}$ and $B_\delta^{(n)}$?
Consider a Bernoulli sequence with $p(1) = 0.9$.
The typical sequences are those in which the proportion of 1’s is close to 0.9.
- However, they do not include the most likely sequence, i.e., the sequence of all 1’s!
- The all-1's sequence is not in $A_\epsilon^{(n)}$ (for any small $\epsilon$).
- $-\frac{1}{n}\log_2 p(1, 1, \ldots, 1) = -\log_2 0.9 \approx 0.15$, while $H(X) \approx 0.47$: its average logarithmic probability cannot come close to the entropy of $X$ no matter how large $n$ is.
- The set $B_\delta^{(n)}$ contains all the most probable sequences… and therefore includes the sequence of all 1's. (A quick numerical check follows.)
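A two-line numerical check of this example (my own sketch):

```python
import numpy as np

p = 0.9
H = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))   # entropy of Bernoulli(0.9): ~0.469 bits
all_ones = -np.log2(p)                             # -1/n log2 p(1,...,1), independent of n: ~0.152 bits
print(f"H(X) = {H:.3f} bits, but -1/n log2 p(all ones) = {all_ones:.3f} bits")
```

Since the gap is about $0.32$ bits, for any $\epsilon < 0.32$ the most probable sequence is never typical, for any $n$.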
Consequences of the AEP: data compression schemes
Let $X_1, X_2, \ldots, X_n$ be i.i.d. random variables drawn from $p(x)$.
We want to find the shortest description of the sequence $X_1, X_2, \ldots, X_n$.
First, divide all sequences in $\mathcal{X}^n$ into two sets, namely the typical set $A_\epsilon^{(n)}$ and its complement $\big(A_\epsilon^{(n)}\big)^c$.
- $A_\epsilon^{(n)}$ contains most of the probability while $\big(A_\epsilon^{(n)}\big)^c$ contains most of the elements.
- The typical set has probability close to 1 and contains approximately $2^{nH(X)}$ elements.
Order the sequences in each set in lexicographic order.
- This means we can represent each sequence of $A_\epsilon^{(n)}$ by giving its index in the set.
- There are at most $2^{n(H(X)+\epsilon)}$ sequences in $A_\epsilon^{(n)}$.
- Indexing requires no more than $n(H(X)+\epsilon) + 1$ bits (the extra bit because $n(H(X)+\epsilon)$ may not be an integer).
- Prefix all of these by a “0” bit ⇒ at most $n(H(X)+\epsilon) + 2$ bits to represent each.
- Similarly, index each sequence in $\big(A_\epsilon^{(n)}\big)^c$ with no more than $n\log_2|\mathcal{X}| + 1$ bits.
- Prefix it by “1” ⇒ at most $n\log_2|\mathcal{X}| + 2$ bits to represent each.
- Voilà! We have a code for all sequences in $\mathcal{X}^n$.
- The typical sequences have short descriptions of length $\approx nH(X)$ bits.
Algorithm for data compression
- Divide all $n$-length sequences in $\mathcal{X}^n$ into the typical set $A_\epsilon^{(n)}$ and its complement $\big(A_\epsilon^{(n)}\big)^c$.
- Order the sequences in each set in lexicographic order.
- Index each sequence in $A_\epsilon^{(n)}$ using $n(H(X)+\epsilon) + 1$ bits and each sequence in $\big(A_\epsilon^{(n)}\big)^c$ using $n\log_2|\mathcal{X}| + 1$ bits.
- Prefix the sequence by “0” if it is in $A_\epsilon^{(n)}$ and by “1” if it is in $\big(A_\epsilon^{(n)}\big)^c$.
Notes:
- The code is one-to-one and can be decoded easily. The initial bit acts as a “flag”.
- The number of elements in $\big(A_\epsilon^{(n)}\big)^c$ is less than the number of elements in $\mathcal{X}^n$, so indexing it with $n\log_2|\mathcal{X}| + 1$ bits is wasteful, but it turns out this does not matter. (A code sketch of the scheme follows.)
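A minimal Python sketch of the scheme (mine, for illustration only: it enumerates all $2^n$ binary sequences, so it is exponential in $n$ and only meant to make the indexing-plus-flag construction concrete; the function name and the parameters $p$, $n$, $\epsilon$ are arbitrary choices):

```python
from itertools import product
from math import log2, ceil

def typical_set_code_demo(p=0.8, n=12, eps=0.25):
    """Typical-set compression for an i.i.d. Bernoulli(p) source of block length n."""
    H = -(p * log2(p) + (1 - p) * log2(1 - p))

    def log_prob(seq):
        k = sum(seq)
        return k * log2(p) + (n - k) * log2(1 - p)

    # Split {0,1}^n into the typical set and its complement, each in lexicographic order.
    all_seqs = list(product((0, 1), repeat=n))
    typical = [s for s in all_seqs if abs(-log_prob(s) / n - H) <= eps]
    atypical = [s for s in all_seqs if abs(-log_prob(s) / n - H) > eps]
    idx_typ = {s: i for i, s in enumerate(typical)}
    idx_atyp = {s: i for i, s in enumerate(atypical)}
    bits_typ = ceil(log2(len(typical)))        # <= n(H + eps) + 1
    bits_atyp = ceil(log2(len(atypical)))      # <= n log2|X| + 1

    def encode(seq):
        if seq in idx_typ:                     # flag "0" + index within the typical set
            return "0" + format(idx_typ[seq], f"0{bits_typ}b")
        return "1" + format(idx_atyp[seq], f"0{bits_atyp}b")

    def decode(code):
        flag, index = code[0], int(code[1:], 2)
        return typical[index] if flag == "0" else atypical[index]

    seq = tuple(int(b) for b in "111011101110")
    code = encode(seq)
    assert decode(code) == seq                 # the mapping is one-to-one
    print(f"nH = {n * H:.1f} bits; typical codewords use {1 + bits_typ} bits, "
          f"atypical codewords use {1 + bits_atyp} bits; example codeword: {code}")

typical_set_code_demo()
```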
Expected length of codewords
Use $x^n$ to denote a sequence $(x_1, \ldots, x_n)$ and $\ell(x^n)$ to denote the length of the corresponding codeword.
Suppose $n$ is sufficiently large.
This means $P\big(A_\epsilon^{(n)}\big) \ge 1 - \epsilon$.
Lemma: The expected length of the codeword is upper bounded by $E[\ell(X^n)] \le n\big(H(X) + \epsilon'\big)$, where $\epsilon' = \epsilon + \epsilon\log_2|\mathcal{X}| + \frac{2}{n}$.
$\epsilon'$ can be made arbitrarily small by choosing $\epsilon$ and $n$ appropriately. (A short derivation follows.)
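A short derivation of the lemma (the standard argument, splitting the expectation over $A_\epsilon^{(n)}$ and its complement and using $P\big(A_\epsilon^{(n)}\big) \le 1$ and $P\big((A_\epsilon^{(n)})^c\big) \le \epsilon$):

$$E[\ell(X^n)] = \sum_{x^n \in A_\epsilon^{(n)}} p(x^n)\,\ell(x^n) + \sum_{x^n \in (A_\epsilon^{(n)})^c} p(x^n)\,\ell(x^n) \;\le\; P\big(A_\epsilon^{(n)}\big)\big(n(H(X)+\epsilon)+2\big) + P\big((A_\epsilon^{(n)})^c\big)\big(n\log_2|\mathcal{X}|+2\big)$$

$$\le\; n(H(X)+\epsilon) + \epsilon\, n\log_2|\mathcal{X}| + 2 \;=\; n\Big(H(X) + \underbrace{\epsilon + \epsilon\log_2|\mathcal{X}| + \tfrac{2}{n}}_{\epsilon'}\Big).$$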
Efficient data compression guarantee
The expected length of the codeword is upper bounded by $E[\ell(X^n)] \le n\big(H(X) + \epsilon'\big)$, where $\epsilon'$ can be made arbitrarily small.
Theorem: Let $X^n$ be i.i.d. with $X_i \sim p(x)$, and let $\epsilon > 0$. Then there exists a code that maps sequences $x^n$ of length $n$ to binary strings (codewords) such that the mapping is one-to-one and $E\big[\frac{1}{n}\ell(X^n)\big] \le H(X) + \epsilon$ for $n$ sufficiently large.
- Thus, we can represent sequences $X^n$ using approximately $nH(X)$ bits on average.
Shannon’s source coding theorem
There exists an algorithm that can compress $n$ i.i.d. random variables $X_1, \ldots, X_n$, each with entropy $H(X)$, into slightly more than $nH(X)$ bits with negligible risk of information loss. Conversely, if they are compressed into fewer than $nH(X)$ bits, the risk of information loss is very high.
- We have essentially proved the first half!
Proof of converse: show that any set of size exponentially smaller than that of $A_\epsilon^{(n)}$ (i.e., of size at most $2^{nR}$ with $R < H(X)$) covers a set of probability bounded away from 1.
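Sketch of the standard argument: fix $R < H(X)$ and any sequence of sets $B_n \subseteq \mathcal{X}^n$ with $|B_n| \le 2^{nR}$; choose $\epsilon > 0$ small enough that $R < H(X) - \epsilon$. Then, for $n$ large enough that $P\big((A_\epsilon^{(n)})^c\big) \le \epsilon$,

$$P(B_n) \;\le\; P\big(B_n \cap A_\epsilon^{(n)}\big) + P\big((A_\epsilon^{(n)})^c\big) \;\le\; |B_n|\, 2^{-n(H(X)-\epsilon)} + \epsilon \;\le\; 2^{-n(H(X)-R-\epsilon)} + \epsilon \;\xrightarrow{\;n\to\infty\;}\; \epsilon.$$

So a code with fewer than $n(H(X)-\epsilon)$ bits can describe only a set of sequences whose probability is at most about $\epsilon$, and the probability of information loss is bounded away from 0.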