The Data Processing Inequality


Markovity

A Markov Chain is a random process that describes a sequence of possible events where the probability of each event depends only on the outcome of the previous event. Thus, we say that the random variables <math>X</math>, <math>Y</math>, and <math>Z</math> form a Markov chain in this order, denoted as:

<math>X \rightarrow Y \rightarrow Z</math>

(1)

If we can write:

<math>P\left(x, y, z\right) = P\left(x\right)\cdot P\left(y\mid x\right)\cdot P\left(z\mid y\right)</math>

(2)

Or in a more compact form:

<math>P\left(x, y, z\right) = P\left(x, y\right)\cdot P\left(z\mid y\right)</math>

(3)

We can use Markov chains to model how a signal is corrupted when passed through noisy channels. For example, if <math>X</math> is a binary signal, it can be flipped with a certain probability, <math>p</math>, to produce <math>Y</math>, and <math>Y</math> can again be corrupted to produce <math>Z</math>.
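To make this concrete, here is a small Python sketch (an added illustration, not part of the original article) that builds the joint distribution of such a two-stage binary symmetric channel using the Markov factorization in Equation (2), assuming an equiprobable input and an illustrative crossover probability of <math>p = 0.1</math>:

<syntaxhighlight lang="python">
import itertools

# Assumptions for this sketch: X is an equiprobable binary source, and Y and Z
# are produced by two cascaded binary symmetric channels with crossover
# probability p (the value below is only illustrative).
p = 0.1
P_x = {0: 0.5, 1: 0.5}                                  # P(x)
P_y_given_x = lambda y, x: 1 - p if y == x else p       # P(y|x)
P_z_given_y = lambda z, y: 1 - p if z == y else p       # P(z|y)

# Joint distribution from the Markov factorization: P(x,y,z) = P(x) P(y|x) P(z|y)
P_xyz = {(x, y, z): P_x[x] * P_y_given_x(y, x) * P_z_given_y(z, y)
         for x, y, z in itertools.product((0, 1), repeat=3)}

print(sum(P_xyz.values()))   # sanity check: the probabilities sum to 1.0
</syntaxhighlight>

The same joint distribution is reused in the numerical checks further down this page.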

Consider the joint probability of <math>X</math> and <math>Z</math> given <math>Y</math>. We can express this as:

<math>P\left(x, z\mid y\right) = \frac{P\left(x, y, z\right)}{P\left(y\right)}</math>

(4)

And if <math>X \rightarrow Y \rightarrow Z</math>, we get:

<math>P\left(x, z\mid y\right) = \frac{P\left(x, y\right)\cdot P\left(z\mid y\right)}{P\left(y\right)}</math>

(5)

Since <math>\frac{P\left(x, y\right)}{P\left(y\right)} = P\left(x\mid y\right)</math>, we can write:

<math>P\left(x, z\mid y\right) = P\left(x\mid y\right)\cdot P\left(z\mid y\right)</math>

(6)

Thus, we can say that <math>X</math> and <math>Z</math> are conditionally independent given <math>Y</math>. If we think of <math>X</math> as some past event, and <math>Z</math> as some future event, then the past and future events are independent if we know the present event <math>Y</math>. Note that this property is a good definition of, as well as a useful tool for checking, Markovity.
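We can verify Equation (6) numerically for the two-stage binary symmetric channel sketched above (again an added example with an assumed crossover probability, not part of the original article):

<syntaxhighlight lang="python">
import itertools

# Check that P(x, z | y) = P(x | y) * P(z | y) for a two-stage binary
# symmetric channel with an assumed crossover probability p.
p = 0.1
flip = lambda a, b: 1 - p if a == b else p              # BSC transition probability
P_xyz = {(x, y, z): 0.5 * flip(y, x) * flip(z, y)
         for x, y, z in itertools.product((0, 1), repeat=3)}

P_y  = {y: sum(P_xyz[x, y, z] for x in (0, 1) for z in (0, 1)) for y in (0, 1)}
P_xy = {(x, y): sum(P_xyz[x, y, z] for z in (0, 1))
        for x, y in itertools.product((0, 1), repeat=2)}
P_yz = {(y, z): sum(P_xyz[x, y, z] for x in (0, 1))
        for y, z in itertools.product((0, 1), repeat=2)}

for x, y, z in itertools.product((0, 1), repeat=3):
    lhs = P_xyz[x, y, z] / P_y[y]                          # P(x, z | y)
    rhs = (P_xy[x, y] / P_y[y]) * (P_yz[y, z] / P_y[y])    # P(x | y) * P(z | y)
    assert abs(lhs - rhs) < 1e-12
print("P(x, z | y) = P(x | y) P(z | y) for all (x, y, z)")
</syntaxhighlight>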

We can rewrite the joint probability as:

<math>P\left(x, y, z\right) = P\left(x, z\mid y\right)\cdot P\left(y\right) = P\left(x\mid y\right)\cdot P\left(z\mid y\right)\cdot P\left(y\right) = P\left(z\right)\cdot P\left(y\mid z\right)\cdot P\left(x\mid y\right)</math>

(7)

Therefore, if <math>X \rightarrow Y \rightarrow Z</math>, then it follows that <math>Z \rightarrow Y \rightarrow X</math>.

Entropy Chain Rules

As we increase the number of random variables we are dealing with, it is important to understand how this increase affects entropy. We have previously shown that for two random variables <math>X</math> and <math>Y</math>:

<math>H\left(X, Y\right) = H\left(X\right) + H\left(Y\mid X\right)</math>

(8)
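As a quick sanity check (an added example, not in the original text), let <math>X</math> be a fair bit, so <math>H\left(X\right) = 1</math> bit. If <math>Y</math> is an independent fair bit, then <math>H\left(Y\mid X\right) = 1</math> bit and <math>H\left(X, Y\right) = 2</math> bits; if instead <math>Y = X</math>, then <math>H\left(Y\mid X\right) = 0</math> and <math>H\left(X, Y\right) = 1</math> bit.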

For three random variables <math>X</math>, <math>Y</math>, and <math>Z</math>:

<math>H\left(X, Y, Z\right) = H\left(X\right) + H\left(Y\mid X\right) + H\left(Z\mid X, Y\right)</math>

(9)

In general:

<math>H\left(X_1, X_2, \ldots, X_n\right) = \sum_{i=1}^{n} H\left(X_i \mid X_{i-1}, \ldots, X_1\right)</math>

(10)
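The sketch below (an added illustration, reusing the assumed two-stage binary symmetric channel from the Markovity section) checks Equation (9) by computing each term directly from its definition:

<syntaxhighlight lang="python">
import itertools
from math import log2

# Numerically check H(X,Y,Z) = H(X) + H(Y|X) + H(Z|X,Y) for the assumed
# two-stage binary symmetric channel with crossover probability p = 0.1.
p = 0.1
flip = lambda a, b: 1 - p if a == b else p
P_xyz = {(x, y, z): 0.5 * flip(y, x) * flip(z, y)
         for x, y, z in itertools.product((0, 1), repeat=3)}
P_xy = {(x, y): sum(P_xyz[x, y, z] for z in (0, 1))
        for x, y in itertools.product((0, 1), repeat=2)}
P_x = {x: sum(P_xy[x, y] for y in (0, 1)) for x in (0, 1)}

H_x = -sum(P_x[x] * log2(P_x[x]) for x in (0, 1))
# H(Y|X) = -sum_{x,y} P(x,y) log2 P(y|x), with P(y|x) = P(x,y)/P(x)
H_y_given_x = -sum(P_xy[x, y] * log2(P_xy[x, y] / P_x[x]) for x, y in P_xy)
# H(Z|X,Y) = -sum_{x,y,z} P(x,y,z) log2 P(z|x,y), with P(z|x,y) = P(x,y,z)/P(x,y)
H_z_given_xy = -sum(P_xyz[x, y, z] * log2(P_xyz[x, y, z] / P_xy[x, y])
                    for x, y, z in P_xyz)
H_xyz = -sum(q * log2(q) for q in P_xyz.values())

print(H_xyz, H_x + H_y_given_x + H_z_given_xy)   # the two values agree
</syntaxhighlight>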

Conditional Mutual Information

Conditional mutual information is defined as the expected value of the mutual information of two random variables given the value of a third random variable. For three random variables <math>X</math>, <math>Y</math>, and <math>Z</math>, it is defined as:

<math>I\left(X; Y\mid Z\right) = H\left(X\mid Z\right) - H\left(X\mid Y, Z\right)</math>

(11)

We can rewrite the definition of conditional mutual information as:

<math>\begin{align}
I\left(X; Y\mid Z\right) & = H\left(X\mid Z\right) - H\left(X\mid Y, Z\right) \\
& = \sum_{z\in Z} \sum_{y\in Y} \sum_{x\in X} P\left(x, y, z\right)\cdot \left(\log_2\frac{P\left(z\right)}{P\left(x, z\right)}-\log_2\frac{P\left(y, z\right)}{P\left(x, y, z\right)}\right) \\
& = \sum_{z\in Z} \sum_{y\in Y} \sum_{x\in X} P\left(x, y, z\right)\cdot \log_2\frac{P\left(x, y, z\right)\cdot P\left(z\right)}{P\left(x, z\right)\cdot P\left(y, z\right)} \\
& = \sum_{z\in Z} \sum_{y\in Y} \sum_{x\in X} P\left(x, y, z\right)\cdot \log_2\frac{P\left(x, y\mid z\right)}{P\left(x\mid z\right)\cdot P\left(y\mid z\right)}
\end{align}</math>

(12)
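The following sketch (an added example, using the same assumed two-stage binary symmetric channel) evaluates Equation (12) both as an entropy difference and as the triple sum, and confirms that the two forms agree:

<syntaxhighlight lang="python">
import itertools
from math import log2

# Compute I(X;Y|Z) two ways for the assumed two-stage binary symmetric
# channel with crossover probability p = 0.1.
p = 0.1
flip = lambda a, b: 1 - p if a == b else p
P_xyz = {(x, y, z): 0.5 * flip(y, x) * flip(z, y)
         for x, y, z in itertools.product((0, 1), repeat=3)}

def marginal(keep):
    """Marginal pmf over the variable indices in `keep` (0 = x, 1 = y, 2 = z)."""
    out = {}
    for xyz, q in P_xyz.items():
        key = tuple(xyz[i] for i in keep)
        out[key] = out.get(key, 0.0) + q
    return out

P_xz, P_yz = marginal((0, 2)), marginal((1, 2))
P_z = {z: q for (z,), q in marginal((2,)).items()}

# Form 1: I(X;Y|Z) = H(X|Z) - H(X|Y,Z)
H_x_given_z  = -sum(q * log2(P_xz[x, z] / P_z[z]) for (x, y, z), q in P_xyz.items())
H_x_given_yz = -sum(q * log2(q / P_yz[y, z]) for (x, y, z), q in P_xyz.items())
I_form1 = H_x_given_z - H_x_given_yz

# Form 2: sum of P(x,y,z) * log2[ P(x,y,z) P(z) / (P(x,z) P(y,z)) ]
I_form2 = sum(q * log2(q * P_z[z] / (P_xz[x, z] * P_yz[y, z]))
              for (x, y, z), q in P_xyz.items())

print(I_form1, I_form2)   # the two values agree
</syntaxhighlight>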


Chain Rule for Mutual Information

For two random variables <math>X</math> and <math>Y</math>, and a third random variable <math>Z</math>:

<math>I\left(X, Y; Z\right) = I\left(X; Z\right) + I\left(Y; Z\mid X\right)</math>

(13)
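One more numerical sketch (added, with the same assumed channel) checks this chain rule by expressing each mutual information term through joint entropies:

<syntaxhighlight lang="python">
import itertools
from math import log2

# Check I(X,Y;Z) = I(X;Z) + I(Y;Z|X) for the assumed two-stage binary
# symmetric channel with crossover probability p = 0.1.
p = 0.1
flip = lambda a, b: 1 - p if a == b else p
P_xyz = {(x, y, z): 0.5 * flip(y, x) * flip(z, y)
         for x, y, z in itertools.product((0, 1), repeat=3)}

def H(pmf):
    """Entropy in bits of a pmf given as a dict of probabilities."""
    return -sum(q * log2(q) for q in pmf.values() if q > 0)

def marginal(keep):
    """Marginal pmf over the variable indices in `keep` (0 = x, 1 = y, 2 = z)."""
    out = {}
    for xyz, q in P_xyz.items():
        key = tuple(xyz[i] for i in keep)
        out[key] = out.get(key, 0.0) + q
    return out

P_x, P_z = marginal((0,)), marginal((2,))
P_xy, P_xz = marginal((0, 1)), marginal((0, 2))

I_xy_z        = H(P_xy) + H(P_z) - H(P_xyz)                # I(X,Y;Z)
I_x_z         = H(P_x) + H(P_z) - H(P_xz)                  # I(X;Z)
I_y_z_given_x = H(P_xy) + H(P_xz) - H(P_x) - H(P_xyz)      # I(Y;Z|X)

print(I_xy_z, I_x_z + I_y_z_given_x)   # the two values agree
</syntaxhighlight>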

The Data Processing Inequality

Sufficient Statistics

Fano's Inequality