Activity: Mutual Information and Channel Capacity

Instructions: In this activity, you are tasked to
- Walk through the examples.
- Calculate the channel capacity of different channel models.

Should you have any questions, clarifications, or issues, please contact your instructor as soon as possible.
Example 1: Mutual Information
Given the following joint probabilities of blood type <math>X</math> and susceptibility to skin cancer <math>Y</math> (Table 1):
Susceptibility \ Blood type | A | B | AB | O
---|---|---|---|---
Very Low | 1/8 | 1/16 | 1/32 | 1/32
Low | 1/16 | 1/8 | 1/32 | 1/32
Medium | 1/16 | 1/16 | 1/16 | 1/16
High | 1/4 | 0 | 0 | 0
To get the entropies of <math>X</math> and <math>Y</math>, we need to calculate the marginal probabilities:

<math>P\left(x_i\right)=\sum_j P\left(x_i, y_j\right)\quad\Rightarrow\quad P\left(X\right)=\left\{\tfrac{1}{2},\tfrac{1}{4},\tfrac{1}{8},\tfrac{1}{8}\right\}</math>     (1)

<math>P\left(y_j\right)=\sum_i P\left(x_i, y_j\right)\quad\Rightarrow\quad P\left(Y\right)=\left\{\tfrac{1}{4},\tfrac{1}{4},\tfrac{1}{4},\tfrac{1}{4}\right\}</math>     (2)
And since:

<math>H\left(X\right)=-\sum_i P\left(x_i\right)\log_2 P\left(x_i\right)</math>     (3)

We get:

<math>H\left(X\right)=\tfrac{1}{2}\log_2 2+\tfrac{1}{4}\log_2 4+\tfrac{1}{8}\log_2 8+\tfrac{1}{8}\log_2 8=\tfrac{7}{4}\ \text{bits}</math>     (4)

<math>H\left(Y\right)=4\cdot\tfrac{1}{4}\log_2 4=2\ \text{bits}</math>     (5)
Calculating the conditional entropies using:

<math>H\left(X\mid Y\right)=-\sum_j P\left(y_j\right)\sum_i P\left(x_i\mid y_j\right)\log_2 P\left(x_i\mid y_j\right)=\tfrac{11}{8}\ \text{bits}</math>     (6)

<math>H\left(Y\mid X\right)=-\sum_i P\left(x_i\right)\sum_j P\left(y_j\mid x_i\right)\log_2 P\left(y_j\mid x_i\right)=\tfrac{13}{8}\ \text{bits}</math>     (7)
Note that <math>H\left(X\mid Y\right)\neq H\left(Y\mid X\right)</math>. Calculating the mutual information, we get:

<math>I\left(X;Y\right)=H\left(X\right)-H\left(X\mid Y\right)=\tfrac{7}{4}-\tfrac{11}{8}=\tfrac{3}{8}\ \text{bits}</math>     (8)

Or equivalently:

<math>I\left(X;Y\right)=H\left(Y\right)-H\left(Y\mid X\right)=2-\tfrac{13}{8}=\tfrac{3}{8}\ \text{bits}</math>     (9)
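As a quick numerical check of equations (1)–(9), here is a minimal Python sketch (not part of the original activity; the NumPy-based helpers and variable names are our own) that recomputes the marginals, entropies, conditional entropies, and mutual information directly from Table 1. It uses the chain rule <math>H\left(X\mid Y\right)=H\left(X,Y\right)-H\left(Y\right)</math>, which is equivalent to (6):

```python
import numpy as np

# Joint probabilities P(x_i, y_j) from Table 1.
# Rows: susceptibility Y (Very Low, Low, Medium, High);
# columns: blood type X (A, B, AB, O).
P_xy = np.array([
    [1/8,  1/16, 1/32, 1/32],
    [1/16, 1/8,  1/32, 1/32],
    [1/16, 1/16, 1/16, 1/16],
    [1/4,  0,    0,    0   ],
])

def entropy(p):
    """Entropy in bits, skipping zero-probability entries."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

P_x = P_xy.sum(axis=0)          # marginal of X: [1/2, 1/4, 1/8, 1/8]  -> eq. (1)
P_y = P_xy.sum(axis=1)          # marginal of Y: [1/4, 1/4, 1/4, 1/4]  -> eq. (2)

H_x  = entropy(P_x)             # 1.75 bits (7/4)                      -> eq. (4)
H_y  = entropy(P_y)             # 2.0  bits                            -> eq. (5)
H_xy = entropy(P_xy.ravel())    # 3.375 bits (27/8), joint entropy

H_x_given_y = H_xy - H_y        # 1.375 bits (11/8)                    -> eq. (6)
H_y_given_x = H_xy - H_x        # 1.625 bits (13/8)                    -> eq. (7)
I_xy = H_x - H_x_given_y        # 0.375 bits (3/8)                     -> eq. (8)

print(H_x, H_y, H_x_given_y, H_y_given_x, I_xy)
```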
Let us try to understand what this means:
- If we only consider <math>X</math>, we have the a priori probabilities for each blood type, and we can calculate the entropy <math>H\left(X\right)</math>, i.e. the expected value of the information we get when we observe the result of the blood test.
- Will our expectations change if we do not have access to the blood test but instead get access to (1) the person's susceptibility to skin cancer and (2) the joint probabilities in Table 1? Since we are given more information, we expect one of the following:
- The uncertainty to be equal to the original uncertainty if <math>X</math> and <math>Y</math> are independent, since <math>H\left(X\mid Y\right)=H\left(X\right)</math>, thus <math>I\left(X;Y\right)=H\left(X\right)-H\left(X\mid Y\right)=0</math>.
- A reduction in the uncertainty equal to <math>I\left(X;Y\right)</math>, due to the additional correlated information given by <math>P\left(x_i, y_j\right)</math>.
- If <math>X</math> and <math>Y</math> are perfectly correlated, we reduce the uncertainty to zero since <math>H\left(X\mid Y\right)=H\left(X\mid X\right)=0</math> and <math>I\left(X;Y\right)=H\left(X\right)</math> (both limiting cases are checked numerically in the sketch below).
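The two limiting cases above can be verified numerically. The short sketch below (again our own construction, not part of the activity) builds an independent joint table from the outer product of the Table 1 marginals, and a perfectly correlated table by placing the marginal of <math>X</math> on a diagonal so that <math>Y</math> determines <math>X</math> exactly:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(P_xy):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), in bits."""
    P_x, P_y = P_xy.sum(axis=0), P_xy.sum(axis=1)
    return entropy(P_x) + entropy(P_y) - entropy(P_xy.ravel())

P_x = np.array([1/2, 1/4, 1/8, 1/8])   # marginal of X from Table 1
P_y = np.array([1/4, 1/4, 1/4, 1/4])   # marginal of Y from Table 1

# Independent case: P(x_i, y_j) = P(x_i) P(y_j), so I(X;Y) = 0.
P_indep = np.outer(P_y, P_x)
print(mutual_information(P_indep))      # ~0.0 bits

# Perfectly correlated case: X is a deterministic, invertible function of Y,
# so H(X|Y) = 0 and I(X;Y) = H(X) = 7/4 bits.
P_perfect = np.diag(P_x)
print(mutual_information(P_perfect))    # 1.75 bits
```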
Example 2: A Noiseless Binary Channel
Consider transmitting information over the noiseless binary channel shown in Fig. 1. The input <math>X</math> has a priori probabilities <math>P\left(x_0\right)</math> and <math>P\left(x_1\right)</math>, and the output is <math>Y</math>. Calculating the output probabilities:

<math>P\left(y_0\right)=\sum_i P\left(y_0\mid x_i\right)P\left(x_i\right)=P\left(x_0\right)</math>     (10)

<math>P\left(y_1\right)=\sum_i P\left(y_1\mid x_i\right)P\left(x_i\right)=P\left(x_1\right)</math>     (11)
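Because the channel is noiseless, <math>Y</math> reproduces <math>X</math> exactly, so <math>I\left(X;Y\right)=H\left(X\right)</math>, and the channel capacity is found by maximizing <math>I\left(X;Y\right)</math> over the input distribution. The sketch below (our own helper names; it assumes <math>P\left(x_0\right)=p</math> and <math>P\left(x_1\right)=1-p</math>) sweeps <math>p</math> and confirms that the maximum of 1 bit per channel use occurs at <math>p=\tfrac{1}{2}</math>:

```python
import numpy as np

def binary_entropy(p):
    """H_b(p) in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# Noiseless binary channel: P(y_0|x_0) = P(y_1|x_1) = 1, so Y = X and
# I(X;Y) = H(X) = H_b(p), where p = P(x_0) is the input probability.
ps = np.linspace(0.0, 1.0, 101)
mi = np.array([binary_entropy(p) for p in ps])

capacity = mi.max()
best_p = ps[mi.argmax()]
print(capacity, best_p)   # 1.0 bit per channel use, achieved at p = 0.5
```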
Example 3: A Noisy Channel with Non-Overlapping Outputs
Example 4: The Binary Symmetric Channel (BSC)
Sources
- Yao Xie's slides on Entropy and Mutual Information