Synthetic channels
Binary erasure channel
Let <math>\epsilon</math> be the erasure probability of a binary erasure channel (BEC). Two independent, uniformly distributed bits <math>U_1</math> and <math>U_2</math> are encoded as <math>X_1 = U_1 \oplus U_2</math> and <math>X_2 = U_2</math>, and each <math>X_i</math> is sent over an independent copy of the BEC. The table below lists the transition probabilities of the combined channel <math>(U_1, U_2) \rightarrow (Y_1, Y_2)</math>:
{| class="wikitable"
! <math>(U_1,U_2) \backslash (Y_1,Y_2)</math> !! 00 !! 0? !! 01 !! ?0 !! ?? !! ?1 !! 10 !! 1? !! 11
|-
! 00
| <math>(1-\epsilon)^2</math> || <math>\epsilon(1-\epsilon)</math> || <math>0</math> || <math>\epsilon(1-\epsilon)</math> || <math>\epsilon^2</math> || <math>0</math> || <math>0</math> || <math>0</math> || <math>0</math>
|-
! 01
| <math>0</math> || <math>0</math> || <math>0</math> || <math>0</math> || <math>\epsilon^2</math> || <math>\epsilon(1-\epsilon)</math> || <math>0</math> || <math>\epsilon(1-\epsilon)</math> || <math>(1-\epsilon)^2</math>
|-
! 10
| <math>0</math> || <math>0</math> || <math>0</math> || <math>\epsilon(1-\epsilon)</math> || <math>\epsilon^2</math> || <math>0</math> || <math>(1-\epsilon)^2</math> || <math>\epsilon(1-\epsilon)</math> || <math>0</math>
|-
! 11
| <math>0</math> || <math>\epsilon(1-\epsilon)</math> || <math>(1-\epsilon)^2</math> || <math>0</math> || <math>\epsilon^2</math> || <math>\epsilon(1-\epsilon)</math> || <math>0</math> || <math>0</math> || <math>0</math>
|}
Averaging over the uniformly distributed <math>U_2</math> yields the synthetic channel <math>W^{-}: U_1 \rightarrow (Y_1, Y_2)</math>:

{| class="wikitable"
! <math>U_1 \backslash (Y_1,Y_2)</math> !! 00 !! 0? !! 01 !! ?0 !! ?? !! ?1 !! 10 !! 1? !! 11
|-
! 0
| <math>\tfrac{(1-\epsilon)^2}{2}</math> || <math>\tfrac{\epsilon(1-\epsilon)}{2}</math> || <math>0</math> || <math>\tfrac{\epsilon(1-\epsilon)}{2}</math> || <math>\epsilon^2</math> || <math>\tfrac{\epsilon(1-\epsilon)}{2}</math> || <math>0</math> || <math>\tfrac{\epsilon(1-\epsilon)}{2}</math> || <math>\tfrac{(1-\epsilon)^2}{2}</math>
|-
! 1
| <math>0</math> || <math>\tfrac{\epsilon(1-\epsilon)}{2}</math> || <math>\tfrac{(1-\epsilon)^2}{2}</math> || <math>\tfrac{\epsilon(1-\epsilon)}{2}</math> || <math>\epsilon^2</math> || <math>\tfrac{\epsilon(1-\epsilon)}{2}</math> || <math>\tfrac{(1-\epsilon)^2}{2}</math> || <math>\tfrac{\epsilon(1-\epsilon)}{2}</math> || <math>0</math>
|}
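Both tables can be reproduced by direct enumeration. The following Python sketch is illustrative only: the helper names (<code>bec</code>, <code>combined</code>, <code>w_minus</code>) and the example value of <math>\epsilon</math> are ours, not part of the article, and it assumes the transform <math>X_1 = U_1 \oplus U_2</math>, <math>X_2 = U_2</math> described above.

<syntaxhighlight lang="python">
from itertools import product

eps = 0.3  # example erasure probability; any 0 < eps < 1 works

def bec(x, eps):
    """Output distribution of a BEC for input bit x: {symbol: probability}."""
    return {str(x): 1 - eps, '?': eps}

# Combined channel (U1, U2) -> (Y1, Y2), with X1 = U1 xor U2 and X2 = U2.
combined = {}
for u1, u2 in product((0, 1), repeat=2):
    x1, x2 = u1 ^ u2, u2
    row = {}
    for (y1, p1), (y2, p2) in product(bec(x1, eps).items(), bec(x2, eps).items()):
        row[y1 + y2] = row.get(y1 + y2, 0.0) + p1 * p2
    combined[(u1, u2)] = row

# W^-: U1 -> (Y1, Y2), obtained by averaging over the uniform, unknown U2.
w_minus = {}
for u1 in (0, 1):
    row = {}
    for u2 in (0, 1):
        for y, q in combined[(u1, u2)].items():
            row[y] = row.get(y, 0.0) + 0.5 * q
    w_minus[u1] = row

outputs = ['00', '0?', '01', '?0', '??', '?1', '10', '1?', '11']
for u1 in (0, 1):
    print(u1, [round(w_minus[u1].get(y, 0.0), 4) for y in outputs])

# U1 = Y1 xor Y2 is recoverable only when neither output is erased, so W^-
# behaves like a BEC with erasure probability 1 - (1-eps)^2 = 2*eps - eps^2.
</syntaxhighlight>

Running it prints the two rows of the <math>W^{-}</math> table for the chosen <math>\epsilon</math>.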
Binary symmetric channel
Let <math>p</math> be the crossover probability of a binary symmetric channel (BSC). With the same transform <math>X_1 = U_1 \oplus U_2</math>, <math>X_2 = U_2</math>, the combined channel <math>(U_1, U_2) \rightarrow (Y_1, Y_2)</math> has the following transition probabilities:
{| class="wikitable"
! <math>(U_1,U_2) \backslash (Y_1,Y_2)</math> !! 00 !! 01 !! 10 !! 11
|-
! 00
| <math>(1-p)^2</math> || <math>p(1-p)</math> || <math>p(1-p)</math> || <math>p^2</math>
|-
! 01
| <math>p^2</math> || <math>p(1-p)</math> || <math>p(1-p)</math> || <math>(1-p)^2</math>
|-
! 10
| <math>p(1-p)</math> || <math>p^2</math> || <math>(1-p)^2</math> || <math>p(1-p)</math>
|-
! 11
| <math>p(1-p)</math> || <math>(1-p)^2</math> || <math>p^2</math> || <math>p(1-p)</math>
|}
Averaging over the uniformly distributed <math>U_2</math> yields the synthetic channel <math>W^{-}: U_1 \rightarrow (Y_1, Y_2)</math>:

{| class="wikitable"
! <math>U_1 \backslash (Y_1,Y_2)</math> !! 00 !! 01 !! 10 !! 11
|-
! 0
| <math>\tfrac{1-2p+2p^2}{2}</math> || <math>p(1-p)</math> || <math>p(1-p)</math> || <math>\tfrac{1-2p+2p^2}{2}</math>
|-
! 1
| <math>p(1-p)</math> || <math>\tfrac{1-2p+2p^2}{2}</math> || <math>\tfrac{1-2p+2p^2}{2}</math> || <math>p(1-p)</math>
|}
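The same enumeration carries over to the BSC. Again, this is only a sketch with names and an example <math>p</math> of our choosing:

<syntaxhighlight lang="python">
from itertools import product

p = 0.1  # example crossover probability, 0 < p < 0.5

def bsc(x, p):
    """Output distribution of a BSC(p) for input bit x."""
    return {x: 1 - p, 1 - x: p}

# Combined channel (U1, U2) -> (Y1, Y2), with X1 = U1 xor U2 and X2 = U2.
combined = {}
for u1, u2 in product((0, 1), repeat=2):
    x1, x2 = u1 ^ u2, u2
    combined[(u1, u2)] = {
        (y1, y2): q1 * q2
        for (y1, q1), (y2, q2) in product(bsc(x1, p).items(), bsc(x2, p).items())
    }

# W^-: average out the uniform, unknown U2.
w_minus = {
    u1: {y: 0.5 * (combined[(u1, 0)][y] + combined[(u1, 1)][y])
         for y in product((0, 1), repeat=2)}
    for u1 in (0, 1)
}

for u1 in (0, 1):
    print(u1, {y: round(q, 4) for y, q in w_minus[u1].items()})
</syntaxhighlight>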
It turns out that the channel <math>W^{-}: U_1 \rightarrow (Y_1, Y_2)</math> reduces to another BSC. To see this, consider the case where <math>(Y_1, Y_2) = (0,0)</math>. To minimize the error probability, we must decide in favor of the value of <math>U_1</math> with the greater likelihood. For <math>0 < p < 0.5</math>, <math>0.5(1-2p+2p^2) > p(1-p)</math>, so the maximum-likelihood (ML) decision is <math>U_1 = 0</math>. By the same argument, the ML decision is also <math>U_1 = 0</math> for <math>(Y_1,Y_2)=(1,1)</math>. More generally, the receiver sets <math>\hat{U}_1 = Y_1 \oplus Y_2</math>. Indeed, if the crossover probability is low, it is very likely that <math>Y_1 = U_1 \oplus U_2</math> and <math>Y_2 = U_2</math>; solving for <math>U_2</math> and substituting back produces the desired result.
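The rule <math>\hat{U}_1 = Y_1 \oplus Y_2</math> can be sanity-checked by comparing the two likelihoods for every output pair. A small numeric sketch (the example <math>p</math> and the helper name are ours), reading the likelihoods off the <math>W^{-}</math> table above:

<syntaxhighlight lang="python">
p = 0.1  # any example value with 0 < p < 0.5

# From the W^- table: outputs whose parity y1 xor y2 equals u1 have
# probability (1 - 2p + 2p^2)/2; the remaining outputs have p(1 - p).
def likelihood(y1, y2, u1):
    a = 0.5 * (1 - 2 * p + 2 * p ** 2)
    b = p * (1 - p)
    return a if (y1 ^ y2) == u1 else b

for y1 in (0, 1):
    for y2 in (0, 1):
        ml = max((0, 1), key=lambda u1: likelihood(y1, y2, u1))
        assert ml == y1 ^ y2  # the ML decision is exactly Y1 xor Y2
        print(f"(y1, y2) = ({y1}, {y2}) -> ML decision: u1 = {ml}")
</syntaxhighlight>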
We just saw that despite having a quaternary output alphabet, <math>W^{-}</math> is equivalent to a BSC. To get the effective crossover probability of this synthetic channel, we determine the probability that the ML decision <math>\hat{U}_1</math> differs from <math>U_1</math>. This happens with probability <math>2p(1-p)</math>. Intuitively, <math>W^{-}</math> should have less mutual information than the original channel <math>W</math>, since an independent source <math>U_2</math> interferes with <math>U_1</math> on top of the noise of the BSC.

{{Note|Checkpoint: Show that for <math>0 < p < 1/2</math>, <math>p < 2p(1-p)</math>.|reminder}}
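Since <math>W^{-}</math> behaves like a BSC with crossover probability <math>2p(1-p)</math>, the loss in mutual information can be made concrete with the binary entropy function. The sketch below (helper name <code>h2</code> and example <math>p</math> are ours) also uses the standard chain-rule identity <math>I(W^{-}) + I(W^{+}) = 2I(W)</math>, which this section does not derive, to obtain <math>I(W^{+})</math>:

<syntaxhighlight lang="python">
from math import log2

def h2(q):
    """Binary entropy function in bits."""
    return 0.0 if q in (0.0, 1.0) else -q * log2(q) - (1 - q) * log2(1 - q)

p = 0.1
i_w = 1 - h2(p)                    # mutual information of the original BSC(p)
i_minus = 1 - h2(2 * p * (1 - p))  # W^- behaves like a BSC(2p(1-p))
i_plus = 2 * i_w - i_minus         # chain rule: I(W^-) + I(W^+) = 2 I(W)

print(f"I(W^-) = {i_minus:.4f} < I(W) = {i_w:.4f} < I(W^+) = {i_plus:.4f}")
</syntaxhighlight>

For <math>p = 0.1</math> this prints <math>I(W^{-}) \approx 0.32 < I(W) \approx 0.53 < I(W^{+}) \approx 0.74</math>, illustrating the polarization of the two synthetic channels.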
Now, let us consider the other synthetic channel <math>W^{+}: U_2 \rightarrow (U_1, Y_1, Y_2)</math>. This synthetic channel has greater mutual information than the original BSC due to the "stolen" information about <math>U_1</math>. As with the previous synthetic channel, we can produce the transition probability matrix from <math>U_2</math> to <math>(U_1, Y_1, Y_2)</math>, which now has an eight-element output alphabet. To facilitate the discussion, the rows of the table below have been grouped according to their entries: