Difference between revisions of "Channel polarization"

From Microlab Classes

Revision as of 06:48, 7 May 2021

== Synthetic channels ==

== Binary symmetric channel ==

Let <math>p</math> be the crossover probability of a binary symmetric channel (BSC). We can start with the conditional probability of <math>(Y_1, Y_2)</math> given <math>(U_1, U_2)</math>:

{| class="wikitable"
|-
! !! 00 !! 01 !! 10 !! 11
|-
! 00
| <math>(1-p)^2</math> || <math>(1-p)p</math> || <math>p(1-p)</math> || <math>p^2</math>
|-
! 01
| <math>p^2</math> || <math>p(1-p)</math> || <math>(1-p)p</math> || <math>(1-p)^2</math>
|-
! 10
| <math>p(1-p)</math> || <math>p^2</math> || <math>(1-p)^2</math> || <math>(1-p)p</math>
|-
! 11
| <math>(1-p)p</math> || <math>(1-p)^2</math> || <math>p^2</math> || <math>p(1-p)</math>
|}

The synthetic channel <math>W^{-}: U_1 \rightarrow (Y_1, Y_2)</math> can be obtained by marginalizing <math>U_2</math>:

{| class="wikitable"
|-
! !! 00 !! 01 !! 10 !! 11
|-
! 0
| <math>\tfrac{1}{2}\left[(1-p)^2 + p^2\right]</math> || <math>p(1-p)</math> || <math>p(1-p)</math> || <math>\tfrac{1}{2}\left[(1-p)^2 + p^2\right]</math>
|-
! 1
| <math>p(1-p)</math> || <math>\tfrac{1}{2}\left[(1-p)^2 + p^2\right]</math> || <math>\tfrac{1}{2}\left[(1-p)^2 + p^2\right]</math> || <math>p(1-p)</math>
|}

It turns out that the channel <math>W^-</math> reduces to another BSC. To see this, consider the case where <math>(Y_1, Y_2) = (0, 0)</math>. To minimize the error probability, we must decide the value of <math>U_1</math> that has the greater likelihood. For <math>p < \tfrac{1}{2}</math>, <math>\tfrac{1}{2}\left[(1-p)^2 + p^2\right] > p(1-p)</math>, so that the maximum-likelihood (ML) decision is <math>\hat{U}_1 = 0</math>. Using the same argument, we see that the ML decision is <math>\hat{U}_1 = 1</math> for <math>(Y_1, Y_2) = (0, 1)</math>. More generally, the receiver decision is to set <math>\hat{U}_1 = Y_1 \oplus Y_2</math>. Indeed, if the crossover probability is low, it is very likely that <math>Y_1 = X_1</math> and <math>Y_2 = X_2</math>. Solving for <math>U_1 = X_1 \oplus X_2</math> and plugging in <math>Y_1 \approx X_1</math> and <math>Y_2 \approx X_2</math> produces the desired result.
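The XOR decision rule can also be verified by brute force. The sketch below (not part of the original notes; `W_minus` is an illustrative helper) enumerates all four outputs of <math>W^-</math> for a sample crossover probability and checks that the most likely <math>U_1</math> is always <math>Y_1 \oplus Y_2</math>:

```python
# Sketch: verify that the ML estimate of U1 from (Y1, Y2) equals Y1 xor Y2
# for a BSC with a small crossover probability p.
from itertools import product

p = 0.1  # sample crossover probability, p < 1/2

def W_minus(y1, y2, u1):
    """P(Y1=y1, Y2=y2 | U1=u1), averaging over a uniform U2."""
    total = 0.0
    for u2 in (0, 1):
        x1, x2 = u1 ^ u2, u2          # polar transform: X1 = U1 xor U2, X2 = U2
        total += 0.5 * (p if y1 != x1 else 1 - p) * (p if y2 != x2 else 1 - p)
    return total

for y1, y2 in product((0, 1), repeat=2):
    ml = max((0, 1), key=lambda u1: W_minus(y1, y2, u1))
    assert ml == y1 ^ y2              # ML decision matches the XOR rule
print("ML decision equals Y1 xor Y2")
```

Changing `p` to any value below one half leaves the decision rule unchanged, which is what makes the reduction to a BSC possible.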

{| class="wikitable"
|-
! !! 00, 11 !! 01, 10
|-
! 0
| <math>(1-p)^2 + p^2</math> || <math>2p(1-p)</math>
|-
! 1
| <math>2p(1-p)</math> || <math>(1-p)^2 + p^2</math>
|}

We just saw that despite having a quaternary output alphabet, <math>W^-</math> is equivalent to a BSC. To get the effective crossover probability of this synthetic channel, we just determine the probability that the ML decision is not the same as <math>U_1</math>. This will happen with probability <math>2p(1-p)</math>. Intuitively, <math>W^-</math> should have less mutual information compared to the original channel since an independent source <math>U_2</math> interferes with <math>U_1</math> on top of the effects of the BSC.

Checkpoint: Show that for <math>0 < p < \tfrac{1}{2}</math>, <math>2p(1-p) > p</math>.
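The degradation can also be seen numerically. The sketch below (illustrative, not the official checkpoint solution; `h2` is an assumed helper for the binary entropy function) compares the mutual information of the original BSC with that of the reduced <math>W^-</math> under uniform inputs:

```python
# Sketch: compare I(W) = 1 - h2(p) with I(W-) = 1 - h2(2p(1-p)) for a
# sample p, where h2 is the binary entropy function in bits.
from math import log2

def h2(q):
    """Binary entropy function in bits."""
    return 0.0 if q in (0.0, 1.0) else -q * log2(q) - (1 - q) * log2(1 - q)

p = 0.1
p_minus = 2 * p * (1 - p)      # effective crossover probability of W-

I_W = 1 - h2(p)                # mutual information of BSC(p), uniform input
I_W_minus = 1 - h2(p_minus)    # mutual information of the reduced W-

print(I_W_minus < I_W)         # W- is strictly worse for 0 < p < 1/2
```

Since <math>2p(1-p) > p</math> on this range and <math>h_2</math> is increasing on <math>[0, \tfrac{1}{2}]</math>, the comparison holds for every <math>0 < p < \tfrac{1}{2}</math>, not just the sampled value.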

Now, let us consider the other synthetic channel <math>W^+: U_2 \rightarrow (U_1, Y_1, Y_2)</math>. This synthetic channel has a greater mutual information compared to the original BSC due to the "stolen" information about <math>U_1</math>. As with the previous synthetic channel, we can produce the transition probability matrix from <math>U_2</math> to <math>(U_1, Y_1, Y_2)</math>, which now has an eight-element output alphabet. To facilitate the discussion, the columns of the table below have been grouped according to their entries:

{| class="wikitable"
|-
! !! 000 !! 110 !! 001 !! 010 !! 100 !! 111 !! 011 !! 101
|-
! 0
| <math>\tfrac{(1-p)^2}{2}</math> || <math>\tfrac{(1-p)^2}{2}</math> || <math>\tfrac{p(1-p)}{2}</math> || <math>\tfrac{p(1-p)}{2}</math> || <math>\tfrac{p(1-p)}{2}</math> || <math>\tfrac{p(1-p)}{2}</math> || <math>\tfrac{p^2}{2}</math> || <math>\tfrac{p^2}{2}</math>
|-
! 1
| <math>\tfrac{p^2}{2}</math> || <math>\tfrac{p^2}{2}</math> || <math>\tfrac{p(1-p)}{2}</math> || <math>\tfrac{p(1-p)}{2}</math> || <math>\tfrac{p(1-p)}{2}</math> || <math>\tfrac{p(1-p)}{2}</math> || <math>\tfrac{(1-p)^2}{2}</math> || <math>\tfrac{(1-p)^2}{2}</math>
|}

In the transition probability matrix, we see that there are four columns where the receiver will not be able to tell whether <math>U_2 = 0</math> or <math>U_2 = 1</math>. We can call such a scenario an erasure, and say that the transmitted bit is erased with probability <math>4 \cdot \tfrac{p(1-p)}{2} = 2p(1-p)</math>. Clearly, we cannot reduce this synthetic channel to a BSC since there are no erasures in a BSC. Regardless, we can still come up with the following more manageable reduction:

{| class="wikitable"
|-
! !! 000, 110 !! 001, 010, 100, 111 !! 011, 101
|-
! 0
| <math>(1-p)^2</math> || <math>2p(1-p)</math> || <math>p^2</math>
|-
! 1
| <math>p^2</math> || <math>2p(1-p)</math> || <math>(1-p)^2</math>
|}
Checkpoint: Show that the mutual information is preserved after the reduction.
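This checkpoint can be sanity-checked numerically. The sketch below (illustrative, not the official solution; `mutual_info` is an assumed helper) computes the mutual information of both the eight-column table and its three-column reduction, assuming a uniform input:

```python
# Sketch: mutual information is unchanged when output symbols with
# identical likelihood columns are merged (they induce the same posterior).
from math import log2

def mutual_info(rows):
    """I(input; output) in bits for a uniform binary input and the given
    transition probability rows (each row sums to 1)."""
    py = [sum(r[j] for r in rows) / 2 for j in range(len(rows[0]))]
    h_y = -sum(q * log2(q) for q in py if q > 0)
    h_y_given_x = -sum(0.5 * v * log2(v) for r in rows for v in r if v > 0)
    return h_y - h_y_given_x

p = 0.2
a, b, c = (1 - p) ** 2 / 2, p * (1 - p) / 2, p ** 2 / 2

# Full eight-column table for W+, columns grouped as in the text.
full = [
    [a, a, b, b, b, b, c, c],   # U2 = 0
    [c, c, b, b, b, b, a, a],   # U2 = 1
]
# Reduced three-column table: each group of identical columns merged.
reduced = [
    [(1 - p) ** 2, 2 * p * (1 - p), p ** 2],
    [p ** 2, 2 * p * (1 - p), (1 - p) ** 2],
]

print(abs(mutual_info(full) - mutual_info(reduced)) < 1e-9)
```

The equality holds because merging output symbols that share the same posterior on the input is a sufficient statistic, so no information about <math>U_2</math> is lost.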

== Binary erasure channel ==

Let <math>\epsilon</math> be the erasure probability of a binary erasure channel (BEC). As with the BSC, we can start with the conditional probability of <math>(Y_1, Y_2)</math> given <math>(U_1, U_2)</math>.

{| class="wikitable"
|-
! !! 00 !! 0? !! 01 !! ?0 !! ?? !! ?1 !! 10 !! 1? !! 11
|-
! 00
| <math>(1-\epsilon)^2</math> || <math>(1-\epsilon)\epsilon</math> || <math>0</math> || <math>\epsilon(1-\epsilon)</math> || <math>\epsilon^2</math> || <math>0</math> || <math>0</math> || <math>0</math> || <math>0</math>
|-
! 01
| <math>0</math> || <math>0</math> || <math>0</math> || <math>0</math> || <math>\epsilon^2</math> || <math>\epsilon(1-\epsilon)</math> || <math>0</math> || <math>(1-\epsilon)\epsilon</math> || <math>(1-\epsilon)^2</math>
|-
! 10
| <math>0</math> || <math>0</math> || <math>0</math> || <math>\epsilon(1-\epsilon)</math> || <math>\epsilon^2</math> || <math>0</math> || <math>(1-\epsilon)^2</math> || <math>(1-\epsilon)\epsilon</math> || <math>0</math>
|-
! 11
| <math>0</math> || <math>(1-\epsilon)\epsilon</math> || <math>(1-\epsilon)^2</math> || <math>0</math> || <math>\epsilon^2</math> || <math>\epsilon(1-\epsilon)</math> || <math>0</math> || <math>0</math> || <math>0</math>
|}

The synthetic channel <math>W^{-}: U_1 \rightarrow (Y_1, Y_2)</math> can be obtained by marginalizing <math>U_2</math>,

{| class="wikitable"
|-
! !! 00 !! 0? !! 01 !! ?0 !! ?? !! ?1 !! 10 !! 1? !! 11
|-
! 0
| <math>\tfrac{(1-\epsilon)^2}{2}</math> || <math>\tfrac{(1-\epsilon)\epsilon}{2}</math> || <math>0</math> || <math>\tfrac{\epsilon(1-\epsilon)}{2}</math> || <math>\epsilon^2</math> || <math>\tfrac{\epsilon(1-\epsilon)}{2}</math> || <math>0</math> || <math>\tfrac{(1-\epsilon)\epsilon}{2}</math> || <math>\tfrac{(1-\epsilon)^2}{2}</math>
|-
! 1
| <math>0</math> || <math>\tfrac{(1-\epsilon)\epsilon}{2}</math> || <math>\tfrac{(1-\epsilon)^2}{2}</math> || <math>\tfrac{\epsilon(1-\epsilon)}{2}</math> || <math>\epsilon^2</math> || <math>\tfrac{\epsilon(1-\epsilon)}{2}</math> || <math>\tfrac{(1-\epsilon)^2}{2}</math> || <math>\tfrac{(1-\epsilon)\epsilon}{2}</math> || <math>0</math>
|}

A maximum-likelihood receiver will decide that <math>U_1 = 0</math> if it is more likely than <math>U_1 = 1</math>. If there are ties, we declare an erasure, denoted by <math>?</math>. This allows us to reduce the nine-element output alphabet into three groups. The receiver declares that <math>\hat{U}_1 = 0</math> if <math>Y_1 \oplus Y_2 = 0</math> and <math>\hat{U}_1 = 1</math> if <math>Y_1 \oplus Y_2 = 1</math>. This is very similar to the XOR decisions in our discussion of the BSC. The difference now is that several combinations may correspond to an erasure.

{| class="wikitable"
|-
! !! 00, 11 !! 0?, ?0, ??, ?1, 1? !! 01, 10
|-
! 0
| <math>(1 - \epsilon)^2</math> || <math>2\epsilon - \epsilon^2</math> || <math>0</math>
|-
! 1
| <math>0</math> || <math>2\epsilon - \epsilon^2</math> || <math>(1 - \epsilon)^2</math>
|}
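The effective erasure probability can be confirmed by enumerating the two independent erasure events. A minimal sketch (not part of the original notes):

```python
# Exact enumeration over the two erasure events of the underlying BEC,
# confirming that the XOR-based decision for U1 fails whenever either
# output is erased, i.e. with probability 2*eps - eps**2.
eps = 0.3
p_erase = 0.0
for e1 in (False, True):          # Y1 erased?
    for e2 in (False, True):      # Y2 erased?
        prob = (eps if e1 else 1 - eps) * (eps if e2 else 1 - eps)
        if e1 or e2:              # U1 = Y1 xor Y2 needs both symbols
            p_erase += prob
print(abs(p_erase - (2 * eps - eps ** 2)) < 1e-12)
```

This matches the middle column of the reduced table: all five output groups containing at least one <math>?</math> lead to an erasure declaration.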

The other synthetic channel <math>W^+: U_2 \rightarrow (U_1, Y_1, Y_2)</math> has an output alphabet of size 18. For brevity, the table below shows the reduced transition probability matrix. Upon inspection, we see that the reduced channel is equivalent to another BEC with erasure probability <math>\epsilon^2</math>. For all <math>0 < \epsilon < 1</math>, <math>\epsilon^2 < 2\epsilon - \epsilon^2</math>, which guarantees that <math>W^{+}</math> has higher mutual information compared to <math>W^-</math>.

{| class="wikitable"
|-
! !! 000, 00?, 0?0, 1?0, 110, 11? !! 0??, 1?? !! 0?1, 01?, 011, 10?, 101, 1?1
|-
! 0
| <math>1 - \epsilon^2</math> || <math>\epsilon^2</math> || <math>0</math>
|-
! 1
| <math>0</math> || <math>\epsilon^2</math> || <math>1 - \epsilon^2</math>
|}
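The erasure probability of <math>W^+</math> can be checked the same way. The sketch below (illustrative, not from the original notes) uses the fact that the <math>W^+</math> receiver knows <math>U_1</math>, so either output symbol alone suffices to recover <math>U_2</math>:

```python
# Exact enumeration: the W+ receiver recovers U2 from Y2 directly, or from
# U1 xor Y1 since X1 = U1 xor U2. It is stuck only when BOTH outputs are
# erased, which happens with probability eps**2.
eps = 0.4
p_erase = 0.0
for e1 in (False, True):          # Y1 erased?
    for e2 in (False, True):      # Y2 erased?
        prob = (eps if e1 else 1 - eps) * (eps if e2 else 1 - eps)
        if e1 and e2:             # no path to U2 remains
            p_erase += prob
print(abs(p_erase - eps ** 2) < 1e-12)
```

This is why the middle column of the table collects exactly the outputs of the form <math>u_1??</math>.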

Finally, observe that the capacity of the original BEC is <math>1 - \epsilon</math>. Since <math>W^-</math> and <math>W^+</math> are also BECs, their capacities are given by similar expressions. Adding the capacities of the synthetic channels gives twice the capacity of the original channel,

<math>\left[1 - (2\epsilon - \epsilon^2)\right] + \left(1 - \epsilon^2\right) = 2(1 - \epsilon).</math>
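The conservation identity above is simple algebra, but it can be spot-checked across a range of erasure probabilities. A minimal sketch (not part of the original notes):

```python
# Check that the synthetic-channel capacities add up to twice the
# original BEC capacity for several erasure probabilities.
for eps in [0.1, 0.25, 0.5, 0.9]:
    c_minus = 1 - (2 * eps - eps ** 2)   # capacity of W-, a BEC(2e - e^2)
    c_plus = 1 - eps ** 2                # capacity of W+, a BEC(e^2)
    assert abs((c_minus + c_plus) - 2 * (1 - eps)) < 1e-12
print("capacity conserved")
```

In other words, polarization redistributes capacity between the two synthetic channels without creating or destroying any of it.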