161-A1.1

Let's look at a few applications of the concept of information and entropy.

Student Grading

How much information can we get from a single grade? Note that the maximum information occurs when all the grades have equal probability.
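To see why, note that for <math>n</math> equally likely outcomes, each <math>p_i=\tfrac{1}{n}</math>, so the entropy reduces to

:: <math>H\left(P\right)=\sum_{i=1}^n \frac{1}{n}\cdot \log_2\left(n\right)=\log_2\left(n\right)</math>

which is the largest entropy attainable over <math>n</math> outcomes.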

* For Pass/Fail grades, the possible outcomes are: <math>\{P, F\}</math> with probabilities <math>\{\tfrac{1}{2}, \tfrac{1}{2}\}</math>. Thus,

{{NumBlk|::|<math>H\left(P\right)=\sum_{i=1}^n p_i\cdot \log_2\left(\frac{1}{p_i}\right) = \frac{1}{2}\cdot \log_2\left(2\right) + \frac{1}{2}\cdot \log_2\left(2\right) = 1\,\mathrm{bit}</math>|{{EquationRef|1}}}}

* For grades = <math>\{1.0, 2.0, 3.0, 4.0, 5.0\}</math> with probabilities <math>\{\tfrac{1}{5}, \tfrac{1}{5}, \tfrac{1}{5}, \tfrac{1}{5}, \tfrac{1}{5}\}</math>, we get:

{{NumBlk|::|<math>H\left(P\right)=\sum_{i=1}^n p_i\cdot \log_2\left(\frac{1}{p_i}\right) = 5\cdot \frac{1}{5}\cdot \log_2\left(5\right) = 2.32\,\mathrm{bits}</math>|{{EquationRef|2}}}}
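As a quick numerical check of Equations 1 and 2, here is a minimal Python sketch of the entropy sum (the helper name <code>entropy</code> is just for illustration, not part of the notes):

<syntaxhighlight lang="python">
from math import log2

def entropy(probabilities):
    # Shannon entropy in bits: H = sum_i p_i * log2(1/p_i).
    # Outcomes with p = 0 contribute nothing, since p * log2(1/p) -> 0.
    return sum(p * log2(1 / p) for p in probabilities if p > 0)

# Pass/Fail: two equally likely outcomes (Equation 1)
print(entropy([1/2, 1/2]))   # 1.0 bit

# Five equally likely grades {1.0, ..., 5.0} (Equation 2)
print(entropy([1/5] * 5))    # 2.3219... bits, i.e. log2(5)
</syntaxhighlight>

Both printed values match the hand computations above, and the uniform case reproduces <math>\log_2\left(n\right)</math> as expected.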