Engaging introduction to information theory

From Microlab Classes

Revision as of 15:54, 3 February 2022
What's with information theory?

If Newton developed calculus and Einstein developed the theory of relativity, then Claude Shannon, at par with these mathematical geniuses, developed information theory. In 1948, Shannon's published work, "A Mathematical Theory of Communication" [1], tells us how to quantify information. Based on any dictionary, information can mean knowledge, facts, or a message obtained from investigation, study, or instruction. Information from a mathematical perspective means something similar, but with a tiny twist in its definition. Let's take a look at a few examples. Suppose you were told the following:


  1. Classes will start next week!
  2. The entire semester will be suspended due to the pandemic!
  3. AEKG EAKJGRALGN EAFKEA EAFFH

Can you identify which statement conveys the most information? Hopefully, you'll pick statement number 2! If you think about it carefully, the first statement is probably something you know already. It has a high probability of happening, so this information isn't new to you. For the second statement, the chance of this happening is almost close to none, and we'll be the most surprised if it does happen! The second statement, surprisingly, has the most information. Lastly, the third statement is just a jumble of letters without structure; therefore, it has no meaning at all. From the perspective of the English language, the third statement has a very low chance of happening because we know very well that this is not how we structure our sentences in English.
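This notion of surprise can be quantified: Shannon's self-information of an event with probability p is -log2(p) bits, so the rarer the event, the more bits of surprise it carries. A minimal sketch in Python, with made-up probabilities for the first two statements above:

```python
import math

def self_information(p):
    """Surprise of an event with probability p, measured in bits."""
    return -math.log2(p)

# Hypothetical probabilities for the statements above:
p_classes_start = 0.95    # statement 1: almost certain to happen
p_sem_suspended = 0.001   # statement 2: very unlikely to happen

print(self_information(p_classes_start))   # small: barely any surprise
print(self_information(p_sem_suspended))   # large: lots of surprise
```

Note how a sure event (p = 1) carries exactly 0 bits of surprise, which matches the intuition that statement 1 tells you nothing new.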


The example above tells us something about information. Information consists of surprise and meaning. In this class, we are more interested in the surprise part of information because that is something we can measure. There are two motivations for this [2]: (1) Originally, Shannon's information theory was developed for communication systems, where 'surprise' was more relevant. (2) Meaning or semantics is a challenging problem. We won't discuss meaning for now, but those interested might want to check out the branch of artificial intelligence called natural language processing (NLP). The above example is oversimplified, but it should give you a glimpse of what to expect when studying information theory. Let's take a look at a few more interesting examples.

A tale of 2^n paths

When you were young, you probably came across a game called Pinoy Henyo. Pinoy Henyo is an iconic game in the Philippines consisting of two players: (1) a guesser and (2) an answerer. The guesser has a random word stuck on their forehead, and only the answerer can see the word. The word could be an object, thing, place, or person. The guesser asks a series of questions, to which the answerer can only respond with a yes or no, until the guesser correctly guesses the word. In other countries, this game is also called twenty questions. Let's constrain the game to three questions only, and assume that both the guesser and the answerer know the eight possible words. The goal is to guess the word that the answerer thought of. Consider the figure below, which shows the possible paths to the correct answer.


The tree of yes/no paths to the eight possible words (Eight questions.png).

In the figure, all arrows that point up are 'yes' while all arrows that point down are 'no'. Let's say our ideal set of questions would be:

  1. Q1 - Is it a vehicle?
  2. Q2 - Is the size small?
  3. Q3 - Does it carry things?

We get one step closer to the correct answer with every question. Since the questions are answerable only by yes or no, we always obtain a 'bit' of information (pun intended). For example, when we asked Q1 and the answerer responded with a yes, we gained 1 bit of information: we now know we are thinking of a vehicle. When we asked Q2, we received another bit of information, telling us that the vehicle is small. Finally, Q3 gives us a third bit of information. If you were asked to repeat the game, you would still ask 3 questions leading to one of the 8 possible answers. In principle, each question you ask halves the number of possible answers, as depicted in the figure. As the guesser, each answer carries equal 'surprise' for you, but that is beneficial because each response gives you information! Are you starting to see a pattern now?


Our simple three-question example tells us that it takes n yes/no questions to distinguish 2^n different answers. In the case of the twenty questions game, we can arrive at 2^20 = 1,048,576 different possible outcomes. This is an extreme number. The Pinoy Henyo game is more challenging because we have no idea how many questions we need, yet the challenge is to get to the final answer with the least number of questions, or at least before time runs out. In information theory, we are interested in the question: "What question would give me the highest information?" If you think about it carefully, in Pinoy Henyo you would like your partner (i.e., the answerer) to give you a yes rather than a no. The context looks different because we expect so many 'no's, since most of our questions may not be useful. However, when we receive a 'yes', it is surprising, and it gives us valuable information. Doesn't this sound similar to our oversimplified example above? Think about it.
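The counting argument above can be sketched in a few lines of Python (`questions_needed` is a hypothetical helper name, not part of any library):

```python
import math

def questions_needed(num_words):
    """Minimum number of yes/no questions needed to single out one of
    num_words equally likely words: the ceiling of log2(num_words)."""
    return math.ceil(math.log2(num_words))

print(questions_needed(8))   # 3 questions for our 8-word game
print(2 ** 20)               # 1048576 outcomes reachable by twenty questions
```

Each yes/no answer splits the remaining candidates in half, which is exactly why the count grows as a power of two.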


Try to play a game of Pinoy Henyo with a friend or family member. While playing, ponder on how information theory can actually help you.

A simple case of data compression

Information theory was developed in the context of communication. The original problem it tries to solve is how to maximize the transmission of data over some channel. Let's try to understand this through a simple example. Imagine that a guy named Bob wants to express his love for Alice, who lives miles away. Since a picture is worth a thousand words, and Bob decides to go the extra mile, he sends an animated video in 1080p quality running at 30 frames per second (FPS). For now, let's assume that their only means of communication is a satellite link with very little bandwidth (i.e., the satellite can only transfer 19.2 million bit/s). The GIF below shows a cute animation of this.

TODO: Add GIF for no encoding here.


Let's do a bit of math. Each frame of the video is one image. Each image consists of three color channels: RGB. Each color channel consists of 1920 × 1080 pixels, and each pixel value ranges from 0 to 255, which is equivalent to 8 bits per pixel. In total, one image consists of 1920 × 1080 × 3 × 8 = 49,766,400 bits of information. Moreover, the video runs at 30 FPS, so the rate of data that needs to be transmitted over the satellite channel is 49,766,400 × 30, or roughly 1.49 billion bits per second. Clearly, 30 images per second cannot fit in a satellite channel that can only support 19.2 million bit/s. If this happens, Alice needs to buffer the images on her side (i.e., wait a bit until the entire video has loaded). Since Bob wants to make sure that Alice gets the best quality and experience, he decides to study CoE 161 and learn about data compression. After hours of dedication and hard work, Bob developed an encoder that reduces the video so that Alice does not need to wait for buffering. When Bob designed his encoder (and of course the accompanying decoder), he considered the following:


  1. Squeeze out redundant data (information) in the video.
  2. Remove components that are essentially "invisible" to the human eyes.
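Before diving into those two ideas, the bitrate arithmetic for a 1080p, 30 FPS RGB video with 8 bits per channel can be sanity-checked with a short script:

```python
# Back-of-the-envelope check of the raw video bitrate.
width, height = 1920, 1080      # 1080p frame dimensions
channels = 3                    # R, G, B
bits_per_channel = 8            # pixel values 0..255
fps = 30

bits_per_image = width * height * channels * bits_per_channel
bits_per_second = bits_per_image * fps
channel_capacity = 19_200_000   # the 19.2 million bit/s satellite link

print(bits_per_image)                       # 49766400 bits per frame
print(bits_per_second)                      # 1492992000 bits per second
print(bits_per_second / channel_capacity)   # ~78x over the link capacity
```

The raw video needs nearly 78 times more bandwidth than the link provides, which is exactly the gap Bob's encoder has to close.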


Onion skin for pixel animations. For every change in frame, some pixels stay the same. Temporal redundancy exists in this example.

For the first, each video contains redundant information in both space and time. For example, adjacent pixels in a frame may look the same. Sometimes a block of pixels of the same color go together; instead of sending each pixel value, we can just send the coordinates of that block in equation form. Adjacent pixels with similar colors are spatially redundant, and this offers an opportunity to reduce their representation. When it comes to time, temporal redundancy may take the form of small pixel changes between frames. For example, the onion skin image on the right demonstrates how pixel artists animate succeeding frames. The pixels in an animation change positions through time, so we do not need to send the entire image again. We could instead send the change in position or change in color, because these differences can be represented with fewer bits than the full 8 bits per pixel. Overall, any spatial or temporal redundancy can be squeezed out, depending on the characteristics of the original data. In doing so, we reduce the representation of the entire video.
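A toy sketch of exploiting temporal redundancy: instead of sending every frame whole, send only the pixels that changed. The frames here are flat lists of pixel values, a made-up stand-in for real images, and the function names are our own:

```python
def frame_delta(prev, curr):
    """Encoder side: return only the (index, new_value) pairs that
    changed between two frames, exploiting temporal redundancy."""
    return [(i, c) for i, (p, c) in enumerate(zip(prev, curr)) if p != c]

def apply_delta(prev, delta):
    """Decoder side: rebuild the current frame from the previous one."""
    curr = list(prev)
    for i, value in delta:
        curr[i] = value
    return curr

frame1 = [7, 7, 7, 7, 0, 0, 0, 7]
frame2 = [7, 7, 7, 7, 0, 3, 0, 7]   # only one pixel changed
delta = frame_delta(frame1, frame2)
print(delta)                        # [(5, 3)] -- far fewer values than a full frame
assert apply_delta(frame1, delta) == frame2
```

When consecutive frames are mostly identical, the delta is tiny; real video codecs build on this same idea with far more sophisticated prediction.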

For the second item, sometimes there are things that we don't really need in our video. For example, when objects move too fast, they get blurry to the human eye. Again, we don't need to encode each sharp snapshot of the image; if things move too fast, we can just send out a blurred version in the first place, because it doesn't make sense for humans to see something they originally cannot see. Another example, somewhat similar to redundancy, is that some colors may already be indistinguishable to the human eye. Consider the color palette below. We can still distinguish the two ends of the spectrum, but when placed in a video, these colors may not be that visible, especially when objects move. To further reduce the data, we can discard these "invisibles" since they do not contribute to our human perception.

Color example that may show spatial redundancy. Some colors may be invisible to the human eyes.

When Bob made his encoder and decoder, he was able to impressively compress the video and send it to Alice, and Alice can enjoy the video without buffering. The GIF below demonstrates this. You might ask: doesn't it take time to process the data through the encoder and decoder? Yes, it does, but in many cases the encoding and decoding processes are several orders of magnitude faster than communicating the data through a channel. The only cost is the complexity of the encoding and decoding algorithms. We'll get back to this in the latter part of this course.


TODO: Insert GIF for encoding form.

So where does information theory come in? Well, it comes in the redundancy part. When objects and symbols appear too frequently, those data aren't new anymore. We expect them to appear, and we are more interested in encoding information that is infrequent. Data that are less surprising provide good opportunities to be reduced or encoded in a different way. Data that are more surprising need more attention. Later in the course, we will study source coding, which reduces data based on its probability distribution.
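As a tiny preview of source coding, Shannon's entropy measures the average surprise of a symbol distribution, in bits per symbol; it is the lower bound on how short an ideal source coder can make the average codeword. A minimal sketch, with made-up example messages:

```python
import math
from collections import Counter

def entropy_bits(message):
    """Shannon entropy of the symbol distribution in `message`, in bits
    per symbol: H = -sum(p * log2(p)) over the observed symbol frequencies."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Frequent (unsurprising) symbols pull the entropy down:
print(entropy_bits("aaaaaaab"))   # highly redundant: well under 1 bit/symbol
print(entropy_bits("abcdefgh"))   # all symbols equally likely: 3 bits/symbol
```

The redundant message can, in principle, be encoded with far fewer bits per symbol than the uniform one, which is exactly the opportunity source coding exploits.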

Information theory to measure complexity

Figure 1: Measuring complexity (from Ron Eglash, RPI).

For any system we are studying, designing, or building, an interesting and important question is usually: "How complex is this system?" or "How complex should this system be?". In general, we want to be able to compare two systems, and be able to say that one system is more complex than the other. In this case, having a numerical metric would be very convenient.

Possible ways of measuring complexity (obviously not a complete list!):

  1. Human observation, thus making any kind of rating subjective.
  2. Complexity based on the number of parts, components, or distinct elements. However, this depends on the definition of what is a "part". This is dependent on the observer's scale, from functional components down to atoms at the extreme case, as shown in Fig. 1.
  3. Number of dimensions. This is sometimes hard to measure, e.g. degrees of freedom, etc.
  4. Number of parameters controlling the system.
  5. The minimal description needed. However, we need to figure out which language we need to use.
  6. Information content. We then need to figure out how to define and measure information.
  7. Minimum generator or constructor (to build the system). We need to define what tools and methods we need to use to build the system.
  8. Minimum energy and/or time to construct the system. However, some systems evolve with time, so defining the beginning and end points can be difficult.

All of these measures are associated with a model of the system in question. Thus observers could use different models, and therefore come up with different notions of the complexity of the said system. We do not expect to come up with a single universal measure of complexity, but we can explore a framework that is optimal for (a) a particular observer, (b) a particular context, and (c) a particular purpose. Let us focus on the measures related to how unexpected an observation is, an approach known as Information Theory.
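One rough way to make measures (5) and (6) concrete is to use the length of a compressed description as a complexity proxy. This is only a sketch under a particular model (the compressor), which is precisely the observer-dependence noted above; the function name is our own:

```python
import os
import zlib

def complexity_proxy(data: bytes) -> int:
    """A rough, observer-dependent complexity score: the length of the
    shortest description this particular compressor (zlib) can produce."""
    return len(zlib.compress(data, level=9))

regular = b"ab" * 500         # 1000 bytes with an obvious repeating pattern
irregular = os.urandom(1000)  # 1000 bytes of incompressible noise

print(complexity_proxy(regular))    # small: the pattern compresses well
print(complexity_proxy(irregular))  # near 1000: no shorter description found
```

Swapping in a different compressor (a different model) would yield different numbers, illustrating why we should not expect one universal measure of complexity.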

Odd one out!

References

  1. C. E. Shannon, "A Mathematical Theory of Communication," The Bell System Technical Journal, vol. 27, pp. 379–423, 623–656, July, October 1948.
  2. D. Applebaum, Probability and Information: An Integrated Approach, Cambridge University Press, 2008.