The bandwidth of a signal is simply the range of frequencies that the signal contains. This range does not have to start at 0 Hz. For example, an FM radio station centered at 99.3 MHz does not have a bandwidth of 99.3 MHz. Its frequency range stretches from 99.225 MHz to 99.375 MHz, so its bandwidth is 99.375 MHz - 99.225 MHz = 150 kHz.

How is the range of frequencies defined? That is, what are the cutoff points to consider when setting the range of frequencies of a signal or a channel? In other words, how strong must a frequency component of a signal be in order to be considered part of that signal? There is no official standard for defining the cutoff point in all situations, but typically we use the 3 dB point, i.e. the point where a frequency component's power is 3 dB less than that of the strongest frequency component (here's a side question for you: in relative terms, how much weaker is a signal that is 3 dB below a reference signal?). Sometimes, though, an absolute signal strength is used as the cutoff. In general, the bandwidth of a signal includes all the frequencies that have appreciable or useful content.
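As a quick check on that side question: a 3 dB drop corresponds to roughly half the power. The short sketch below (plain Python, with helper names of my own choosing) converts between decibels and linear power ratios:

```python
import math

def db_to_power_ratio(db: float) -> float:
    """Convert a decibel difference to a linear power ratio."""
    return 10 ** (db / 10)

def power_ratio_to_db(ratio: float) -> float:
    """Convert a linear power ratio to a decibel difference."""
    return 10 * math.log10(ratio)

# A component 3 dB below the strongest one carries about half its power.
print(db_to_power_ratio(-3))    # ~0.501
print(power_ratio_to_db(0.5))   # ~-3.01
```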

Some terms related to bandwidth:

Baseband – describes signals and systems whose range of frequencies extends from 0 Hz up to some maximum frequency

Narrowband – refers to a signal that takes up a relatively small bandwidth on the frequency spectrum.

Broadband – refers to a signal that takes up a relatively large amount of bandwidth on the frequency spectrum. Broadband can also refer to data transmission in which multiple pieces of data are sent simultaneously to increase the effective rate of transmission.

Passband – a portion of the frequency spectrum between two limits, a lower frequency limit and an upper frequency limit.

Bandlimiting a Signal

An earlier section discussed how any periodic signal can be represented as the summation of a series of sine waves, and we specifically looked at a square wave as an example. That section showed that by using more and more of the harmonics of a square wave, you get a more and more accurate representation of that square wave. This idea can also be looked at from the opposite direction.

What if we have a square wave, and we want to pass it through a system that is bandwidth limiting? Such a system restricts the range of frequencies that can pass through it, so although we start with a square wave, passing it through the system bandlimits the signal and makes it look less “square”-like. The following diagram shows the effects of bandlimiting a square wave:

Bandlimiting a Square Wave

As you can see, bandlimiting (or bandwidth limiting) has the potential to change the characteristics of a signal. So when analyzing a system to determine whether a signal will be able to pass through it, you need to determine whether or not all of the signal's required frequency components can pass through the system.
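To make this concrete, here is a small sketch (using numpy and the standard Fourier series of a square wave) that keeps only the odd harmonics that fit inside a given channel bandwidth, which is exactly what a bandlimiting system does to the signal:

```python
import numpy as np

def bandlimited_square_wave(t, fundamental_hz, bandwidth_hz):
    """Sum the odd harmonics of a unit square wave that fall within the bandwidth.

    Fourier series of a square wave: (4/pi) * sum over odd n of sin(2*pi*n*f0*t) / n
    """
    signal = np.zeros_like(t)
    n = 1
    while n * fundamental_hz <= bandwidth_hz:
        signal += (4 / np.pi) * np.sin(2 * np.pi * n * fundamental_hz * t) / n
        n += 2  # a square wave contains only odd harmonics
    return signal

t = np.linspace(0, 2e-3, 2000)                  # 2 ms of time
wide = bandlimited_square_wave(t, 1e3, 20e3)    # 1 kHz square wave through a 20 kHz channel
narrow = bandlimited_square_wave(t, 1e3, 3e3)   # the same wave through a 3 kHz channel
# 'narrow' keeps only the 1 kHz and 3 kHz components, so its edges are visibly rounded.
```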

Information Capacity

The information capacity of a channel is the amount of information that can be passed through the channel in a given time period. It is intimately related to the bandwidth of the channel because the faster a signal can change, the more information it will be able to carry. This relationship is the first formal relationship between bandwidth and information capacity that we will consider. It is called Hartley’s law.  Mathematically it is:

(Information Transmitted) \propto (System Bandwidth) \times (time of transmission)

The next important relationship between bandwidth and information capacity is called the Nyquist rate, which states that for any channel with a bandwidth of B Hz, the maximum number of symbols or code elements that can be resolved per second is 2B. For example, if you have a channel with a bandwidth of 5 MHz, then the maximum number of symbols per second that can be sent through that channel is 10 million.

Nyquist Rate = Maximum Signalling Rate = 2 \times bandwidth
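As a quick check on the 5 MHz example above (a trivial calculation in plain Python, shown only for completeness):

```python
bandwidth_hz = 5e6                  # 5 MHz channel
nyquist_rate = 2 * bandwidth_hz     # maximum signalling rate
print(nyquist_rate)                 # 10,000,000 symbols per second
```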

Shannon Limit

The final important relationship between bandwidth and information capacity was developed by Claude Shannon. In 1948, Shannon published a landmark paper in the field of information theory that related the information capacity of a channel to the channel’s bandwidth and signal-to-noise ratio (the ratio of the strength of the signal to the strength of the noise in the channel). Shannon showed that this relationship is as follows:

I = B \times \log_{2}\left(1+\frac{S}{N}\right) = 3.32B \times \log_{10}\left(1+\frac{S}{N}\right)

  • I = Information capacity in bits per second
  • B = bandwidth in hertz
  • S/N = the signal-to-noise ratio (expressed as a power ratio, not in dB)
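Here is a minimal sketch of this formula in Python (the helper names and the 20 dB example are my own, chosen only for illustration):

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon limit I = B * log2(1 + S/N), with S/N as a linear power ratio."""
    return bandwidth_hz * math.log2(1 + snr_linear)

def db_to_linear(snr_db: float) -> float:
    """Convert an SNR in dB to the linear power ratio the formula expects."""
    return 10 ** (snr_db / 10)

# Hypothetical example: a 5 MHz channel with a 20 dB signal-to-noise ratio.
print(shannon_capacity_bps(5e6, db_to_linear(20)))   # about 33.3 Mbps
```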

Basically what Shannon did was extend the Nyquist rate idea (which states that the maximum number of symbols that can be sent per second is two times the bandwidth) by adding the signal-to-noise ratio to the equation. It seems fairly intuitive that the signal strength and the noise strength are going to affect a receiver’s ability to receive the signal; what Shannon did was quantify this relationship.

The contributing factors to the Shannon limit are the bandwidth, which represents how quickly the symbols can change, and the SNR (signal-to-noise ratio), which determines how many different symbols the system will support. The more symbols there are, the more data there is per symbol, but the more difficult it becomes to distinguish between symbols. As the noise increases it becomes more difficult to distinguish between symbols, so fewer can be used and the information capacity decreases.

It is important to note that the Shannon limit is the absolute maximum rate at which data can pass through a channel. It is of course possible to pass data through at lower rates.

Here is an example:

A system has a signal-to-noise ratio of 1000 (30 dB) and a bandwidth of 2.7 kHz. What is the Shannon limit for this system?

I = (2700)\log_{2}(1+1000) \approx 26.9 kbps

According to Nyquist, a 2.7 kHz system can carry at most 5400 symbols per second, so in order to get a transmission rate of 26.9 kbps, each symbol must contain more than 1 bit (specifically \frac{26.9}{5.4} \approx 5 bits per symbol).
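The same numbers fall out of a short, self-contained check (plain Python; the variable names are mine):

```python
import math

bandwidth_hz = 2700
snr_linear = 1000                        # 30 dB

shannon_limit_bps = bandwidth_hz * math.log2(1 + snr_linear)
nyquist_symbols_per_s = 2 * bandwidth_hz
bits_per_symbol = shannon_limit_bps / nyquist_symbols_per_s

print(round(shannon_limit_bps))          # about 26,900 bps
print(nyquist_symbols_per_s)             # 5400 symbols per second
print(round(bits_per_symbol, 2))         # about 4.98 bits per symbol
```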