Wireless communication devices and networks are nearly everywhere, giving us the ability to communicate with each other from nearly anywhere. Getting in touch with a friend, colleague, client, or employee no longer depends on where you are or where they are. This ability is a boon to business people, students, internet junkies and, really, just about everybody, because our communication is no longer tied to any particular location.

Does this ability to communicate come at a cost to us in any way? There is potentially a social cost, since we no longer have to communicate face-to-face, but I’ll leave the question of a social cost to the psychologists and sociologists (armchair ones and otherwise). There is also potentially a health cost, and I want to make a few comments about electromagnetic radiation and health.

### Ways That EM Radiation is Known to be Harmful

Electromagnetic radiation is undeniably dangerous to your health in two ways. First, if electromagnetic signals have a high enough power, they can literally cook you. If you were able to climb inside a microwave oven and turn it on, you would regret doing so afterwards. Fortunately, you are extremely unlikely to encounter signals powerful enough to cook you in your day-to-day activities. Second, if the electromagnetic radiation is at a high enough frequency, it can ionize atoms and molecules and damage them. If you have ever had a sunburn, you have felt the harmful effects of UV radiation, which is in a class called ionizing radiation; however, communication systems do not use electromagnetic signals in the ionizing part of the spectrum.

UV radiation is in a class of EM radiation that is called ionizing radiation. Ionizing radiation is any kind of radiation that can ionize atoms or molecules. More specifically, the photons of ionizing radiation carry enough energy per photon to damage atoms by knocking electrons out of their orbitals. Energy in a photon is directly proportional to the frequency of the electromagnetic signal (inversely proportional to the wavelength) by Planck’s equation:

E = h\times f
• E is the energy in Joules.
• h is Planck’s constant: 6.63 \times 10^{-34} J \cdot s.
• f is the frequency in Hertz.

In order to ionize oxygen or hydrogen, the atom must be struck by a photon, and that photon must carry at least 14 electron-volts of energy (an eV is approximately 1.6\times 10^{-19} J).

Using Planck’s equation, we can find that 14 eV corresponds to a frequency of about 3.4 PHz, which is in the ultraviolet range. So electromagnetic radiation in the ultraviolet range and above is considered ionizing. While 14 eV is about the minimum energy to cause ionization, the energy levels probably need to be a bit higher to have significant effects on human health, since many of the molecules in the body have higher ionization energies than oxygen or hydrogen.
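A quick sketch of this conversion, using Planck’s equation and the constants above (the 14 eV threshold is the figure from the previous paragraph):

```python
# Convert a photon energy in electron-volts to a frequency via E = h * f.
H = 6.63e-34              # Planck's constant, in J*s
EV_TO_JOULES = 1.602e-19  # one electron-volt, in Joules

def photon_frequency(energy_ev):
    """Frequency (Hz) of a photon carrying the given energy in eV."""
    return energy_ev * EV_TO_JOULES / H

f_ionizing = photon_frequency(14)  # roughly 3.4e15 Hz, i.e. about 3.4 PHz
```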

Fortunately, there are not many sources of ionizing radiation that we encounter daily. The most dangerous ionizing radiation comes from outer space, but the earth’s atmosphere is a good shield against that source. On Earth, there are a few natural sources, such as radon (which can seep into your basement from the ground underneath), potassium-40 (found in bananas), and uranium and polonium (found in tobacco). There are also a few man-made sources, such as medical x-rays, medical and industrial tracers, radiation therapy for treating cancer, and smoke detectors. For the most part, exposure to these sources can be easily minimized.

### Other Effects of EM Radiation on Health

We know that EM radiation with either high enough power or high enough frequency can damage biological tissue. But what about low-powered, non-ionizing EM radiation: can it be harmful? Researchers are still investigating this question and have completed many epidemiological as well as in vitro and in vivo studies to try to answer it. None has been completely conclusive.

A few things that we do know are:

• EM radiation can heat up biological tissue, however, at the very low power of the communication networks we are exposed to, the heating effect is unlikely to be harmful
• EM radiation also seems to have non-heating effects on biological tissue, especially on nervous system tissue (i.e., nerves)
• While these effects are measurable, it is still uncertain whether they are harmful
• Blue light seems to have a suppressive effect on melatonin production and may lead to interference with normal human biological rhythms
• Mobile phone signals are deemed “possibly carcinogenic to humans” by the World Health Organization which means that the risk of cancer cannot be ruled out by the available data.
• Because the results are not conclusive, the WHO recommends following the precautionary principle: “a risk management policy applied in circumstances with a high degree of scientific uncertainty, reflecting the need to take action for a potentially serious risk without awaiting the results of scientific research.”

### Radio Frequencies (3kHz to 300 GHz)

Radio wave frequencies make up a large portion of the electromagnetic spectrum, from 3 kHz up to 300 GHz (corresponding to wavelengths of 100 km down to 1 mm). This part of the spectrum is widely used for communications, and different parts of the radio frequency band are used for different applications. For example, long wavelength communication is good for communicating over long distances because the waves are so long that they bend around large obstacles, even mountains. Short wavelength communication is good for high data rates because the higher the frequency, the more data that can be transmitted. Communication at frequencies above 300 GHz is not effective for wireless communications because these signals do not travel through the atmosphere very well.

The radio frequencies span a large section of the electromagnetic spectrum and are often split into subsections such as:

• VLF (Very Low Frequency): 3 to 30 kHz. Communication over VLF is low bandwidth (see Nyquist), but the large wavelengths (up to 100 km) allow VLF signals to travel around mountain ranges. VLF radio waves can also penetrate sea water better than higher frequencies can.
• LF (Low Frequency): 30 to 300 kHz. LF waves also have long wavelengths (up to 10 km) and can travel around mountains. They also travel effectively as ground waves by following the curve of the earth.
• MF (Medium Frequency): 300 to 3000 kHz. AM radio stations broadcast within the MF band. MF signals travel by ground waves as well as by skywaves (i.e., reflecting or refracting off the ionosphere).
• HF (High Frequency): 3 to 30 MHz. Shortwave radio uses the HF band. International broadcasting stations can use this band to cover large ranges because HF frequencies can also travel by skywave propagation.
• VHF (Very High Frequency): 30 to 300 MHz. The VHF band is used for FM radio and over-the-air TV broadcasts. VHF signals do not travel by skywave and have limited range as ground waves because of their shorter wavelength. Most transmission is based on line of sight.
• UHF (Ultra High Frequency): 300 to 3000 MHz. UHF signals typically do not travel far and are nearly exclusively line-of-sight transmissions. Moisture in the air also has a larger attenuating effect on UHF signals than on lower frequency ones. Under the broad definition of microwave transmission (300 MHz to 300 GHz), UHF falls into the microwave range. The high data rates, as well as the availability of spectrum (including several bands that can be used license free), make UHF a popular communication band. Common applications using UHF include Bluetooth, WiFi, cellular, and cordless telephones.
• SHF (Super High Frequency): 3 to 30 GHz. SHF frequencies also fall in the microwave band. This band is used for satellite communications, WiFi (5 GHz band), and point-to-point microwave communication. Propagation in this band is solely via line of sight.
• EHF (Extremely High Frequency): 30 to 300 GHz. This band is also called millimeter wave because the wavelengths are 1-10 mm in length. This band is mostly unused for communications because the small wavelength leads to high atmospheric losses. There are both licensed and unlicensed bands of EHF reserved for high-speed data links, but these bands are mostly used experimentally at this point. (In)famously, millimeter waves are used in scanners to screen passengers in some airports.
• THF (Tremendously High Frequency): 300 to 3000 GHz. This frequency range is no longer in the RF band, but I wanted to add it anyway because it fits into the <Superlative> High Frequency naming convention.

You’ll notice that all of these frequency ranges begin with a ‘3’. This might seem unusual at first, but the reason is that when you convert the frequencies to wavelengths, you’ll find that the wavelength ranges all start with ‘1’.

\lambda = \frac{c}{f}           (\lambda is the wavelength, c is the speed of light, and f is the frequency)
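A short sketch makes the pattern obvious (c is approximated as 3 × 10^8 m/s):

```python
# lambda = c / f: band edges at 3 * 10^n Hz give wavelengths of 1 * 10^m metres.
C = 3e8  # speed of light in m/s (approximate)

def wavelength(freq_hz):
    """Wavelength in metres for a given frequency in Hz."""
    return C / freq_hz

vlf_start = wavelength(3e3)    # 3 kHz   -> 100,000 m (100 km)
ehf_end = wavelength(300e9)    # 300 GHz -> 0.001 m   (1 mm)
```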

It has also struck me as strange that the different frequency descriptors seem to be set in an arbitrary order. Who was it that decided that Very High < Ultra High < Super High < Extremely High < Tremendously High?

### Microwave Frequencies (300 MHz to 300 GHz)

Microwaves fall within the upper end of the RF band. They have many uses, including:

• Broadcasting transmissions, because microwaves pass easily through the earth’s atmosphere and offer higher bandwidth than the rest of the radio spectrum. For example, television news crews use microwaves to transmit a signal from a remote location to a television station from a specially equipped van.
• Radar to detect the range, speed, and other characteristics of remote objects
• Many communications protocols including Bluetooth (2.4GHz), IEEE802.11g, n, ac (2.4GHz), WiMAX (Worldwide Interoperability for Microwave Access at 2-11GHz), IEEE802.11a(5GHz), Wide Area Mobile Broadband Wireless Access (1.6-2.3GHz), GSM.
• Cable TV and Internet access on coax cable as well as broadcast television use some of the lower microwave frequencies
• Generate plasma for such purposes as reactive ion etching and plasma-enhanced chemical vapor deposition (PECVD).
• Transmit power over long distances
• MASERs, which are devices similar to LASERs but which work at microwave frequencies

### Infrared Frequencies (300 GHz to 400 THz)

The infrared spectrum is often divided into three parts, but the three parts vary depending on which organization is doing the division.

• Far-infrared. This is the part of the infrared spectrum farthest from visible light; its lower end borders on the microwave spectrum. Far infrared is not used for communications, but it transfers its energy as heat when it is absorbed by human bodies. The International Commission on Illumination calls this band IR-C and assigns it the frequency range 300 GHz to 100 THz (1 mm to 3 μm wavelength). ISO 20473 sets it as 300 GHz to 6 THz, and astronomers set it as 850 GHz to 12 THz.
• Mid-infrared. The International Commission on Illumination calls this band IR-B and assigns it the frequency range 100 THz to 215 THz (3 μm to 1.4 μm wavelength). ISO 20473 sets it as 12 THz to 100 THz, and astronomers set it as 12 THz to 120 THz.
• Near-infrared is closest to visible light and can actually be captured by digital cameras. The IrDA standards for infrared communications use the 850-900 nm wavelength range, which falls in the near-infrared band. The International Commission on Illumination calls this band IR-A and assigns it the frequency range 215 THz to 430 THz (1.4 μm to 700 nm wavelength). ISO 20473 sets it as 100 THz to 385 THz, and astronomers set it as 120 THz to 430 THz.

### Visible Radiation (380 THz to 750 THz)

Visible light is a very narrow range on the EM scale, but it is obviously very important because it is the range that the human eye is sensitive to. The colours, in order from lowest frequency to highest, are red, orange, yellow, green, blue, and violet. Visible light is used for communication over fiber optic cables, which carry modulated light signals, and over free space optics, which transmits modulated light signals through free space. Free space optics requires line of sight, and anything that would block visible light will of course block communication.

### Ultraviolet Light (750 THz – 30 PHz)

Ultraviolet or UV light is at frequencies just beyond the visible spectrum. It is highly energetic and can break chemical bonds making molecules reactive. Sunburn is caused by the highly energetic effect of UV light.

### X-Rays (30 PHz to 30000 PHz)

X-rays do not have any communications applications; they are mostly used for medical imaging and crystallography. X-rays are even more energetic than UV light and are therefore potentially more dangerous. Fortunately, exposure to X-rays is rare except during medical imaging.

### Gamma Rays (2.4 EHz and up)

Gamma rays are at the upper end of the EM spectrum. Astronomers monitor gamma rays to study regions of space, and physicists use them to study radioisotopes. The Earth’s atmosphere blocks gamma rays, so exposure to them on Earth is nearly zero.

The electromagnetic spectrum (often just called the spectrum) is the range of all frequencies of electromagnetic radiation. In theory, the frequencies range from 0 Hertz to infinite Hertz. Frequencies of 0 Hertz can exist, but the lowest-frequency non-zero signal known is the 22-year sunspot cycle, which has a frequency of about 1.4 \times 10^{-9} Hz. At the other extreme, the highest frequency measured is a 10^{24} Hz photon generated by colliding electrons with positrons of sufficient energy. In theory, photons of any arbitrarily high frequency could be created, but we are limited by the particle smashing technology that we currently have.
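As a quick check of that sunspot-cycle figure, frequency is just the reciprocal of the period:

```python
# f = 1 / T for the 22-year sunspot cycle.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

period_s = 22 * SECONDS_PER_YEAR
frequency_hz = 1 / period_s   # about 1.4e-9 Hz
```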

While the electromagnetic spectrum is simply a physical property of the world we live in, the use of bands in the electromagnetic spectrum is regulated by the government in most countries. The process of regulating and managing the use of the electromagnetic spectrum is called frequency allocation, or spectrum allocation. Generally, international bodies help guide the national bodies on how to manage frequency allocations because electromagnetic signals do not stop at national boundaries. A number of forums and standards bodies work on standards for frequency allocation, including:

• ITU – International Telecommunications Union
• CEPT – European Conference of Postal and Telecommunications Administrations
• ETSI – European Telecommunications Standards Institute
• CISPR – International Special Committee on Radio Interference
• In Canada, the spectrum is managed by Innovation, Science and Economic Development Canada

## Wired and Wireless Communications

### Wired Communication Systems

Wired communication systems include all communications systems for which data is sent through a wire.

#### Types of Wired Connections

Twisted pair: Consists of a pair of wires that are twisted together. The twisting reduces noise on the wires by cancelling out, to a certain extent, electromagnetic interference from the environment and crosstalk between the transmit and receive pairs.

Coaxial Cable: Coaxial cables consist of a cylindrical wire running down the middle of an insulating sheath. Surrounding the insulating sheath is a conductive sheath, acting simultaneously as a shield and a return path for the signal. Coax cables are highly resistant to noise due to the shielding which keeps most of the EM energy inside the surrounding conductive sheath.

Fiber Optic Cable: A fiber optic cable consists of a very long thin fiber of glass down which light pulses can be sent. The data rates supported by fiber optic networks are incredibly fast.  So fast in fact that most people involved in fiber optic development now say that in relation to network speeds, computers are hopelessly slow, and so we must try to avoid computation at all costs.

Compared to wireless systems, wired connections offer some advantages:

• Higher immunity to outside interference and noise
• Allocation of frequencies is determined by the owner(s) of the wire, not by regulatory authorities

### Wireless Communication Systems

Wireless communications systems are, of course, communications systems that do not use wires. This category could include such anachronisms as smoke signaling and semaphores, but on this site, we are going to study only wireless electromagnetic communications which includes RF, microwave, and light.

Wireless systems have advantages of their own:

• Cheaper to deploy, especially if the network covers a large area with no current coverage, since there is no need to run wires to all the points that need coverage.
• Usually easier to deploy, though that depends a bit on the size of the network; a point-to-point connection might be easier to wire.
• Users of the network are more mobile; they are not tied down to any particular spot.

All systems can be simplified to the following block diagram structure:

The stuff inside the system block may be a whole bunch of subsystems, or it may be simply one block performing one function. Regardless, the system block will have some sort of effect on the signal and that effect will be seen at the output signal. The following subsections describe the different effects that a system may have.

### Gain and Loss

We have already studied gain and loss in systems and know that when a system adds a gain or a loss to a signal, the signal either increases in power and/or amplitude (gain) or decreases in power and/or amplitude (loss).

Practically speaking, this gain or loss results in a multiplication of the input signal by the gain/loss of the system to give the output signal. Or if values are given in dB, then the gain in dB is added to the signal. Mathematically, the relationship between input and output can be expressed as

Output = Gain \times Input

or as:

Output(dB) = Gain(dB) + Input(dB)

For the first equation, the input and output signals would typically be in Watts or Volts. For the second equation, the input and output signals should be in dBm, dBW, or dBV (or some other type of power or amplitude measurement expressed in dB).

System gain is usually the result of an active process. In other words, it is the purposeful adding of energy to the system to increase the signal amplitude and power. System loss may be purposeful attenuation of a signal that is too strong, but it is more often simply the result of natural signal attenuation. For example, as signals propagate through the air, they lose strength as they travel away from the transmission point. Similarly, as signals travel down a wire, they lose strength due to the resistance/impedance of the wire.
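The two forms of the gain relationship can be sketched as follows; the input power and gain values are arbitrary examples, and the final check confirms that the linear and dB forms agree:

```python
import math

def apply_gain_linear(p_in_w, gain):
    """Output = Gain * Input, with powers in Watts."""
    return gain * p_in_w

def apply_gain_db(p_in_dbm, gain_db):
    """Output (dBm) = Gain (dB) + Input (dBm)."""
    return p_in_dbm + gain_db

p_out_w = apply_gain_linear(1e-3, 100)   # 1 mW through a gain of 100 -> 0.1 W
p_out_dbm = apply_gain_db(0.0, 20.0)     # 0 dBm + 20 dB -> 20 dBm

# 0.1 W expressed in dBm is 10*log10(0.1 / 0.001) = 20 dBm, so the forms agree.
assert math.isclose(10 * math.log10(p_out_w / 1e-3), p_out_dbm)
```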

### Filtering and Bandlimiting

We have also already studied filtering and bandlimiting (which are more or less interchangeable terms).  They both refer to removing a range of frequencies from the signal. For now, it is sufficient to understand that when you filter a signal, there will be a lower cutoff frequency and an upper cutoff frequency and only frequencies between those two limits will be allowed through. Low pass filters are filters that have a lower cutoff frequency of 0 Hertz and all frequencies below the upper limit are allowed through. High pass filters are filters that have an upper cutoff approaching infinity and all frequencies above the lower cutoff are allowed through.

### Noise

All systems are going to add noise, and we have seen one method of quantifying the amount of noise added – the noise factor and noise figure. Noise is any unwanted signal and will obscure the signal, making it more difficult to discern.

In order to include noise when we are talking about a signal, we need to talk about the signal to noise ratio. Noise is never going to be zero, so there will always be a noise component passing through the system along with the signal. To include the noise added by the system, add the noise power of the system to the noise power of the signal going through. It is important that these powers be in Watts; it is not possible to add together power values when they are in dBm.

Example:

An input signal has a power of 100mW, and the noise on the signal is 3 \times 10^{-5} Watts.  If a system adds 5 \times 10^{-6} Watts of noise, what is the output SNR?

Signal Power Out = Signal Power In = 100mW

Noise Power Out = Noise Power In + System Noise Power = 3.5 \times 10^{-5} W

SNR_{out} = \frac{0.1 W}{3.5 \times 10^{-5} W} \approx 2857
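The same calculation as a short script (values from the example above):

```python
# Noise powers add in Watts, never in dBm.
signal_in_w = 0.1        # 100 mW
noise_in_w = 3e-5        # noise already on the signal
system_noise_w = 5e-6    # noise added by the system

signal_out_w = signal_in_w                  # the system adds no gain or loss
noise_out_w = noise_in_w + system_noise_w   # 3.5e-5 W
snr_out = signal_out_w / noise_out_w        # about 2857
```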

### Changing the Signal Type

Changing the signal type is a system function that we have not yet encountered. This system function involves changing a signal from one of the types covered here into another signal type. The following are examples of these conversions:

• analog ↔ digital
• continuous ↔ discrete
• deterministic ↔ random

Analog to digital converters (or ADCs) are systems that convert analog signals into digital ones. They do this by periodically sampling the analog signal and converting each sample into a digital number representing the signal’s analog value. We will look at ADCs in more detail later in the course.

Digital to analog converters (or DACs – pronounced “dac”) are systems that convert digital signals into analog ones. These systems take the digital number representing the signal’s value and output it on a continuous scale. We will look at DACs in more detail later in the course and will examine such things as how the converter fills in all of the values between the digital values to make the signal continuous.

Most of the time, ADCs also convert the signal from a continuous one into a discrete one.  Similarly, DACs also usually convert the signal from discrete into continuous.  We will study all of these cases in a later chapter.
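As an illustration of the quantization at the heart of these conversions, here is a minimal sketch of a 3-bit ADC and its matching DAC (real converters handle input ranges, rounding, and reconstruction quite differently; the 3-bit resolution and 0-1 V range are arbitrary choices for the example):

```python
BITS = 3
LEVELS = 2 ** BITS   # a 3-bit ADC has 8 output codes

def adc(voltage):
    """Quantize a voltage in [0, 1) volts to an integer code 0..LEVELS-1."""
    return min(LEVELS - 1, int(voltage * LEVELS))

def dac(code):
    """Map a digital code back to the centre of its voltage step."""
    return (code + 0.5) / LEVELS

code = adc(0.7)      # 0.7 V falls in step 5 of 8
approx = dac(code)   # 0.6875 V: within half a step of the original
```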

### Modulation/Demodulation

Modulation involves taking a carrier signal, which is a single frequency, and modifying it somehow (modulating it) to add information to that signal. By changing one or more of the frequency, amplitude, or phase of the carrier, information from a second signal can be added to the carrier. This information carrying signal can be an analog signal (such as audio) or a digital signal.

Simple examples of analog modulators are AM and FM radio transmitters:

• AM transmission systems add information to the single frequency carrier by modulating the amplitude using the information signal.
• FM transmission systems add information to the single frequency carrier by modulating the frequency using the information signal.

Digital modulators put the data bits on to the carrier by either

• changing the amplitude of the carrier to one of two or more discrete values. This is called amplitude shift keying (ASK)
• changing the frequency of the carrier to one of two or more discrete values. This is called frequency shift keying (FSK)
• changing the phase of the carrier to one of two or more discrete values. This is called phase shift keying (PSK)
• changing both the amplitude and phase of the carrier to one of two or more discrete values. This is called quadrature amplitude modulation (QAM)

Demodulation is the process of extracting the original information from the modulated carrier. Modems are devices that can do both MODulation and DEModulation.
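As a small illustration, here is a sketch of binary PSK: each data bit selects one of two carrier phases, 180 degrees apart (the carrier frequency, sample rate, and samples-per-bit values are arbitrary choices for the example):

```python
import math

CARRIER_HZ = 1000.0     # carrier frequency
SAMPLE_RATE = 8000.0    # samples per second
SAMPLES_PER_BIT = 8     # one carrier cycle per bit at these rates

def bpsk_modulate(bits):
    """Return carrier samples with phase 0 for a 1 bit and pi for a 0 bit."""
    samples = []
    for i, bit in enumerate(bits):
        phase = 0.0 if bit else math.pi
        for n in range(SAMPLES_PER_BIT):
            t = (i * SAMPLES_PER_BIT + n) / SAMPLE_RATE
            samples.append(math.cos(2 * math.pi * CARRIER_HZ * t + phase))
    return samples

wave = bpsk_modulate([1, 0])
# The second bit's segment is the first segment inverted (180 degrees out of phase).
```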

While signals are the means by which information is propagated, systems are the environment in which the signals are propagated. When we consider signals, we have no choice but to consider the system in which those signals exist because the system will have some sort of effect on the signal. That effect may be positive (amplifying or filtering the signal) or it may be negative (adding noise or attenuating the signal). Look at the block diagram below which is a communication system model. We can see the strong relationship between the system (the boxes) and the signals (the lines connecting the boxes):

As the signal moves through the system it will be affected; sometimes those effects are by design (like the effects of the transmitter on the signal) and sometimes those effects are just incidental (like the introduction of noise by the environment).

The subsections in this section will describe more about what a system can do to a signal.

The behaviour of electromagnetic wave reflection occurs not only with electromagnetic waves propagating through the air, but also with signals propagating through electrical wires and cables. When electromagnetic waves in a wire reach a point where the properties of the wire change, a reflection can occur. Of course this reflection cannot just occur in any old direction, it must occur along the cable itself, so when a signal is reflected, it travels back along the cable in the opposite direction from which it came.

Reflections in a cable or wire occur when there is a discontinuity (a change in impedance) of some kind. This discontinuity may be one of the following:

1. A connector connecting the cable to equipment or another cable.
2. A change in the size of the cable.
3. The end of the cable (it may be left open, or it may be shorted out).
4. A fault or break in the cable, which may be a complete break or just a partial one.
5. Any other case where there is a change in impedance along the path the signal is propagating.

Depending on the type of impedance mismatch the signal encounters, the signal may be partially or fully reflected. If the signal reaches the end of the cable and there is nowhere else to go (because the end is open, or it is shorted to ground), all of the energy will be reflected back to the source. If the signal hits a discontinuity that is not the end of the cable (e.g., a connector or a partial break in the cable), then part of the signal will be reflected, and the rest will propagate forward (but with a loss of some power due to the reflection).

A time domain reflectometer (TDR) takes advantage of this behaviour to determine the location of faults and breaks in cables.

Wikipedia has a good description of how a TDR is used (from http://en.wikipedia.org/wiki/Time-domain_reflectometer):

Consider the case where the far end of the cable is shorted (that is, it is terminated into zero ohms impedance). When the rising edge of the pulse is launched down the cable, the voltage at the launching point “steps up” to a given value instantly and the pulse begins propagating down the cable towards the short. When the pulse hits the short, no energy is absorbed at the far end. Instead, an opposing pulse reflects back from the short towards the launching end. It is only when this opposing reflection finally reaches the launch point that the voltage at this launching point abruptly drops back to zero, signaling the fact that there is a short at the end of the cable. That is, the TDR had no indication that there is a short at the end of the cable until its emitted pulse can travel down the cable at roughly the speed of light and the echo can return back up the cable at the same speed. It is only after this round-trip delay that the short can be perceived by the TDR. Assuming that one knows the signal propagation speed in the particular cable-under-test, then in this way, the distance to the short can be measured.

A similar effect occurs if the far end of the cable is an open circuit (terminated into an infinite impedance). In this case, though, the reflection from the far end is polarized identically with the original pulse and adds to it rather than cancelling it out. So after a round-trip delay, the voltage at the TDR abruptly jumps to twice the originally-applied voltage.

Note that a theoretical perfect termination at the far end of the cable would entirely absorb the applied pulse without causing any reflection. In this case, it would be impossible to determine the actual length of the cable. Luckily, perfect terminations are very rare and some small reflection is nearly always caused.

The magnitude and polarity of the reflection is referred to as the reflection coefficient, or ρ. The coefficient ranges from 1 (open circuit) to -1 (short circuit); a value of zero means that there is no reflection. The reflection coefficient ( \rho) is calculated as follows:

\rho = \frac{Z_t - Z_0}{Z_t + Z_0}

Where Z_0 is defined as the characteristic impedance of the transmission medium and Z_t is the impedance of the termination at the far end of the transmission line.

Any discontinuity can be viewed as a termination impedance and substituted as Zt. This includes abrupt changes in the characteristic impedance. As an example, a trace width on a printed circuit board doubled at its midsection would constitute a discontinuity. Some of the energy will be reflected back to the driving source; the remaining energy will be transmitted.
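The reflection coefficient formula is easy to compute directly; a sketch with the limiting cases from above (the 50 Ω characteristic impedance is just a common example value):

```python
import math

def reflection_coefficient(z_termination, z_characteristic):
    """rho = (Zt - Z0) / (Zt + Z0)."""
    if math.isinf(z_termination):   # open circuit: the limit of the formula is +1
        return 1.0
    return (z_termination - z_characteristic) / (z_termination + z_characteristic)

rho_short = reflection_coefficient(0.0, 50.0)       # short circuit -> -1.0
rho_match = reflection_coefficient(50.0, 50.0)      # matched load  ->  0.0
rho_open = reflection_coefficient(math.inf, 50.0)   # open circuit  -> +1.0
```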

The Wikipedia entry has a few terms that need a little bit of explanation:

• Impedance is essentially the amount of opposition a cable or wire has to the current of a signal propagating down a cable. Impedance does not slow a signal down, but reduces the amount of current that can flow for a given voltage. Impedance can also cause a phase shift between voltage and current
• Transmission Line is simply two long conductors, electrically isolated from each other (except perhaps at the end) but somehow physically connected together. This “physical connection” may be that the two wires are twisted together – giving a twisted pair, it may consist of one wire conductor with the other conductor completely wrapped around it – giving a co-axial cable, or it may even consist of two traces on a printed circuit board. When we talk about “long” conductors, we are using the term “long” in a relative manner. It is long relative to the wavelength of the signals that are passing through it. If the conductors are longer than 1/100th of the wavelength of the signal passing through it, then you have a transmission line.
• Characteristic Impedance is the impedance that a particular transmission line has.

Eric Bogatin uses an analogy for helping you to think about transmission lines.  He says, you need to, in a zen-like manner, “be the signal”.  Imagine you are a signal, just leaving the transmitter on your way down a transmission line.  With your first step, you charge the line where you step to 1 volt.  The transmission line in front of you is still at 0 volts, because you have not reached there yet.  With every successive step, you are probing to see how much current it takes to charge the line up to 1 volt.  Since impedance equals volt/current, as long as the amount of current it takes remains the same, the impedance that you see remains the same.  As long as the impedance that you see remains the same, you can continue on forever.

You take a few more steps, pull more charge from the battery, and leave behind you a wake of charge keeping the line at 1 volt. The line in front of you is still at 0 volts. As long as the impedance to your movement remains the same, you can continue on forever. As soon as you hit somewhere where the impedance changes, you will reflect a little bit (or a lot) of energy back in the direction from which you came.

There are two important things to glean from this analogy. In a transmission line, it takes a finite length of time for a signal to propagate from one end to the other; if the line is long enough, there will be several cycles of a wave, or pulses of a signal on it at one time. Also, a more complete definition of characteristic impedance is that it is the instantaneous impedance that a signal sees as it propagates down the line. If the signal sees a change in the impedance for some reason, then some of the signal will be reflected back towards the source.

Back to TDRs

So you thought you were going to get a little description of what time domain reflectometry is, and what you ended up with was an in-depth discussion of transmission lines. Now, let’s get back to TDRs. A TDR sends a pulse down a transmission line and waits for any reflection that may come back. If there is some sort of impedance discontinuity, then there will be a reflection. The only time there will not be a reflection is if the signal path is terminated to the return path via an impedance that is equal to the characteristic impedance of the signal and return paths. In general, there will not be a perfect impedance match, so there will be some amount of reflection. When the TDR receives the reflection, it measures the time difference between sending the pulse out and receiving the reflection back. Using that time and the speed of light through the transmission line, you can calculate the distance to the impedance mismatch.

The speed of light through a transmission line is a fraction of the speed of light in a vacuum (c), typically between 0.4c and 0.7c. This fraction is called the velocity factor.
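As a concrete illustration, the distance calculation can be sketched as follows. The 0.66 velocity factor and the 100 ns round-trip time are illustrative assumptions, not values from the text; note the division by two, since the pulse travels to the discontinuity and back.

```python
# Estimate the distance to an impedance discontinuity from a TDR measurement.
# The velocity factor of 0.66 (typical for coaxial cable) is an assumed,
# illustrative value.

C = 299_792_458  # speed of light in a vacuum, m/s

def distance_to_fault(round_trip_time_s, velocity_factor):
    """The pulse travels to the fault and back, so divide the path by 2."""
    signal_speed = velocity_factor * C
    return signal_speed * round_trip_time_s / 2

# A reflection returning 100 ns after the pulse was sent:
d = distance_to_fault(100e-9, 0.66)
print(f"{d:.1f} m")  # 9.9 m
```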

Another way to look at impedance mismatch is the Voltage Standing Wave Ratio (VSWR), which characterizes the standing wave that results from the sum of the incident wave and the reflected wave. If you send a sine wave down a transmission line and there are no mismatches, there is of course no reflection. If there is an impedance mismatch, some or all of the wave will be reflected. The incident wave and the reflected wave then interfere with each other, creating a standing wave equal to the sum of the two waves. If you measure the peak-to-peak amplitude at different points along the line, you will find a maximum and a minimum; the ratio of the maximum to the minimum is the VSWR.

VSWR = \frac{V_{max}}{V_{min}} = \frac{V_{incidentWave}+V_{reflectedWave}}{V_{incidentWave}-V_{reflectedWave}} = \frac{Z_L}{Z_0}

Z_L is the impedance after the discontinuity

Z_0 is the original (characteristic) impedance

(The last equality holds for a purely resistive Z_L greater than Z_0; when Z_L is smaller, the ratio inverts to Z_0/Z_L so that the VSWR is always at least 1.)
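A minimal sketch of this calculation, using the reflection coefficient Γ = (Z_L − Z_0)/(Z_L + Z_0), which for resistive mismatches is equivalent to the impedance-ratio form above. The 75 Ω and 50 Ω values are illustrative assumptions.

```python
# Compute VSWR from the load and characteristic impedances via the
# reflection coefficient. Values below are illustrative.

def vswr(z_load, z0):
    gamma = abs((z_load - z0) / (z_load + z0))  # reflection coefficient magnitude
    if gamma == 1:
        return float("inf")  # total reflection (open or short circuit)
    return (1 + gamma) / (1 - gamma)

print(round(vswr(75, 50), 3))  # 1.5  (75-ohm load on a 50-ohm line: 75/50)
print(round(vswr(50, 50), 3))  # 1.0  (matched: no reflection)
```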

Depending on which characteristics of a signal or a system you need to measure, you may be able to use standard instruments, or you may need specialized (and expensive) ones. The equipment described below covers most of the equipment you might use.

## Oscilloscope

An oscilloscope measures the voltage of a signal with respect to time, so it is used for time domain analysis of signals. Oscilloscopes are rated by the bandwidth they can measure; the higher the frequency a scope can capture, the more complicated and expensive its electronics become.

## Spectrum Analyzer

A spectrum analyzer measures the frequency components in a signal. It displays the power of the signal at different frequencies on its output screen.

## Network Analyzer

A network analyzer does not measure traffic on a computer network; it measures signal characteristics on an electrical network. Network analyzers usually operate on very high speed electrical networks and characterize a network by measuring the strengths of incident and reflected waves as signals travel through it.

## Time Domain Reflectometer

A time domain reflectometer (TDR) is used to determine cable length or the distance to a break in a cable. The TDR is very useful for finding faults in buried cable, whether copper or fiber. It works by sending a pulse down the cable and timing how long it takes for a reflection to return. Wherever there is a change in the nature of the cable (e.g., at a break or at a connector), some (or all) of the signal will be reflected instead of passed through. From the round-trip time and the speed of the signal through the cable, the distance to the reflection point can be determined. For more information, see the in-depth discussion of transmission lines and TDRs above.

We’ve talked enough about analyzing signals. It’s now time to talk about the signal’s arch-enemy: noise. Why do I say noise is the signal’s arch-enemy? Because noise is the factor that limits the information capacity of a channel. Look what happens to the Shannon limit if the noise is zero:

I = 3.32B \times log_{10}(1+\frac{S}{N})

• If the noise is zero, the S/N will approach infinity
• If S/N approaches infinity, then so does (1+S/N), so does log(1+S/N), and so does 3.32B \times log_{10}(1+\frac{S}{N})
• In other words, the information capacity of the channel approaches infinity

With no noise, the only limiting factor to the information capacity would be how much information you can pump into the system at a time.

Noise is any undesirable electrical energy that falls within the passband of the intended signal. For example, in all electronic systems in North America there is 60 Hz noise, due to the nature of the AC power delivered by the electric companies. This can affect an audio system because we can hear 60 Hz signals; therefore 60 Hz falls in the passband of an audio system. The 60 Hz “hum” would not affect a WLAN, because those signals are in the GHz range.

Noise falls into one of two general categories: correlated noise, which exists only when the signal exists, and uncorrelated noise, which exists whether there is a signal or not. It is good to understand these different noise sources so that, when setting up a communication system, you can eliminate or at least reduce many of them.

### Uncorrelated Noise

Uncorrelated noise is always present in a system, and it can come from sources external to the system or from devices or circuits inside the system.

#### External Sources

1. Atmospheric Noise – This is commonly called static and is due to electrical disturbances that occur in the atmosphere; a common source is lightning. This source of noise is relatively insignificant above 30 MHz.
2. Extraterrestrial Noise – This is noise generated outside the earth’s atmosphere, and as such, the atmosphere often shields the earth from it. Satellites can be affected by it, though. Its two primary sources are the sun (sunspots and solar flares) and cosmic background noise.
3. Man-Made Noise – This is noise generated by humankind, and it comes from everything from electric motors, to AC circuits, to fluorescent lights, to radio and television stations.

#### Internal Sources

1. Thermal Noise – the noise generated by the thermal agitation of electrons inside an electrical conductor. This kind of noise is sometimes called Brownian noise (after an early researcher), Johnson noise (after another researcher), random noise, or white noise (because, like white light, it contains all frequencies at equal power). Thermal noise is constant across the entire frequency spectrum, which makes it a source of noise for any and all systems.
Thermal noise can be predicted mathematically by:
N = KTB
N = noise power in Watts
K = Boltzmann’s constant: 1.38\times 10^{-23} \frac{joules}{Kelvin}
T = temperature in Kelvin (to convert Celsius to Kelvin, add 273)
B = bandwidth in Hertz
2. Shot Noise – occurs in semiconductors: noise generated by random fluctuations in electric current, which arise because current is carried by discrete electrons that can take different paths as they flow through the device.
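The thermal noise formula N = KTB is easy to evaluate directly. A quick sketch, using an assumed room temperature of 17 °C (290 K) and a 1 MHz bandwidth as illustrative values:

```python
# Thermal noise power from N = kTB. Temperature and bandwidth below are
# illustrative values, not taken from the text.

K = 1.38e-23  # Boltzmann's constant, joules per Kelvin

def thermal_noise_watts(temp_celsius, bandwidth_hz):
    temp_kelvin = temp_celsius + 273  # convert Celsius to Kelvin
    return K * temp_kelvin * bandwidth_hz

# Noise power in a 1 MHz bandwidth at room temperature (17 C = 290 K):
n = thermal_noise_watts(17, 1e6)
print(f"{n:.3e} W")  # 4.002e-15 W
```

Because N scales linearly with bandwidth, halving a receiver's bandwidth halves the thermal noise power it picks up.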

### Correlated Noise

Correlated noise is noise that is somehow related to the signal: if there is no signal, there is no correlated noise. This type of noise is produced by some effect the system has on signals passing through it. A good example is non-linear amplification, in which a signal is distorted as it is amplified. The most extreme case occurs when the amplifier is overdriven and the peaks of the signal get clipped, as in the circuit shown below. The input signal is a 10V peak sine wave, but the amplifier circuit is only powered by +/-10V, so the output is clipped:

The output signal is still periodic but is no longer sinusoidal, which means the new signal contains added harmonics. This effect is called harmonic distortion.

Amplifiers will always create some amount of harmonic distortion in a signal; the situation does not have to be as extreme as the example above.
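The overdriven case described above can be sketched as hard clipping: amplify, then clamp the output to the supply rails. The gain of 2 is an illustrative assumption (the text does not specify the amplifier's gain).

```python
# Hard clipping: a 10 V peak sine wave through an amplifier whose output
# cannot exceed its +/-10 V supply rails. The gain of 2 is assumed.

import math

def clipped_amplifier(v_in, gain=2.0, rail=10.0):
    """Amplify, then clamp the output to the supply rails."""
    v_out = gain * v_in
    return max(-rail, min(rail, v_out))

# Sample one cycle of the 10 V peak input sine wave:
samples = [10.0 * math.sin(2 * math.pi * t / 100) for t in range(100)]
output = [clipped_amplifier(v) for v in samples]

print(max(output), min(output))  # 10.0 -10.0: the peaks are flattened at the rails
```

The flattened peaks are exactly what adds the harmonics: a clipped sine wave starts to resemble a square wave, whose spectrum contains odd harmonics of the fundamental.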

### Noise Calculations

#### Signal to Noise Ratio (S/N or SNR)

The signal to noise ratio is just what you would expect; it’s the ratio of the signal power to the noise power. It is often expressed in dB.

Example: If the strength of a received signal is 2mW and the noise power is 0.2mW, the SNR is:

SNR = \frac{2mW}{0.2mW} = 10

SNR(dB) = 10log_{10}\frac{2mW}{0.2mW} = 10dB

#### Noise Factor (F) and Noise Figure (NF)

Noise factor and noise figure describe how much noise a device or stage of a system adds to a signal passing through it. Take this example block diagram:

An input signal and input noise enter the amplifier. Both get amplified in some manner, and an amplified signal and amplified noise come out. The amplifier may not affect the signal and the noise in the same way, so to examine the relationship between input and output we use the noise factor, which is simply

F = \frac{SNR_{in}}{SNR_{out}}

Alternatively, we can use the noise figure which is:

NF = 10log_{10}\frac{SNR_{in}}{SNR_{out}}

If an amplifier were ideal and affected the signal and the noise exactly the same, the noise factor would be 1 and the noise figure 0 dB. No amplifier or system is ideal, so the noise factor and noise figure give one indication of how non-ideal the amplifier (or other system component) is.

Example

The input signal to an amplifier is 10mW and the noise is 2\times 10^{-11} W. The output signal is 100mW and the output noise is 3\times10^{-8}W. What is the noise factor? What is the noise figure?

SNR_{in} = \frac{10mW}{2\times 10^{-11} W} = 5.0\times10^8

SNR_{out} = \frac{100mW}{3\times 10^{-8} W} = 3.33\times10^6

F = \frac{5.0\times10^8}{3.33\times10^6} = 150.2

NF = 10log_{10}\frac{5.0\times10^8}{3.33\times10^6} = 21.8dB
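The worked example above can be checked with a few lines of arithmetic:

```python
# Reproduce the noise-factor example: 10 mW signal / 2e-11 W noise in,
# 100 mW signal / 3e-8 W noise out.

import math

def noise_factor(sig_in, noise_in, sig_out, noise_out):
    snr_in = sig_in / noise_in
    snr_out = sig_out / noise_out
    return snr_in / snr_out

F = noise_factor(10e-3, 2e-11, 100e-3, 3e-8)
NF = 10 * math.log10(F)  # noise figure in dB
print(round(F, 1), round(NF, 1))  # 150.0 21.8
```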

The bandwidth of a signal is simply the range of frequencies that the signal contains. The range does not always have to start at zero Hertz. For example, an FM radio station centered at 99.3 MHz does not have a bandwidth of 99.3 MHz. Its frequency range stretches from 99.225 MHz to 99.375 MHz, so its bandwidth is 99.375 MHz - 99.225 MHz = 0.150 MHz = 150 kHz.

How is the range of frequencies defined? That is, what are the cutoff points to consider when setting the frequency range of a signal or a channel? In other words, how strong must a frequency component be in order to be considered part of the signal? There is no official standard for defining the cutoff point in all situations, but typically we use the 3 dB point, i.e. the point where a frequency component’s power is 3 dB less than that of the strongest component (here’s a side question for you: in relative terms, how much smaller is a signal that is 3 dB weaker than a reference signal?). Sometimes, though, an absolute signal strength is used as the cutoff. In general, the bandwidth of a signal includes all the frequencies that have appreciable or useful content.

Some terms related to bandwidth:

Baseband – describes signals and systems whose range of frequencies is measured from 0 Hz up to a maximum frequency.

Narrowband – refers to a signal that takes up a relatively small bandwidth on the frequency spectrum.

Broadband – refers to a signal that takes up a relatively large bandwidth on the frequency spectrum. Broadband can also refer to data transmission where multiple pieces of data are sent simultaneously to increase the effective rate of transmission.

Passband – a portion of the frequency spectrum between a lower frequency limit and an upper frequency limit.

### Bandlimiting a Signal

An earlier section discussed how any periodic signal can be represented as the sum of a series of sine waves, using a square wave as the example. That section showed that by including more and more of the harmonics of a square wave, you get a more and more accurate representation of it. The same idea can be looked at from the opposite direction.

What if we have a square wave and we want to pass it through a system that is bandwidth limiting? Such a system limits the range of frequencies that can pass through it, so although we started with a square wave, passing it through the system bandlimits it and makes it look less “square”. The following diagram shows the effects of bandlimiting a square wave:

As you can see, bandlimiting (or bandwidth limiting) has the potential to change the characteristics of a signal. When analyzing a system to determine whether a signal can pass through it, you need to determine whether all of the required frequency components can make it through.
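This effect can be sketched numerically by summing the Fourier series of a square wave but discarding every harmonic above a cutoff frequency. The 1 kHz fundamental and 3 kHz cutoff below are illustrative assumptions; with that cutoff only the first and third harmonics survive, and the reconstructed wave is visibly rounded.

```python
# Bandlimiting a square wave: sum its Fourier series, but keep only the
# harmonics at or below a cutoff frequency. Fundamental and cutoff values
# are illustrative.

import math

def bandlimited_square(t, fundamental_hz, cutoff_hz):
    """Sum the odd harmonics of a unit square wave up to cutoff_hz."""
    total = 0.0
    n = 1
    while n * fundamental_hz <= cutoff_hz:
        total += math.sin(2 * math.pi * n * fundamental_hz * t) / n
        n += 2  # a square wave contains only odd harmonics
    return 4 / math.pi * total

# At the peak of a 1 kHz square wave, with only the 1st and 3rd harmonics
# passing a 3 kHz cutoff, the amplitude undershoots the ideal value of 1:
print(round(bandlimited_square(0.00025, 1000, 3000), 3))  # 0.849
```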

## Information Capacity

The information capacity of a channel is the amount of information that can be passed through the channel in a given time period. It is intimately related to the bandwidth of the channel, because the faster a signal can change, the more information it can carry. The first formal relationship between bandwidth and information capacity that we will consider is called Hartley’s law. Mathematically it is:

(Information Transmitted) \propto (System Bandwidth) \times (time of transmission)

The next important relationship between bandwidth and information capacity is called the Nyquist rate which states that for any channel with a bandwidth of B Hz, the maximum number of symbols or code elements that can be resolved per second is 2B. For example if you have a channel with a bandwidth of 5 MHz, then the maximum number of symbols per second that can be sent through that channel is 10 million.

Nyquist Rate = Maximum Signalling Rate = 2 \times bandwidth

#### Shannon Limit

The final important relationship between bandwidth and information capacity was developed by Claude Shannon. In 1948, Shannon published a landmark paper in the field of information theory that related the information capacity of a channel to the channel’s bandwidth and its signal to noise ratio. Shannon showed that the relationship is as follows:

I = B \times log_{2}(1+\frac{S}{N}) = 3.32B \times log_{10}(1+\frac{S}{N})

• I = Information capacity in bits per second
• B = bandwidth in Hertz
• S/N is the signal to noise ratio

Basically, Shannon extended the Nyquist rate idea (which states that the maximum number of symbols that can be sent per second is two times the bandwidth) by adding the signal to noise ratio to the equation. It seems fairly intuitive that the signal strength and the noise strength are going to affect a receiver’s ability to receive the signal; what Shannon did was quantify this relationship.

The contributing factors to the Shannon limit are the bandwidth, which determines how quickly the symbols can change, and the SNR (signal to noise ratio), which determines how many different symbols the system can support. The more symbols there are, the more data each symbol carries, but the harder it becomes to distinguish between them. As the noise increases, distinguishing between symbols becomes more difficult, so fewer can be used and the information capacity decreases.

It is important to note that the Shannon limit is the absolute maximum rate at which data can pass through a channel. It is of course possible to pass data at lower rates.

Here is an example:

A system has a signal to noise ratio of 1000 (30dB) and a bandwidth of 2.7kHz. What is the Shannon limit for this system?

I = (2700)log_{2}(1+1000) = 26.9 kbps

According to Nyquist, a 2.7kHz system can carry only 5400 symbols per second, so in order to reach a transmission rate of 26.9kbps, each symbol must carry more than 1 bit (specifically \frac{26900}{5400} \approx 5 bits per symbol)
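The example above, together with the Nyquist rate it is compared against, can be checked in a few lines:

```python
# Verify the Shannon-limit example: 2.7 kHz bandwidth, SNR of 1000.

import math

def shannon_limit_bps(bandwidth_hz, snr):
    return bandwidth_hz * math.log2(1 + snr)

def nyquist_rate(bandwidth_hz):
    return 2 * bandwidth_hz  # maximum symbols per second

capacity = shannon_limit_bps(2700, 1000)
symbols = nyquist_rate(2700)
print(round(capacity / 1000, 1))     # 26.9 kbps, matching the example
print(round(capacity / symbols, 2))  # 4.98 bits per symbol, i.e. about 5
```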