EVM and other common RF metrics: a detailed walkthrough

If you write "EVM may deteriorate as the front-end's IL increases" and the reader is an engineer with a weak grasp of the basics (as many engineers in the factory are), the first reaction is "what is EVM?", then "why is EVM related to IL?", and then perhaps "what other indicators is EVM related to?", and so on without end.

So I decided to do something about it: list some common concepts here, offer a few rough notes to start the discussion, and see how it goes.

1. Rx Sensitivity (receive sensitivity)

Receive sensitivity is probably the most basic metric of all. It characterizes the lowest signal level the receiver can recognize without exceeding a given error rate. "Error rate" here is the legacy term from the CS (circuit-switched) era, when sensitivity was usually checked against a BER (bit error rate) or PER (packet error rate) limit. In the LTE era it is defined in terms of throughput instead, since LTE has no circuit-switched voice channel. This is a genuine evolution: for the first time, sensitivity is no longer measured against a "standardized surrogate" such as the 12.2 kbps RMC (reference measurement channel, which represents 12.2 kbps speech coding), but against the throughput the user actually experiences.

2. SNR (signal-to-noise ratio)

When we talk about sensitivity, we inevitably run into SNR (signal-to-noise ratio; for receivers we usually mean the demodulation SNR). The demodulation SNR threshold is the minimum SNR at which the demodulator can still work without exceeding a given error rate. (A classic interview exercise: given a chain of NF and gain figures plus the demodulation threshold, derive the sensitivity.) So where do S and N come from?

S is the signal, the useful part, normally whatever the transmitter of the communication system sends. N is the noise, meaning everything that carries no useful information, and its sources are many. The most fundamental is the famous -174 dBm/Hz natural noise floor. Two things are worth remembering about it. First, it is independent of the type of communication system; in a sense it comes from thermodynamics (so it depends on temperature). Second, it is a noise power density (hence the dBm/Hz dimension): the more bandwidth we receive, the more noise we collect, so the final noise power is the density integrated over the bandwidth.
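The two paragraphs above combine into the classic sensitivity calculation: integrate the -174 dBm/Hz floor over the bandwidth, then add the receiver NF and the demodulation SNR threshold. A minimal sketch; the 9 MHz bandwidth, 7 dB NF and -1 dB threshold are illustrative assumptions, not values from any spec:

```python
import math

def noise_floor_dbm(bandwidth_hz, temp_k=290.0):
    """Thermal noise power in dBm integrated over a bandwidth.
    kT at 290 K is about -174 dBm/Hz."""
    k = 1.380649e-23  # Boltzmann constant, J/K
    noise_w = k * temp_k * bandwidth_hz
    return 10 * math.log10(noise_w / 1e-3)

def sensitivity_dbm(bandwidth_hz, nf_db, snr_min_db):
    """Sensitivity = integrated noise floor + receiver NF + demod SNR threshold."""
    return noise_floor_dbm(bandwidth_hz) + nf_db + snr_min_db

# Hypothetical LTE-like receiver: ~9 MHz occupied bandwidth (50 RB),
# NF = 7 dB, demodulation threshold = -1 dB (all numbers illustrative).
sens = sensitivity_dbm(9e6, 7.0, -1.0)  # roughly -98 dBm
```

The same three-term sum is what the interview exercise above boils down to, once the chain of NF/gain figures has been collapsed into a single cascade NF.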

3. TxPower (transmit power)

The importance of transmit power lies in the fact that the transmitted signal must survive spatial fading before it reaches the receiver, so higher transmit power means longer communication distance.

Should we also care about SNR here? For instance, if our transmitted signal has a poor SNR, will the signal arriving at the receiver also have a poor SNR?

This brings us back to the natural noise floor just mentioned. Assume that spatial fading affects signal and noise equally (in reality it does not: the signal can be coded to resist fading, while the noise cannot), acting like an attenuator. Now suppose the spatial fading is -200 dB, the transmitted signal bandwidth is 1 Hz, the power is 50 dBm, and the transmitter SNR is 50 dB. What SNR does the receiver see?

The signal power reaching the receiver is 50 - 200 = -150 dBm (in 1 Hz). The transmitter's noise is 50 - 50 = 0 dBm; after spatial fading it arrives at 0 - 200 = -200 dBm (in 1 Hz). But by then this noise has been "submerged" below the -174 dBm/Hz natural noise floor, so when we compute the noise at the receiver input, only the -174 dBm/Hz baseline needs to be considered. This holds in most practical communication systems.
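The arithmetic of this worked example can be written out directly, using the same numbers as above:

```python
# Worked example from the text: Tx power 50 dBm in 1 Hz bandwidth,
# Tx SNR 50 dB, path loss 200 dB, natural noise floor -174 dBm/Hz.
tx_power_dbm = 50.0
tx_snr_db = 50.0
path_loss_db = 200.0
noise_floor_dbm_hz = -174.0

rx_signal_dbm = tx_power_dbm - path_loss_db                  # -150 dBm
tx_noise_at_rx = (tx_power_dbm - tx_snr_db) - path_loss_db   # -200 dBm

# The faded transmitter noise sits ~26 dB below the natural floor, so the
# receiver-side noise is dominated by -174 dBm/Hz and the received SNR is:
rx_snr_db = rx_signal_dbm - noise_floor_dbm_hz               # 24 dB (in 1 Hz)
```

Note how the 50 dB transmitter SNR never enters the final answer: the faded transmitter noise is irrelevant once it drops below the natural floor.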

4. ACLR / ACPR

We group these together because both are really forms of "transmitter noise", except that this noise does not fall within the transmit channel itself: it is the part of the transmitter's output that leaks into the adjacent channel. Collectively it can be called adjacent channel leakage.

ACLR and ACPR (actually the same thing; one term is used in terminal tests, the other in base-station tests) are both named after the adjacent channel and, as the name suggests, describe the interference a device causes to other devices. They share one thing: the power of the interfering signal is calculated over one channel bandwidth. This measurement method reveals the intent of the metric: to assess how the transmitter's leakage interferes with a receiver of the same or a similar system. The leakage falls into that receiver's band with the same frequency and bandwidth, and so forms co-channel interference against its wanted signal.

In LTE, the ACLR test has two configurations, EUTRA and UTRA. The former describes LTE-to-LTE interference, while the latter covers LTE-to-UMTS interference. Accordingly, the measurement bandwidth of the E-UTRA ACLR is the bandwidth occupied by the LTE RBs, while the measurement bandwidth of the UTRA ACLR is the bandwidth occupied by a UMTS signal (3.84 MHz for the FDD system, 1.28 MHz for the TDD system). In other words, ACLR/ACPR describes "peer-to-peer" interference: interference from a transmitted signal to the same or a similar communication system.
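As a sketch of how an ACLR number comes about, the toy computation below integrates a sampled PSD over the main channel and over one adjacent channel, using the same measurement bandwidth for both. The 9 MHz bandwidth, 10 MHz channel spacing, and the flat toy PSD are assumptions for illustration, not 3GPP settings:

```python
import math

def aclr_db(psd, freqs, chan_bw, chan_spacing):
    """ACLR sketch: power integrated over the main channel divided by power
    integrated over the adjacent channel, same measurement bandwidth for
    both. `psd` holds linear power per frequency bin."""
    def band_power(center):
        lo, hi = center - chan_bw / 2, center + chan_bw / 2
        return sum(p for f, p in zip(freqs, psd) if lo <= f < hi)
    return 10 * math.log10(band_power(0.0) / band_power(chan_spacing))

# Toy PSD: flat main channel at 1 mW/bin, adjacent leakage 40 dB down.
freqs = [f * 1e5 for f in range(-150, 150)]   # 100 kHz bins, +/- 15 MHz
psd = [1.0 if abs(f) < 4.5e6 else 1e-4 for f in freqs]
aclr = aclr_db(psd, freqs, chan_bw=9e6, chan_spacing=10e6)   # ~40 dB
```

The key point the code makes explicit: both integrations use one channel bandwidth, which is exactly what distinguishes ACLR from the mask-style SEM measurement discussed later.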

This definition has great practical significance. In a live network, signals constantly leak between co-channel and adjacent-channel neighboring cells, so network planning and optimization is really the process of maximizing capacity while minimizing interference. A system's own adjacent channel leakage is a typical source of interference to neighboring cells; seen from the other side, users' handsets in a dense crowd can also become sources of interference to one another.

Similarly, as communication systems evolve, a "smooth transition" has always meant upgrading to the next-generation network on top of the existing one, so the coexistence of two or even three generations of systems requires considering inter-system interference. LTE's introduction of the UTRA ACLR addresses exactly the RF interference LTE causes to the previous-generation UMTS system while the two coexist.

5. Modulation Spectrum / Switching Spectrum

Returning to the GSM system, the Modulation Spectrum and the Switching Spectrum (sometimes rendered as the transient spectrum; translations vary) play a role similar to adjacent channel leakage. The difference is that their measurement bandwidth is not the occupied bandwidth of the GSM signal. By definition, the modulation spectrum can be read as measuring interference between synchronized systems, and the switching spectrum as measuring interference between unsynchronized systems (in fact, without gating, the switching spectrum would swamp the modulation spectrum).

This touches on another concept: in the GSM system, cells are not synchronized with one another, even though GSM uses TDMA; by contrast, in TD-SCDMA and later TD-LTE, cells are synchronized (the flying-saucer or ball-head GPS antenna is an embarrassment the TDD systems have never managed to shake off).

Because cells are not synchronized, power leaking from the rising/falling edge of a burst in cell A may land on the payload portion of cell B, and the switching spectrum measures the transmitter's adjacent channel interference in that state. Within the whole 577 us GSM timeslot, however, the rising and falling edges occupy only a small fraction; most of the time the payloads of two neighboring cells overlap in time, and for that case the modulation spectrum is the reference for the transmitter's adjacent channel interference.

6. SEM (Spectrum Emission Mask)

When discussing the SEM, first note that it is an "in-band metric", which distinguishes it from spurious emissions. The latter in a broad sense contains the SEM, but focuses on spectral leakage outside the transmitter's operating band, and was introduced largely from the perspective of EMC (electromagnetic compatibility).

The SEM provides a "spectral mask" against which the transmitter's in-band spectral leakage is checked for points that exceed the mask limits. It is related to ACLR, but differs: ACLR considers the average power leaked into the adjacent channel, using the channel bandwidth as the measurement bandwidth, so it reflects the transmitter's "noise floor" in the adjacent channel; the SEM uses a small measurement bandwidth (often 100 kHz to 1 MHz) to catch the points in the adjacent band that poke above the limit, so it reflects "spurious emissions riding on top of the noise floor".

If you sweep the SEM with a spectrum analyzer, you will see that the spurious points in the adjacent channel generally sit above the ACLR average, so if the ACLR itself has no margin, the SEM will easily fail. The converse does not hold: an SEM failure does not necessarily mean the ACLR is bad. A common case is an LO spur, or a mixing product of some clock with the LO (usually narrowband, resembling a point frequency), coupling into the transmitter chain: the ACLR may be fine while the SEM still fails.

7. EVM (Error Vector Magnitude)

First, the error vector is a vector quantity: it has both magnitude and angle, and it measures the error between the actual signal and the ideal signal. This is an effective way to express the "quality" of the transmitted signal: the farther an actual constellation point lies from the ideal one, the larger the error, and the larger the EVM.

Earlier, under TxPower, we explained why the SNR of the transmitted signal is usually not so important, for two reasons: first, the transmitted signal's SNR is typically far higher than what the receiver needs for demodulation; second, receiver sensitivity is computed for the worst case, where after heavy spatial fading the transmitter's noise is already buried under the natural noise floor and the wanted signal has been attenuated down near the receiver's demodulation threshold.

However, the transmitter's "intrinsic signal-to-noise ratio" does need to be considered in some cases, such as short-range wireless communication, the 802.11 family being the typical example.

By the time the 802.11 family evolved to 802.11ac, 256QAM modulation had been introduced. Even ignoring spatial fading, demodulating such a high-order quadrature modulation demands a high SNR at the receiver. The worse the transmitter's EVM, the worse the signal's SNR, and the harder demodulation becomes.

Engineers working on 802.11 systems tend to use EVM to measure Tx linearity; engineers working on 3GPP systems prefer ACLR/ACPR/spectrum metrics for the same purpose.

By origin, 3GPP is the evolution of cellular communication, which from the very beginning had to mind adjacent channel and alternative channel interference. In other words, interference is the first great obstacle limiting cellular data rates, so throughout its evolution 3GPP has always aimed at "interference minimization": frequency hopping in the GSM era, spreading in the UMTS era, and the introduction of the RB concept in the LTE era.

The 802.11 family evolved from fixed wireless access and is rooted in the spirit of the TCP/IP protocols, with "best effort" as its goal. 802.11 often relies on time sharing or frequency hopping for multi-user coexistence; the network is more flexible (it is LAN-based, after all), and so is the channel bandwidth. On the whole it is not sensitive to interference (or rather, tolerant of it).

In plain terms, cellular communication began with phone calls: if a call fails, users complain to the telephone company. 802.11 began with the local area network: if the network is flaky, most users just put up with it (while, in fact, the devices quietly cope with error correction and retransmission).

This determines that the 3GPP family must treat the "spectral regrowth" performance captured by ACLR/ACPR as a key indicator, while the 802.11 family can adapt to the network environment by sacrificing rate.

"Adapting to the network environment by sacrificing rate" means, concretely, that 802.11 responds to propagation conditions with different modulation orders: when the receiver finds the signal degrading, it immediately tells the remote transmitter to drop to a lower modulation order, and vice versa. As mentioned above, in 802.11 systems SNR is strongly correlated with EVM, so there are two ways to improve reception: lower the modulation order, which lowers the demodulation threshold; or reduce the transmitter EVM, which improves the signal SNR.

Because EVM correlates so closely with the receiver's demodulation performance, 802.11 systems use EVM to measure transmitter performance (just as, in 3GPP-defined cellular systems, ACPR/ACLR is the indicator that chiefly affects network performance). And since EVM degradation is mainly caused by nonlinearity (such as the PA's AM-AM distortion), EVM is often used as the measure of transmitter linearity.

7.1 Relationship between EVM and ACPR/ACLR

It is hard to pin down a quantitative relationship between EVM and ACPR/ACLR. From the standpoint of amplifier nonlinearity, they should be positively correlated: the amplifier's AM-AM and AM-PM distortion both enlarges the EVM and is the main source of ACPR/ACLR.

But EVM and ACPR/ACLR are not always positively correlated. A typical counterexample is clipping, commonly used in digital IF processing. Clipping reduces the peak-to-average ratio (PAR) of the transmitted signal, and the lower peak power helps reduce ACPR/ACLR after the PA; yet clipping also damages EVM, because whether done by windowing or by filtering, it distorts the signal waveform and thus increases EVM.

7.2 The source of PAR

PAR (peak-to-average ratio) is usually characterized by a statistical function such as the CCDF, which relates signal power (amplitude) levels to their probability of occurrence. For example, if a signal's average power is 10 dBm and its power exceeds 15 dBm with a probability of 0.01%, we can take its PAR to be 5 dB.
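The CCDF-based definition above translates directly into code: sort the sample powers, pick the level exceeded with the target probability, and compare it with the mean. A sketch using a noise-like complex-Gaussian signal as the test waveform (a reasonable stand-in for a many-carrier OFDM signal):

```python
import math, random

def par_db(samples, prob=1e-4):
    """Estimate PAR from the CCDF: find the power level exceeded with
    probability `prob` and compare it with the mean power."""
    powers = sorted((abs(s) ** 2 for s in samples), reverse=True)
    mean_p = sum(powers) / len(powers)
    idx = max(0, int(prob * len(powers)) - 1)
    return 10 * math.log10(powers[idx] / mean_p)

# For a complex-Gaussian signal the power CCDF is exp(-x), so the PAR at
# the 0.01% point should land near 10*log10(-ln(1e-4)), about 9.6 dB.
random.seed(0)
samples = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200_000)]
par = par_db(samples, prob=1e-4)
```

The 0.01% exceedance probability matches the worked example in the text; which probability you pick is a convention, which is why quoted PAR figures always need the CCDF point attached.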

PAR is an important driver of transmitter spectral regrowth in modern communication systems (ACLR/ACPR/modulation spectrum). Power peaks push the amplifier into its nonlinear region and produce distortion: typically, the higher the peak power, the stronger the nonlinearity.

In the GSM era, GMSK modulation has a constant envelope, so PAR = 0 dB, and when designing GSM power amplifiers we would often push them all the way to P1dB for maximum efficiency. After the introduction of EDGE, 8PSK modulation is no longer constant-envelope, so we tend to keep the amplifier's average output power about 3 dB below P1dB, because the PAR of an 8PSK signal is 3.21 dB.

In the UMTS era, for both WCDMA and CDMA, the peak-to-average ratio is much larger than EDGE's. The reason lies in the correlation between code channels in a code-division system: when the signals of multiple code channels are superimposed in the time domain, they may align in phase, and the power then peaks.

LTE's peak-to-average ratio stems from the burstiness of RBs. OFDM divides multi-user/multi-service data in both the time and frequency domains, so high power can pile up on a given "time block". For the LTE uplink, SC-FDMA is used: the time-domain signal is first spread into the frequency domain with a DFT, which effectively "smooths out" the time-domain burstiness and thereby lowers the PAR.
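The PAR benefit of DFT spreading can be demonstrated directly. In the degenerate case of a full allocation with no oversampling, DFT-spread OFDM collapses back to single-carrier QPSK (PAPR 0 dB), while plain OFDM on the same 64 QPSK symbols peaks several dB above its average. A pure-Python sketch with unitary transforms; the 64-carrier QPSK setup is illustrative, not an LTE numerology:

```python
import cmath, math, random

def dft(x):
    """Unitary DFT (O(n^2), fine for a 64-point demo)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * j * k / n) for k in range(n)) / math.sqrt(n)
            for j in range(n)]

def idft(x):
    """Unitary inverse DFT."""
    n = len(x)
    return [sum(x[k] * cmath.exp(2j * math.pi * j * k / n) for k in range(n)) / math.sqrt(n)
            for j in range(n)]

def papr_db(x):
    """Peak-to-average power ratio of a sampled waveform, in dB."""
    powers = [abs(s) ** 2 for s in x]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

random.seed(1)
n = 64
qpsk = [complex(random.choice((-1, 1)), random.choice((-1, 1))) / math.sqrt(2)
        for _ in range(n)]

ofdm_time = idft(qpsk)         # plain OFDM: QPSK symbols straight onto 64 subcarriers
scfdma_time = idft(dft(qpsk))  # DFT-spread first, as in the LTE uplink
```

Real SC-FDMA maps the DFT output onto a subset of a larger IDFT, so its PAPR sits between these two extremes, but the direction of the effect is exactly what the code shows.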

8. Summary of interference indicators

The "interference indicators" here refer to sensitivity tests performed under various applied interferers, as opposed to the receiver's static sensitivity. It is interesting to trace where these test items actually come from.

Our common interference indicators include Blocking, Desense, Channel Selectivity and more.

8.1 Blocking

Blocking is actually a very old RF metric, dating back to the invention of radar. The principle: a strong interfering signal pours into the receiver front end (usually the first-stage LNA) and drives the amplifier into its nonlinear region, or even saturation. The amplifier's gain then suddenly drops while its nonlinearity becomes severe, so it can no longer amplify the wanted signal properly.

Another form of blocking is actually caused by the receiver's AGC: a large signal enters the receive chain, and the AGC reduces the gain to preserve dynamic range; but the wanted signal entering the receiver is very weak, so with the gain now insufficient, the wanted signal reaching the demodulator is too small.

Blocking tests are split into in-band and out-of-band, mainly because the RF front end generally includes a band filter that suppresses out-of-band blockers. In both cases, though, the blocking signal is usually a point frequency without modulation. Unmodulated point-frequency signals are in fact rare in the real world; engineering simply uses them as a simplification to approximate all kinds of narrowband interferers.

Solving blocking is mostly the RF designer's job: bluntly, improve the receiver's IIP3 and expand its dynamic range. For out-of-band blocking, the filter's rejection also matters.

8.2 AM Suppression

AM Suppression is a metric unique to the GSM system. Per its definition, the interfering signal is a TDMA signal resembling a GSM signal, synchronized with the wanted signal and with a fixed delay.

This scenario simulates a neighboring-cell signal in a GSM system. Given that the interferer's frequency offset must exceed 6 MHz (GSM bandwidth is 200 kHz), this is a typical neighbor-cell signal configuration. We can therefore read AM Suppression as reflecting the receiver's tolerance of neighboring-cell signals during real GSM operation.

8.3 Adjacent (Alternative) Channel Suppression (Selectivity)

We refer to these collectively as "adjacent channel selectivity". In a cellular system, besides co-channel cells we must also consider adjacent-channel cells; the reason lies in the transmitter metrics discussed earlier (ACLR/ACPR/modulation spectrum). Because of transmitter spectral regrowth, strong components fall into the adjacent frequencies (generally, the farther from the carrier, the lower the level, so the first adjacent channel is affected most), and this regrowth is correlated with the transmitted signal itself. A receiver of the same system can easily mistake this regrown spectrum for a wanted signal: the cuckoo takes over the magpie's nest, so to speak.

For example, suppose neighboring cells A and B happen to be adjacent-frequency cells (network planning normally avoids this; we discuss it only as a limiting case). A terminal registered to cell A walks to the boundary between the two cells, but neither cell's signal strength has reached the handover threshold, so the terminal stays connected to cell A. If cell B's base-station transmitter has high ACPR, a strong cell-B ACPR component lands in the terminal's receive band, overlapping cell A's wanted signal in frequency. And since the terminal is far from cell A's base station, the received wanted signal is weak, so the cell-B ACPR component entering the terminal's receiver forms co-channel interference against it.

If we look closely at how the frequency offsets for adjacent channel selectivity are defined, we find that Adjacent and Alternative correspond to the first and second adjacent channels of ACLR/ACPR. In the communication protocols, "transmitter spectral leakage (regrowth)" and "receiver adjacent channel selectivity" are in fact defined as a pair.

8.4 Co-Channel Suppression (Selectivity)

This describes outright co-channel interference, generally the interference pattern between two co-frequency cells.

By the network-planning principle above, two co-frequency cells should be kept as far apart as possible, but however far apart they are, some signal still leaks from one to the other; only the strength differs. To the terminal, the signals from both cells look like "legitimate wanted signals" (of course the protocol layers have a set of access rules to prevent mis-registration), and whether the receiver can keep the wanted signal on top of the interferer depends on its co-channel selectivity.

8.5 Summary

Blocking is a case of "a big signal interfering with a small one", where RF design still has room to maneuver. AM Suppression and Adjacent (Co/Alternative) Channel Suppression (Selectivity) above are "a small signal interfering with a big one", where pure RF can do little and we largely rely on physical-layer algorithms.

Single-tone desense is a metric unique to CDMA systems. Its distinguishing feature is that the single-tone interferer is an in-band signal, close in frequency to the wanted signal. Two mechanisms can then drop products into the receive band. The first is the LO's close-in phase noise: mixing the LO with the wanted signal yields the baseband signal, while mixing the LO's phase noise with the interferer yields a product that also falls within the receiver's baseband filter; the former is the wanted signal and the latter is interference. The second is receiver nonlinearity: the wanted signal (which has finite bandwidth, e.g. a 1.2288 MHz CDMA signal) can intermodulate with the interferer in a nonlinear device, and the intermodulation products may also fall into the receive band and become interference.
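The first mechanism (reciprocal mixing) lends itself to a quick estimate: the interference deposited in the receive band is roughly the blocker power plus the LO phase noise at the blocker offset, integrated over the receive bandwidth. A sketch with assumed numbers; the -30 dBm tone and -130 dBc/Hz figures are illustrative, not from any CDMA spec:

```python
import math

def reciprocal_mixing_noise_dbm(blocker_dbm, phase_noise_dbc_hz, bw_hz):
    """Noise that a single-tone blocker deposits in the receive band via
    the LO's phase noise at the blocker offset (reciprocal mixing),
    assuming flat phase noise across the integration bandwidth."""
    return blocker_dbm + phase_noise_dbc_hz + 10 * math.log10(bw_hz)

# Assumed numbers: -30 dBm tone, LO phase noise -130 dBc/Hz at the tone
# offset, integrated over a 1.2288 MHz CDMA channel bandwidth.
n_rm = reciprocal_mixing_noise_dbm(-30.0, -130.0, 1.2288e6)  # about -99 dBm
```

Compare this figure against the receiver's integrated thermal noise floor to see whether reciprocal mixing or the front-end nonlinearity dominates the measured desense.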

Single-tone desense has its origin in the launch of CDMA in North America on the same band as the legacy analog system AMPS. The two networks coexisted for a long time, and as the latecomer, the CDMA system had to account for interference from AMPS.

This reminds me of PHS, nicknamed "fine as long as you stand still, dead as soon as you move": because it occupied the 1900~1920 MHz band for so long, TD-SCDMA/TD-LTE deployments in B39 were confined to the lower portion, 1880~1900 MHz, until PHS was finally retired.

The textbook explanation of blocking is fairly simple: a large signal entering the receiver's amplifier drives it into the nonlinear region, so the effective gain (for the wanted signal) drops.

But this alone is hard to square with two scenarios:

Scenario 1: the first-stage LNA has 18 dB of linear gain; a large injected signal compresses it to P1dB, where the gain is 17 dB. If nothing else changes (assume the LNA's NF and so on stay fixed), the effect on the overall system noise figure is actually very limited: in the cascade formula, the later stages' NF contributions are merely divided by a slightly smaller gain, which barely moves the system sensitivity.

Scenario 2: the first-stage LNA's IIP3 is so high that it is unaffected, and it is the second-stage gain block that the interferer compresses to P1dB. In this case the impact on the overall system NF is even smaller.
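Scenario 1 is easy to check numerically with the Friis cascade formula. With assumed NF values (1 dB for the LNA, 10 dB for everything after it, both hypothetical), a 1 dB gain compression moves the cascade NF by only about 0.1 dB:

```python
import math

def db2lin(db):
    return 10 ** (db / 10)

def cascade_nf_db(stages):
    """Friis formula; `stages` is a list of (gain_db, nf_db) tuples."""
    total_f, total_g = 1.0, 1.0
    for gain_db, nf_db in stages:
        total_f += (db2lin(nf_db) - 1.0) / total_g
        total_g *= db2lin(gain_db)
    return 10 * math.log10(total_f)

# Scenario 1: LNA (18 dB gain, assumed 1 dB NF) followed by the rest of
# the chain, lumped as one stage with an assumed 10 dB NF.
nf_before = cascade_nf_db([(18, 1), (0, 10)])
nf_after = cascade_nf_db([(17, 1), (0, 10)])   # LNA compressed by 1 dB
delta = nf_after - nf_before                   # a fraction of a dB
```

The tiny delta is exactly the point of the scenario: gain compression alone cannot explain the desense levels observed in blocking tests, which motivates the distortion argument below.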

So here is my take: blocking may be decomposed into two parts. One part is the gain compression found in the textbooks; the other is the distortion of the wanted signal once the amplifier enters the nonlinear region. This distortion itself has two components: pure nonlinearity causing spectral regrowth of the wanted signal (harmonic components), and cross modulation, where the large signal's modulation is impressed onto the small wanted signal.

This suggests another idea: if we want to simplify the blocking test (3GPP requires a frequency sweep, which is very time-consuming), we might pick out just those frequencies at which a blocker distorts the wanted signal the most.

Intuitively, those frequencies would include f0/N and f0*N (f0 being the wanted-signal frequency, N a natural number). The former matters because the blocker's Nth harmonic, generated in the nonlinear region, lands directly on f0 and interferes with it; the latter matters because the blocker superimposes on the Nth harmonic of f0 and so affects the time-domain waveform of the output at f0. To explain: by Fourier synthesis, the time-domain waveform of a signal is the sum of its fundamental and harmonic components, so when the power of the Nth harmonic changes in the frequency domain, the corresponding change in the time domain is a change in the signal's envelope, i.e. distortion.

9. Dynamic range, temperature compensation and power control

Dynamic range, temperature compensation and power control are in many cases "invisible" metrics: their effect shows only in certain limit tests. Yet they represent some of the most delicate parts of RF design.

9.1 Transmitter dynamic range

Transmitter dynamic range characterizes the maximum and minimum transmit power at which the transmitter can operate "without damaging the other transmit metrics".

"Without damaging the other transmit metrics" sounds very broad, but looking at the dominant effects it boils down to this: at maximum transmit power, the transmitter's linearity must not be compromised; at minimum transmit power, the output signal's signal-to-noise ratio must be maintained.

At maximum transmit power, the transmitter output tends to approach the nonlinear regions of the active devices at every stage (especially the final amplifier), and the usual symptoms of nonlinearity are spectral leakage and regrowth (ACLR/ACPR/SEM) and modulation errors (Phase Error/EVM). Here the pressure falls squarely on transmitter linearity; this part should be easy to understand.

At minimum transmit power, the wanted signal at the transmitter output approaches the transmitter's own noise floor, and even risks being "submerged" in it. What must be guaranteed then is the output signal's SNR; in other words, the lower the transmitter's noise floor at minimum power, the better.

An incident from the lab: while testing ACLR, engineers found that ACLR got worse as power was reduced (the usual expectation is that ACLR improves as output power drops). Their first reaction was that the instrument was faulty, but a second instrument gave the same result. We suggested testing the EVM at low output power, and it turned out to be very poor. Our conclusion: the noise floor at the input of the RF chain was very high, the corresponding SNR was clearly bad, and the main component of the measured ACLR was not the amplifier's spectral regrowth but the baseband noise amplified by the whole chain.

9.2 Receiver dynamic range

The receiver's dynamic range is in fact tied to two metrics we have already discussed: the first is the reference sensitivity, and the second is the receiver IIP3 (mentioned repeatedly under the interference metrics).

The reference sensitivity characterizes the minimum signal strength the receiver can recognize, so we will not repeat it here; let us focus on the receiver's maximum receive level.

The maximum receive level is the largest signal the receiver can take without distortion. That distortion can occur at any stage, from the first-stage LNA all the way to the receiver's ADC. For the first-stage LNA, all we can do is raise the IIP3 as much as possible so it tolerates higher input power; for the stages after it, the receiver relies on AGC (automatic gain control) to keep the wanted signal within each device's input dynamic range. Put simply, there is a negative feedback loop: detect the received signal strength (too low/too high), adjust the amplifier gain (up/down), and ensure the amplifier's output lands within the input dynamic range of the next stage.

One exception worth mentioning: in most handset receivers, the front-end LNA itself has an AGC function. Reading their datasheets carefully, you will find that the LNA offers several gain steps, each with its own noise figure; in general, the higher the gain, the lower the noise figure. This is a simplified design whose philosophy is that the receiver RF chain only needs to keep the wanted signal at the ADC within the dynamic range and keep the SNR above the demodulation threshold (not "the higher the SNR the better", but "good enough", which is a very smart approach). Thus when the input signal is large, the LNA drops its gain, sacrificing NF while raising IIP3; when the input signal is small, it raises the gain, lowering the NF along with the IIP3.
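The gain-mode trade-off just described can be sketched as a small selection routine. All numbers below (gain steps, NF, IIP3, ADC ceiling, remaining link gain) are hypothetical datasheet-style values, not taken from any real part:

```python
# Hypothetical LNA gain modes: (gain_db, nf_db, iip3_dbm).
LNA_MODES = [
    (18, 1.0, -5),    # high gain: best NF, worst IIP3
    (12, 2.5, 2),
    (0, 6.0, 10),     # bypass: worst NF, best IIP3
]

def pick_mode(input_dbm, adc_max_dbm=-20.0, rest_gain_db=40.0):
    """Choose the highest-gain mode whose output, after the rest of the
    link gain, still fits under the ADC's maximum input level."""
    for gain_db, nf_db, iip3_dbm in LNA_MODES:
        if input_dbm + gain_db + rest_gain_db <= adc_max_dbm:
            return (gain_db, nf_db, iip3_dbm)
    return LNA_MODES[-1]  # weakest gain as a last resort

strong = pick_mode(-40.0)   # strong input -> bypass mode, higher IIP3
weak = pick_mode(-95.0)     # weak input  -> high gain, lowest NF
```

A real AGC adds hysteresis and separate wideband/narrowband power detectors, but the core decision, trading NF against IIP3 per input level, is exactly this.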

9.3 Temperature compensation

Generally speaking, temperature compensation is applied only on the transmitter side.

Receiver performance is of course also affected by temperature: at high temperature the receive chain's gain drops and its NF rises; at low temperature the gain rises and the NF falls. But because the receiver deals with small signals, both effects stay within the system's design margin.

Transmitter temperature compensation subdivides into two parts: compensating the accuracy of the transmitted power, and compensating the transmitter gain's drift over temperature.

Modern transmitters generally run closed-loop power control (the somewhat "vintage" GSM and Bluetooth systems aside), so the power accuracy established by production calibration depends on the accuracy of the power-control loop. Generally, the power-control loop is a small-signal loop with very good temperature stability, so it needs little temperature compensation unless a temperature-sensitive device (such as an amplifier) sits inside the loop.

Temperature compensation of transmitter gain is more common, and serves two typical purposes. One is "visible": in systems without closed-loop power control (the aforementioned GSM and Bluetooth), which usually do not demand high output power accuracy, a temperature compensation curve (function) keeps the RF chain gain within a range, so that for a fixed baseband IQ power the RF output power stays within bounds as temperature changes. The other is "invisible": in systems with closed-loop power control, the antenna-port RF power is already precisely controlled by the loop, but we still need to keep the DAC output signal within a certain range. The most common example is a base-station transmitter with digital predistortion (DPD), where the gain of the entire RF chain must be held rather precisely at a set value; that is what the temperature compensation is for.

The usual means of transmitter temperature compensation are variable attenuators or variable-gain amplifiers. In early, low-cost designs with modest accuracy requirements, temperature-compensation attenuators were common; where higher accuracy is required, the typical solution is a temperature sensor plus a digital attenuator or variable-gain amplifier plus production calibration.
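The "temperature sensor + digital attenuator + production calibration" scheme can be sketched as a simple lookup-and-interpolate loop. This is only an illustrative sketch: the calibration temperatures, gains, target gain, and attenuator step below are all invented numbers, not values from any real design.

```python
# Hypothetical sketch of "temperature sensor + digital attenuator +
# production calibration" gain compensation. All numbers are illustrative.

# Production calibration: measured RF link gain (dB) at a few temperatures.
CAL_TABLE = [  # (temp_C, measured_gain_dB)
    (-20.0, 31.5),
    (25.0, 30.0),   # reference temperature
    (85.0, 28.4),
]

TARGET_GAIN_DB = 30.0   # gain we want the RF link to present
ATTEN_STEP_DB = 0.25    # resolution of the digital attenuator

def interp_gain(temp_c):
    """Linearly interpolate the calibrated gain at temp_c (clamped at ends)."""
    pts = sorted(CAL_TABLE)
    if temp_c <= pts[0][0]:
        return pts[0][1]
    if temp_c >= pts[-1][0]:
        return pts[-1][1]
    for (t0, g0), (t1, g1) in zip(pts, pts[1:]):
        if t0 <= temp_c <= t1:
            return g0 + (g1 - g0) * (temp_c - t0) / (t1 - t0)

def atten_setting(temp_c, base_atten_db=1.5):
    """Attenuator code (in steps) that keeps net gain near TARGET_GAIN_DB.

    base_atten_db is the attenuation assumed at calibration; when the link
    gain sags at high temperature we attenuate less, and vice versa.
    """
    excess = interp_gain(temp_c) - TARGET_GAIN_DB   # gain error, dB
    atten_db = max(0.0, base_atten_db + excess)     # can't attenuate negatively
    return round(atten_db / ATTEN_STEP_DB)          # quantize to attenuator steps

print(atten_setting(25.0))   # reference temperature: base attenuation only
print(atten_setting(85.0))   # hot: gain is low, so attenuation backs off
```

In a real product the table would come from per-unit production calibration and the update would run periodically from the temperature-sensor reading; the structure, however, is essentially this.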

9.4 Transmitter power control

After talking about dynamic range and temperature compensation, let's talk about a related and very important concept: power control.

Transmitter power control is a necessary function in most communication systems. The power-control schemes defined by 3GPP, such as ILPC (inner-loop), OLPC (outer-loop), and CLPC (closed-loop) power control, must all be tested; they are a frequent source of problems and they complicate RF design. Let us first discuss why transmitter power control matters.

The purposes of transmitter power control come down to two points: controlling the output power itself, and suppressing interference.

First, controlling the output power: in mobile communication, given the distance between the two ends and the level of interference, the transmitter only needs to maintain a signal strength "just sufficient for the receiver to demodulate accurately". If the power is too low, communication quality suffers; if it is too high, the extra power is simply wasted. This matters especially for battery-powered terminals such as mobile phones, where every milliamp counts.
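The "just sufficient" criterion is ordinary link-budget arithmetic: the terminal needs to transmit roughly the receiver's sensitivity plus the path loss (plus some margin). The sensitivity, margin, and path-loss numbers below are assumptions chosen for illustration.

```python
# Illustrative link-budget arithmetic behind "just enough power":
# required TX power = receiver sensitivity + path loss + margin.
# All numbers are invented for illustration.

RX_SENSITIVITY_DBM = -100.0   # lowest level the base station can demodulate
MARGIN_DB = 3.0               # assumed fading/implementation margin

def required_tx_power_dbm(path_loss_db):
    return RX_SENSITIVITY_DBM + path_loss_db + MARGIN_DB

# A nearby user (80 dB path loss) vs. a cell-edge user (130 dB):
print(required_tx_power_dbm(80.0))    # -17.0 dBm: far below maximum power
print(required_tx_power_dbm(130.0))   # 33.0 dBm: near a typical terminal max
```

The 50 dB difference between the two cases is exactly the power (and battery current) that power control saves the nearby user from wasting.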

Interference suppression is a more advanced requirement. In a CDMA-like system, different users share the same carrier frequency (distinguished by orthogonal user codes), so in the signal arriving at the receiver, every user's signal is co-channel interference to all the others. If user signal powers differ widely, a high-power user will drown out the signals of low-power users. The CDMA system therefore applies power control to the power of each user's signal as it arrives at the receiver (which we call the air-interface power): the base station issues power-control commands to each terminal so that, in the end, every user's air-interface power is the same. This power control has two characteristics: first, the accuracy must be very high (the interference tolerance is very low); second, the control cycle must be very short (the channel may change very quickly).
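This is the classic near-far problem, and the numbers make it concrete. The sketch below uses invented path losses and an invented target air-interface power; it only shows the arithmetic of equalizing received power, not any real power-control algorithm.

```python
# Sketch of the CDMA near-far problem and the power-control fix.
# Path losses and the target level are invented for illustration.

users = {            # path loss from each terminal to the base station, dB
    "near_user": 80.0,
    "far_user": 120.0,
}

TARGET_RX_DBM = -90.0   # common air-interface power the base station wants

# Without power control, both terminals transmit the same power (say 0 dBm):
uncontrolled_rx = {u: 0.0 - pl for u, pl in users.items()}
# near_user arrives at -80 dBm, far_user at -120 dBm: a 40 dB spread,
# so the near user's signal buries the far user's.

# Closed-loop power control commands each terminal to transmit exactly
# enough that its signal arrives at TARGET_RX_DBM:
controlled_tx = {u: TARGET_RX_DBM + pl for u, pl in users.items()}

print(uncontrolled_rx)  # {'near_user': -80.0, 'far_user': -120.0}
print(controlled_tx)    # {'near_user': -10.0, 'far_user': 30.0}
```

Note that equal air-interface power means very unequal transmit power: the far user transmits 40 dB more than the near one, which is why per-terminal, fast, fine-grained control is needed.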

In the LTE system, uplink power control also serves interference suppression. Because the LTE uplink is SC-FDMA, multiple users likewise share the carrier frequency and interfere with each other, so air-interface power consistency is also necessary.

The GSM system also has power control. In GSM, we use "power levels" to characterize the power-control step, with each level corresponding to 2 dB. It can be seen that GSM power control is relatively coarse.

Interference-limited system

Here is a related concept: the interference-limited system. A CDMA system is a typical interference-limited system. In theory, if the user codes were completely orthogonal and could be perfectly separated by spreading and despreading, the capacity of a CDMA system would be infinite, because on a limited frequency resource it could distinguish an unlimited number of users with ever-longer spreading codes. In reality, however, the user codes cannot be completely orthogonal, so demodulating a multi-user signal inevitably introduces noise; the more users there are, the higher that noise, until it exceeds the demodulation threshold.

In other words, the capacity of a CDMA system is limited by interference (noise).
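The "noise grows with users until it hits the threshold" argument can be turned into a toy capacity calculation. The processing gain, demodulation threshold, and power levels below are assumptions, and the model (every other user counted as flat noise) is deliberately crude; it only demonstrates the interference-limited mechanism.

```python
# A toy calculation of why CDMA capacity is interference-limited:
# each extra (imperfectly orthogonal) user raises the noise seen by
# every other user until the demodulation threshold is crossed.
# All parameters are illustrative assumptions.
import math

PROCESSING_GAIN_DB = 21.0    # e.g. roughly a 128x spreading factor
DEMOD_THRESHOLD_DB = 7.0     # required post-despreading SNR
THERMAL_NOISE_MW = 1e-10     # noise floor in the signal bandwidth (linear mW)
RX_POWER_MW = 1e-10          # per-user received power (perfect power control)

def post_despread_snr_db(n_users):
    """SNR of one user after despreading, with n_users - 1 interferers."""
    interference = (n_users - 1) * RX_POWER_MW   # other users act as noise
    snr_linear = RX_POWER_MW / (interference + THERMAL_NOISE_MW)
    return 10 * math.log10(snr_linear) + PROCESSING_GAIN_DB

# Find the largest user count that still meets the demodulation threshold:
capacity = 1
while post_despread_snr_db(capacity + 1) >= DEMOD_THRESHOLD_DB:
    capacity += 1
print(capacity)   # capacity is set by interference, not by bandwidth slots
```

Nothing in this model allocates a frequency or time slot per user; the limit comes purely from the interference term, which is the defining trait of an interference-limited system.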

The GSM system is not interference limited; it is a time-domain and frequency-domain limited system. Its capacity is limited by frequency resources (one carrier per 200 kHz) and time-domain resources (eight TDMA users can share each carrier). Accordingly, GSM's power-control requirements are not demanding (the step is coarse and the cycle is long).

9.5 Transmitter power control and transmitter RF specifications

Having covered transmitter power control, let us discuss the RF-design factors that can affect it (I believe many of my peers have experienced the frustration of a failing closed-loop power-control test).

From the RF side, if the power-detection (feedback) loop is designed correctly, there is very little we can do about closed-loop power control itself (most of the work is done by physical-layer protocol algorithms); what matters most is the flatness of the transmitter.

This is because transmitter calibration is performed at only a limited number of frequencies; in production testing especially, the fewer frequencies, the better. In real operation, however, the transmitter may work on any carrier in the band. A typical production calibration covers the low, middle, and high frequency points of the band, so the transmit power is accurate, and closed-loop power control is correct, at exactly those calibrated frequencies. If the transmitter's output power is not flat across the band, the transmit power at some frequencies will deviate significantly from the calibrated points, and closed-loop power control referenced to those calibration points will then exhibit errors at those frequencies, possibly even failing the specification.
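The mechanism can be shown with a small numerical sketch: the loop trusts the gain measured at the nearest calibrated channel, so any in-band ripple between calibration points shows up directly as output-power error. The channel frequencies, nominal gain, and 1.5 dB sinusoidal ripple below are all invented for illustration.

```python
# Illustration of how in-band gain ripple breaks closed-loop power
# accuracy between calibration points. All numbers are invented.
import math

# Production calibration only at low/mid/high channels (MHz -> gain dB):
CAL_POINTS = {1920.0: 27.0, 1950.0: 27.0, 1980.0: 27.0}

def assumed_gain(freq_mhz):
    """Gain the power-control loop assumes: the nearest calibrated point."""
    nearest = min(CAL_POINTS, key=lambda f: abs(f - freq_mhz))
    return CAL_POINTS[nearest]

def actual_gain(freq_mhz):
    """Hypothetical real link gain with 1.5 dB of in-band ripple."""
    return 27.0 + 1.5 * math.sin((freq_mhz - 1920.0) / 60.0 * 2 * math.pi)

def tx_power_error_db(freq_mhz):
    """Output-power error when the loop trusts the calibrated gain."""
    return actual_gain(freq_mhz) - assumed_gain(freq_mhz)

# At a calibrated channel the error is ~0; between calibration points
# the ripple appears directly as power-control error:
print(round(tx_power_error_db(1920.0), 2))   # 0.0
print(round(tx_power_error_db(1932.0), 2))   # 1.43
```

This is why a flatness spec on the transmitter is effectively a power-control spec: the achievable closed-loop accuracy between calibration points can never be better than the in-band ripple.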
