Faster 3G Easier Said Than Done
Some wireless operators claim that faster 3G is just a software upgrade away. This technical overview explains why achieving cellular data speeds faster than 1 Mbit/s won't be a walk in the park.
January 28, 2005
Since its introduction, third-generation (3G) cellular technology has been heralded for its ability to deliver more voice channels and higher-bandwidth pipes. But, in reality, operators have started to realize that, while 3G allows for high-quality voice and media streaming services, it is a poor fit for high-speed data.
High-speed downlink packet access (HSDPA) technology promises to bridge the gap between 3G and the Internet, providing an overlay for the existing protocol stack that makes the delivery of high-speed data access to many users in a cell a reality. Instead of limiting high-speed data access to fewer than five users in a cell, HSDPA can deliver 384-kbit/s data to many more users, perhaps 30.
Realizing HSDPA will provide them with cost-effective high-speed data, operators are pushing for deployments. Cingular has begun testing HSDPA in its 3G trial network in Atlanta ahead of a launch this year; NTT DoCoMo anticipates commercial deployment of HSDPA in its network this year, and others expect to start rollouts by 2006. Some carriers, such as O2 in Europe and SK Telecom and KTF in Korea, will leap from earlier releases and go directly to HSDPA for launch.
Not Simple
But HSDPA is not a simple software upgrade to 3G systems. In many respects, the change from Release 99 to HSDPA is as dramatic as that from voice-only GSM to EDGE: it changes both the modulation and the way packets are processed.
There are parts of the HSDPA standard that are relatively simple to implement using existing hardware. But, taken as a whole, HSDPA will simply break many deployed architectures and will require new hardware. Most basestations (also known as Node Bs) will need significant upgrades to cope with the increased data throughput and the consequences of moving to a more complex protocol.
HSDPA increases the downlink data rate within a cell to a theoretical maximum of 14 Mbit/s, with 2 Mbit/s on the uplink. However, it is not about delivering Ethernet bandwidth to one fortunate user. What is important is the ability to deliver, reliably, many sessions of high-speed, bursty data to a large number of users within that cell. The changes that HSDPA enables add up to better-quality, more reliable and more robust data services. In other words, while realistic data rates may only be a few megabits per second, the quality of service and the number of users served will improve significantly.
Burst Problems
IP is a bursty protocol, and supporting it efficiently demands changes to the wideband CDMA (W-CDMA) protocol stack. Bursty protocols are a poor fit with the dedicated channel (DCH) used in existing W-CDMA networks. Although the DCH can support many different types of traffic, the utilization of the channel for bursty traffic is typically quite low. This is because the channel reconfiguration process that can be used to tune the DCH for a change in traffic mix is slow, taking on the order of 500 ms.
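To see why, consider a back-of-the-envelope sketch (in Python) of DCH utilization for bursty traffic, using the 500 ms reconfiguration figure above; the burst sizes and the 384-kbit/s channel rate are illustrative assumptions rather than values from the standard.
```python
# Rough sketch: why a ~500 ms DCH reconfiguration hurts utilization for
# bursty traffic. Burst sizes and channel rate are illustrative assumptions.

RECONFIG_S = 0.5          # DCH reconfiguration time quoted above (~500 ms)
DCH_RATE_BPS = 384_000    # a typical Release 99 DCH data rate (assumption)

def dch_utilization(burst_bytes: int) -> float:
    """Fraction of the channel-hold time actually spent moving data,
    assuming the channel must be reconfigured before each burst."""
    transfer_s = burst_bytes * 8 / DCH_RATE_BPS
    return transfer_s / (transfer_s + RECONFIG_S)

for kb in (5, 20, 100):
    u = dch_utilization(kb * 1024)
    print(f"{kb:>4} KB burst -> {u:.0%} utilization")
# Small, web-like bursts leave the dedicated channel idle for much of the
# time it is held, which is the inefficiency HSDPA's shared channel targets.
```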
These issues have been addressed in Release 5 of the 3G Partnership Project (3GPP) standards, which radically changes the network to make it far better suited to data traffic. Support for IPv6 has been incorporated into the core network together with a key enhancement to provide high-bandwidth support for bursty IP traffic for the mobile user.
Instead of sending data using individual DCHs, HSDPA extends the downlink shared channel (DSCH), allowing packets destined for many users to share one higher-bandwidth channel called the high-speed DSCH (HS-DSCH). As with wired networks such as Ethernet, this allows more efficient utilization of the available bandwidth. On top of that, a faster channel-configuration process allows the basestation to control the channel more effectively, further improving efficiency (Figure 1).
Figure 1: Diagram showing HSDPA's air interface channels.
Too Many Options
There are many options available to basestation designers and to operators when dealing with HSDPA. This complicates the provision of HSDPA as the network is upgraded, but intelligent choices about basestation implementation can result in higher throughput for high-revenue services, improving operators' margins.
The maximum bandwidth that can be achieved with HSDPA depends greatly on cell size. To limit the power needed to send each bit of information, the maximum achievable bit rate tends to fall away for users at the edge of the cell. For a large cell with a diverse range of users, the peak aggregate data rate will be in the range of 1 to 1.5 Mbit/s. This can increase to 4 to 6 Mbit/s or more as the cell shrinks to the microcell level. In principle, a picocell could see data rates of 8 Mbit/s or more.
To achieve higher raw data rates, HSDPA uses, at the PHY layer, higher-level modulation schemes such as 16-point quadrature amplitude modulation (16QAM), together with an adaptive coding scheme based on turbo codes. An important point to note is that the modulation scheme is adaptive and is changed on a per-user basis. The spreading factor used for the HS-DSCH remains fixed at 16, but the coding rate can vary, on a per-user basis, between 1/4 and 3/4. In theory, the protocol allows an uncoded link of 4/4, but that is only useful for lab tests to achieve the theoretical maximum of 14 Mbit/s using 16QAM modulation.
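As a rough cross-check of those figures, the sketch below reconstructs the headline peak rate from the chip rate, spreading factor, modulation order and coding rate. The 3.84 Mchip/s chip rate and the assumption that a top-category terminal receives 15 parallel codes are standard W-CDMA numbers not stated in the text above.
```python
# Rough reconstruction of HSDPA's headline peak rate from the figures in
# the text. Chip rate and the 15 parallel HS-PDSCH codes are assumptions.

CHIP_RATE = 3_840_000   # W-CDMA chip rate, chips per second
SF = 16                 # fixed spreading factor of the HS-DSCH
NUM_CODES = 15          # codes a top-category terminal can receive at once

def peak_rate_bps(bits_per_symbol: int, coding_rate: float) -> float:
    symbols_per_sec = CHIP_RATE / SF          # 240k symbols/s per code
    return symbols_per_sec * bits_per_symbol * coding_rate * NUM_CODES

print(f"16QAM, uncoded (lab only):   {peak_rate_bps(4, 1.0) / 1e6:.1f} Mbit/s")  # ~14.4
print(f"16QAM, rate 3/4:             {peak_rate_bps(4, 0.75) / 1e6:.1f} Mbit/s") # ~10.8
print(f"QPSK, rate 1/4 (cell edge):  {peak_rate_bps(2, 0.25) / 1e6:.1f} Mbit/s") # ~1.8
```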
It is notable, however, that many of the results announced so far fall far short of what should be expected from a mature, robust system. Certainly in small cells (the 3G equivalent of hotspots), very high data rates should be realistically attainable.
Under poor reception conditions, the modulation can also vary, possibly dropping back from 16QAM to the more robust QPSK. Link adaptation ensures the highest possible data rate is achieved both for users with good signal quality, who are typically close to the basestation, and for more distant users at the cell edge, who may receive data with a lower coding rate. The link adaptation is performed on each transmission time interval (TTI), with the user equipment sending an estimate of the channel quality to the Node B, which is then used to select the modulation and coding rate for that user on the next transmission.
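A minimal sketch of that per-TTI adaptation loop might look like the following; the CQI thresholds and the mapping to modulation and coding rate are purely illustrative, not the 3GPP CQI tables.
```python
# Minimal sketch of per-TTI link adaptation. The CQI-to-MCS mapping below
# is a made-up illustration; the real 3GPP tables are far more detailed.

def select_mcs(cqi):
    """Pick modulation and coding rate for one user for the next 2 ms TTI."""
    if cqi >= 22:
        return ("16QAM", 3 / 4)    # good signal, typically near the basestation
    if cqi >= 16:
        return ("16QAM", 1 / 2)
    if cqi >= 10:
        return ("QPSK", 1 / 2)
    return ("QPSK", 1 / 4)         # cell edge: robust but slow

# Each TTI the Node B reads the latest CQI report per user and adapts.
reports = {"user_a": 25, "user_b": 14, "user_c": 7}
for user, cqi in reports.items():
    mod, rate = select_mcs(cqi)
    print(f"{user}: CQI {cqi:>2} -> {mod} at rate {rate}")
```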
Moving MAC Control
The more important change that HSDPA makes is to move control of the medium access control (MAC) layer from the radio network controller (RNC) into the basestation. Crucially, this move enables fast scheduling, in which users are served when their channel-quality estimates indicate favourable, constructive fading conditions. A conventional user-priority or round-robin scheme, in which the scheduler selects the modulation and coding scheme from average channel conditions, instead risks the high error rates experienced by users in poor reception conditions. As a result, fast scheduling works hand-in-hand with the algorithms used to select the optimum modulation and coding schemes.
This greatly increases the responsiveness of the basestation. The move to 16QAM modulation increases the peak speed, in the same way that a high-powered engine can boost the performance of a car; but it is the MAC change that makes HSDPA deliver a real-world speed increase, much like replacing a learner driver with a Formula One racing driver.
Using the race-car example, performance would be better even with the same engine; here, performance will be noticeably better even if 16QAM modulation cannot be used. It demonstrates how a shift in the 3G architecture from a traditional "dumb pipe with intelligent center" towards a more datacom-like "smart edge" can yield better results.
In all, the data rate has increased seven-fold, the response time has been reduced by 80 percent, and the scheduling algorithms have grown dramatically in complexity. These changes will be hard to achieve in a hardware design that was not architected to support them.
Indeed, some of the early demonstrations implemented only a few of these features, or achieved only limited data rates. While 16QAM modulation is the most obvious change, and easy to demonstrate at a trade fair, the capabilities of the MAC-hs and its adaptive control loops are more important but less visible. Developing, testing and "hardening" these algorithms for field deployment is the major challenge for manufacturers.
Increasing Complexity
Obviously, the improved performance that HSDPA provides implies an increase in complexity and in the processing power required. Many high-speed feedback loops are needed to implement HSDPA efficiently and give users the best possible data rates.
For example, the TTI used for modulation and coding selection for individual frames on the HS-DSCH is just 2 ms, compared with a typical 10 ms (and up to 80 ms) for the TTI used for power control on the existing Release 99 shared channel. Further, the algorithms needed to make good use of fast scheduling will be more complex than those implemented by existing RNC software, yet those decisions have to be made within a millisecond.
When link errors occur, data packets can be retransmitted quickly at the request of the mobile terminal. In existing W-CDMA networks, these requests are processed by the RNC. As with fast scheduling, HSDPA provides better responsiveness by processing the request in the basestation.
The hybrid automatic repeat request (HARQ) protocol developed for HSDPA allows efficient retransmission of dropped or corrupted packets. The protocol has been designed to allow the average delivered bandwidth of HSDPA to be higher than would be possible if more extensive forward error correction were used. However, it puts significant demands on the basestation: efficient HARQ support calls for low latency, with retransmissions processed within 2 to 7 ms. And the feedback loop that allows HARQ to be implemented does not exist in Release 99 basestations, as that function sits in the RNC for existing DCH and DSCH transmissions. So not only must things work faster, many functions are new, adding to the capabilities and intelligence of the Node B.
In addition to fast retransmissions, a number of techniques are used to give the mobile terminal a better chance of receiving the data correctly. For users with a high coding rate, simple chase combining may be used, in which the packet is simply repeated. For users with a low coding rate, incremental redundancy can be used: additional parity bits are sent, allowing the mobile terminal to combine the information from the first transmission with subsequent retransmissions.
The consequence of these design decisions is that the scheduler and retransmission manager require large buffers to hold all the packets that might need to be resent. This function was not present in earlier releases, and the hardware to support it must have been designed in from the start if existing implementations are to support HSDPA at sufficiently high data rates.
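A highly simplified sketch of the basestation side of this retransmission machinery appears below. It models only the buffering and ACK/NACK handling; real HARQ also manages redundancy versions, soft-buffer sizes and multiple parallel processes per user.
```python
# Simplified sketch of the Node B side of HARQ: every transmitted block is
# held in a buffer until the terminal acknowledges it, and a NACK triggers
# a fast retransmission. Redundancy-version handling is reduced to a
# counter here; real incremental redundancy sends different parity bits
# on each retransmission.

MAX_RETX = 3

class HarqProcess:
    def __init__(self):
        self.buffer = {}              # block_id -> (payload, retransmission count)

    def send(self, block_id, payload):
        self.buffer[block_id] = (payload, 0)   # keep a copy until ACKed

    def on_feedback(self, block_id, ack):
        payload, retx = self.buffer.get(block_id, (None, 0))
        if payload is None:
            return None
        if ack:
            del self.buffer[block_id]          # free the buffer slot
            return None
        if retx >= MAX_RETX:
            del self.buffer[block_id]          # give up; higher layers recover
            return None
        self.buffer[block_id] = (payload, retx + 1)
        return payload                          # resend (chase) or next version (IR)

proc = HarqProcess()
proc.send(1, b"transport block")
print(proc.on_feedback(1, ack=False))  # NACK -> fast retransmission from the buffer
print(proc.on_feedback(1, ack=True))   # ACK  -> buffer slot released
```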
Factors Impacting Scheduling
A number of factors will control how well scheduling works in the field. It is simple to devise a scheduling algorithm that works well for a few users in the laboratory under artificially generated constructive-fading conditions. It is much harder to develop one that works robustly in the field, for many users, all in different, complicated and changing situations. Many circumstances will affect real-world systems, not least the evolving capabilities of the terminals themselves, whether they are handsets or data cards inserted into PCs. The latency demands of HSDPA mean that different designs will react differently to changing fading conditions and packet delivery speeds.
Similar problems were seen in the early days of the Internet, where interactions between the different layers of the protocol stack led to less efficient bandwidth utilization than expected. Numerous techniques were developed to overcome these problems and were inserted into terminal equipment and infrastructure systems to bring performance back up to expected levels.
If a scheduler is not designed to react to problems, operators may see some users with terminals that are able to handle high-speed transfers starved of bandwidth while other users with less capable systems use up too much of the HS-DSCH bandwidth. Such a situation will see much lower data utilization than expected. A more intelligent scheduler that watches for changes to channel and terminal conditions — and schedules packets for terminals that are able to receive at higher data rates — will improve the overall revenue that can be derived.
However, the need to support different quality-of-service (QoS) contracts with each terminal further complicates the situation for the scheduler, as it cannot simply deny bandwidth to a terminal with a high QoS setting just because that terminal happens to be in a poor reception area or is unable to react quickly enough to the data it receives from the basestation.
As well as allowing for evolution in scheduler design, in many cases it will be desirable to have different scheduling policies in action at different times of day, or tuned for certain types of location, such as an airport waiting lounge. Testing this requires multiple scenarios to be evaluated under different loading conditions. As a result, architectures that maximise flexibility will be key to efficient HSDPA implementation.
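The sketch below illustrates one such policy: a proportional-fair style metric weighted by a per-user QoS factor, chosen here purely as an example of a channel- and QoS-aware scheduler. The user names, rates and weights are invented for illustration.
```python
# Sketch of a channel-aware scheduler of the kind described above, using a
# proportional-fair style metric weighted by a QoS factor. Names, rates and
# weights are illustrative; a deployed MAC-hs scheduler tracks far more state.

def pick_user(users):
    """Choose the user to serve in the next 2 ms TTI.

    users: dict of name -> (instantaneous_rate_bps, average_rate_bps, qos_weight)
    """
    def metric(name):
        inst, avg, qos = users[name]
        return qos * inst / max(avg, 1.0)   # favour users on a channel peak,
                                            # without starving poor channels
    return max(users, key=metric)

users = {
    "laptop_card":  (3_000_000, 1_200_000, 1.0),   # good channel, already well served
    "handset_edge": (  300_000,    90_000, 1.0),   # cell edge, lightly served
    "premium_user": (  800_000,   400_000, 2.0),   # high-QoS contract
}
print(pick_user(users))   # the QoS weight tips the choice to "premium_user"
```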
Granularity Needed
Processing granularity will be a major consideration in the efficient implementation of an HSDPA-compliant basestation. Systems based on a small number of high-performance DSPs tend to demand large buffers and, to reduce the overhead of switching between tasks, to work on large groups of data at any one time. This makes things "clumpy", with high latency. Such a coarse-grained approach to task scheduling is a poor fit for algorithms, such as fast scheduling, that need low latency to work effectively.
The advanced silicon processes available today for integrated circuits (ICs) make it possible to implement hundreds of processors on a single chip, together with distributed memory blocks and an interconnect structure that efficiently delivers the data for the many feedback paths required. Protocols such as HSDPA, like earlier versions of W-CDMA, map well onto parallel-processing architectures, as many different processes need to happen at the same time (Figure 2).
Figure 2: Diagram showing an HSDPA implementation using a set of array processors.
Fine-grained control will be necessary to implement features such as fast scheduling and per-user coding and modulation adaptation. With a large number of processing elements, it becomes possible to dedicate processing and buffer resources almost on a per-user or per-function basis. For example, one processor may collate channel-quality information for a processor that does nothing but run an advanced scheduling algorithm, allowing scheduling decisions to be made continuously. This will yield much lower latencies than a system where scheduling is shared with other tasks on a general-purpose processor or DSP.
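As a toy illustration of that partitioning, the following sketch dedicates one worker purely to collating channel-quality reports and another purely to scheduling, with a queue standing in for the on-chip interconnect; it is a conceptual model, not basestation firmware.
```python
# Toy model of the "one processing element per function" idea: one worker
# does nothing but collate channel-quality reports, another does nothing
# but make scheduling decisions, so scheduling never waits behind other
# work. Threads and a queue stand in for the processor array and interconnect.

import queue, threading, random, time

cqi_q = queue.Queue()

def cqi_collator():
    """Dedicated 'processing element' gathering CQI reports and forwarding them."""
    for _ in range(5):
        report = {f"user_{i}": random.randint(1, 30) for i in range(3)}
        cqi_q.put(report)
        time.sleep(0.002)          # one report set per 2 ms TTI
    cqi_q.put(None)                # end of simulation

def scheduler():
    """Dedicated 'processing element' that only makes scheduling decisions."""
    while (report := cqi_q.get()) is not None:
        chosen = max(report, key=report.get)
        print(f"TTI: serve {chosen} (CQI {report[chosen]})")

threading.Thread(target=cqi_collator).start()
scheduler()
```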
Future-Proofing
A flexible, software-based design will be vital for future improvements to the W-CDMA service offering. HSDPA is an unbalanced system, with a maximum of 14 Mbit/s on the downlink and 2 Mbit/s on the uplink, from the terminal to the network. That can be a concern, as TCP can easily be "uplink choked" if acknowledgments are slow, reducing the downlink rate.
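A quick estimate shows the scale of the issue. The sketch below computes the uplink traffic generated purely by TCP acknowledgments when the downlink runs at the HSDPA peak; the segment and ACK sizes are common Internet defaults assumed for illustration.
```python
# Quick estimate of the uplink capacity that TCP acknowledgments alone
# consume when the downlink runs near the HSDPA peak. Segment and ACK
# sizes are typical Internet defaults, not values from the HSDPA spec.

DOWNLINK_BPS = 14_000_000   # HSDPA theoretical peak
SEGMENT_BYTES = 1460        # typical TCP payload per segment
ACK_BYTES = 40              # header-only TCP/IP ACK
ACK_EVERY = 2               # delayed ACK: one ACK per two segments

segments_per_s = DOWNLINK_BPS / (SEGMENT_BYTES * 8)
ack_bps = (segments_per_s / ACK_EVERY) * ACK_BYTES * 8
print(f"{segments_per_s:.0f} segments/s on the downlink")
print(f"ACK traffic on the uplink: {ack_bps / 1000:.0f} kbit/s")
# Modest in raw bandwidth terms, but if the uplink is slow or congested the
# ACK clock stalls and the downlink rate collapses with it.
```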
Release 6 of the 3GPP specification will change that by introducing high-speed uplink packet access (HSUPA). This allows users to take advantage of faster uplinks with lower latency when sending large files or emails. That in turn improves the efficiency of the link, increasing effective throughput, even though the modulation has not changed. Indeed, without the improved efficiency of HSUPA, it is highly likely that HSDPA will be impaired in applications that have more balanced bandwidth needs.
HSUPA puts even more strenuous demands on basestation design: the processing electronics will have to deal with a much more complex decode environment, just as HSDPA demands much more of the terminals in terms of decoding. HSUPA also means moving further control functions from the RNC to the Node B. As with HSDPA, these changes will likely break many installed architectures. Given the speed at which these changes are arriving, a flexible or upgradeable platform is important.
Wrap Up
In summary, HSDPA significantly improves the quality and performance of wireless data for 3G, with a corresponding dramatic impact on the operator's profit. Changes to the modulation, the architecture and the network control algorithms are all required. However, despite some claims, this upgrade is not simple, and many basestations will require extensive new hardware if they are to deliver on the potential. Immediately following HSDPA is its counterpart for the upstream; this too has great advantages but will likely require further hardware changes. Carriers should plan both for the major opportunity and to minimise the disruption.
About the Author
David Maidment is a product manager at picoChip Designs. David can be reached at [email protected].