
Faster 3G Easier Said Than Done: Page 7 of 9

As well as allowing for evolution in scheduler design, it will often be desirable to run different scheduling policies at different times of day, or policies tuned for particular types of location, such as an airport waiting lounge. Testing this requires multiple scenarios to be evaluated under different loading conditions. As a result, architectures that maximise flexibility will be key to efficient HSDPA implementation.
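The policy-switching idea can be sketched as a simple selector. The policy names, hours, and location profiles below are illustrative assumptions, not part of any HSDPA specification:

```python
from datetime import time

# Hypothetical policy identifiers; real basestation software would map
# these to concrete scheduler implementations.
POLICIES = {
    "busy_hour": "proportional_fair",  # balance cell throughput and fairness
    "off_peak": "max_cqi",             # maximise throughput when lightly loaded
    "hotspot": "round_robin",          # e.g. airport lounge: many bursty users
}

def select_policy(now: time, location_profile: str) -> str:
    """Pick a scheduling policy from time of day and cell location profile."""
    if location_profile == "hotspot":
        return POLICIES["hotspot"]
    if time(8, 0) <= now <= time(20, 0):
        return POLICIES["busy_hour"]
    return POLICIES["off_peak"]
```

A flexible architecture would let an operator swap the entries in such a table, or the selection logic itself, without redesigning the basestation.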

Granularity Needed
Processing granularity will be a major consideration in the efficient implementation of an HSDPA-compliant basestation. Systems built around a small number of high-performance DSPs tend to demand large buffers and, to reduce the overhead of switching between tasks, tend to work on large groups of data at a time. The result is "clumpy" processing with high latency. Such a coarse-grained approach to task scheduling is a poor fit for algorithms, such as the HSDPA scheduler itself, that need low latency to work effectively.
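A toy model makes the latency penalty of coarse batching concrete. The batch sizes and the per-item service time below are purely illustrative:

```python
def worst_case_wait_us(batch_size: int, service_time_per_item_us: int) -> int:
    """Worst-case wait before an item is handled when work is batched.

    An item arriving just after a batch has started must wait for the
    whole batch to drain; finer granularity bounds that wait.
    """
    return batch_size * service_time_per_item_us

# Coarse-grained: one big DSP draining a large buffer.
coarse = worst_case_wait_us(batch_size=512, service_time_per_item_us=10)  # 5120 us
# Fine-grained: many small processors, each handling a few items.
fine = worst_case_wait_us(batch_size=8, service_time_per_item_us=10)      # 80 us
```

Even with identical total throughput, the coarse-grained system leaves a freshly arrived channel-quality report waiting far longer before the scheduler can act on it.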

The advanced silicon processes available today make it possible to implement hundreds of processors on a single chip, together with distributed memory blocks and an interconnect structure that efficiently delivers data along the many feedback paths the protocol requires. HSDPA, like earlier versions of W-CDMA, maps well onto parallel-processing architectures, because many different processes need to happen at the same time (Figure 2).


Figure 2: Diagram showing an HSDPA implementation using a set of array processors.

Fine-grained control will be necessary to implement features such as fast scheduling and per-user adaptation of coding and modulation. With a large number of processing elements, processing and buffer resources can be dedicated almost on a per-user or per-function basis. For example, one processor may collate channel-quality information for a neighbouring processor that runs nothing but an advanced scheduling algorithm, so that scheduling decisions can be made continuously. This yields much lower latencies than a system in which scheduling shares a general-purpose processor or DSP with other tasks.
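The dedicated-scheduler arrangement can be pictured as a two-stage pipeline. In this sketch, Python threads stand in for processing elements; the CQI feed and the max-C/I-style decision rule are illustrative assumptions rather than details from the article:

```python
import queue
import threading

# One "processor" collates per-user channel-quality (CQI) reports and
# feeds a second "processor" dedicated solely to scheduling decisions.
reports: queue.Queue = queue.Queue()

def collator(raw_feed):
    """Gather raw CQI reports and forward them to the scheduler."""
    for user, cqi in raw_feed:
        reports.put((user, cqi))
    reports.put(None)  # sentinel: feed finished

def scheduler(decisions):
    """Dedicated scheduler: always ready, serves the best-channel user."""
    latest_cqi = {}
    while True:
        item = reports.get()
        if item is None:
            break
        user, cqi = item
        latest_cqi[user] = cqi
        # Max-C/I-style decision: pick the user with the best channel now.
        decisions.append(max(latest_cqi, key=latest_cqi.get))

decisions = []
feed = [("a", 10), ("b", 25), ("c", 5)]
t1 = threading.Thread(target=collator, args=(feed,))
t2 = threading.Thread(target=scheduler, args=(decisions,))
t1.start(); t2.start(); t1.join(); t2.join()
# decisions -> ["a", "b", "b"]
```

Because the scheduler thread does nothing but wait on fresh reports and decide, each decision follows its triggering report almost immediately, which is the property the fine-grained hardware partitioning is after.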