The Latest In Developer Resource Blades

Denser, faster, more interfaces, hotter backplanes and chassis, better middleware and management-ware, host-based options - and better standards, everywhere - make it easier than ever to build feature-rich, scalable, high-availability systems

March 16, 2004

29 Min Read
Network Computing logo

High-density/high-availability (HD/HA) - especially for telecom - used to be a black art: equal parts application, operating system adjunct, hardware, and system integration engineering. At the deepest levels this is still true. Those with doubts (and everyone else) should visit Elma Bustronic's "community" website.

Of special interest is a new article titled "Designing a Backplane-Based Switched Fabric System," by Bustronic engineer Melissa Heckman and VP engineering Ram Rajan, which outlines in remarkably few words some of the constraints that bear on the new-product design process, and a few optimization strategies employed by elite engineers in overcoming them.

Heckman and Rajan's article makes it abundantly clear that "black art" still rules in the empyrean reaches of high-performance/high-availability system engineering, where every aspect of a finished system lies in the province of a specialist (EEs, MEs, TEs, etc.), but where interactions between domains (some previsible only through simulation) determine the viability and cost of a design.

At the application engineering level, however (and to an increasing extent, in the design of finished modular products), the industry has worked hard over the past few years to demystify the picture - creating backplanes, chassis, housings, resource components, middleware, OS adjuncts, and management systems that interoperate in standard ways, and offer predictable performance and cost at different scales.

The goal of this engineering effort, says Wendy Vittori, who became VP/GM of Motorola Computer Group early this year, is "to create an environment wherein application engineers can, with increasing confidence, assemble platforms from a growing range of off-the-shelf hardware and software solutions - secure in the knowledge that their needs for density, bus types, DSP and CPU horsepower and facilities, operating system type and package characteristics, power and cooling, form-factor constraints, connector types, cable routing, maintainability, availability and security have all been accounted for, and that the systems they're creating are, in a sense, pre-certified for standards-compliance, operating characteristics, and cost at various scales and volumes. Basically, we want our customers to be able to 'dial in' the platforms they need."

The mechanisms are complex, but the goals - at least in their expression - are relatively simple. Telecom apps are supposed to get to market fast. They're supposed to be highly available and robust. They're supposed to be upgradeable, scalable, centrally manageable, and affordably maintainable. And they're supposed to be cheap enough not to break the deployment business model.

The Devil, of course, is in the details.


Ten years ago, at the beginning of the "computer telephony" revolution, the dominant mental model was of a single-chassis system: a PC stuffed with ISA- or EISA-compliant line interface and DSP boards. Boards in such a first-generation "voice response unit" interoperated at the clock speed of the system bus, generating interrupts for CPU service and requests for direct memory access.

This worked fine for low-traffic applications like four- to twelve-port voicemail. But performance began to flag as line-counts climbed and digital (T1/E1) line interfaces entered the picture. Components of the system were caught in lockstep, too dependent on CPU service. Horsepower was wasted by the bus's sharing scheme, which permitted only one device at a time to communicate. Functionality residing on individual cards was unavailable to other resources in the system.

This vastly complicated software engineering. APIs were thin, and programmers were forced to compose much of their own middleware for interrupt service, sometimes dropping into assembly language to ensure that service was rapid enough to avoid interrupt pileups and re-entrancy problems with the operating system.

As multiprocess/multithreaded OSs (e.g., Windows NT) came online, these asynchronous architectures had to be adapted to the imperious realtime demands of the hardware - another big project. Direct access to signal-processing features too, while arguably necessary, added yet another level of complexity, both for application creators and integrator/tweakers.

The industry attacked these problems simultaneously, on several fronts. The advent of telephony buses - PEB, MVIP, SCSA - implemented as a continuous series of ribbon-cable connections across the top of xISA cards, plus switching logic and firmware on the cards themselves - provided the first glimmerings of a solution to single-chassis performance issues; as well as the means (at least in principle) for extending the functionality of an application to multiple chassis, increasing line-counts and enabling new ways to package and scale telephony systems.

Telephony buses, clocked independent of the system bus and providing several thousand PCM timeslots of capacity, provided a separate set of pathways for switching isochronous data streams (usually PCM) among multiple devices - avoiding contention with basic system housekeeping and significantly increasing overall throughput.
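To put that capacity claim in concrete terms, here's a back-of-the-envelope sketch using H.100-style figures (32 serial streams of 128 timeslots each, 8-bit PCM at 8 kHz - representative numbers for illustration, not quoted from the article):

```python
# Back-of-the-envelope capacity of an H.100-style TDM telephony bus.
# Figures are representative, not taken from any one spec sheet.
STREAMS = 32                # serial data lines running across the bus
TIMESLOTS_PER_STREAM = 128  # timeslots multiplexed onto each stream
SAMPLE_RATE_HZ = 8000       # PCM frame rate
BITS_PER_SAMPLE = 8         # G.711 companded PCM

total_slots = STREAMS * TIMESLOTS_PER_STREAM
slot_bitrate = SAMPLE_RATE_HZ * BITS_PER_SAMPLE   # per-channel bit rate
bus_bitrate = total_slots * slot_bitrate          # aggregate payload

print(total_slots)   # 4096 timeslots
print(slot_bitrate)  # 64000 bps per channel
print(bus_bitrate)   # 262144000 bps (~262 Mbps of PCM payload)
```

The point of the exercise: even a mid-1990s telephony bus moved orders of magnitude more isochronous traffic than the shared system bus could have handled alongside its housekeeping duties.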

The appearance of telephony buses worked a basic philosophical change in the economics and engineering of computer telephony systems. Availability of a systemwide "real-time resource switching and pipelining" facility encouraged engineers to begin thinking of computer telephony systems as "abstract pools of resources" that could be connected and released as needed, rather than discrete islands of functionality.

This powerful and sweeping notion previsions today's "exploded" view of the converged enhanced service platform, where a "softswitching" facility connects resources encapsulated (and individually scaled as application requirements warrant) within discrete gateway, proxy, media and application servers.

An entire generation of middleware - most notably Dialogic's SCSA and CT Media initiatives (now further evolved by Intel, and referred to as Converged Communications Software) - was created to support resource abstraction, and the companion vision of application resource sharing: the idea that a single "CT server" could house multiple applications which would interoperate and exploit one another's specific functionalities.

Meanwhile, the ability to abstract and interconnect dispersed telephony resources encouraged development of a first generation of single-purpose, high-density boards for multi-T1/E1 interfacing and DSP processing.

Availability of such boards permitted the reasonably cost-effective integration of platforms to support applications with widely varying critical scale factors - for example, a CT-based PBX scaled to a traditional 1 trunk:4 stations rule of thumb, as opposed to a CT-based ACD with closer to 1:1 trunk/station scaling.
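The sizing arithmetic behind those rules of thumb is easy to sketch (station counts here are illustrative, not from the article):

```python
import math

def trunks_needed(stations, trunks_per_station):
    """Round up trunk count for a given trunk:station ratio (rule of thumb only)."""
    return math.ceil(stations * trunks_per_station)

# PBX rule of thumb: 1 trunk per 4 stations; an ACD runs closer to 1:1.
print(trunks_needed(96, 1 / 4))  # 24 trunks for a 96-station PBX
print(trunks_needed(96, 1.0))    # 96 trunks for a 96-agent ACD
```

The same chassis and middleware could serve both builds; only the mix of line-interface and DSP boards changed - which is exactly what made high-density single-purpose boards economically interesting.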

Both initiatives - toward better buses and higher levels of resource abstraction, on the one hand, and toward more comprehensive functionality and higher density, on the other - play important roles in defining today's state of the art.

NEXT GENERATION BUSES

As computer telephony purveyors began going after bigger game - large-enterprise and telco-grade applications - weaknesses of the original PEB/MVIP/SCSA vision became more apparent. Not only did the industry need to agree on a common bus standard; it also needed a standard resource-sharing architecture for mission-critical enterprise and telco-grade applications.

As industry manufacturers finally standardized on H.100, a single telephony bus for the PCI platform, a similar collaboration led by Performance Technologies and PICMG (the PCI Industrial Computer Manufacturers Group) began to develop CompactPCI and the H.110 bus - an enhanced architecture capable of supporting hot-swap and other key features needed for mission-critical enterprise and carrier communications systems. CPCI is now the dominant overall architecture for conventionally integrated HD/HA communications platforms, and for many other mission-critical apps as well.

Initially, of course, CPCI was just a hardware/firmware specification - promising more in principle than in actuality. Making CPCI dominant has been the result of several years of effort on the part of server, backplane, single-board-computer (SBC), and system manufacturers to master and standardize approaches to making CPCI's intrinsic capabilities useful - but not burdensome - to application developers.

Companies like Performance Technologies, Elma/Bustronic, Carlo Gavazzi and others have developed comprehensive libraries of backplane designs, enclosures, single-board computers, connector schemes and cooling subsystems, plus a host of monitoring, checkpointing, redundant-failover and other software technologies - all aimed at reducing the impact of failures on overall system availability, and at the gradual elimination of single points of failure within a chassis or an extended platform.

Their work is complemented by that of software makers such as GoAhead Software, who specialize in creating "abstraction layers" facilitating access to system-level HA functionality by application programmers who aren't HA specialists.

By the time of CPCI's emergence, it was already obvious that the "telephony bus" paradigm of switching PCM timeslots around a system was backward-looking.

IP telephony was already demonstrating that it was possible to build "telephony" systems whose connections to the PSTN - and timeslot-based digital voice communications - were entirely mediated through a gateway device, which encapsulated all "portwise" synchronous protocol and signal-processing functions. Remaining functionality - switching, content signal-processing, other application services - could be provided asynchronously, using the IP network as a generic transport and switching mechanism.

Even within the "gateway device" - an isolated and increasingly generic network component - it was apparent that asynchronous internal resource-to-resource and resource-to-CPU communications would bring significant benefits: easier, more flexible scaling of resources to actual traffic requirements, and greater overall system scale.

In the end, the idea of "packet backplanes" seems to have emerged as a fused vision: the idea of "putting the Internet into the machine," coupled to the more hardware-bound vision governing the emergence of CPCI and the H.110 bus.

Over the past several years, a range of competing packet backplane solutions have been proposed - ranging across the CPCI framework, as well as competing high-availability bus standards such as VME. All these solutions share the notion of running an asynchronous, switched network across the backplane of a chassis, and extending this network via optical or copper cabling to other chassis, creating a large virtual machine in which all components (or at least all functionality subsystems) can be, to some extent, mutually aware.

Getting five-nines or higher availability out of such a system, of course, depends on how its hardware is designed, and on how the firmware and software running on the hardware works - and of course, on how much money is available.
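For a sense of what "five-nines" actually buys, and what simple redundancy does for it, a quick sketch (assuming a 365-day year and statistically independent failures - a simplification that real HA engineering can't afford):

```python
def downtime_minutes_per_year(availability):
    """Annual downtime implied by an availability figure (365-day year)."""
    return (1.0 - availability) * 365 * 24 * 60

def parallel_availability(a, n=2):
    """Availability of n redundant units in parallel, assuming independent failures."""
    return 1.0 - (1.0 - a) ** n

print(round(downtime_minutes_per_year(0.99999), 2))  # 5.26 -- "five nines" is ~5 min/year
print(round(downtime_minutes_per_year(0.999), 1))    # 525.6 -- "three nines" is ~8.8 h/year
print(parallel_availability(0.999))                  # two 3-nines units approach six nines
```

The arithmetic explains why eliminating single points of failure is worth so much money: duplicating a merely good component buys more nines than perfecting it.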

Most of today's competing packet backplane standards can be deployed several ways, depending on how resources are to be arranged in a finished product. At present, a "CPU or dual-CPUs, plus resource boards in a chassis" model still dominates; as a result, state-of-the-current-art packet-backplane systems are more and more being designed in star topologies.

The star topology - or its extension, the "dual star" - reflects and improves upon a generation of thinking about redundant, split-backplane system designs. There's general agreement, however, that the best (if also the most expensive, short-term) solution for system throughput and reliability will be a full-mesh architecture, where every component has an uncontended path to every other component in the system.
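The cost/throughput trade-off between these topologies is easy to quantify in raw link counts - a sketch, using a hypothetical 14-slot shelf as the example:

```python
# Backplane link counts for the topologies discussed above.
def star_links(n):
    """Single star: every slot has one path to the hub slot."""
    return n - 1

def dual_star_links(n):
    """Dual star: each slot is wired to both hub slots."""
    return 2 * (n - 1)

def mesh_links(n):
    """Full mesh: an uncontended point-to-point path between every pair of slots."""
    return n * (n - 1) // 2

n = 14  # e.g., a 14-slot shelf
print(star_links(n), dual_star_links(n), mesh_links(n))  # 13 26 91
```

A full mesh at 14 slots needs 91 point-to-point channels against a single star's 13 - which is why mesh backplanes cost more in connectors, traces and switch silicon, and why they also eliminate both the hub bottleneck and the hub as a failure point.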

The "backplane problem," of course, is just one aspect of building viable, fast-to-market, robust, cheap (the euphemism is "cost effective") HA systems for the most demanding real-time communications apps.

Another aspect is that pesky, and eternally ongoing, dialogue about how to encapsulate and combine functionality within discrete components. How to provide hardware that - alternatively - can leverage practical allegiances between discrete functions (for example, the natural collaboration of portwise digital line interfaces with DSP for call progress analysis and tone generation), or serve the needs of special applications for particular kinds of function concentrations (e.g., the need of a speech-rec or high-density gateway application for "a big pool of DSPs") - all the while keeping things simple both for hardware designers and application developers.

The industry's response to these issues - and to issues limiting throughput on pure CPCI systems to around 1.3 Gbps - has been the creation of AdvancedTCA (ATCA), the Advanced Telecom Computing Architecture, standardized as PICMG 3.0 (with subsidiary specs - 3.1, 3.2, etc. - mapping the base architecture onto different packet backplane fabrics: Ethernet, InfiniBand, StarFabric, PCI Express and RapidIO).

ATCA defines a new kind of resource board with an 8U x 280-mm form factor and 1.2-inch slot spacing, providing lots of 2D real estate, plus inter-slot headroom for "tall" CPU/heatsink and CPU/fan/refrigeration components. It uses a ZD backplane connector capable of 5 Gbit/s throughput - fast enough to handle the demands of present-generation Gigabit Ethernet packet backplane designs, as well as higher-speed architectures such as PCI Express, Intel and the PCI SIG's 2.5-Gbps-per-lane PCI-replacement initiative.

The standard requires dual-redundant -48VDC power, and cooling capacity of 200W per board. System management and other features are also closely specified.

Over 100 companies helped develop the ATCA specification - none more important, perhaps, than Intel, who sees the spec as crucial to the design of future generations of communications servers based on PCI Express.
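At the nominal -48VDC feed, that 200W budget implies per-board currents of a few amps - a trivial but useful sanity check for shelf power planning (nominal voltage only; the spec actually tolerates a range of input voltages):

```python
# Sanity check: per-board feed current at ATCA's nominal -48 VDC supply.
POWER_W = 200.0    # cooling/power budget per board, per the spec
NOMINAL_V = 48.0   # nominal feed voltage (magnitude); real feeds vary over a range

amps_per_board = POWER_W / NOMINAL_V
print(round(amps_per_board, 2))  # ~4.17 A per board at nominal voltage
```

Multiply by slot count (and by two for the redundant feed) and you have the rough current a 14-slot shelf's power distribution must be sized for.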

Vendors like Bustronic, Elma Electronic and Carlo Gavazzi have already developed 12U, 14-slot Dual Star (two hub switches running the fabric) or Mesh (each slot acts as a hub slot) backplanes. Elma has also recently announced 4U horizontal ATCA chassis, with five-slot Mesh backplanes. Development units in 2U height with three-slot backplanes should hit the market soon.


Certain kinds of converged communications apps bind signal and packet/protocol processing naturally to physical ports. For example, an analog IP gateway app needs to convert POTS voice to G.72X-compressed data - bogging down at least part of a DSP core.

In other cases, there may be higher-order reasons to couple signal processing more or less closely to the port - e.g., a call center app that wants to keep people on hold at the gateway and play them periodic messages, without requiring network-side, centralized media processing resources to play a role in this trivial transaction.

Lots of other apps, meanwhile, endure no such constraints. "Name dialing" and dialogue-based speech recognition, for example, are handily done via centralized media processing on the IP side. Indeed, the fast-growing popularity of IP telephony points the way to a relatively near future in which gateway-style inline call-progress-and-core-audio-related signal processing is entirely decoupled from applications - where most "telephony" apps live entirely on the IP side of the universe, do nothing but (asynchronously) process and generate packet streams via RTP, and do no low- or network-level signal processing at all.
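A sense of how little "telephony" remains in such an app: it mostly reads and writes RTP packets. Here's a minimal sketch of parsing the fixed 12-byte RTP header defined by RFC 3550 - illustrative only, ignoring CSRC lists and header extensions:

```python
import struct

def parse_rtp_header(packet):
    """Parse the fixed 12-byte RTP header (RFC 3550).
    Minimal sketch: ignores the CSRC list and header extensions."""
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,             # always 2 for RTP
        "marker": (b1 >> 7) & 1,
        "payload_type": b1 & 0x7F,      # 0 = G.711 mu-law PCM
        "sequence": seq,
        "timestamp": ts,
        "ssrc": ssrc,
    }

# A hand-built header: version 2, PT 0 (G.711), seq 1, timestamp 160, SSRC 0x1234
hdr = struct.pack("!BBHII", 0x80, 0x00, 1, 160, 0x1234)
h = parse_rtp_header(hdr)
print(h["version"], h["payload_type"], h["sequence"])  # 2 0 1
```

Everything a pure-IP "telephony" app needs - codec identity, ordering, timing - rides in those few fields; the PSTN's synchronous machinery never appears.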

The result has been a sudden explosion of interest in host-based media processing - use of the present generation of really fast CPUs (and future generations of really, really fast CPUs) to do content-based signal processing tasks - either on a client/server basis (coarse asynchronicity) as in name-dialing, or on a much more granular basis, as required by audioconferencing.

The progress curve, here, is dramatic. Just five or six years ago, building a 64-line PC-based conference server required a significant investment in more or less task-specialized DSP-based hardware - bringing with it issues of cost, reliability, integratability and maintainability.

Now you can do the same app on a stock, high-speed Pentium 4 (or in fact, Celeron or PIII) server using Intel's Host Media Processing platform v.1.0, which presents (in programmer-friendly fashion) as a DM3 board, and works in a programming environment that encapsulates loads of functionality (up to and including procedures for conference management). The recent 1.1 release of HMP doubles capacity to 120 ports in a dual-processor server, and adds support for T.38 fax, speech, and G.XXX transcoding.

BLADE SERVERS

Meanwhile, server clustering and other OS/software-based high-availability strategies have become an increasing focus of Unix and Linux system development, with Windows not far behind. For all but the most mission-critical applications (this would include most conventional Internet apps), software/OS-based server clustering and redundancy/fallback provides more than adequate availability for large-enterprise and even service provider needs.

The growing importance of server clustering and software high availability, of the "decomposed" vision for communications apps, and the emergence of host media processing, have all conspired to supercharge the development of true "blade servers" for the telecom marketplace. In recent months, Intel, IBM, HP, Sun and other vendors have fielded significant new products in this space, or "hard announced" initiatives this year.

A blade server is a chassis designed to support (house, power, cool, network, monitor and control) large numbers of discrete single board computers, plus monitoring equipment, power and cooling, mass storage arrays and other adjuncts - doing so in far less space, with less power dissipation, less need for cooling, and easier maintainability than typical rackmount "pizza box" servers.

Most current architectures put Gigabit Ethernet across the backplane in a star configuration, and include a high-speed switch for uncontended communications. All feature quick power-down and hot-swap of blade components.

The most impressive are NEBS- and/or ETSI-compliant for telco central office use. Some systems, like those made by Cubix, are radically scalable: you can buy a Cubix development platform with a single SBC for scarcely more than you'd pay for a standard server, develop your app against a perfect mimesis of Cubix' management and monitoring framework - then swap that SBC (populated with your application) into a large-scale chassis with dozens of peers, assured of absolute hardware/software compatibility between development and deployment platforms.

THE BOTTOM LINE

If you're developing telephony applications for large enterprise to carrier environments, you have more viable choices today than ever before. More options for developing in congenial environments, leveraging existing skills and avoiding the need to master arcana.

More options to exploit the engineering intelligence implicit both in standards and in individual vendor value-adds. More options for linking yourself to development paths that will, over time, offer ever-greater help in meeting time-to-market, reliability, scalability and TCO goals.


What's the next hurdle manufacturers need to overcome so that you can deploy carrier-grade applications? Here's your chance to let them know. Email [email protected] and give us your wish list. We'll publish the developers' "Carrier-Grade Wish List" in our next issue and online. Submissions will be kept anonymous if you prefer.

ESP SNAPSHOT

A fast roundup of products that underline some of the key factors developers need to consider when making basic platform/strategy decisions.


AUDIOCODES

AudioCodes ( - 408-577-0488) makes a host of solutions for telecom developers. Among their latest are a series of voice-over-packet media gateway modules: PMC-sized boards that provide the primary building blocks for the development of next-generation equipment such as media gateways, VoIP-enabled class 4/5 switches, VoIP-enabled conferencing bridges, IP PBXs, and VoIP-enabled routers.

The modules use AudioCodes' award-winning and field-proven TrunkPack software. AudioCodes Media Gateway Modules range from low to very high density. Of particular interest is the TPM-800 - a CPCI-based MGM card that can support up to 240 channels of voice or fax with a 30msec echo tail.

AudioCodes also makes the impressive CPCI-based IPM1610, a voice-over-packet media processing card supporting 16 T1/E1s, and a host of other products for wireline and wireless markets.

BROOKTROUT

Brooktrout Technology ( - 781-449-4100) recently introduced their TR2020 PCI platform for Voice Over Packet Gateways and Enhanced Services.

The board performs a number of key functions including voice compression and interactive voice response (IVR), as well as fax and data relay. The TR2020 incorporates a unique voice quality monitoring agent, Telchemy's VQmon (a Communications Convergence Editor's Choice and 2002 Product of the Year winner). The VQmon system, adopted by ETSI, reports user-perceived voice quality in real-time, for every call placed through a gateway. VQmon-based reporting can be used to dynamically manage the network, and to support service level agreements (SLAs).
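VQmon itself is proprietary, but tools of this kind are generally built on the ITU-T G.107 "E-model," whose standard mapping from the transmission rating factor R to an estimated Mean Opinion Score (MOS) is simple to sketch:

```python
def r_to_mos(r):
    """ITU-T G.107 E-model mapping from rating factor R to estimated MOS.
    (VQmon's internals are proprietary; this is just the standard E-model curve.)"""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

# R around 93 is roughly what clean G.711 over a well-behaved network achieves.
print(round(r_to_mos(93.2), 2))  # ~4.41, the familiar "toll quality" ceiling
print(round(r_to_mos(70), 2))    # a degraded call, dipping toward "many users dissatisfied"
```

Impairments (codec distortion, delay, packet loss) subtract from R call by call; reporting the resulting MOS in real time is what makes such agents useful for SLA enforcement.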

Features available on the TR2020 family include onboard packetization, onboard telephony (T1/E1 ISDN, T1 robbed bit signaling and MFC-R2), echo cancellation, approvable NEBS compliance and a growing list of U.S. and international telephony approvals.

Brooktrout's Application Programming Interfaces for the TR2020 products provide high- and low-level programming interfaces to developers for a suite of functional components. These include: Voice Transcoding and Media Stream Control, Voice and Media Packetization, WAN Access and Call Control, Operations, Administration and Maintenance (OA&M), Interactive Voice Response and Call Progress.

The TR2020 supports all major operating systems used in carrier-grade applications: Linux, Solaris, UnixWare and Windows.

COMMETREX

As far as we know, Commetrex's ( - 770-449-7775) BladeWare is the only shipping host-signal-processing product that's licensed to OEMs as a vendor- and resource-independent software platform. It's also the only one with a field-proven terminating T.38 capability.

Yep: a media-server OEM that's shipping an incredibly expensive DSP-based product can license BladeWare, use Commetrex's Multi-Modal Terminating Fax (MMTF) and terminating voice, and add its own or third-party media technologies, such as audio conferencing and ASR. The resulting BladeWare-based media server can be indistinguishable from the DSP-based version, making the marketing issues much more manageable.


CUBIX

Cubix makes the BladeStation family, a sixth-generation blade server architecture supporting single or dual Pentium 4 Xeon processing power in densely packed arrangements, with remote management and configuration flexibility. Complementing BladeStation systems are BladePoint systems, which offer the same Pentium 4 Xeon blade electronics in rackmount standalone server (or appliance) packaging.

A true "blade server," a single BladeStation chassis can support up to seven processor blades, each with its own set of PCI adapters for peripherals. Blades are front-extracting, as are PCI adapters and power supplies.

BladeStation has support for one to four standard full-length PCI-X/ PCI adapters (64/32 bit) per blade. The blades and PCI adapters are vertically positioned next to each other, and are front-extractable as a set for easy serviceability. You can add any kind of PCI hardware to a BladeStation blade, enabling easy integration of Fibre Channel, RAID, T1, fax, video encoder/decoder adapters or any manner of standard PCI adapters to each blade server as needed - even legacy 32-bit, 5Volt adapters.

BladeStation also lets you integrate RAID1 hot pluggable mirrored drives (two) and/or RAID5 hot pluggable drives (four) per blade server within the enclosure. Internal power within a BladeStation is -48VDC - if this power can be supplied externally (as is typically possible in a telco central office or other high-density environment), you can configure your BladeStation without internal power supplies, increasing internal real estate, eliminating points of failure, and significantly reducing cooling requirements.

Cubix also provides KVM (Keyboard/Video/Mouse) switching for at-point management, and has a remote-management Web client that lets you control a whole server installation. Significant autonomic management software is included, as well.

DIVERSIFIED TECHNOLOGY

Diversified Technology makes a broad range of industrial computers and blades, including a full line of PICMG 2.16-compliant chassis, SBCs and adjunct devices. Called PlexSys, the product family comprises 12U, 8U and 4U (horizontal) chassis. The 12U unit offers 20 node slots plus redundant fabric slots, a dual-split backplane with up to four SBCs in active/active and active/standby configurations, and IPMI-based star-topology shelf management (it's significantly feature-equivalent to the Performance Technologies 5085).

Complementing the chassis are a range of Pentium-based SBCs, including the PICMG 2.16-compliant cPB-4305, based on the Mobile Pentium 4. The cPB-4305 comes with a low-power 1.2GHz Mobile Intel Pentium 4 Processor-M with 512KB L2 cache, an 845E chipset with 400MHz front-side bus, and 256MB, 512MB, 1GB or 2GB ECC DDR200/266 SDRAM memory configurations (2GB achieved with 512MB addressing).

Onboard are two 10/100/1000Mbits/sec auto-negotiating Ethernet controllers, an Ultra DMA/100 IDE 2.5" hard drive or 32-bit/33MHz PMC site, PS/2 keyboard and mouse interfaces, Universal Serial Bus (USB) and serial ports, a floppy interface and an SVGA CRT controller. The 16MB of StrataFlash supports the field-upgradeable BIOS, and may store a bootable operating system image; an optional IDE-compatible CompactFlash carrier card is available in lieu of the hard drive.

The board, optimized for the Ethernet Packet Switched Backplane (PSB) architecture, forsakes a CompactPCI bridge to reduce cost and may be used as a system master or peripheral processor blade in 2.16-compliant platforms. The cPB-4305 will not run with or interconnect with other devices on the CompactPCI bus; it will not terminate the PCI bus.

Low in cost, Diversified Technology's cPB-4305 system/peripheral processor blade is an economical choice for application developers addressing high-performance, converging telecom and Internet IP-based markets such as VPN switches, media gateways, 2.5G/3G wireless, server clusters, IP DSLAMs and voice/video/data servers.

EICON NETWORKS

Eicon Networks ( - 800-80-EICON) makes the Diva Server line - a range of Windows-compatible, plug-and-play DSP/line interface "universal port" adapter cards for ISDN BRI, PRI and T1/E1.

Onboard DSPs, firmware and available software libraries provide fax and fax-related host-based media processing (e.g., file conversion), soft modem up to V.90 (56kbps), and voice transcoding, including GSM - any port, any function: ideal for multipurpose communication servers, messaging, remote access and other apps at enterprise scales.

Up to four Diva T1 adapters can be installed in a single chassis. SDK options include TAPI, an ActiveX control plane, C/C++ libraries with extensive source code included, and a CAPI 2.0-compliant API.

FORCE COMPUTERS

As a demonstration of ATCA's standards-based, multivendor, best-of-breed approach to platform integration, Force Computers, collaborating with Intel, recently demonstrated a four-SBC ATCA system with multiple redundant internals at Telecom World Geneva.

Built on Force's Centellis DS 31KX development system, the demo configuration incorporates a 13U/14-slot ATCA chassis with cooling; a Shelf Management Controller with SNMP remote access; and a redundant-fabric Ethernet switch. Two of the SBCs are by Force - ATCA-710 Mobile Intel Pentium 4 Processor-M blades running MontaVista Linux Carrier Grade Edition 3.0. Additional Force components include dual Force Fibre Channel ATCA storage blades, for server applications.

Intel equipment occupied a second shelf: two Intel NetStructure MPCBL0001 High Performance Single Board Computers, also running MontaVista Linux; and an Intel IXMB2401 Single Network Processor Base Card (ATCA blade) with Intel IXA application software for DSLAM - simulating streaming video via DSL.

Controlling system high availability was GoAhead Software's SelfReliant 7500 HA middleware - adapted and tightly integrated into Force's EndurX HA product series. The middleware permitted configuring the system's four processors as active and standby master and active and standby client - demonstrating application failover.

Mass storage was provided through Solid Technologies' Autonomic Data Management Platform - also integrated with Force's EndurX HA series and GoAhead SelfReliant - providing a hot-standby database configuration for five-nines controller-card data reliability.

Component management within the system was enabled by UXComm's Network Element Management Solution - based on XTend Management, a unified, adaptive management fabric for the integrated monitoring and control of modular computing architectures - and by distributed MTP3, part of Force's StackWare telecom protocol software, with DS1 interfaces and MTP2 layers provided by PMC modules and the distributed MTP3 implementation running on the Linux hosts.


GL COMMUNICATIONS

GL Communications makes a wide range of test equipment useful to developers and integrators of enterprise- and carrier-scale telecom and VoIP applications. Their new OC-3/STM-1 Ultra Card is a PCI card that can be used for analyzing, testing, simulating and monitoring OC-3 and STS-1 signals.

The Ultra OC-3/STM-1 card may be used in conjunction with GL's Ultra T1 Card, Ultra E1 Card and Ultra T3 Card in the same PC to provide a complete OC-3/STM-1, STS-1/STM-0, DS3, DS1, E1 and DS0 testing solution. GL has a full range of software applications, APIs and drivers that let you roll up exactly the testing facilities you need, under whatever user interface you feel like writing.

GL also offers a brace of turnkey test systems and software, including bulk call generators, programmatic call generators, IP voice quality evaluators and laptop-based protocol test/analyzers, as well as peripheral equipment such as multi-T1/E1 repeater boxes (good for breaking off a test signal from a device presently in operation).

IBM

IBM announced a new IBM eServer BladeCenter T blade server, for early release this year. The eServer BladeCenter T system will extend IBM's existing collaboration with Intel around the IBM eServer BladeCenter product family.

The entry of this new class of blade servers, coupled with the IBM Service Provider Delivery Environment (SPDE) framework, further reinforces IBM's commitment to open industry and IT standards. SPDE gives wireline and wireless telecommunications service providers the flexibility to introduce new, revenue generating voice, text and Internet services to their customers faster, easier and at a lower cost.

The BladeCenter T systems will be both NEBS 3 and ETSI certified.

INTEL

Intel, of course, is a key player in most aspects of HA computing, particularly in the ATCA initiative (see the writeup of Force Computers for news of an ATCA demo with Intel at Telecom World Geneva). They make a full range of SBCs, backplane network switches and other componentry.

Their Dialogic business unit (of course) also makes every conceivable variation on the theme of "telecom boards," from low-density multifunction and station-interface cards to the highest-density CPCI and H.110 products (Intel's site has a chart with links to every Intel/Dialogic telecom product).

At the moment, Intel's attention seems drawn at least four ways as regards telecom/carrier markets. The evolution of ATCA is clearly a big deal, because it promises to be the definitive route for getting Intel hardware into carrier COs and POPs - answering carriers' needs for high availability, cheap maintainability, ease of configuration and reconfiguration, and high throughput.

Mobility is clearly another big thrust. Intel's Mobile processors with XScale technology are optimized for full-speed operation and low-power dissipation - processors in this series are applicable across the board, from cell phones, PDAs and laptops (where low power equals long battery life) to high-density SBCs and multiprocessor "CPU farm" implementations for network and host media processing (where low power equals lower cooling requirements and better runtime on backup power supplies).

Host media processing - where CPUs, instead of DSPs, bear the brunt of the workload - is another area Intel is moving fast to capitalize on. As noted elsewhere in this feature, they offer a complete host media processing system that presents exactly like an Intel/Dialogic DM3-series board.

SS7 carrier-network signaling is another of Intel's current thrusts. They've recently introduced the NetStructure SS7HDP board, XScale-enabled, which supports 64 SS7 links on four T1/E1 interfaces - for demanding mobile and intelligent-networking applications. The initial release is available in PCI form factor for Linux, and is compatible with Intel's existing SS7 protocol stack products (MTP, ISUP, TUP, SCCP, TCAP, MAP, IS41, INAP).
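The quoted density works out cleanly against standard T1/E1 parameters. A quick sanity check - using textbook timeslot counts, not vendor specifications:

```python
# Back-of-the-envelope check of the quoted density: 64 SS7 links on
# four T1/E1 spans. Each SS7 link occupies one 64 kbps DS0 timeslot;
# a T1 carries 24 timeslots, an E1 carries 31 usable (32 minus the
# framing slot). Standard T1/E1 figures, not vendor specifications.

SPANS = 4
LINKS = 64
T1_TIMESLOTS = 24
E1_USABLE_TIMESLOTS = 31
DS0_KBPS = 64

links_per_span = LINKS // SPANS
print(links_per_span)                        # 16 links per span
print(links_per_span <= T1_TIMESLOTS)        # fits in a T1: True
print(links_per_span <= E1_USABLE_TIMESLOTS) # fits in an E1: True

# Aggregate signaling bandwidth at 64 kbps per link:
print(LINKS * DS0_KBPS, "kbps")              # 4096 kbps
```

Sixteen links per span leaves two-thirds of each T1 free for bearer traffic, which is why a four-span card can plausibly serve as a dedicated signaling front end.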


Under the leadership of VP/GM Wendy Vittori, Motorola Computer Group has embarked on a program of internal rationalization and standardization of its embedded computing product lines, and has a newly clarified mandate to assemble complete hardware/software/OS platforms that meet customer needs for high availability, scalability and cost. MCG microprocessors, subsystems, component boards, chassis and firmware architectures play in every communications sub-market, including military, aerospace and mobility.

They've recently integrated media processing into their multi-service MXP platform, by means of a new ComStruct WTRB500 media processing board. Thus equipped with packet and media processing capabilities, the MXP platform can accelerate the rollout of services such as push-to-talk on cellular networks, mobile video messaging, video-on-demand, lawful intercept and conferencing.

The ComStruct WTRB500 is architected to operate as an autonomous payload board, reducing overall system cost while improving availability and performance. The board tightly couples Motorola PowerPC control processing, DSPs and network processing with a range of industry-standard interfaces. It is supported by FACT-OR, Motorola's open software environment, which abstracts the underlying hardware away from the developer and lets equipment manufacturers differentiate their applications.

The MXP multi-service packet transport platform enables Motorola's customers to simplify and converge their applications onto a single base platform. It features integrated combinations of general-purpose processors, network processors, digital signal processors, redundant IP switches and storage elements, supported by powerful high-availability and telecom-framework middleware. This lets the MXP platform address many different applications with a common development and deployment solution.

NMS - like other key market players - heads off in many different directions, but its current interests clearly lie in the most challenging mobile applications, particularly personal video over 3G.

A pair of fascinating papers by NMS' Brough Turner and Andrea Basso, recently posted on the company's site, detail some of the benefits new 3G standards bring to the table, some of the obstacles to full implementation by carriers, and suggested solutions.


Performance Technologies was an early champion of CompactPCI, and a key author of the PICMG 2.16 specification. And they have a range of products in this space - from straight CPCI to advanced packet-backplane managed systems - that's hard to beat for quality and innovation.

PT's products - particularly their top-of-the-line IPnexus product family - exemplify the benefits of the radical move towards standards in the HD/HA platform space. IPnexus products are CPCI - you can install third-party CPCI components, and they'll work, hot-swap, etc. Shelf management is PICMG 2.9 IPMI (Intelligent Platform Management Interface) compatible, so compliant cards from multiple vendors can be monitored, powered up and down and otherwise kept healthy by the platform and its firmware and software.

The platform itself - redundant at the host level per PICMG's Redundant Host specification, as well as at the fabric and individual node levels (depending on configuration) - takes care of the details. The result: with less difficulty than an integrator faced 10 years ago specifying a standard non-redundant PC server to house non-hot-swappable voice cards, you can "dial in" an IPnexus configuration that will seamlessly house a wide range of components and support your application with 99.999% uptime.
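"Five nines" is worth putting in concrete terms. The arithmetic below is just the standard availability calculation, not a vendor claim about any particular configuration:

```python
# Allowed downtime per year at a given availability level.
# Pure arithmetic - no vendor figures involved.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability: float) -> float:
    """Expected unavailable minutes per year at a given availability."""
    return (1.0 - availability) * MINUTES_PER_YEAR

for a in (0.999, 0.9999, 0.99999):
    print(f"{a:.5f} -> {downtime_minutes_per_year(a):8.2f} min/yr")
```

Three nines allows roughly 526 minutes of downtime per year; five nines allows about 5.3 - which is why field-replaceable, hot-swappable everything matters: a single truck roll can blow the whole year's budget.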

In effect, you get most of the benefits of purpose-built hardware, with the effort, expertise, cost, availability and time-to-market advantages of (admittedly high-end) off-the-shelf equipment.

PT's IPnexus family has recently been enhanced by the addition of the ZT5085e Redundant Host platform. This is a 12U, 19-inch rackmount chassis with 21 6U slots (over/under), including 18 node slots, two redundant fabric slots (letting you build dual stars for full backplane-network fallback), and two 3U PICMG 2.9 IPMI-compatible intelligent shelf manager slots (the latter arranged over/under in one 6U slot). Everything's serviceable from the front, and nearly everything is duplicated. This is a hard system to kill, and if something breaks, you can fix it in minutes.

The ZT5085e can be configured with several midplane options. You can have it set up with two CPCI bus segments, with 32/64-bit, 33/66 MHz bus support. You can have up to four hot-swappable host slots (compatible with the Redundant Host specification), enabling active/standby and active/active configurations. There's a single H.110 telephony bus, too.
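The active/standby pattern those redundant host slots enable is simple in outline: the standby watches the active host's heartbeats and promotes itself after too many misses. The toy model below illustrates the idea only - the threshold and logic are illustrative, not PT's actual Redundant Host implementation:

```python
# Toy sketch of active/standby host failover: the standby promotes
# itself after a run of missed heartbeats. Names and thresholds are
# illustrative only - this is not PT's Redundant Host implementation.

class HostPair:
    def __init__(self, miss_threshold: int = 3):
        self.active = "host-A"
        self.standby = "host-B"
        self.missed = 0
        self.miss_threshold = miss_threshold

    def heartbeat(self, received: bool) -> str:
        """Feed one heartbeat interval; return the current active host."""
        if received:
            self.missed = 0
        else:
            self.missed += 1
            if self.missed >= self.miss_threshold:
                # Failover: promote the standby, demote the failed active.
                self.active, self.standby = self.standby, self.active
                self.missed = 0
        return self.active

pair = HostPair()
print(pair.heartbeat(True))    # host-A
print(pair.heartbeat(False))   # host-A (1 miss)
print(pair.heartbeat(False))   # host-A (2 misses)
print(pair.heartbeat(False))   # host-B (threshold hit, failover)
```

Real implementations also have to guard against split-brain (both hosts believing they're active), which is exactly the sort of detail a certified Redundant Host platform handles for you.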

Power to each slot is isolated - up to 50W per node slot and 70W per fabric slot. Fan trays are redundant and hot-swappable, as are the (N+N) power supplies. The system as a whole is NEBS Level 3/ETSI compatible.

Performance Technologies also offers a wide range of SBCs compatible with the IPnexus family - including low-power, high-clock-rate Mobile Pentium 4-M designs - plus WAN adapters and communications boards. They also have a comprehensive suite of SS7 signaling components and control software, including their IPnexus 5500 signaling gateway blade.
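The per-slot limits quoted for the ZT5085e make a worst-case chassis power budget easy to sketch. Shelf-manager and fan-tray draw are omitted here, so this is illustrative arithmetic, not a substitute for the vendor's power specification:

```python
# Worst-case payload power budget for a ZT5085e-style chassis, using
# the per-slot limits quoted above: 18 node slots at 50 W, 2 fabric
# slots at 70 W. Shelf managers and fan trays are omitted -
# illustrative only, not a substitute for the vendor's power spec.

NODE_SLOTS, NODE_W = 18, 50
FABRIC_SLOTS, FABRIC_W = 2, 70

payload_w = NODE_SLOTS * NODE_W + FABRIC_SLOTS * FABRIC_W
print(payload_w, "W")  # 1040 W

# With N+N supplies, either half of the supply bank must be able to
# carry the full payload load on its own:
for n in (1, 2):
    print(f"N={n}: each bank of {n} must deliver {payload_w / n:.0f} W "
          f"per supply")
```

The point of the N+N arrangement is that losing an entire bank - not just one supply - still leaves the chassis fully powered.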

The same technology is used in a range of turnkey products for SS7-over-IP-via-Sigtran equipment. Enabling software includes NexusWare - a Linux-based development environment for PICMG 2.16 - and a series of WAN Connectivity Kits: hardware/software solutions for multiprotocol support.

Earlier this year,
