Cenata Plots 'Transparent' Clusters

Startup founded by former JNI execs is developing RDMA-over-Ethernet I/O silicon. Will it fly?

February 1, 2003


San Diego startup Cenata Networks Inc., founded by five former JNI Corp. (Nasdaq: JNIC) executives, is developing silicon-based technology it claims will connect servers in high-availability clusters -- without any changes to the applications running on those servers -- over standard Ethernet.

The pre-VC company is basing its products on the Remote Direct Memory Access specification, which is designed to improve server-to-server communication by offloading data-copy operations from the host CPU. RDMA effectively lets computers reach into each other's memory across the network: one server can place data directly into another server's memory without the remote CPU or operating system copying it along the way.
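
The one-sided nature of RDMA is easiest to see in code. The sketch below uses the verbs API (libibverbs), which post-dates this article and is shown purely to illustrate the concept -- it says nothing about Cenata's own interfaces. It assumes the queue pair is already connected and the peer's buffer address and rkey have been exchanged out of band; that setup code is omitted.

```c
/*
 * Minimal sketch of a one-sided RDMA write using the verbs API.
 * Assumes a connected queue pair and an out-of-band exchange of the
 * peer's buffer address and rkey. Illustration only.
 */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

int rdma_write_example(struct ibv_pd *pd, struct ibv_qp *qp, struct ibv_cq *cq,
                       uint64_t remote_addr, uint32_t rkey)
{
    static char buf[4096] = "payload placed directly into the peer's memory";

    /* Register the local buffer so the NIC can DMA from it. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, sizeof(buf), IBV_ACCESS_LOCAL_WRITE);
    if (!mr)
        return -1;

    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,
        .length = sizeof(buf),
        .lkey   = mr->lkey,
    };

    /* One-sided write: the remote host's CPU never touches this transfer. */
    struct ibv_send_wr wr = {
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_RDMA_WRITE,
        .send_flags = IBV_SEND_SIGNALED,
        .wr.rdma    = { .remote_addr = remote_addr, .rkey = rkey },
    };
    struct ibv_send_wr *bad_wr = NULL;

    if (ibv_post_send(qp, &wr, &bad_wr))
        return -1;

    /* Wait for the NIC to report completion; no kernel data copy on either side. */
    struct ibv_wc wc;
    while (ibv_poll_cq(cq, 1, &wc) == 0)
        ; /* busy-poll for brevity */

    ibv_dereg_mr(mr);
    return wc.status == IBV_WC_SUCCESS ? 0 : -1;
}
```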

Cenata's secret salsa is a proprietary layer it calls Transparent RDMA (tRDMA), which is supposed to allow existing applications to take advantage of RDMA with no modifications necessary. "You don't have to do anything except plug your card in," says Dan Asmann, VP of marketing and business development. "That's where we think the gold is."
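
Cenata isn't saying how tRDMA manages that trick. One generic way to get an unmodified sockets application onto a different transport is to interpose on its socket calls at the library level; the fragment below is a hypothetical illustration of that interposition idea (via LD_PRELOAD on Linux), not a description of Cenata's design.

```c
/*
 * Hypothetical illustration only: intercept send() with LD_PRELOAD so an
 * unmodified application could be steered onto a faster transport.
 * This is one generic "transparency" technique, not Cenata's tRDMA.
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <stdio.h>

ssize_t send(int sockfd, const void *buf, size_t len, int flags)
{
    /* Look up the real send() once. */
    static ssize_t (*real_send)(int, const void *, size_t, int) = NULL;
    if (!real_send)
        real_send = (ssize_t (*)(int, const void *, size_t, int))
                        dlsym(RTLD_NEXT, "send");

    /* A real shim would decide here whether this connection is eligible
     * for an RDMA fast path and hand the buffer to the NIC instead. */
    fprintf(stderr, "intercepted send() of %zu bytes on fd %d\n", len, sockfd);

    return real_send(sockfd, buf, len, flags);
}
```

Built as a shared object (gcc -shared -fPIC shim.c -o shim.so -ldl) and loaded with LD_PRELOAD=./shim.so, the shim sits between an unchanged application and the C library -- which is roughly the kind of "plug the card in and go" experience Cenata is promising, by whatever mechanism it actually uses.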

This, of course, positively smacks of your usual startup propaganda. Who knows if it will ever actually work as advertised?

In Cenata's favor is the fact that it's developing its chips based on RDMA, which is emerging as a widely supported standard. Members of the RDMA Consortium include Intel Corp. (Nasdaq: INTC), Hewlett-Packard Co. (NYSE: HPQ), IBM Corp. (NYSE: IBM), and Microsoft Corp. (Nasdaq: MSFT) (see RDMA Rumbles Along).

Then again, InfiniBand also received a tremendous amount of initial support from each of those same players -- not to mention hundreds of millions of dollars in VC funding -- and it's still stuck in first gear. [Ed. note: Or has it run out of gas?] (See our recent report, Whither InfiniBand?)

But another check in the plus column for Cenata is that among its 12 employees are some folks very well-versed in the art of high-speed I/O technologies. The company's founders essentially plucked the entire team out of JNI, the San Diego-based maker of Fibre Channel host bus adapters. Chuck McKnett, Cenata's president and CEO, was one of the original founders of JNI and served as its CTO. VP of sales Michael Orenich held the same position at JNI; prior to that he was with Prisa Networks, the storage management software startup acquired by EMC Corp. (NYSE: EMC) last year (see EMC to Acquire Prisa, Finally).

Allen Andrews, VP of software engineering, previously led JNI's development of Fibre Channel HBA drivers for Solaris and other operating systems. Randy Ralph, who's in charge of Cenata's hardware engineering, was one of the founders of JNI and helped design its first HBA product line. Finally, Asmann, who apparently is also an expert in applied physics, was responsible for business development at JNI.

Cenata, founded in April 2002, is actively seeking its first round of VC funding, Asmann says. The seed capital was anted up by Cenata's founders, with most coming from McKnett. (By the way, Cenata doesn't mean anything: "It's just one of those made-up words," Asmann says.)

The next step for the company will be to deliver a proof-of-concept of its RDMA-over-Ethernet implementation, and follow that with a field-programmable gate array (FPGA) and the software that goes with it. The initial implementation will be Gigabit Ethernet, with Cenata shooting for an application-specific integrated circuit (ASIC) running 10-Gig Ethernet by the end of 2003. "It's an enabler for high-density blade configurations," says Asmann. Also on Cenata's roadmap is a server-clustering appliance, which could "sit anywhere in the server equation," he says.

As a kind of side project, Cenata has also developed an iSCSI driver for Sun Microsystems Inc.'s (Nasdaq: SUNW) Solaris operating system. The startup teamed up with Alacritech Inc., which has created a storage-acceleration card for TCP/IP, to provide an iSCSI option for Sun servers. At least for the time being, Sun appears philosophically opposed to the IP-based storage protocol, so unsurprisingly it hasn't developed its own driver (see Sun Says iSCSI May Be a 'Mistake').
