First-Class IT Service

McCarran International Airport management's commitment to its customers, as evidenced by the success of its CUTE program, holds the promise of new initiatives, from a gigabit network to secure wireless

April 14, 2003


Tough Sell

The CUTE system and its cousins FIDS (flight information display system), LDCS (local departure control system) and the soon-to-be-born--and renamed--CUSS (common-use self-service) systems are some of McCarran's most critical applications. They require not only good application functionality but also service guarantees and redundancy at the system and infrastructure levels.

That's a tall order, one that made airlines skeptical at first. But McCarran's vision prevailed, and today, all but one tenant carrier have joined the collective (for more on the benefits of CUTE and how McCarran won over the airlines, see "Air Power," page 32).

IT Powered Convenience

McCarran will soon let travelers get a boarding pass and check luggage at their hotels, before they leave for the airport. The off-site units are the same Windows 2000 self-service terminals used in the airport, only they're connected back over a WAN circuit.

Indeed, McCarran's IT department and its leaders face the typical technological challenges--staying current on hardware and other infrastructure, providing good customer service, maintaining a helpdesk, justifying purchases and keeping up with staff professional development. They also have to work closely with the federal government's TSA (Transportation Security Administration), which hasn't exactly been a smooth ride.

Gerard Hughes, senior network analyst in charge of infrastructure at McCarran, says, "We've had to teach the feds how we do business here. That's been a process for us." In fact, McCarran did have an incident with the TSA over closet infrastructure, but after some tough meetings, lines of communication have been established, which has helped prevent further conflict.

Besides CUTE, McCarran has other projects keeping its IT staff hopping. Vincent Macri, a systems technician, gave us a peek at the airport's new video-surveillance system: Kalatel 2000E digital video recorders are hooked into more than 100 cameras all over the facility, from the gates to the parking toll booths, with gigabit uplinks to four Matrix E1 core switches. Even before 9/11, immigration-driven video-surveillance regulations had Hughes' team thinking about automating the system. Now, they're recording at 8 frames per second and keeping the video in compressed format for about a month, which fits McCarran's surveillance needs nicely. Video is kept on the recorder's hard drive, and Macri burns DVDs as needed for evidence purposes.

Hughes outlined plans for replacing McCarran's older FDDI network, built on Optical Data Systems components, with an Enterasys gigabit core. He says he's still fond of the ODS gear and can't complain about downtime. So why switch? "It doesn't do what we want it to do," he says. For starters, the 100-Mbps FDDI network doesn't support VLANs, nor can it provide the kind of bandwidth Hughes needs to ship GIS (geographic information system) data and video around the facility. The GIS need was addressed two years ago with an initial Gigabit Ethernet rollout using Cisco 6000 series gear. After the success of that project and discussions with vendors, Enterasys was chosen for the airportwide deployment. With desktop connections at 100 Mbps and a gigabit core, the GIS department (one of the recipients of the new gigabit network) now enjoys faster response times.
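To put that bandwidth argument in perspective, here's a rough, assumption-laden estimate of what the surveillance feeds alone could demand of the old ring versus the new core. Only the 8-fps rate and the 100-plus camera count come from the article; the compressed frame size is our assumption, and motion-triggered recording would lower the real figure.

```python
# Rough estimate of aggregate camera traffic. The frame rate (8 fps)
# and camera count (100+) come from the article; the compressed frame
# size is an assumption for illustration only.
CAMERAS = 100
FPS = 8
AVG_FRAME_KB = 10          # assumption: small compressed frame, circa-2003 DVR
FDDI_MBPS, GIG_MBPS = 100, 1000

aggregate_mbps = CAMERAS * FPS * AVG_FRAME_KB * 8 / 1000  # KB/s -> kilobits/s -> Mbps
print(f"~{aggregate_mbps:.0f} Mbps of video in aggregate")
print(f"= {aggregate_mbps / FDDI_MBPS:.0%} of a shared FDDI ring, "
      f"{aggregate_mbps / GIG_MBPS:.0%} of a single gigabit link")
```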

In addition, the airport sells dark fiber to the airlines, but, Hughes says, "We could offer them VLANs over copper," which is cheaper and quicker to implement than fiber.

But what about the security implications? The CTO of some airline, for instance, might be uncomfortable using McCarran's VLANs. "His data is already on my network," Hughes says. "We have the firewalls in place. We have operational security in place. We have intrusion protection in place."
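For readers wondering what that tenant separation looks like on the wire, here's a minimal sketch of 802.1Q VLAN tagging using the Scapy packet library. The VLAN IDs and addresses are hypothetical and aren't drawn from McCarran's configuration.

```python
# Minimal sketch of 802.1Q VLAN tagging with Scapy (pip install scapy).
# VLAN IDs and addresses are hypothetical, for illustration only.
from scapy.all import Ether, Dot1Q, IP, ICMP

TENANT_VLANS = {"airline_a": 110, "airline_b": 120}  # hypothetical assignments

def tagged_frame(tenant: str, dst_ip: str):
    """Build a frame carrying the tenant's 802.1Q tag.

    A VLAN-aware switch forwards the frame only on ports belonging to
    that VLAN, so tenants sharing the same copper plant never see one
    another's traffic.
    """
    return Ether() / Dot1Q(vlan=TENANT_VLANS[tenant]) / IP(dst=dst_ip) / ICMP()

frame = tagged_frame("airline_a", "10.1.10.50")
frame.show()   # displays the layered headers, including the VLAN tag
```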

It's interesting to note that McCarran has no chief security officer. There is no dedicated network security person, and Hughes makes no apologies for it. "Security is a shared responsibility," he says, a sentiment with which we heartily agree. He points out that, though McCarran runs a number of Microsoft products, "Our impact from SQL Slammer was exactly zero."

Central Command

McCarran controls all the front-end systems that present information about flights, passengers and personnel throughout the airport, right down to the baggage tag printers.

This speaks well for the facility's patch procedures, network design, content scanning and default firewall rule sets. A quick scan reveals McAfee virus protection on workstations with up-to-date signatures and engines, and when we tried to connect back to our office VPN by plugging into an Ethernet port, we were categorically denied. McCarran's "default deny" posture requires authentication to connect out to the Internet.

McCarran's Cisco PIX firewall handles packet filtering for the network, with failover planned by July. David Webb, the department's senior business systems analyst, has just finished deploying Novell's BorderManager as an authentication-based proxy for Web users.
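As a conceptual sketch of that "default deny" posture, the snippet below models an egress check in which nothing leaves the network unless an explicit rule or an authenticated proxy session allows it. The rules, addresses and user names are invented; McCarran's actual PIX and BorderManager policies obviously aren't public.

```python
# Conceptual model of a default-deny egress policy: traffic is dropped
# unless an explicit rule or an authenticated proxy session permits it.
# All rules, networks and users here are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Rule:
    src_prefix: str   # naive prefix match keeps the example short
    dst_port: int

ALLOW_RULES = [Rule("10.20.30.", 443)]        # e.g. one host group allowed outbound
AUTHENTICATED_USERS = {"agent_smith"}         # e.g. proxy logins for Web access

def egress_allowed(src_ip: str, dst_port: int, user: Optional[str] = None) -> bool:
    if user in AUTHENTICATED_USERS:           # authenticated Web users may go out
        return True
    return any(src_ip.startswith(r.src_prefix) and dst_port == r.dst_port
               for r in ALLOW_RULES)          # otherwise a rule must match; default deny

print(egress_allowed("192.168.1.50", 443))                 # False: no rule, no auth
print(egress_allowed("10.20.30.7", 443))                   # True: explicit rule
print(egress_allowed("192.168.1.50", 80, "agent_smith"))   # True: authenticated user
```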

Speaking of secure (or insecure) transports, no discussion about network facilities would be complete without talking about plans for wireless. The TSA's master IT plan, which includes a plan for wireless facilities, provoked something of a reaction from Hughes, but he diplomatically says, "We've got bigger plans than just sending a perpetrator's picture on a PDA."

McCarran is in discussions with several wireless providers, including AT&T, Roving Planet, SpectraSite, Sprint and T-Mobile, and is in talks with Arinc for the airlines' wireless needs. The facility has a limited wireless presence in conference rooms, with appropriate security, the details of which Hughes would prefer we not disclose, for obvious reasons.

McCarran's desktop management philosophy: "Simpler is better." The biggest deployments are, of course, at the gates and ticket counters, where McCarran uses Arinc's desktop management tools, which load a stripped-down version of Microsoft Windows NT, with a custom shell, over the network (see CUTE diagram, page 48). The Arinc package has the agent authenticate after choosing his or her airline (see CUTE chooser, below), and CUTE then launches the appropriate emulator. The airline uses straight IP routing or gateway-protocol-conversion technology to get the session back to corporate. Straight leased lines and frame relay are also in use.
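The chooser-then-emulator flow can be pictured as a simple dispatch table: an agent selects an airline, authenticates, and the terminal launches that carrier's host emulator. The sketch below is purely illustrative; the real Arinc client is a proprietary Windows NT shell, and the airline names, commands and checks here are invented.

```python
# Hypothetical sketch of the CUTE "chooser" flow: select an airline,
# authenticate, launch that carrier's emulator. Names, commands and the
# auth check are invented; the real Arinc client is a proprietary NT shell.
import subprocess

EMULATORS = {
    "Airline A": ["emu_a.exe", "--host", "res.airline-a.example"],
    "Airline B": ["emu_b.exe", "--host", "res.airline-b.example"],
}

def authenticate(agent_id: str, airline: str) -> bool:
    # Placeholder: the real package validates the agent against the
    # selected airline before anything launches.
    return bool(agent_id) and airline in EMULATORS

def launch_session(agent_id: str, airline: str) -> None:
    if not authenticate(agent_id, airline):
        raise PermissionError(f"{agent_id} is not authorized for {airline}")
    # From here the session travels back to the airline's host over
    # straight IP routing, a protocol-conversion gateway, a leased line
    # or frame relay, depending on the carrier.
    subprocess.run(EMULATORS[airline])

# launch_session("agent42", "Airline A")   # left commented: the emulator is fictional
```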

As far as network management goes, as the gigabit upgrade progresses, the associated closet and core network gear is being plugged into a Computer Associates Unicenter management framework, a job that should be completed by August. After experiencing one prolonged network outage, management made uptime a priority. The older FDDI network proved exceptionally reliable: various pieces of it broke down over the years, but the dual counter-rotating rings, dual power supplies and dual switched fabrics on the core switch's backplane kept it running. To provide the same high level of fault tolerance on the gigabit network, the closet switches (Enterasys E7s and E1s) are all dual-trunked using Spanning Tree, with backup trunks on separate cards.
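To illustrate the dual-trunk design in the simplest possible terms, here's a toy model of a closet switch whose backup uplink sits blocked until the primary fails, roughly what Spanning Tree reconvergence accomplishes. The trunk names are hypothetical.

```python
# Toy model of a dual-trunked closet switch: Spanning Tree keeps the
# backup trunk blocked until the primary fails, so the closet never
# loses its uplink. Trunk names are hypothetical.
class ClosetSwitch:
    def __init__(self, primary: str, backup: str):
        self.trunks = {primary: "forwarding", backup: "blocking"}

    def trunk_failed(self, name: str) -> None:
        """Mark a trunk down and promote a blocked trunk, as Spanning
        Tree reconvergence would."""
        self.trunks[name] = "down"
        for trunk, state in self.trunks.items():
            if state == "blocking":
                self.trunks[trunk] = "forwarding"

    def active_trunks(self) -> list:
        return [t for t, s in self.trunks.items() if s == "forwarding"]

closet = ClosetSwitch(primary="uplink-card1", backup="uplink-card2")
print(closet.active_trunks())      # ['uplink-card1']
closet.trunk_failed("uplink-card1")
print(closet.active_trunks())      # ['uplink-card2']
```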

Service Levels and Staffing

Airline systems manager David Bourgon's staff of eight technicians maintains about 3,000 bar-code readers and ticket printers, plus the PCs that connect to them.

"We take care of it all in-house," he says. "Traditionally this stuff is outsourced, but we do it internally, and our customers don't have to wait for service. We normally handle 50 trouble calls a day. We call the airline back within five to 15 minutes, and most problems are resolved within 30 minutes."

Compare that with phoning an off-site system integrator, staying on the phone with a helpdesk for 30 minutes, or, even worse, placing a call to headquarters in another city and waiting for someone to fly in. Bourgon says it's common for airlines to fly technicians in--makes sense, they don't have to worry about steep airfare. Problems that take longer to resolve tend to be WAN-based, with circuits back to individual airlines' headquarters. "Northwest was down for several hours yesterday when there was a circuit problem," he says.

Next Steps

• April: Thirty-eight common-use self-service (CUSS) systems due to go live in the main terminal
• May: CUSS systems to go live for off-site check-in at the Las Vegas Convention Center

• July: Backbone upgrade to Gigabit Ethernet due for completion
• December: Campuswide wireless network due for completion

The payback for the airlines that choose to participate in the airport's common-use systems: fast on-site help for local problems, making for happier and more productive agents. And that, in turn, boosts customer satisfaction.

How do Bourgon and the other IT managers keep service levels high? For one thing, the systems are redundant: Digital Equipment Alpha clusters running Oracle on Tru64 UNIX version 5.1 with failover capabilities for FIDS, Microsoft Windows 2000 clusters for LDCS, and a Novell NetWare cluster for CUTE. To standardize, McCarran plans to transition to Windows 2000 once those systems reach the end of their useful lives.

What about continuity of service? Say, for instance, if there are only two people who deeply understand the Alphas, is there a rule banning them from ever getting on the same airplane? Not a problem, Bourgon says.



CUTE/FIDS Network

"All eight of the technicians are versed in both the systems on the software and on the hardware," he says. "That way I always have somebody in the building who knows what they are doing. We also contract support services, obviously, for our different servers. We have support for Compaq, for the Alpha clusters and for our regular servers."

McCarran also relies on good change management, through an internal application called ISOS (information systems operation schedule). Changes are posted two weeks prior to the implementation, which gives everyone a chance to comment, says Hughes. In addition, there's a "Tenant Bulletin," which goes out via e-mail and fax and warns airlines when there's a potential change in the works. For the airport operational systems, the change window is between 2 a.m. and 5 a.m. Internal (nonairline) systems have a more forgiving window, 5 p.m. to 6 a.m., plus weekends.
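Those change windows lend themselves to a simple check. The window times below come from the article; the function itself is a hypothetical illustration, since ISOS is an internal McCarran application.

```python
# Sketch of a change-window check using the windows quoted in the
# article: airport operational systems change only between 2 a.m. and
# 5 a.m.; internal systems from 5 p.m. to 6 a.m., plus weekends.
# The helper itself is hypothetical -- ISOS is an internal application.
from datetime import datetime

def change_allowed(when: datetime, airline_facing: bool) -> bool:
    hour, weekend = when.hour, when.weekday() >= 5   # Saturday=5, Sunday=6
    if airline_facing:
        return 2 <= hour < 5
    return weekend or hour >= 17 or hour < 6

print(change_allowed(datetime(2003, 4, 15, 3, 30), airline_facing=True))    # True
print(change_allowed(datetime(2003, 4, 15, 14, 0), airline_facing=False))   # False
print(change_allowed(datetime(2003, 4, 19, 14, 0), airline_facing=False))   # True: Saturday
```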

"This arena isn't for the faint of heart," says Ross Johnson, assistant director for finance. Doing IT right in an airport environment is tough and takes serious commitment of both resources and staff time. The difference between being able to simply buy a bunch of clusters and fault-tolerant hardware and being able to effectively operate those clusters and hardware is night and day.

While it's easy to believe that it's just about money--unlike most airports, McCarran collects $40 million per year in slot-machine revenue, and 8 percent of the $218 million operating budget goes to IT--the culture that management has built is a key factor. Money can buy hardware. Money can buy services. But we've been in more than one shop with surpluses of both that have been IT-process train wrecks.

Not so at McCarran, where the process bar has been set high by airport director Randall Walker. With an accounting and IT background, he's hell-bent on everyone in IT knowing that good execution and a focus on business objectives are what matter. Technology is a priority. "We found that when we had common use, we got more 'turns' out of that gate," he says. "The only way we could figure out how to do that is through technology."

Of course, the technology to perform these vital functions simply has to work, no excuses. There's plenty of evidence that technology at McCarran is well-planned, not implemented willy-nilly. There is documentation for just about everything, and there are works-in-progress documented in almost everyone's office.

Walker does acknowledge that there's no such thing as 100 percent uptime, which again brings up the Slammer worm. "How do you, as an IT guy, go up to your boss and say, 'Well, they did send us the fix, but I just never had time to install it'? The guy is sitting there calculating how much money they lost," Walker says. "As an IT guy, that's a hard discussion to have with your boss. I expect my guys to stay on top of it."

McCarran's systems, according to Bourgon, easily boast an uptime of 99.995 percent.
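For context, that figure is a tight budget; a quick calculation shows how little downtime 99.995 percent leaves in a year.

```python
# Downtime allowed per year at a few availability levels, including the
# 99.995 percent figure Bourgon cites.
MINUTES_PER_YEAR = 365 * 24 * 60

for pct in (99.9, 99.99, 99.995):
    allowed = MINUTES_PER_YEAR * (1 - pct / 100)
    print(f"{pct}% uptime -> about {allowed:.0f} minutes of downtime a year")
```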

So what's the control mechanism to ensure that staffers, not just equipment, continue to deliver? Are there pages and pages of standard operating procedures, or is it cultural?

Any organizational behavior textbook will tell you that culture is one of the most effective control mechanisms, and Walker confirms that. However, fostering a culture of teamwork is no easy task. The key? "Our guys have a lot of fun," he says.

Interviews with his employees made it obvious that he is correct--as long as you adopt a geek's-eye view of what constitutes fun. At McCarran it means going to training at least once a year, working with new technology and cross-training, all of which prevents boredom and builds "fault tolerance" into the staff's expertise. Walker says he used to worry about making such a big investment in training because it made the IT staff so marketable to other employers. But he seems to have gotten past that.



CUTE Network

"They're having so much fun, I don't think they are leaving," he says. Walker is right on the money: Only three IT people have left in the past 10 years, and since 1999 only one technician has resigned, to continue his education in another field.

The alignment between upper management and IT staff is most telling in a comment that Johnson makes when discussing project justification: "It's the confidence in our team, in our people; they've got the same vision. They wouldn't have suggested something that was frivolous or outlandish." That enviable trust between business leaders and IT makes for the best business technology implementations.

Jonathan Feldman is director of professional services for Entre Solutions, an infrastructure consulting company in Savannah, Ga. He has worked with and managed technology in industries from health care and financial services to government and law enforcement. Write to him at [email protected].
