Five Ways To Modernize Your Mainframes

Here's how your company can add new value to its big iron, while lowering operating costs.

November 25, 2003


After three decades of innovation, miniaturization, and acceleration, the most viable platform for high-volume enterprise applications remains the mainframe computer. If trends continue, companies servicing 5,000 or more users worldwide will retain their investment in mainframe technology well into the next decade. Not only will this environment survive, it may thrive, and if indicators for 2003 are correct, it may actually grow. For the largest companies today, the move away from the mainframe and toward distributed client-server computing has stopped. But for many large companies, the biggest issue isn't whether to buy more big iron; it's how to better capitalize on their existing mainframes. There must be something that says "I'm risk-averse" about data-center managers, because apart from tactical moves such as jettisoning tape administrators, consolidating licenses, and off-loading systems management to third parties, many companies have been reluctant to pursue more innovative initiatives.

"I think there's some fear of digging into some of these legacy architectures, where a lot of skills have been lost," says William Ulrich, an analyst with the Cutter Consortium and author of Legacy Systems: Transformation Strategies (Prentice Hall PTR, 2002). During the migration of the past few decades, Ulrich says, many companies laid off the veteran IT staff they need to optimize and modernize their mainframes. "They may not understand," he says, "that there's tools out there to do it, and there's techniques and approaches for it."

"I think the notion of legacy, in many regards, has already passed," says Pete McCaffrey, IBM's director of product marketing for its flagship zSeries of mainframe computers, formerly known as System/390. "More than 70% of our [mainframe] business volume last year went to customers that were deploying new applications."

Have mainframe computers--even new ones--become more cost-effective for large companies than distributed servers? Not surprisingly, the mainframe hardware and software vendors we interviewed say it's also expensive to manage distributed servers, particularly for high-volume applications. Cost-of-ownership studies have tallied the numbers in various ways. But for most customers, the pertinent question isn't which platform is right; it's how to maximize the mainframe investment while saving a buck or two. InformationWeek researched the total-cost-of-ownership claims of dozens of vendors in the mainframe software and services markets. We identified five categories of modernization and best practices--some of which are state of the art, others of which are actually older than the PC--that add value and viability to large business systems, and for which there are fair arguments that their return on investment can lower TCO.

As with any household that hasn't held a garage sale in the past decade, much of what's stored inside a 20-year-old mainframe can simply go, and quite a bit of it is stored redundantly. "Consolidation" is the term of choice for this key house-cleaning project. Even though storage costs have plummeted, simply increasing or buffering a company's storage base or network won't go far toward solving the cause of its bottlenecks. Chuck Hollis, VP for platforms marketing at EMC Corp., says his company advises its customers to first make better use of the storage they already own. "The first thing we ask people to do is understand who's using what--storage-resource management. It's pretty hard to figure out what you need if you don't know what you've got."

That management philosophy is part of the idea behind consolidating data storage while increasing the number of points of contact between data and users. EMC's Symmetrix DMX networked storage system is intended to multiply every user's access to data in a way that's transparent to the application. In so doing, it actually alters the nature of database-management logic. Databases such as IBM's DB2 can employ fewer locks to prevent alterations or updates out of sequence with one another, because the storage system has assumed the role of traffic cop. But unless it's managed properly, this kind of simplification can cause significant problems, because with redundant views of the data scattered all over the storage space, disparities and inconsistencies may emerge.
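The lock-light scheme described above resembles what programmers call optimistic, version-checked concurrency. The Java sketch below is purely illustrative--a hypothetical in-memory store, not EMC's or IBM's implementation--showing the general pattern: a writer succeeds only if the record is unchanged since it was read, so no long-held lock is required, and arbitration falls to whichever layer performs the version check.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative only: version-checked updates in place of long-held locks.
    public class OptimisticStore {
        static class Record {
            final String value;
            final long version;  // bumped on every successful write
            Record(String value, long version) { this.value = value; this.version = version; }
        }

        private final Map<String, Record> store = new ConcurrentHashMap<String, Record>();

        // A read hands back the exact Record instance the reader saw.
        public Record read(String key) { return store.get(key); }

        // The write succeeds only if nobody else has replaced the record since
        // it was read; otherwise the caller re-reads and retries.
        public synchronized boolean writeIfUnchanged(String key, Record seen, String newValue) {
            Record current = store.get(key);
            if (current != seen) return false;  // a conflicting writer got there first
            long nextVersion = (seen == null) ? 0 : seen.version + 1;
            store.put(key, new Record(newValue, nextVersion));
            return true;
        }
    }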

Because storage networking makes comprehending the entire process so much more complex, Hollis says, the management software must give administrators the ability to drill deeper. "It's very important that you go from application task, through the mainframe, through the storage area network, all the way down to the physical storage," he says. "End-to-end visualization is [also] extremely important when doing performance tuning."

EMC's ControlCenter software gives storage administrators visual tools for identifying and managing multiple layers of a mainframe storage hierarchy--logical volumes, datasets, qualifiers, aliases, and user catalogs--so they can resolve access conflicts in real time. And by redistributing access conflicts from the sealed vault of the database-management system to the open area of the storage network, administrators let human decisions play a role in conflict resolution--decisions that can be enforced through a well-reasoned storage-management policy. Users are held more accountable for the capacity they consume, and database transactions are smoother and more efficient.

A storage administrator, IBM's McCaffrey says, "wants to be able to centralize and share computing resources among multiple different users, multiple different applications, and have the management capabilities to understand who's using what." Redirecting application storage lets companies drive their resource assets "at 90%-plus, 100% utilization," he says.

New hardware that achieves storage consolidation is still a big expense for many companies, but its costs are plummeting fast, and the return on investment has been shown to be exceptional. With storage-management software, a small investment in operations cost can even more dramatically reduce crisis-containment costs and downtime, positively impacting the bottom line.

In many IT shops, there's a proliferation of disparate platforms, each with unique classes, manifestations, and eras of software that abound in a haphazard patchwork of functionality. Distributed applications from the '90s are transacting with mainframe database managers from the '80s that navigate through data schematics from the '70s. Then the tracking reports for lost records are examined on 21st-century spreadsheets. The need to manage and regulate all these varieties of incompatible data has led to the creation of what Ron Hankison, CEO of Xbridge Systems Inc., calls "sneaker nets": basically a nonautomated means of moving data from one format to another. Sneaker nets are real cost generators; they require time and human power and don't foster high data quality.

Savvier companies, of course, rely on variations of the data warehouse, a centralized repository that extends views of a database to a readily accessible cache. What Xbridge offers as either an alternative or a complement to the data warehouse is a lower-cost, intermediate approach called a data mart. Rather than a single repository, a data mart provides compartmentalized subsets of data views that are translated and updated in real time but tailored to the specific needs of certain departments or classes of users. This pre-tailoring, Hankison says, eliminates the need for users to drill down through a massive data hierarchy to extract the views they need, or to trigger updates and other transaction cycles manually.

The data mart can provide integrated views of related data from disparate sources, such as DB2, VSAM, QSAM, and IMS, not only as datasets of records but as relational entities. This way, a large company with data scattered not only across different applications but different countries can present a single, unified view of data to its users, without having to reproduce that data on the back end. With simpler hierarchies, the Xbridge data mart can expose tailored views by way of simpler, more-open channels developed for small systems, including XML, ADO, .Net Web services, and OLE DB. This makes a simpler rendering of mainframe data readily available, in a constantly updated and translated format, for much simpler front ends such as a browser-based application, a Microsoft Access application, or an Excel spreadsheet.
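How simple such a front end can be is easy to sketch. The Java snippet below is illustrative only; the URL and the "customer" and "balance" names are hypothetical stand-ins for whatever view a data mart actually publishes, and the mart, not this client, has already joined the DB2, VSAM, and IMS sources into one XML document.

    import java.net.URL;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    // Illustrative only: a thin client reading a pre-tailored data-mart view.
    public class DataMartClient {
        public static void main(String[] args) throws Exception {
            // Fetch the department-specific view (hypothetical endpoint).
            URL view = new URL("http://datamart.example.com/views/receivables.xml");
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(view.openStream());

            // Walk the flattened records; no knowledge of the mainframe
            // data hierarchy is needed on this side.
            NodeList customers = doc.getElementsByTagName("customer");
            for (int i = 0; i < customers.getLength(); i++) {
                Element c = (Element) customers.item(i);
                System.out.println(c.getAttribute("id") + " owes " + c.getAttribute("balance"));
            }
        }
    }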

Establishing real-time access to mainframe data doesn't require a sophisticated three-tier architecture. In fact, a temporary solution to a company's short-term needs could be conceived ad hoc. This, plus the elimination of sneaker nets, results in savings that contribute directly to the administration and operations categories of TCO reduction.

Many companies' core business applications aren't commercial packages with version numbers, site licenses, and beta test cycles. Often, they're clusters of thousands of Cobol procedures, nested and nurtured in a 35-year-old garden of business rules, standard procedures, and decision-support principles. More modern permutations involve DB2 procedures, IBM Language Environment modules, and Java components--perhaps migrated there or built there from the ground up. In any event, the entire ecosystem of these little applications has always been the mainframe platform on which they were born. For most of these applications' life cycles, there has probably existed in some tucked-away corner of the address space an application-performance monitor such as Compuware Corp.'s Strobe or Candle Corp.'s Omegamon. In 1976, during the early days of Candle, the purpose of performance monitors such as Omegamon was to optimize throughput during maximum workloads, enabling administrators to eke out as much from their active capacity as possible, says Pete Marshall, Candle's assistant VP. In the mid-1980s, that purpose changed to lowering users' response times to the green screens with which they were presented. The more active the user, and the less time his or her hands spent off the keyboard, the more productive the application.

"Where a lot of people are today is back to looking for the best total cost of ownership for any particular workload," Marshall says. With regard to z/OS, IBM's flagship mainframe operating system for zSeries, the primary customer focus is on maintaining application availability, he says.

To make performance analysis meaningful to technicians whose core skills may have been honed in the small-systems world, Compuware has developed iStrobe, a browser-based implementation that not only renders performance data numerically and graphically but also provides explanations and advice in plain English. By contrast, Candle's approach for Omegamon XE is to compress all the vital analytical data graphically into a single window, with visual cues that make their point simply and immediately.

The danger in having access to this depth and degree of data, Cutter Consortium's Ulrich says, is in misinterpreting its meaning to the organization and applying the wrong remedies. Ulrich says he's shocked to learn how many large companies focus their attention disproportionately on such factors as storage-utilization levels, saving half a million here and a quarter-million there, when the bigger picture is telling them that entire applications, databases, and even human power are being duplicated three or four times over worldwide. Some telecommunications companies, Ulrich says, "have the same people doing the same jobs, spread out all over the country. If you could consolidate those people into a single function, you're talking about tens or hundreds of millions of dollars of savings."

Being able to completely analyze the performance of applications as they're being developed, on the same site where they'll eventually be deployed, not only expedites development but can conceivably cut substantial time from debugging and, possibly, disaster recovery. What's more, utilizing storage-management and application-performance-management software together to present a bigger picture of what the various departments of a business are doing, when they're doing it, and what they're accomplishing can be key to a companywide consolidation project that encompasses not only IT, but every aspect of the organization.

What Ulrich would like to see is for his clients to develop small teams of representatives from many IT units, encompassing older technologies such as Cobol development and newer technologies such as Java. These teams would disseminate the data their management tools give them and coalesce to produce a reasonable, incremental, achievable plan for consolidation. Ulrich calls it a "joint solution that would incorporate the legacy side, the new development side, and the new architecture side."

The task of adequately explaining the true purpose of the Web-services model for applications got off to a rocky start in 2000, when, for a while, the marketing consensus seemed to be that Web services would make it easier for users to place quicker, lower bids on eBay. Wipe that slate clean in your mind, if you will, for Web services have little or nothing to do with Web media resources. The true purpose of Web services is to enable any type of program to advertise its functionality and certain elements of its data content, for potential use by other programs, using XML as the transport conduit (see story, "Three Tiers Minus One"). This way, a simple Web page, including buttons, text boxes, and other tools arranged to suit the job at hand, can be directly socketed into a functional application--a process that, at least conceptually, resembles building a sophisticated control panel with Legos. When Web services work as intended, a server-based application assumes the job of communicating with the user and providing that user with a front end--replacing the green screen. This same application acts as a surrogate for the user in exchanges with the mainframe's transaction system, which in a majority of cases is CICS. The server-based application often adds functionality and is, at any rate, easier to use than the traditional mainframe green screens.

Web-services development products aren't without substantial up-front costs. In-house developers, particularly mainframe veterans, may face a steep learning curve and a sharp shift in mind-set. Web-services methodology derives from object-oriented programming, which is as different in concept and execution from the procedural logic of DB2, IMS, and Cobol as hieroglyphs are from sheet music. But once the learning curve has been scaled, development time can improve dramatically; a Web-based console page that delivers the functions of the old green screens in a modern setting can be completed in as little as a day.
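To make concrete what such a day's work might produce, here is a minimal Java sketch of the surrogate pattern described above. Everything specific in it--the gateway URL, the getAccount operation, the field names--is a hypothetical stand-in, not any vendor's actual interface; a real deployment would typically use a generated client stub rather than hand-built XML.

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Illustrative only: a server-side surrogate that turns a user's input into
    // a SOAP-style request against a service fronting the transaction system.
    public class GreenScreenSurrogate {
        public static String lookupAccount(String accountId) throws Exception {
            // Build the request document much as a generated stub would.
            String soap =
                "<?xml version=\"1.0\"?>" +
                "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                "<soap:Body><getAccount><id>" + accountId + "</id></getAccount></soap:Body>" +
                "</soap:Envelope>";

            // Hypothetical middle-tier gateway in front of CICS.
            HttpURLConnection conn = (HttpURLConnection)
                new URL("http://gateway.example.com/cics/accounts").openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
            conn.setDoOutput(true);

            OutputStream out = conn.getOutputStream();
            out.write(soap.getBytes("UTF-8"));
            out.close();

            // The XML reply replaces what a green screen would have displayed.
            InputStream in = conn.getInputStream();
            StringBuffer reply = new StringBuffer();
            byte[] buf = new byte[4096];
            for (int n; (n = in.read(buf)) > 0; ) reply.append(new String(buf, 0, n, "UTF-8"));
            in.close();
            return reply.toString();
        }
    }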

Perhaps not immediately, but within a short period of time, Web services help improve productivity and efficiency for both the company's development force and its departmental staffs. Ian Archbell, Micro Focus International Ltd.'s VP for product management, heads a product line that makes Web services available through Cobol. "What we're looking at is incremental investment instead of complete replacement, to make [Cobol apps] more flexible," he says. "It's much more about architecture than about anything else. You don't want to feel locked in to something and not be able to move the business forward."

Making the end-user application more flexible, more customizable, and more accessible for almost instantaneous improvements involves the user more directly in the application process. When the user is encouraged to embrace the application to the point where he's actually steering its development, not only is time to market improved, but training costs--for some companies, the largest single cost after software licenses--plummet substantially.

Some Web-services development tools are being marketed for a different purpose: consolidating logons and application access for users. In typical mainframe environments, multiple third-party applications are used throughout the day, and in many call centers, employees spend valuable telephone time exiting some applications and re-entering others, just to complete a single, unanticipated customer task. "The whole single-sign-on, user authority, common directory challenge is a growing problem [for which] there aren't any quick and easy answers," analyst Ulrich says.

David Holmes, executive VP of Jacada Ltd., a provider of Web-services-enablement software, says his company's Interface Server product can create front-end applications that interface directly with portals to create a single look and feel "to mimic the desktop application or the portal and, at the same time, change the workflow. So if, as a user, I used to have to traverse through 15 or 20 screens to accomplish a specific task, I can actually reengineer that workflow so that I only enter information in two or three screens."

Holmes describes a typical customer environment as involving an average of five, and often seven, core applications, all of which utilize separate IDs and password authorization, and many of which identify the same customers with different key numbers. In some situations, the user is even forced to manually correlate these customer key numbers, perhaps with the aid of a spreadsheet. A portal, such as the kind created with Jacada Integrator, "sticks a veneer on top of all of that," Holmes says. "As an employee, you fire up your desktop, you get one view into your world, you enter one ID and password, one menu of the applications you need to use, [for] opening up an account or checking the status of a check or payment."
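The reengineered workflow Holmes describes amounts to a façade, and it can be sketched in a few lines of Java. Everything below is a hypothetical stand-in rather than Jacada's actual API: one sign-on opens a session with each core application, a key map replaces the manual spreadsheet correlation, and one call gathers what previously took many screens.

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative only: a portal-side facade over several legacy applications.
    public class PortalFacade {
        // Stand-in for a live session with one back-end application.
        interface BackendSession {
            String fetch(String customerKey);  // e.g., one green-screen inquiry
        }

        // Application name -> open session, established by the single sign-on.
        private final Map<String, BackendSession> sessions = new HashMap<String, BackendSession>();

        // Portal customer ID -> each application's own key for that customer,
        // replacing the spreadsheet correlation mentioned above.
        private final Map<String, Map<String, String>> keyMap = new HashMap<String, Map<String, String>>();

        // One ID and password authenticates the user to every core application.
        public void signOn(String userId, String password) {
            // ... authenticate once, then open each back-end session ...
        }

        // One call gathers what used to take 15 or 20 screens.
        public Map<String, String> customerOverview(String portalCustomerId) {
            Map<String, String> result = new HashMap<String, String>();
            Map<String, String> keys = keyMap.get(portalCustomerId);
            for (Map.Entry<String, String> e : keys.entrySet()) {
                BackendSession session = sessions.get(e.getKey());
                result.put(e.getKey(), session.fetch(e.getValue()));
            }
            return result;
        }
    }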

For one call-center customer facing a 40% annual turnover rate, operators were spending up to 12 weeks to learn their application and eight weeks on top of that to achieve full productivity, Holmes says. An Integrator-based portal was able to slash that time and reduce training costs by 40%. Again, this positively impacts both the administration and operations elements of TCO reduction.

Managing the mainframe environment is no longer about migrating to a smaller platform and less and less about camouflaging a tired legacy investment. The new management framework for mainframe computing is adaptation. As mainframes become more flexible platforms, companies may find themselves leveraging their legacy investments in untraditional ways, adapting them to serve new purposes in a changing IT landscape.

The purpose of Web-services software is to expose, to use the developers' term, data that belongs to a mainframe database application, making it accessible by outside programs and by means other than the mainframe application's own terminal screen. Some Web-services packages also enable portions of the mainframe logic, such as Cobol procedures, to be addressable through other means, such as Java applications or HTML pages.

With Web services in place, customized methods and functions for various sectors or departments of a company, or even for individual users, can be crafted entirely in-house using inexpensive or even free development tools. Although most products in the Web-services category are designed for deployment on the middle tier (think of middleware), some are designed to be installed on the mainframe itself. The result for some customer sites is the elimination of the middle tier, at least insofar as custom applications are concerned.

The multitude of Web-services-enablement software generally falls into one of three distinct categories. The first, most common, and conceivably least risky category gathers mainframe data by navigating the same green screens the user would, either in real time or well in advance of the actual transaction. WRQ Inc.'s Verastream for zSeries adopts this approach. "We allow you to leave the logic alone, leave the data alone, simply get rid of those multiple or arcane text screens, and replace them with a coordinated, consolidated composite interface that's designed with specific users in mind," says Shaun Wolfe, WRQ's president and chief operating officer.
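In practice, this first category means driving the terminal dialogue programmatically. The Java sketch below is illustrative only: the TerminalSession interface stands in for a 3270 emulation library, and the transaction code, screen order, and field positions are invented.

    // Illustrative only: harvesting data by navigating green screens in code.
    public class ScreenNavigator {
        // Hypothetical stand-in for a 3270 terminal-emulation API.
        interface TerminalSession {
            void sendKeys(String text);
            void pressEnter();
            String readField(int row, int col, int length);
        }

        // Walks the same screens a user would, leaving the mainframe's logic
        // and data completely untouched.
        public static String customerStatus(TerminalSession t, String customerId) {
            t.sendKeys("CUSTINQ");          // transaction code on the menu screen
            t.pressEnter();
            t.sendKeys(customerId);         // key the ID into the inquiry screen
            t.pressEnter();
            return t.readField(5, 20, 10);  // status field at a known position
        }
    }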

The second category utilizes an IBM API to communicate with an area of the mainframe that IBM calls the Commarea--the communication area, a sort of common meeting ground where transactions are directed to the appropriate task. This category creates a type of "wrapper" around classes of transactions, enabling programmers in modern languages such as Java, C++, or C#--or, in the case of Micro Focus International Ltd., a Cobol standard--to address these transactions in an object-oriented fashion. This lets new programs communicate directly with old logic by way of middleware on the middle tier.
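A minimal Java sketch of such a wrapper follows. The Gateway interface is a hypothetical stand-in for a vendor connector, and the program name, Commarea layout, and field offsets are invented; the point is that new code sees an ordinary object method while the old transaction sees the same fixed-layout communication area it has always parsed.

    // Illustrative only: an object-oriented wrapper around a Commarea transaction.
    public class AccountInquiry {
        // Hypothetical connector: run a named host program against a byte-array
        // communication area and return the updated bytes.
        interface Gateway {
            byte[] call(String program, byte[] commarea) throws Exception;
        }

        private final Gateway gateway;

        public AccountInquiry(Gateway gateway) { this.gateway = gateway; }

        // New programs call an ordinary method; the old logic is untouched.
        public String balanceFor(String accountId) throws Exception {
            byte[] area = new byte[100];              // fixed-length record layout
            byte[] id = accountId.getBytes("Cp037");  // EBCDIC, as the host expects
            System.arraycopy(id, 0, area, 0, Math.min(id.length, 8));
            byte[] reply = gateway.call("ACCTINQ", area);
            return new String(reply, 8, 12, "Cp037").trim();  // balance field at offset 8
        }
    }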

Perhaps the most radical entry in this second category, from a marketing standpoint, is Micro Focus' set of Web-services development tools for Cobol. Archbell softens some of the shock: "It's not so much the fact that it's Cobol," he says. "It's the fact that you've spent years and years investing in those applications. Those applications have real value, but it's hard today to get that new value into new applications."

The focus in mainframe applications, Archbell says, has shifted into new areas such as customer-relationship management. "So how can you actually connect those mainframe applications into new applications, which are often written in new technology [such as] Java or WebSphere?"

The third category of Web-services-enablement software is the one that's generating the most attention, and even some controversy. CommerceQuest Inc.'s Traxion Business Process Management suite "Web-service-enables all CICS resources, and then further gives [customers] the opportunity to create composite services by assembling together these individual services, all from the mainframe and/or from other Web-services-enabled components elsewhere in the network," says Paul Roth, chief technology officer at CommerceQuest. The result is a business process that exposes itself from the mainframe as a Web service, bypassing middleware entirely.

ClientSoft Inc.'s ServiceBuilder software for Windows Server adopts an either/or approach, letting customers incrementally disuse middleware services as they see fit. "Solutions that utilize the application logic are typically the most valuable, because they take advantage of the 20 or 30 years' worth of business rules that have been encapsulated in many of these programs," says Brian Anderson, ClientSoft's director of product marketing.

The elimination of the middle tier is a new architectural trend, IBM's McCaffrey says. "More and more, we're seeing customers move toward two tiers, with this middle tier getting squeezed out," he says. "So you may be seeing the beginning of a trend as a result of our embrace of open standards, but also the result of the technologies that are now being deployed that are going to drive us more toward at least a physical two-tier implementation."

But not everyone is so enamored of these claims that they're prepared to toss out the middle tier entirely. Karen Larkowski, executive VP at analyst firm the Standish Group, is skeptical. "There are no benefits that I can think of and many costs to consider," she says. Without middleware outside the mainframe to perform the job of mediation, Larkowski says, companies are left to write infrastructure code by themselves or interpret the code produced by automated tools. "The biggest problem here is the cost of failure. We have found that applications which include homegrown middleware have a near 90% failure rate," Larkowski says.

Cutter's Ulrich is more intrigued by the two-tier option but also warns of the dangers involved, such as creating new transactional logic that the underlying original logic does not, and cannot, expect. There's also the fact that simply wrapping Web-services documents around existing transactions bypasses what could be many companies' key problem: the spaghettilike behavior of the transactions themselves. "In reality," Ulrich says, "you're not really addressing the underlying segregated architecture."

There are benefits, such as improving workflow and eliminating the green-screen hassle, Ulrich says. But as far as improving the application itself is concerned, "you're putting icing on a half-baked cake," he says. "You still have a lot of starting and stopping, you don't have [proper] flow from system to system, a lot of the underlying systems use batch processes, and there's a lot of inelegant handshaking going on behind the scenes. You're not making that go away."
