Computing history offers great insight for today's data center managers, especially those of you who are constantly running out of floor space for equipment, frazzled by data backup concerns, weary of heightened security measures, and reminded of it all by the ever-annoying end-user.
Care to reminisce?
In the old days, computing centers held huge mainframe computers that filled rooms, floors, and buildings. These beasts served hundreds or thousands of users. Getting access to the computer involved deploying batch entry stations (a.k.a. keypunch machines), teletypes, or dumb terminals all over the place. Sometimes terminals were spread throughout an enterprise, requiring long cable runs or even leased bandwidth connections for remote terminals.
The client-server paradigm is just a reinvention of the old paradigm [ed. note: or what our primitive forebears naively called a "model"] of mainframe-terminal computing. An enterprise used to have one large mainframe with massive processing power and lots of I/O channels, and everyone connected to it through a terminal. We're returning to that paradigm, with thin clients that access data on an enterprise-class server or on disk arrays in the data center.
Today, lack of real estate is still a problem. What used to take rooms of computing power can easily fit into an equipment rack or two. Disk drives used to be the size of washing machines; now they fit in your coat pocket. Still, data center managers must cope with what always seems like too little space for all the hardware they have to manage.