As is often the case, my own thoughts on the subject crystallized as I spent time talking tech with smart people--in this case, Vanessa Alvarez, formerly an analyst with Yankee and Forrester and now at scale-out storage upstart Gridstore, and my fellow Tech Field Day delegate Colin McNamara.
We were in New York talking technology with some of the wizards of Wall Street. As we explained the trends we were seeing in the real world of IT to these financial folks, I, as the graybeard in the room, realized that several of them weren't entirely new.
First is software-defined storage. While those with a more limited historical perspective view software-defined storage as a new concept, I tend to think of it more as a pendulum swing. Twenty or more years ago, computers were directly attached to storage. (Back then, we didn't yet call Unix or other midrange/minicomputers servers, because that term was reserved for PCs running NetWare, Banyan Vines or that newfangled Windows NT.) Functions like volume management, RAID and even replication were provided by software like Veritas Volume Manager, which grew up into Symantec's current Storage Foundation product.
Now, software-defined storage, which moves functionality from hardware storage systems to host software, is just a return to host-based storage management.
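To make the pendulum swing concrete, here is a minimal sketch of what host-based storage management means in practice: RAID-1 mirroring done entirely in host software, in the spirit of a volume manager like Veritas Volume Manager. The `Device` class is a hypothetical stand-in for a block device; a real volume manager would issue I/O to actual devices rather than an in-memory dict.

```python
class Device:
    """Toy block device: maps block number -> bytes."""
    def __init__(self):
        self.blocks = {}

    def write(self, block_no, data):
        self.blocks[block_no] = data

    def read(self, block_no):
        return self.blocks.get(block_no)


class MirroredVolume:
    """RAID-1 in host software: every write is replicated to all
    member devices; a read is satisfied by any member that has the
    block, so losing one mirror loses no data."""
    def __init__(self, devices):
        self.devices = devices

    def write(self, block_no, data):
        for dev in self.devices:   # replicate to every mirror
            dev.write(block_no, data)

    def read(self, block_no):
        for dev in self.devices:   # fall through past a failed/empty member
            data = dev.read(block_no)
            if data is not None:
                return data
        return None


vol = MirroredVolume([Device(), Device()])
vol.write(7, b"payroll record")
vol.devices[0].blocks.clear()      # simulate losing the first mirror
print(vol.read(7))                 # b'payroll record'
```

The point of the sketch is where the logic lives: the mirroring decision sits in host code above plain devices, not in an array controller--which is exactly the arrangement software-defined storage is returning us to.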
We then started talking about how portions of the IT budget were moving outside the control of corporate CIOs as business units went directly to SaaS options such as Salesforce.com and applications they develop themselves on IaaS or PaaS cloud platforms.
While the youngsters saw this as an entirely new trend, it looks to me a lot like a repeat of the guerrilla IT process I helped lead in the 1980s. Managers, tired of waiting years for IT to write a new mainframe application to solve a pressing business problem, went out and bought PCs as "office equipment" and hired consultants like me to solve their problems with 1-2-3 spreadsheets and dBase II programs. It seems that once end users can use their Amex cards to get technology solutions to their problems without IT knowing about it, they will. This of course creates other problems, but we'll leave those for some future blog posts.
The third case is the reemergence of non-volatile memory. Colin was excited about developments in this area; he told the financial folks about the not-so-distant future where NVMe flash, phase change memory and widespread use of RDMA would allow whole data centers to be viewed by applications as a single memory space that included very-high-speed but volatile DRAM, fast non-volatile PCM and slower flash.
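Colin's single-memory-space picture can be sketched in a few lines: one allocator presenting DRAM, PCM and flash as a single tiered pool, placing data by how hot it is and whether it must survive power loss. The tier names are from the discussion above; the latency figures and method names here are purely illustrative assumptions, not real product specs.

```python
# (name, volatile?, illustrative latency in nanoseconds -- not real specs)
TIERS = [
    ("DRAM",  True,      100),
    ("PCM",   False,   1_000),
    ("flash", False, 100_000),
]

class TieredMemory:
    """Hypothetical tiered pool that applications see as one space."""
    def __init__(self):
        self.store = {name: {} for name, _, _ in TIERS}

    def put(self, key, value, hot=False, persistent=False):
        """Hot data lands in DRAM; data that must survive power loss
        goes to the fastest non-volatile tier; everything else to the
        slow, cheap tier. Returns the tier chosen."""
        if persistent:
            tier = next(n for n, volatile, _ in TIERS if not volatile)
        elif hot:
            tier = TIERS[0][0]
        else:
            tier = TIERS[-1][0]
        self.store[tier][key] = value
        return tier

    def get(self, key):
        """One address space from the application's view: search the
        tiers fastest-first."""
        for name, _, _ in TIERS:
            if key in self.store[name]:
                return self.store[name][key]
        raise KeyError(key)

mem = TieredMemory()
print(mem.put("session-cache", "hot data", hot=True))    # DRAM
print(mem.put("journal", "must survive", persistent=True))  # PCM
print(mem.get("journal"))                                # must survive
```

Here the placement policy is trivial, but the shape is the one Colin described: with NVMe flash, PCM and RDMA, the tiers could even span machines, and the application still just calls `get` and `put`.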
This of course reminded me of my college days, when real computers used magnetic cores as main memory and the IBM 360/65 mainframe had 1 Mbyte of fast (OK, fast for the day) core and 4 Mbytes of much slower bulk core. Because core stored data magnetically, it was non-volatile. I remember watching the geeks of the day turn the department's PDP-8 off to perform some sort of maintenance and powering it back up without rebooting because the data remained in memory through the power outage.
This saved a lot of time and effort as the PDP-8 didn't have any read-only memory, and rebooting it meant someone had to enter about 100 bytes of boot loader through the front panel switches one 12-bit word at a time so the computer would know how to read the punched paper tape that held its operating system.
The moral of the story, children, is that while technology makes its relentless advance, good ideas we thought of as obsolete may come back. Of course, there are also technologies, like Token Ring, whose return, even as the storage medium FCoTR, would just be a bad idea.