The problem with determining needs is that there are a lot of them. To name a few: what the users require, how much performance, how much capacity, how you will connect to the storage (SAN or NAS), and what type of data protection you need.
The other factor is your available budget. This is something that needs to be at least ball-parked upfront and then refined as the storage buying process continues. Excess budget may mean that you can meet lower-priority needs, while a limited budget means that some needs may go unmet.
With an understanding of the key applications to be hosted on the storage platform, the first step is to look at whether you need shared storage at all. While I am obviously a big fan of shared storage, direct attached storage (DAS) with intelligent software replication cannot be ruled out. Exchange 2010 does an excellent job of using DAS combined with replication to maintain data redundancy. My belief is that while the economics of DAS-based solutions are compelling, most people want the more complete feature sets and shared capabilities of a SAN or NAS, especially in a virtualized environment.
The next two needs, performance and capacity, have to be assessed almost at the same time since they impact each other to a large degree. Depending on the environment, you will usually give one of them greater weight, and that will in large part determine which solutions you consider. Within each of these needs, though, there are still many sub-decisions to be made.
For example, is performance enough of a concern that solid state storage can be justified, or will mechanical storage give you the performance that you need? If you are selecting storage for an existing application, then performance data can be collected, as we show in our "Visual SSD Readiness" white paper, making the decision somewhat scientific. Our basic rule of thumb is that if you are adding mechanical drives for performance instead of capacity, you should look at solid state storage first.
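That rule of thumb can be sketched as a simple comparison: count the spindles needed to satisfy the workload's IOPS and the spindles needed to satisfy its capacity. All figures below are illustrative assumptions, not numbers from the white paper.

```python
import math

# Hypothetical per-drive figures: ~180 IOPS and 4 TB per mechanical drive.
def drives_needed(required_iops, required_tb, drive_iops=180, drive_tb=4):
    """Return spindle counts needed for performance and for capacity."""
    for_performance = math.ceil(required_iops / drive_iops)
    for_capacity = math.ceil(required_tb / drive_tb)
    return for_performance, for_capacity

perf, cap = drives_needed(required_iops=9000, required_tb=40)
print(f"spindles for IOPS: {perf}, spindles for capacity: {cap}")
# If performance, not capacity, drives the spindle count,
# solid state storage deserves a look first.
print("consider SSD first" if perf > cap else "mechanical may suffice")
```

In this made-up example, 50 drives are needed for performance but only 10 for capacity, which is exactly the situation where buying spindles for IOPS stops making sense.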
Capacity is another highly variable need, and new systems may actually need less of it, thanks to features like thin provisioning, cloning, deduplication, and compression. It is worth the time to look at how much usable capacity you need and then understand how efficient storage systems will be at storing that data. As we stated in our recent article "Faster Primary Storage With Data Deduplication," some capacity optimization technologies have the potential to improve performance, another reason to consider capacity and performance needs in parallel.
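The gap between raw usable capacity and the logical data a system can actually hold can be estimated with a quick calculation. The ratios below are assumptions for illustration; real savings vary widely by workload and vendor.

```python
# Illustrative estimate of effective capacity after data reduction.
def effective_capacity(usable_tb, dedup_ratio=2.0, compression_ratio=1.5):
    """Logical data a system could hold given assumed reduction ratios."""
    return usable_tb * dedup_ratio * compression_ratio

# A 20 TB usable system at the assumed 2:1 dedup and 1.5:1 compression
# ratios could store roughly 60 TB of logical data.
print(effective_capacity(20))
```

Running the comparison with each vendor's claimed (and ideally verified) ratios is one way to normalize capacity quotes across otherwise dissimilar systems.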
We've really just scratched the surface on determining needs. At a minimum there is still the protocol question of whether you should use NAS, iSCSI, Fibre Channel, or now even DAS. We'll cover that subject in more detail in a future entry.