Continuing my series on storage strategies and options (see part 1), today I want to briefly look at SAN and NAS options.
First, storage area networks. A SAN is a “dedicated network that provides access to consolidated, block-level data storage”. Storage presented to a target server appears to that machine as a “real” drive (i.e., as if it were DAS), which means it behaves exactly the same from the OS’s point of view.
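To make that “looks like a local drive” point concrete, here’s a rough, Linux-only sketch (it just reads /sys/block, where sizes are reported in 512-byte sectors). A LUN presented over iSCSI or Fibre Channel shows up in this listing as an ordinary disk, indistinguishable at this level from directly attached storage:

```python
#!/usr/bin/env python3
# Rough illustration (Linux only): block devices under /sys/block are what the
# kernel sees. A SAN LUN appears here as an ordinary disk, just like local storage.
from pathlib import Path

for dev in sorted(Path("/sys/block").iterdir()):
    size_file = dev / "size"          # size is reported in 512-byte sectors
    if not size_file.exists():
        continue
    sectors = int(size_file.read_text().strip())
    print(f"{dev.name}: {sectors * 512 / 2**30:.1f} GiB")
```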
Second, network-attached storage. A NAS is “file-level computer data storage connected to a computer network providing data access to heterogeneous clients”. Storage presented by a NAS host can be mounted on a target server, but can’t be installed to*, since the space is presented as a Samba or NFS share. A NAS device differs from a “mere” file server that happens to be publishing NFS or Samba shares in that it provides dedicated management interfaces for handling quotas, publishing policies (e.g., which protocols should be used for different network segments), etc.
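And the flip side of the same coin: because a NAS presents file-level storage, it shows up on the client as an nfs/nfs4/cifs mount rather than a local block-device filesystem like ext4 or xfs. A rough, Linux-only sketch that checks this by reading /proc/mounts (the /mnt/nas path is just a hypothetical mount point):

```python
#!/usr/bin/env python3
# Rough illustration (Linux only): a NAS export appears as an nfs/nfs4/cifs
# mount in /proc/mounts, not as a local block-device filesystem like ext4.
import os

def backing_fs(path):
    """Return the filesystem type of the mount that contains `path`."""
    path = os.path.realpath(path)
    match, fstype = "", "unknown"
    with open("/proc/mounts") as mounts:
        for line in mounts:
            _, mount_point, fs, *_ = line.split()
            if (path == mount_point or path.startswith(mount_point.rstrip("/") + "/")) \
                    and len(mount_point) > len(match):
                match, fstype = mount_point, fs   # keep the longest matching mount point
    return fstype

print(backing_fs("/mnt/nas"))   # e.g. "nfs4" or "cifs" for a NAS-backed mount
```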
Pros:
- flexibly deploy storage “on demand” to target machines
- the storage appliance can typically be serviced while servers are using it
- with technologies like Storage vMotion from VMware, storage can be migrated “live” from one device to another based on load
Cons:
- the most expensive option, though pricing varies widely depending on manufacturer, protocols exposed, size, redundancy, management interfaces, etc
- dedicated storage technologies require dedicated storage admins, over and above dedicated system admins, to use and maintain them properly
*The caveat here is if a NAS share has been mounted as a datastore for virtual machines through a hypervisor like VMware: the VMDKs can then be stored on the NAS device, and the VMs will run off the remote mount.