DAS vs. NAS vs. SAN

A quick web search of the terms DAS, NAS, and SAN will yield plenty of results, yet many of those near the top of the list are almost a decade old, and even the newer articles fail to account for recent developments in storage technology.  New technologies like NVMe have changed the landscape: DAS is the undisputed leader in latency, NAS solutions can deliver very good performance and surprisingly low latency, and SAN continues to offer the greatest ability to scale out.

DAS, Direct Attached Storage, describes storage that is part of a server, either internal or external, attached by a storage bus like SAS3.  Today, direct attached storage also includes NVMe SSDs in both the 2.5-inch U.2 form factor and add-in cards.  In 2017, these NVMe devices can reach 4TB in capacity and offer latency as low as 20 microseconds.  For reference, a mechanical disk spinning at 10,000 RPM takes 3ms, on average, for the needed data to rotate around to the spot where the head can read it, and the time to move the head to that track can be even longer.  A single random I/O operation to a disk therefore has more than 150x the latency, or response time, of one to an NVMe SSD.
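A quick back-of-the-envelope calculation shows where that 150x figure comes from.  This is just a sketch using the numbers quoted above (10,000 RPM and 20 microseconds), not measurements:

```python
# Rotational latency of a 10,000 RPM disk vs. the ~20 microsecond
# NVMe figure quoted above. Seek time would only widen the gap.

RPM = 10_000
SSD_LATENCY_S = 20e-6  # 20 microseconds

rotation_time_s = 60 / RPM                      # one revolution: 6 ms
avg_rotational_latency_s = rotation_time_s / 2  # half a turn on average: 3 ms

ratio = avg_rotational_latency_s / SSD_LATENCY_S
print(f"One revolution: {rotation_time_s * 1e3:.1f} ms")
print(f"Average rotational latency: {avg_rotational_latency_s * 1e3:.1f} ms")
print(f"Disk vs. NVMe latency ratio: {ratio:.0f}x")  # 150x, before any seek
```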

NAS, Network Attached Storage, refers to storage that is shared across a network using a file-sharing protocol like SMB/CIFS or NFS.  “Shared” is an interesting and crucial word there, because it means that a volume and the files stored on it may be accessed by more than one client (which may itself be a server) at a time.  Traditionally, NAS file servers were regarded as low-performance, high-latency systems that provided a great deal of convenience on the network by sharing files and offering reasonably high capacity.  With current disk technology, that capacity can exceed 2PB, and even with all-SSD storage, the capacity of a single NAS can exceed 150TB.  More interesting, perhaps, is the performance that can be delivered by an all-SSD NAS running Microsoft Windows Server with SMB3 and sufficient network connectivity: up to 10 gigabytes per second of 64kB random reads and up to 1 million I/O operations per second (IOPS).  Take a look at ION’s SR-71mach5 SpeedServer as an example.
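Bandwidth and IOPS are two views of the same workload, related by block size.  The sketch below sanity-checks the figures quoted above; note that the 1M IOPS number is assumed here to be at a small 4kB block size, which the source does not specify:

```python
# Relating block size, bandwidth, and IOPS for the NAS figures above.

def iops(bandwidth_bytes_per_s: float, block_size_bytes: int) -> float:
    """IOPS implied by a given bandwidth at a given block size."""
    return bandwidth_bytes_per_s / block_size_bytes

GB = 10**9
KB = 1024

# 10 GB/s of 64 kB random reads works out to roughly 153k IOPS:
print(f"{iops(10 * GB, 64 * KB):,.0f} IOPS at 64 kB")

# Conversely, 1M IOPS at an assumed 4 kB block size is about 4.1 GB/s:
print(f"{1_000_000 * 4 * KB / GB:.1f} GB/s at 4 kB")
```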

SAN, Storage Area Network, is usually a large and expensive pool of storage connected to the servers using it by a fabric of some kind.  That fabric is most commonly Fibre Channel, but it can also be Fibre Channel over Ethernet, iSCSI, or even SAS3 via a SAS3 switch.  Where the NAS provides file access, the SAN provides block access, so in general only one client has access to each LUN (a volume, or Logical Unit) at any one time.  Because they are all connected via that fabric, a LUN that is released by a system (or was in use by a system that failed) may be attached and used by another system.  SANs are often designed to scale across a number of rack cabinets and are capable of supporting many client systems.  The multiple levels of intelligence between the requester and the data can provide some interesting features, but they also increase latency.
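The exclusive-ownership semantics described above can be illustrated with a toy model.  This is a sketch only; the class and method names (Fabric, attach, release) are hypothetical and do not correspond to any real SAN API:

```python
# Toy model of SAN block access: each LUN is held by at most one
# client at a time, and a released (or failed) client's LUN can be
# re-attached by another system over the shared fabric.

class Fabric:
    def __init__(self) -> None:
        self._owners: dict[str, str] = {}  # LUN id -> owning client id

    def attach(self, lun: str, client: str) -> bool:
        """Attach a LUN to a client; refuse if another client holds it."""
        if self._owners.get(lun, client) != client:
            return False
        self._owners[lun] = client
        return True

    def release(self, lun: str) -> None:
        """Release a LUN, e.g. on clean shutdown or after a failure."""
        self._owners.pop(lun, None)

fabric = Fabric()
assert fabric.attach("lun0", "server-a")
assert not fabric.attach("lun0", "server-b")  # block access is exclusive
fabric.release("lun0")                        # server-a releases, or fails
assert fabric.attach("lun0", "server-b")      # another system takes over
```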

So, which storage architecture is best?  Well, all of them, of course.  And none of them.  It depends on the requirements of the specific environment and of the specific applications in that environment.  Some Software Defined Storage approaches blur the lines even further, increasing the scalability and reliability of NAS in particular, but with the potential to increase latency at the same time.  As with most things in information technology, the most expensive solution may not be the best solution for the problem at hand.  The only way to optimize capacity, bandwidth, latency, and cost for a particular requirement is to understand that requirement in great detail before searching for a solution.
