Latency of NVMe SSDs

By eliminating the legacy double translation of a block address into a head/cylinder/sector address and back again, NVMe SSDs deliver latency far lower than any other storage interface can achieve. That low latency is only fully realized when the NVMe SSDs sit in the server itself; accessing the storage through a network or fabric erases much of the advantage.

Latency is the time that elapses between an application requesting data and that data being delivered.
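
To make that definition concrete, the sketch below times a single 4 KiB read from the application's point of view. It is a minimal illustration, not a benchmark: /dev/nvme0n1 is a placeholder device path, the program assumes a Linux host, and O_DIRECT is used so the page cache does not hide the device's latency.

```c
/* Minimal sketch: time one 4 KiB read from request to delivery.
 * /dev/nvme0n1 is a placeholder; adjust to a real device or file. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const size_t blk = 4096;            /* one 4 KiB logical block */
    void *buf;
    if (posix_memalign(&buf, blk, blk)) /* O_DIRECT needs an aligned buffer */
        return 1;

    int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);        /* request issued */
    if (pread(fd, buf, blk, 0) != (ssize_t)blk) { perror("pread"); return 1; }
    clock_gettime(CLOCK_MONOTONIC, &t1);        /* data delivered */

    double us = (t1.tv_sec - t0.tv_sec) * 1e6 +
                (t1.tv_nsec - t0.tv_nsec) / 1e3;
    printf("read latency: %.1f us\n", us);

    close(fd);
    free(buf);
    return 0;
}
```

Run the same measurement against a local NVMe device and against the same device exported over a network, and the difference discussed below becomes visible directly.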

There are certainly many advantages to having storage available on a storage fabric, and probably more to having shared storage on a network, but neither approach can match the latency and bandwidth of directly attached NVMe storage. Both block and file sharing over a network or fabric can now deliver requests with latency that is surprisingly good compared with what was possible just a few years ago, yet it is typically still orders of magnitude above what direct attachment achieves.

Many applications can tolerate the latency overhead of storage accessed through a fabric or network. For some applications, however, local, direct access to the storage is essential. Those applications will probably need to be architected to achieve redundancy and high availability in other ways. When latency really matters, it takes priority over the mechanisms normally used to ensure data availability.

NVMe is an access protocol layered directly on PCI Express. A processor with PCI Express interfaces on the die, such as an Intel Xeon, accesses NVMe SSDs directly, with no additional protocol processing on either end and no round-trip conversations between interfaces at both ends of a network or fabric connection. There is no lower-latency way to access storage than direct, local access to NVMe SSDs.
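
To show how short that path is, here is a simplified sketch of command submission as the NVMe protocol defines it. The field names follow the NVMe specification, but the layout is abbreviated and the queue and doorbell pointers are assumed to have been set up by a driver; this is an illustration of the mechanism, not working driver code.

```c
/* Simplified sketch of the NVMe command path. The host writes one
 * 64-byte command into a submission queue in host memory and rings a
 * doorbell register over PCIe; the SSD DMAs the data into the host's
 * buffer and posts a completion. No SCSI or SATA translation layer
 * sits in between. */
#include <stdint.h>

struct nvme_sqe {                  /* 64-byte submission queue entry */
    uint8_t  opcode;               /* 0x02 = Read (NVM command set)  */
    uint8_t  flags;
    uint16_t command_id;           /* echoed back in the completion  */
    uint32_t nsid;                 /* namespace identifier           */
    uint64_t rsvd;
    uint64_t metadata;
    uint64_t prp1, prp2;           /* physical addresses of the data buffer */
    uint64_t slba;                 /* starting logical block address */
    uint16_t nlb;                  /* number of logical blocks - 1   */
    uint8_t  pad[14];              /* remaining fields omitted       */
};

/* Submit one read: copy the command into the queue, then write the new
 * tail index to the doorbell register. The doorbell write is a single
 * PCIe transaction, which is why the path is so short. */
static void submit_read(struct nvme_sqe *sq, uint16_t tail,
                        volatile uint32_t *sq_doorbell,
                        uint64_t lba, uint64_t buf_phys)
{
    struct nvme_sqe cmd = {0};
    cmd.opcode     = 0x02;         /* Read                       */
    cmd.command_id = tail;
    cmd.nsid       = 1;
    cmd.prp1       = buf_phys;     /* destination for the DMA    */
    cmd.slba       = lba;
    cmd.nlb        = 0;            /* 0 means one block          */
    sq[tail] = cmd;
    /* A real driver issues a memory barrier here so the command is
     * visible to the device before the doorbell write. */
    *sq_doorbell = tail + 1;       /* ring the doorbell: "go"    */
}
```

The doorbell write is the only protocol traffic involved; everything else is DMA between the SSD and host memory, which is why no network or fabric hop can undercut it.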
