I'll admit, NetApp's NVMe fabric-accessed array sure has SAS, but it could be zippier

Jet plane, meet bike


Analysis NetApp's E570 array supports NVMe over Fabrics, yet it does not use NVMe drives, which potentially slows data access.

The all-flash E-Series E570 was launched in September, claiming sub-100μs latency through its support of NVMe-over-Fabrics access.

It uses RDMA over InfiniBand to reach its 24 flash drives, which lets accessing host servers bypass latency-consuming storage software stack code and have data written to or read from their memory directly.

Part of the logic for using NVMe in this way is that accessing flash drives (SSDs) with the SATA and SAS protocols, which are basically disk drive access protocols, is sub-optimal: they are slow and add latency to data access from an SSD.

But having NVMe-accessed flash drives in an array exposed the latency contributed by SAN network protocols such as iSCSI or Fibre Channel. So the NVMe over Fabrics protocol was devised, supporting up to 65,535 queues with up to 65,536 commands per queue. It provides remote direct memory access (RDMA) and bypasses the network protocol stacks at either end of the link. In the E570's case, Mellanox ConnectX InfiniBand adapters are used.
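To get a feel for the scale of that queue model, here is a back-of-the-envelope sketch (not vendor code) comparing the NVMe spec's maximum outstanding command count against a typical single SAS tagged command queue; the SAS queue depth of 254 is an illustrative assumption:

```python
# NVMe spec limits: up to 65,535 I/O queues per controller,
# each holding up to 65,536 commands.
NVME_MAX_QUEUES = 65_535
NVME_CMDS_PER_QUEUE = 65_536

# SAS/SATA present a single command queue; 254 is a typical
# tagged-command-queue depth (an assumption for illustration).
SAS_QUEUES = 1
SAS_CMDS_PER_QUEUE = 254

nvme_outstanding = NVME_MAX_QUEUES * NVME_CMDS_PER_QUEUE
sas_outstanding = SAS_QUEUES * SAS_CMDS_PER_QUEUE

print(f"NVMe outstanding commands: {nvme_outstanding:,}")
print(f"SAS outstanding commands:  {sas_outstanding:,}")
```

The point is not that any workload issues four billion concurrent commands, but that the protocol itself stops being the bottleneck.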

Existing NVMe-over-Fabrics storage access latencies range from 30μs (Mangstor NX array write) and 50μs (E8 write) to 100μs (E8 read) and 110μs (Mangstor array read). The E570's 100μs latency is pretty damn good considering that it uses SAS SSDs, with a SCSI access stack, and an NVMe-to-SAS bridge.
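Lining up the figures quoted above makes the comparison easier to eyeball; the dictionary below simply tabulates the article's numbers, sorted fastest first:

```python
# Latencies quoted in the article, in microseconds.
latencies_us = {
    "Mangstor NX (write)": 30,
    "E8 (write)": 50,
    "E8 (read)": 100,
    "NetApp E570 (quoted)": 100,
    "Mangstor NX (read)": 110,
}

# Print a small league table, fastest first.
for name, lat in sorted(latencies_us.items(), key=lambda kv: kv[1]):
    print(f"{name:22s} {lat:4d} us")
```

Even through a SCSI stack and an NVMe-to-SAS bridge, the E570 sits in the same band as the native-NVMe arrays' read latencies.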

We imagine that it could cut the latency down a notch if it used NVMe flash drives and daresay a future E-Series array could do exactly that.

At the NetApp Insight event in Berlin, a Brocade stand showed NVMe-over-Fibre Channel access to a NetApp array, and that also did not have an end-to-end NVMe access scheme. Instead the array controller terminated the NVMe over fabrics connection and then despatched the incoming request to a specific drive or drives.

Once again we can envisage that were NetApp to implement end-to-end NVMe, with last-mile NVMe access, as it were, to the flash drives, then access latency could be cut even more.

It seems, though, that were such end-to-end NVMe access implemented, the array controller software would not know what changes had been made to the data contents of drives in the array, and so could not trigger data services based on content changes. The implications of that could be far-reaching.


Biting the hand that feeds IT © 1998–2017
