Addendum: Should You Use SSDs For Your NAS?

@Will has published another great video, this time on SSDs as storage:

I wholeheartedly agree on SATA SSDs, but I have an addendum regarding PCIe SSDs and the topic of hotswapping.

Of course, you always have the option to use U.2 SSDs, which are hotswappable out of the box. But those SSDs are pretty expensive, though you might have luck on the used market finding drives with plenty of TBW left.

However, you can definitely go for M.2 SSDs, and you can make them hotswappable with the right cages/products. Just take a look at the stuff by Icy Dock, some of which I’ve linked below. Since gen3 M.2 SSDs are on average a little cheaper than SATA SSDs already, we’re actually getting close to affordable all-gen3-M.2 builds for home servers.

For now, though, this still comes at a significant cost: the hotswap M.2 cages are very expensive, and so are the cables, e.g. OCuLink, or SlimSAS to (dual) OCuLink 4i.

Here are some of the hotswap M.2 products by Icy Dock:

8-bay PCIe 4.0 5.25" cage (ToughArmor MB873MP-B V2)

4-bay PCIe 4.0 5.25" cage (ToughArmor MB720MK-B V3)

2-bay PCIe 4.0 ODD cage (ToughArmor MB852M2PO-B)

…et cetera.

Let’s say you use one of the (still few) W680 ATX boards (Asus or Supermicro). The chipset gives you 12 lanes of PCIe 4.0 and 16 lanes of PCIe 3.0, behind a DMI link with the throughput of 8 lanes of PCIe 4.0, i.e. the DMI alone can handle four gen3 M.2 SSDs at full speed via the PCH. On the Asus board you can wire those up with one SlimSAS 4i to OCuLink 4i cable, two M.2 Key M to OCuLink 4i adapters/redrivers, and one PCIe x4 to OCuLink or SlimSAS 4i card. As for the CPU-direct PCIe lanes: there’s one M.2 slot, which takes another M.2 Key M to OCuLink 4i adapter/redriver, and in the PCIe 5.0 x8 slot you can use the Broadcom HBA 9500-16i Tri-Mode Storage Adapter with its two SlimSAS 8i connectors, which needs two SlimSAS 8i to dual OCuLink 4i adapter cables.
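The DMI claim is easy to verify with a quick back-of-the-envelope in Python (my numbers, using the usual effective PCIe rates of ~0.985 GB/s per gen3 lane and ~1.969 GB/s per gen4 lane after 128b/130b encoding):

```python
# Sanity check: can DMI 4.0 x8 feed four gen3 x4 M.2 SSDs at full speed?
GEN3_LANE = 0.985   # GB/s per PCIe 3.0 lane (8 GT/s, 128b/130b encoding)
GEN4_LANE = 1.969   # GB/s per PCIe 4.0 lane (16 GT/s)

dmi_bw = 8 * GEN4_LANE              # DMI 4.0 x8: ~15.75 GB/s
four_gen3_ssds = 4 * (4 * GEN3_LANE)  # four x4 gen3 SSDs: ~15.76 GB/s

print(f"DMI 4.0 x8:       {dmi_bw:.2f} GB/s")
print(f"4x gen3 x4 SSDs:  {four_gen3_ssds:.2f} GB/s")
```

The two figures match almost exactly, because a gen4 lane carries twice the data of a gen3 lane: 8 gen4 lanes of DMI equal 16 gen3 lanes of SSDs.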

With this setup you can build a home server with nine (!) gen3 M.2 SSDs, and they’d be hotswappable with the right cages. (You could go for seven or eight, and you’d still have bandwidth via the PCH for at least eight SATA drives, i.e. basically two home servers in one.) If you want gen4 speeds, you’d be able to build a home server with five gen4 M.2 SSDs, in this case using only the 8i HBA by Broadcom.
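To make the drive count explicit, here’s a quick tally of where the nine SSDs attach (my sketch, assuming the 16i HBA serves four x4 NVMe drives via the two SlimSAS 8i to dual OCuLink 4i cables):

```python
# Where the nine gen3 M.2 SSDs attach in the build above
attach_points = {
    "PCH (SlimSAS 4i cable, 2x M.2-to-OCuLink adapter, x4 card)": 4,
    "CPU-direct M.2 slot (M.2-to-OCuLink adapter)": 1,
    "Broadcom HBA 9500-16i (two SlimSAS 8i, x4 lanes per drive)": 4,
}
total_drives = sum(attach_points.values())
print(total_drives)  # 9
```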

This is frankly insane, but we’re getting there with consumer-grade workstation boards. Your maximum theoretical read speed with nine gen3 M.2 SSDs in RAIDz1/RAID5 would be a whopping 31 GB/s (reads stripe across the eight data drives), and five gen4 M.2 SSDs land in the same ballpark at around 31.5 GB/s (four data drives at ~7.9 GB/s each). (Hotswappable!) To enjoy those speeds on your network too, you’d need at least a 50G QSFP card. :exploding_head: (Though you’d probably still be more than happy with a dual SFP28 card. :wink: With seven 4TB gen3 M.2 SSDs, your theoretical maximum storage speed would be just short of a single 25Gb/s connection… and the dual SFP28 router by MikroTik isn’t that expensive either, actually.)
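For the curious, the read-speed ceilings work out like this (my arithmetic, assuming RAIDz1/RAID5 reads stripe across the n−1 data drives, since parity isn’t read on a normal read):

```python
# Theoretical sequential-read ceilings for the pools above
GEN3_X4 = 4 * 0.985   # ~3.94 GB/s per gen3 x4 SSD
GEN4_X4 = 4 * 1.969   # ~7.88 GB/s per gen4 x4 SSD

nine_gen3_raidz1 = (9 - 1) * GEN3_X4   # eight data drives
five_gen4_raidz1 = (5 - 1) * GEN4_X4   # four data drives

print(f"9x gen3 in RAIDz1: {nine_gen3_raidz1:.1f} GB/s")   # ~31.5 GB/s
print(f"5x gen4 in RAIDz1: {five_gen4_raidz1:.1f} GB/s")   # ~31.5 GB/s
```

Real-world numbers will land below these ceilings (controller overhead, ZFS record sizes, SLC cache behavior), but they show the order of magnitude you’re playing with.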