Energy-efficient DIY NAS: SFP+ over a single PCIe 3.0 lane possible?

Hey everyone, need some help selecting the right DIY NAS hardware.

I’m currently gravitating toward building my own NAS (unRAID or TrueNAS Scale) around an energy-efficient all-in-one board with a Celeron J6413 (8 PCIe 3.0 lanes), 64GB of RAM, and a SATA SSD storage pool:

https://www.amazon.com/dp/B0CFLQ41M5

It only has one CPU-direct SATA port; the additional five SATA ports are provided by a JMB585 controller (PCIe 3.0 x2 to 5 x SATA). The board also has two M.2 NVMe slots, which, however, only run at PCIe 3.0 x1 speeds. (With this Celeron, and without a PCH, you simply can’t have x4.) Still, I would add one discrete M.2-to-5-x-SATA adapter, which also uses the JMB585 chipset, e.g. the Delock 64051:

https://www.delock.de/produkt/64051/merkmale.html

Since the M.2 slots are only x1 (roughly 985 MB/s each), I would only be able to use two of the adapter’s five SATA ports for SATA SSDs, and one for a backup HDD, so that no drive is starved for bandwidth.

The storage setup would be: seven 4TB SATA SSDs for the storage pool, one standalone 4TB SATA SSD for Time Machine backups, and one 20TB HDD for server backup.

For a storage pool of seven SATA SSDs, and considering that the two JMB585 controllers sit in the path, I would therefore get an average speed of ca. 441 MB/s per SSD, which in RAIDz1 (RAID5) would still amount to 3.087 GB/s read and 0.771 GB/s write, and an effective approx. 1.235 GB/s with a 50/50 read/write mix. Obviously, the network connection will then be the bottleneck, at least for read operations.
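For reference, here is roughly how I get from the per-SSD figure to the pool figures; a quick sketch, assuming the classic RAID5 write penalty of 4 and a harmonic mean for the 50/50 mix, and taking the ~441 MB/s per-SSD average as given:

```python
# Rough throughput estimate for a 7-disk RAIDz1 pool of SATA SSDs.
# Assumptions: ~441 MB/s average per SSD (limited by the two JMB585
# uplinks), the classic RAID5 write penalty of 4, and a harmonic mean
# for a 50/50 read/write mix. Real-world numbers will differ.

N_SSDS = 7
PER_SSD_GBS = 0.441          # GB/s, average per pool SSD
WRITE_PENALTY = 4            # read-modify-write on parity RAID

read = N_SSDS * PER_SSD_GBS               # ~3.087 GB/s
write = read / WRITE_PENALTY              # ~0.772 GB/s
mixed = 2 / (1 / read + 1 / write)        # harmonic mean, ~1.235 GB/s

print(f"read  ~{read:.3f} GB/s")
print(f"write ~{write:.3f} GB/s")
print(f"50/50 ~{mixed:.3f} GB/s")
```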

Compared to the Topton N5105/N6005 boards, the J6413 is a next-gen Celeron, albeit with a slightly less powerful iGPU, but imho still more than enough for Plex/Emby transcoding. The really nice advantage of this board is that it comes with an open-ended PCIe 3.0 x1 slot, and this is where my question comes in:

I want to use this slot for a 10GbE SFP+ network interface card. I would only be able to use one PCIe 3.0 lane, of course, but with 128b/130b encoding that is still about 0.985 GB/s, i.e. 78.8% of a full-blown 10GbE connection and slightly more than the built-in triple 2.5GbE ports combined.
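For anyone who wants to double-check the 78.8% figure, the lane math (ignoring PCIe protocol overhead) is simply:

```python
# One PCIe 3.0 lane vs. 10GbE, ignoring protocol overhead.
PCIE3_GTS = 8.0                             # GT/s per PCIe 3.0 lane
PCIE3_ENCODING = 128 / 130                  # 128b/130b line coding
lane_gbs = PCIE3_GTS * PCIE3_ENCODING / 8   # ~0.985 GB/s usable per lane

TEN_GBE_GBS = 10 / 8                        # 1.25 GB/s line rate
TRIPLE_2G5_GBS = 3 * 2.5 / 8                # 0.9375 GB/s for 3 x 2.5GbE

print(f"one PCIe 3.0 lane : ~{lane_gbs:.3f} GB/s")
print(f"share of 10GbE    : {lane_gbs / TEN_GBE_GBS:.1%}")   # ~78.8%
print(f"3 x 2.5GbE total  : {TRIPLE_2G5_GBS:.4f} GB/s")
```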

My first idea was to use a PCIe 3.0 x4 single-port SFP+ NIC, and those usually come with the AQC100S chipset by Aquantia (now Marvell), e.g. the Delock 89100 or the Sonnet Solo 10G SFP+ PCIe card. Another option is an older Mellanox card or a consumer-grade card like the Asus XG-C100F 10G.

However, judging by discussions online, these cards don’t do well in terms of energy efficiency, specifically because they keep the system from reaching deeper C-states. The only energy-efficient option, it seems, is to go with a card based on an Intel controller.
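From what I’ve read, the way to verify this on a test system would be to compare C-state residency with and without the card installed, e.g. with powertop or turbostat, or with a quick sysfs readout like the sketch below (this only shows core C-states; the package C-states that a NIC without working ASPM tends to block are better checked with powertop/turbostat):

```python
# Print per-state idle residency for CPU0 from sysfs (Linux only).
# This covers core C-states; package C-states need powertop/turbostat.
from pathlib import Path

cpuidle = Path("/sys/devices/system/cpu/cpu0/cpuidle")
for state in sorted(cpuidle.glob("state*")):
    name = (state / "name").read_text().strip()
    usec = int((state / "time").read_text())   # total residency in microseconds
    print(f"{state.name}: {name:10s} {usec / 1e6:10.1f} s")
```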

So I looked at the Intel X710-DA2, which is a PCIe 3.0 x8 card with two SFP+ ports. Again, of course, I would only be able to use one of its eight PCIe lanes, but the same bandwidth calculation as above would apply, i.e. 78.8% of a full single 10GbE connection. The advantage of this card is that I could reuse it in a future build whose board has an x8 PCIe slot.

However, I don’t know if such a setup would actually work. Is it really feasible to run SFP+ over just one PCIe 3.0 lane in an open-ended x1 slot? It looks good on paper, but I can’t say whether it would work in real life. Furthermore, a dual SFP+ NIC would already be too long for this mini-ITX board (22.23 cm vs. 17 cm) and would probably block the six built-in SATA ports. Thank you in advance for your comments.

PS: additional hardware would be a rackmount chassis (SilverStone RM23-502-MINI) with the IcyDock ToughArmor MB998SP-B, Noctua fans etc.


I guess I could probably use a riser/extension cable to place the NIC away from the board proper, e.g.

https://www.delock.de/produkt/85762/merkmale.html

or

https://www.delock.de/produkt/88047/merkmale.html

That still leaves the question of whether SFP+ over a single PCIe 3.0 lane of an x8 NIC would even work.

Hi @Joss

Awesome setup!

As for not giving the card enough PCIe bandwidth, it should still work. PCIe is pretty plug and play: the link width is negotiated at training time, and the card will use whatever lanes it can get. The network chip on the card should apply flow control toward the switch (pause frames) if it can’t get the data out over the PCIe bus fast enough, so it all should just work at the slower speed.
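Once the card is in the slot, you can check what it actually negotiated: lspci -vv shows it under LnkSta, or you can read it straight out of sysfs on Linux, roughly like this:

```python
# List the negotiated PCIe link speed/width for every device that
# exposes them in sysfs (Linux). An x8 NIC in the open-ended x1 slot
# should report a current width of 1 here.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    speed = dev / "current_link_speed"
    width = dev / "current_link_width"
    try:
        if speed.exists() and width.exists():
            print(f"{dev.name}: x{width.read_text().strip()} "
                  f"@ {speed.read_text().strip()}")
    except OSError:
        pass   # some ports report no active link
```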

Jeff Geerling (highly recommend) is a YouTuber who does a ton with Raspberry Pis. He did the exact same setup you are talking about, and it worked! It did completely overload the Pi’s CPU, but he got 3.5 Gbit!

https://www.jeffgeerling.com/blog/2021/getting-faster-10-gbps-ethernet-on-raspberry-pi


Thank you. I found this board after watching a video by Wolfgang (notthebee) on a DIY NAS using the Topton N5105 board, which is currently his no. 1 pick for the most energy-efficient DIY build. Speaking of which: there is also a mini-ITX board by BKHD, the N510X, which looks like it has a PCIe x4 slot:

(NASCompares is working on a video on that one, I believe.)

The specs on the page are not very informative, but it seems like it only comes with one x1 M.2 slot instead of two, which is presumably how they were able to offer an x4 PCIe slot; it is possible, though, that the slot is only x2 electrically. (I don’t know which components they could steal the additional two PCIe lanes from.) But even x2 would still be enough for full 10GbE (two lanes ≈ 1.97 GB/s vs. 1.25 GB/s line rate).

However, the Celeron N5105 is one generation older than the J6413, so it will probably be less efficient, but afaict it has a slightly more potent iGPU (24 instead of 16 execution units). So this BKHD board might also be an option.

The above build is just a rough idea for a low-end, energy-efficient NAS. I haven’t given up on the other options yet, including the high-end ones (13th-gen Intel Core i9). For those, a mini-ITX board would probably not be enough, and I’d have to do a lot more tweaking in the BIOS to bring power consumption down.

One problem these boards apparently have is that they don’t support DIPM (device-initiated SATA link power management), even though the JMB585 itself does, which means a DIY NAS built on them would draw unnecessary power whenever the drives are idle. So that’s obviously not good.
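If I do end up with one of these boards, I’d at least check what the kernel reports per AHCI host and whether forcing a DIPM-enabled policy changes anything at the wall; a minimal sketch, assuming a Linux-based NAS OS (unRAID or TrueNAS Scale):

```python
# Show the SATA link power management policy per AHCI host (Linux).
# Typical values: max_performance, medium_power, med_power_with_dipm,
# min_power. Whether the board/JMB585 combination actually honors a
# DIPM-enabled policy is exactly the open question here.
from pathlib import Path

for host in sorted(Path("/sys/class/scsi_host").glob("host*")):
    policy = host / "link_power_management_policy"
    if policy.exists():
        print(f"{host.name}: {policy.read_text().strip()}")
```

(Writing min_power or med_power_with_dipm into those files as root would be the way to test it, then measure idle power.)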