Hey everyone, I need some help selecting the right DIY NAS hardware.
I’m currently gravitating toward building my own NAS (unRAID or TrueNAS SCALE) using an energy-efficient AIO board with a Celeron J6413 (8 PCIe 3.0 lanes), 64GB of RAM, and a SATA SSD storage pool:
https://www.amazon.com/dp/B0CFLQ41M5
It only has one CPU-direct SATA port, while the additional five SATA ports are provided by a JMB585 controller (PCIe 3.0 x2 to 5 x SATA). The board also has two M.2 NVMe slots, which, however, only run at PCIe 3.0 x1. (With this Celeron’s limited lane count and no separate PCH, x4 simply isn’t possible.) Still, I would add a discrete M.2-to-5x-SATA adapter that also uses the JMB585, e.g. the Delock 64051:
https://www.delock.de/produkt/64051/merkmale.html
Since the M.2 slots only run at x1, I would only be able to use two of the adapter’s five SATA ports for SSDs, plus one for a backup HDD.
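Quick sanity check on that limit; a back-of-the-envelope sketch in Python, where the ~985 MB/s per PCIe 3.0 lane and ~560 MB/s per SATA SSD figures are my working assumptions:

```python
# Back-of-the-envelope check of the M.2 (PCIe 3.0 x1) upstream budget.
# Assumptions: ~985 MB/s usable per PCIe 3.0 lane, ~560 MB/s per SATA SSD.
LANE_MBPS = 985
SSD_MBPS = 560

for n_ssds in (1, 2, 3):
    per_ssd = min(SSD_MBPS, LANE_MBPS / n_ssds)
    print(f"{n_ssds} SSD(s) on the adapter: ~{per_ssd:.0f} MB/s each")
# 1 SSD: ~560 MB/s, 2 SSDs: ~493 MB/s, 3 SSDs: ~328 MB/s
```

With a third SSD on the adapter, each drive would drop well below SATA speed, so two SSDs plus the HDD is where I’d stop.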
The storage setup would be: seven 4TB SATA SSDs for the storage pool, one standalone 4TB SATA SSD for Time Machine backups, and one 20TB HDD for server backup.
For a storage pool of seven SATA SSDs, and considering the bandwidth limits of the two JMB585 controllers, I would therefore get an average of ca. 441 MB/s per SSD, which in RAIDz1 (RAID5) would still amount to about 3.087 GB/s read and 0.771 GB/s write, or approx. 1.235 GB/s at a 50/50 read/write mix. Obviously, the network connection will then be the bottleneck, at least for read operations.
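For transparency, here is the rough arithmetic behind those figures. The drive placement across the three controllers is my own guess, and the per-device numbers (~985 MB/s per PCIe 3.0 lane, ~560 MB/s per SATA SSD), the classic RAID5 write penalty of 4, and the harmonic mean for the 50/50 mix are assumptions rather than measurements:

```python
# Rough pool-throughput estimate (assumptions, not benchmarks):
# ~985 MB/s per PCIe 3.0 lane (after 128b/130b encoding), ~560 MB/s per
# SATA SSD, and my guess at how the 7 pool SSDs sit on the 3 controllers.
LANE = 985
SSD = 560

cpu_direct = min(1 * SSD, 600)           # 1 pool SSD on the CPU-attached SATA III port
onboard_jmb585 = min(5 * SSD, 2 * LANE)  # 5 pool SSDs behind the onboard JMB585 (x2)
m2_jmb585 = min(1 * SSD, 1 * LANE)       # 1 pool SSD on the M.2 JMB585 (x1), TM SSD and HDD idle

total_mbps = cpu_direct + onboard_jmb585 + m2_jmb585
per_ssd = total_mbps / 7
read = total_mbps / 1000                 # GB/s; streaming reads hit all 7 drives
write = read / 4                         # classic RAID5 write penalty of 4
mixed = 2 / (1 / read + 1 / write)       # 50/50 read/write as a harmonic mean

print(f"~{per_ssd:.0f} MB/s per SSD, {read:.2f} GB/s read, "
      f"{write:.2f} GB/s write, {mixed:.2f} GB/s mixed")
# -> ~441 MB/s per SSD, 3.09 GB/s read, 0.77 GB/s write, 1.24 GB/s mixed
```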
Compared to the Topton N5105/N6005 boards, the J6413 is a next gen Celeron, albeit with a slightly less powerful iGPU, but imho still more than enough for Plex/Emby transcoding. The really nice advantage of this board is that it comes with an open PCIe 3.0 x1 slot, and this is where my question comes in:
I want to use this slot for a 10GbE SFP+ network interface card. I would only be able to use one PCIe 3.0 lane, of course, but that is still 8 GT/s, i.e. about 7.9 Gb/s = 0.985 GB/s after 128b/130b encoding, or 78.8% of a full-blown 10GbE connection and slightly more than the three built-in 2.5GbE ports combined (7.5 Gb/s).
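The math behind that, as a quick sketch; the only assumptions are the 128b/130b encoding overhead and no further protocol overhead:

```python
# 10GbE SFP+ NIC in a PCIe 3.0 x1 slot vs. the onboard 3x 2.5GbE ports.
pcie3_x1_gbps = 8 * 128 / 130   # 8 GT/s per lane with 128b/130b encoding: ~7.88 Gb/s usable
print(f"x1 link: ~{pcie3_x1_gbps:.2f} Gb/s = ~{pcie3_x1_gbps / 8:.3f} GB/s")
print(f"share of a full 10GbE link: {pcie3_x1_gbps / 10:.1%}")
print(f"built-in 3x 2.5GbE combined: {3 * 2.5:.1f} Gb/s")
# -> ~7.88 Gb/s = ~0.985 GB/s, 78.8% of 10GbE, vs. 7.5 Gb/s combined
```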
My first idea was to use a PCIe 3.0 x4 single SFP+ NIC, and those usually come with the AQC100S chipset by Aquantia (now Marvell), e.g. the Delock 89100 or the Sonnet Solo 10G SFP+ PCIe card. Another option is to use an older Mellanox card or a consumer-grade card like the Asus XG-C100F 10G.
However, judging by discussions online, these cards aren’t great for energy efficiency; in particular, they tend to keep the system from reaching deeper C-states. The only energy-efficient option, it seems, is a card with an Intel controller.
So I looked at the Intel X710-DA2, a PCIe 3.0 x8 card with two SFP+ ports. Again, I would of course only be able to use one of its eight PCIe lanes, but the same bandwidth calculation as above applies, i.e. 78.8% of a single full 10GbE connection. The advantage of this card is that I could reuse it in a future build whose board has an x8 PCIe slot.
However, I don’t know if such a setup would actually work. Is connecting via SFP+ really feasible with only one PCIe 3.0 lane in an open x1 slot? It looks good on paper, but I can’t say whether it would hold up in real life. Furthermore, a dual SFP+ NIC would already be longer than this mini-ITX board (22.23 cm vs. 17 cm) and would probably block the six built-in SATA ports. Thank you in advance for your comments.
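In case it helps with the feasibility question: once a card is seated in the slot, this is roughly how I would confirm the negotiated link on Linux (just a sketch; lspci -vv shows the same information under LnkCap/LnkSta):

```python
# Read negotiated vs. maximum PCIe link speed/width from sysfs (Linux).
# Useful to confirm that an x4/x8 NIC in the open x1 slot trained at x1.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        cur_speed = (dev / "current_link_speed").read_text().strip()
        cur_width = (dev / "current_link_width").read_text().strip()
        max_speed = (dev / "max_link_speed").read_text().strip()
        max_width = (dev / "max_link_width").read_text().strip()
    except OSError:
        continue  # device does not expose PCIe link attributes
    print(f"{dev.name}: x{cur_width} @ {cur_speed} (max: x{max_width} @ {max_speed})")
```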
PS: Additional hardware would be a rackmount chassis (SilverStone RM23-502-MINI) with the Icy Dock ToughArmor MB998SP-B, Noctua fans, etc.