The J6413 and N5105 boards by Topton et al. that I’ve mentioned in this thread should imho soon be superseded by boards built around the Celeron and Pentium successors: Intel’s new mobile/embedded CPUs sold simply as “Intel Processor”, launched in 2023 with the N series (N50/N95/N97/N100/N200) and the low-power i3 CPUs (i3-N300/i3-N305).
The 2023 generation has 9 lanes of PCIe 3.0 and a single memory channel for either DDR4 or DDR5. Intel officially specifies 16GB max, but some manufacturers have validated 32GB. A few boards with DDR5 are already available, but so far they are a bit inferior in terms of NAS/server expandability.
There are also (more powerful) U series CPUs (U300/U300E) with two memory channels, i.e. up to 64GB of RAM, one performance core with 2 threads, a 15W TDP afaik, eight PCIe 4.0 lanes (CPU) and twelve PCIe 3.0 lanes (via the chipset), but no U series boards are currently available.
Imho these N or U series CPUs would also be a nice option for turnkey NAS products.
The one board that currently stands out for a home server/NAS DIY build is the ASRock N100M (Micro ATX, N100, 4 cores, 6W TDP, passive cooling):
It only comes with a single 1GbE port and only two on-board SATA ports, but here’s what you can probably do with the board in terms of a NAS/server build. (Please correct me if I’ve made any mistakes.)
RAM: DDR4 only, so you should use faster (i.e. more expensive) DDR4 modules. DDR4 has some bandwidth disadvantages vis-à-vis DDR5, especially when you only have a single DIMM/channel, so you might get some extra performance if you go for something better than just “value RAM”.
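To put a rough number on the single-channel penalty, here’s a back-of-the-envelope peak-bandwidth comparison (assuming DDR4-3200 vs DDR5-4800, the rated maximum speeds for these CPUs; real-world sustained throughput is of course lower):

```python
# Peak transfer rate of one 64-bit memory channel; both DDR4 and DDR5
# DIMMs present 64 bits of data width in total.
BUS_WIDTH_BYTES = 8

def peak_gbps(mt_per_s):
    """Theoretical peak bandwidth in GB/s for one 64-bit channel."""
    return mt_per_s * BUS_WIDTH_BYTES / 1000

ddr4 = peak_gbps(3200)  # 25.6 GB/s
ddr5 = peak_gbps(4800)  # 38.4 GB/s
print(f"DDR4-3200: {ddr4:.1f} GB/s, DDR5-4800: {ddr5:.1f} GB/s "
      f"({ddr5 / ddr4 - 1:.0%} more)")
```

So a DDR5 board would have roughly 50% more peak memory bandwidth, which is why faster DDR4 is worth a few extra euros here.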
PCIe 3.0 x2 (in a physical x16 slot): add a 10GbE SFP+ card. However, without deep OS tweaking, only the cards using Intel’s X710-DA2 support ASPM, and it would be counterproductive to build a low-power NAS around a NIC without ASPM support. With the X710-DA2 you’d get a dual SFP+ card pushing maybe 800–900 MB/s per SFP+ port (i.e. per PCIe 3.0 lane), and maybe full 10GbE speed if you use only one of the two SFP+ connections, since that port can then draw on both available PCIe lanes.
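The 800–900 MB/s figure follows directly from PCIe 3.0 lane math; a quick sanity check (assuming the standard 8 GT/s signalling with 128b/130b encoding, and a rule-of-thumb ~10% protocol overhead):

```python
# Why a dual-port SFP+ card on PCIe 3.0 x2 tops out below line rate per port.
GT_PER_S = 8.0          # PCIe 3.0 raw signalling rate per lane
ENCODING = 128 / 130    # 128b/130b line encoding
PROTO_OVERHEAD = 0.90   # assumption: ~10% lost to TLP/DLLP framing

lane_raw = GT_PER_S * ENCODING / 8 * 1000   # MB/s per lane before overhead
lane_eff = lane_raw * PROTO_OVERHEAD        # usable MB/s per lane

ten_gbe = 10_000 / 8    # 10GbE line rate: 1250 MB/s

print(f"per lane: ~{lane_eff:.0f} MB/s usable of {lane_raw:.0f} MB/s raw")
print(f"one port on both lanes: min(2 lanes, 10GbE) = "
      f"{min(2 * lane_eff, ten_gbe):.0f} MB/s")
```

One lane lands in the high 800s of MB/s, while both lanes together (~1.77 GB/s) exceed the 1250 MB/s a single 10GbE port can deliver, which is why one active port could reach full speed.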
PCIe 3.0 x1 (open x1 slot): add a 4-port SATA card, i.e. use 2 ports for SATA SSDs at full speed, or all 4 for HDDs.
M.2 Key M (PCIe 3.0 x2): add a 6-port SATA adapter. You can then run four of those SATA ports at almost full SATA SSD speed while using the remaining two for auxiliary drives that you don’t write to often; with HDDs, you could use all six ports.
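Rough math behind the “four SSDs at almost full speed” claim (assuming ~550 MB/s sequential per SATA III SSD, ~250 MB/s per modern HDD, and ~985 MB/s usable per PCIe 3.0 lane, all rule-of-thumb values):

```python
# How many SATA SSDs can an ASM1166-style 6-port adapter on PCIe 3.0 x2 feed?
LANE_MBPS = 985          # ~usable per PCIe 3.0 lane (assumption)
SATA_SSD_MBPS = 550      # typical SATA III SSD sequential rate (assumption)
HDD_MBPS = 250           # optimistic 3.5" HDD sequential rate (assumption)

uplink = 2 * LANE_MBPS   # 1970 MB/s shared by all six ports

for n in range(1, 7):
    per_ssd = min(SATA_SSD_MBPS, uplink / n)
    print(f"{n} SSDs busy: ~{per_ssd:.0f} MB/s each")

# Six HDDs only need 6 * 250 = 1500 MB/s, well under the 1970 MB/s uplink.
assert 6 * HDD_MBPS < uplink
```

Four simultaneously busy SSDs get ~492 MB/s each (about 90% of full SATA speed), while six HDDs can’t saturate the x2 uplink at all.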
M.2 Key E (PCIe 3.0 x1): add a 2-port SATA M.2 Key A+E adapter, which would give you almost full SATA SSD speeds. Note: scratch that; the Key E slot is a CNVio slot, not PCIe, so you can’t add a SATA adapter there.
That would give you an insane 12 SATA ports (2 on-board + 4 + 6) for HDDs, or 8 SATA ports if you plan to use SSDs for your storage pool.
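To tally up the whole layout (the per-path uplink numbers are the same rough PCIe/SATA estimates as above, not measurements):

```python
# Port and uplink budget for the hypothetical full build.
paths = {
    "onboard SATA":      {"ports": 2, "uplink_mbps": 2 * 550},  # native, full speed
    "PCIe x1 SATA card": {"ports": 4, "uplink_mbps": 985},      # ASM1064-style
    "M.2 Key M adapter": {"ports": 6, "uplink_mbps": 2 * 985},  # ASM1166-style
}

total_ports = sum(p["ports"] for p in paths.values())
print(f"total SATA ports: {total_ports}")
for name, p in paths.items():
    print(f"{name}: {p['ports']} ports, ~{p['uplink_mbps'] / p['ports']:.0f} "
          f"MB/s each if all ports are busy at once")
```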
As for power efficiency, i.e. the infamous C-states: the PCIe SATA card and the M.2 adapters should all support AHCI and NCQ, and if my research hasn’t misled me, products based on the ASM1166 (6-port), ASM1064 (4-port) and JMB582 (2-port) chipsets should do the trick.
As for cooling, the N100 on this board comes with a passive heatsink, but judging from reviews of some fanless mini PCs using the N100, these CPUs tend to hit thermal throttling quickly under sustained maximum load, so you should use a chassis with decent fan options.
The ASRock board itself comes with two 4-pin PWM fan headers, but imho the best solution is three fans: two for rear exhaust and one for front intake. So you’d need one of those PWM fan hubs that split one header into four (or even eight or more) 4-pin connectors and draw the necessary extra power directly from the PSU via a SATA power connector.
Alternatively, you could concoct some way of attaching a fan onto the CPU’s passive heatsink. However, the board only has what looks like a 3-pin CPU fan connector, i.e. no PWM.
As for the boot drive, i.e. for Unraid or TrueNAS etc., you can always boot from a USB flash drive attached to a rear I/O USB port. (The board doesn’t come with an internal USB Type-A port, and you wouldn’t want to waste SATA ports on boot drives.) If you have a little more money, a good option is probably to use an internal USB 2.0 header and stick an eUSB managed-NAND module on there. Or just use a USB 2.0 9-pin header to dual USB 2.0 Type-A adapter… that would be the cheapest internal solution.
With up to 12 drives you’d need a chassis with lots of storage bays. Personally, I would probably go for the SilverStone RM41-506, with its 5.25-inch bays filled with Icy Dock cages, namely the ToughArmor MB508SP-B for eight SATA SSDs and the FatCage MB155SP-B for five HDD/SSD bays. (Each of those comes with its own fan.) However, cages like these do consume a bit of power themselves, so if your goal is the absolute minimum power consumption, a pre-built NAS or server chassis with enough hotswap + internal bays should be your choice.
As for a PSU, anything around 500W max should probably be A-OK. You can go lower than 500W, of course, even use a Pico-PSU with 100W or less, but afaik a few ATX PSUs in the 500W range do rival the power efficiency of Pico-PSUs, when the system is running at low loads.
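A rough worst-case power budget backs this up (all per-component numbers are assumptions based on typical datasheet values; check your actual drives — the sizing driver is simultaneous HDD spin-up, not steady state):

```python
# Back-of-the-envelope PSU sizing for the 12-HDD variant of this build.
HDD_SPINUP_W = 25      # assumed per 3.5" drive during simultaneous spin-up
HDD_ACTIVE_W = 8       # assumed per drive while seeking
BOARD_W = 20           # board + N100 + RAM under load (assumption)
NIC_W = 10             # dual SFP+ card (assumption)
FANS_MISC_W = 10       # fans, cages, USB boot device (assumption)
DRIVES = 12

peak = DRIVES * HDD_SPINUP_W + BOARD_W + NIC_W + FANS_MISC_W
steady = DRIVES * HDD_ACTIVE_W + BOARD_W + NIC_W + FANS_MISC_W

print(f"peak (all drives spinning up): ~{peak} W")
print(f"steady state under load:       ~{steady} W")
```

That lands around 340W peak and under 150W steady state, so ~500W leaves comfortable headroom; staggered spin-up (if your drives and controller support it) cuts the peak considerably, which is what makes much smaller PSUs feasible for HDD builds.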
What do you think?!