I bought an RS1221+ with the RX418 expander. I filled the expander with spinning rust IronWolf Pros. Then I maxed out the RAM to 32GB, and I bought the 10G network card with NVMe cache slots on it and maxed that out with two Synology 800G drives in read-write cache.
Now it is time to fill the 8 remaining slots in the main unit. I was planning on running 8 SSDs in them as a RAID 10 volume. But then I read in passing that an NVMe cache will have no benefit on all-SSD volumes.
Is this true? And should I then instead go with spinning rust? The price difference is massive, so I don’t want to spend a small fortune in vain.
The server is intended to be used by around three people, with everything its apps have to offer, plus containers, maybe some minor virtualization, and as storage for a Proxmox host whose VMs see only minor use.
The RS1221+ is connected to a 10GbE network.
I would not bother putting SSDs in RAID 10. RAID 10 was really important back in the hard drive days if you needed fast random reads. But with SSDs you really do not gain nearly as much, as they already have insanely fast random reads and writes. I would go RAID 5 and get faster write speeds and 75% more disk space (7 usable drives instead of 4, with 8 drives total).
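To put numbers on that capacity difference, here is a quick sketch (assuming 8 equal-sized drives and counting usable drives, ignoring filesystem overhead):

```python
# Usable capacity in drive-equivalents: RAID 10 mirrors every drive,
# so half the drives hold copies; RAID 5 loses one drive to parity.
def raid10_usable(n_drives: int) -> int:
    """Usable drives in a RAID 10 array of n_drives (n must be even)."""
    return n_drives // 2

def raid5_usable(n_drives: int) -> int:
    """Usable drives in a RAID 5 array of n_drives."""
    return n_drives - 1

n = 8
gain = raid5_usable(n) / raid10_usable(n) - 1
print(f"RAID 10: {raid10_usable(n)} drives usable")   # 4
print(f"RAID 5:  {raid5_usable(n)} drives usable")    # 7
print(f"RAID 5 gives {gain:.0%} more usable space")   # 75%
```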
This is correct. Your SSD volume is already going to be so insanely fast for random reads/writes that adding an NVMe cache would only hurt performance. But that is just because it's already so fast. The NVMe cache would still help on your HDD storage pool.
Thanks for your answer!
What I read somewhere was that you should never use RAID 5 anymore due to very long rebuild times if you have a faulty disk. Someone said that the rebuild time is capacity dependent, and if you have many terabytes you will be rebuilding for a long time while the system is in use.
That in turn would increase the chances of a second disk breaking down on you, resulting in the loss of all your data.
But as I am not well read up on this, could this be something that applied more to spinning rust than SSDs?
So this is something that was really big and worrisome around 2010. The thing is, it's really not true. Not to dredge up the entire argument, but people did the math on UNCs (uncorrectable read errors) on HDDs and claimed there was a 50% chance of getting a UNC while reading 12 TB of data. The thing is, UNCs are not randomly distributed, and if that were true, every other time you scrubbed a volume over 12 TB you would get a UNC.
Even if it were true, UNCs do not halt a RAID rebuild, and furthermore, filesystems can handle having one bit of data corrupted. At worst you might lose a single file (if everything in the world went wrong).
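For reference, the 2010-era back-of-envelope math looks like this. It assumes errors are independent and uniformly spread at the spec-sheet rate of 1 UNC per 1e14 bits read, which is exactly the assumption real drives violate:

```python
# The classic "RAID 5 is dead" calculation, reproduced as a sketch.
# Spec-sheet rate and 12 TB rebuild size are the figures from the old
# argument; real-world error distribution is not uniform like this.
import math

URE_RATE = 1e-14          # errors per bit read (typical consumer HDD spec)
bits_read = 12e12 * 8     # reading 12 TB during a rebuild or scrub

# Poisson approximation: P(at least one UNC) = 1 - exp(-rate * bits)
p_at_least_one = 1 - math.exp(-URE_RATE * bits_read)
print(f"{p_at_least_one:.0%}")  # ~62% under these pessimistic assumptions
```

If that number were real, large-volume scrubs would throw UNCs constantly, and experience says they don't.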
Sorry for rambling and not explaining. It's an argument that has been around for a really long time, and from experience scrubbing 120 TB volumes, it does not hold water.
This is especially untrue for SSDs.
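On the rebuild-time side, a rough way to see it is that a rebuild has to stream the whole drive, so time scales with capacity over sustained throughput. The drive sizes and speeds below are illustrative assumptions, not measurements:

```python
# Back-of-envelope rebuild-time estimate: best case, a rebuild runs at
# roughly the drive's sequential speed. Numbers are illustrative only.

def rebuild_hours(capacity_tb: float, mb_per_s: float) -> float:
    """Hours to stream capacity_tb terabytes at mb_per_s megabytes/second."""
    total_mb = capacity_tb * 1_000_000  # decimal TB -> MB
    return total_mb / mb_per_s / 3600

# A 12 TB HDD rebuilding at ~150 MB/s vs a 4 TB SATA SSD at ~400 MB/s:
print(f"HDD: {rebuild_hours(12, 150):.1f} h")  # roughly 22 h
print(f"SSD: {rebuild_hours(4, 400):.1f} h")   # roughly 2.8 h
```

So even in the worst case, the window where a second failure could hurt you is far smaller on an SSD array.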
If you are worried about it, back up your SSD volume to your HDD volume. For your HDD volume I would also do RAID 5.
TL;DR: The 12 TB UNC issue is not the catastrophe that people thought it would be back in 2010. RAID 5 is alive and well.
The HDD volume is running RAID 5, as I set it up after having watched a bunch of your fine instruction videos.
Then maybe I can try Synology's RAID F1, which is supposed to be designed for SSDs.
And as I am aware that RAID is not a backup, I have also gotten myself two old LTO-3 tape drives. I think the tapes hold about 400 GB uncompressed, and this will hold my important stuff. Imagine a $100 tape machine protecting your data if a $10,000 Synology NAS dies.
Thank you for your, as always, excellent advice.
I guess the only benefit I would want from the SSD cache is pinning the metadata; it speeds up lookup operations nicely for my use case.