Components outside of compatibility tables & Synology Support

Hey SpaceRex community,

I was looking into getting a DS1522+, but it turned out that most components I selected for it are not on the product's compatibility list. For example, I wanted to get the IronWolf Pro 20TB (the new NT series), but a local Synology distributor told me it is not officially supported and that Synology would therefore deny me any support if I went with those disks.

What is your experience with Synology support if you have used components that aren't officially compatible?

Thanks.

So there are two ‘tiers’ of compatible hardware: units with ‘required’ drives (larger units / XS units) and units with ‘recommended’ drives (the DS1522+ falls into the latter).

With the ‘required’ drive units, support will not be super useful and will push back, saying they can only really help if you use the official drives.

With the ‘recommended’ drive units, they will support you unless the issue has something to do with the drives.

They will not help with unofficial RAM upgrades past what they allow (e.g. 64 GB in a system rated for 32 GB), and if you have unofficial DIMMs they will often say that could be the issue (especially for more advanced volume errors).


Thank you, Will. I appreciate the fast and informative reply.

In this case, the configuration I am looking at becomes:

  • DS1522+
  • 6 x IronWolf Pro (NT series) 20TB (1 as a spare “shelfware” 🙂)
  • Kingston (2 x 16GB) KSM32SES8/16HC RAM
  • 2 x Samsung 990 PRO M.2 NVMe SSD (MZ-V9P1T0BW) 1TB PCIe 4.0

So the one thing I would probably not bother upgrading is the RAM! Honestly, you will not get a huge benefit from it, and there is a real possibility (~2% ballpark from my experience) that you get something weird going on with unofficial RAM.


Thanks, that’s interesting feedback. What would you say are the use cases where I would definitely want to max out the RAM?

Running a bunch of Docker instances would be a good use case for additional RAM. The 990 PRO, being a Gen4 drive, is pointless in a system that only has Gen3 support; you are better off getting larger Gen3 enterprise drives. I ended up with WD SN700 1TB drives for that reason, as they have a 1 DWPD rating and a 5-year warranty. I have a similar setup: a 1621+ with 32GB RAM (running a couple of Docker instances), 2 SN700 1TB drives, and six Seagate 16TB drives.
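If you want to see the Gen3 limit for yourself, here is a minimal sketch (run over SSH as root on the NAS) that reads the negotiated PCIe link speed for an NVMe drive from sysfs. The device name `nvme0` is an assumption; check `/sys/class/nvme` for what your unit actually exposes.

```python
# Minimal sketch: report the PCIe link speed an NVMe drive actually negotiated.
# In a Gen3-only slot, a Gen4 drive such as the 990 PRO will negotiate 8.0 GT/s
# even though its own maximum is 16.0 GT/s.
from pathlib import Path

dev = Path("/sys/class/nvme/nvme0/device")  # PCI device behind the first NVMe drive (assumed name)

current = (dev / "current_link_speed").read_text().strip()
maximum = (dev / "max_link_speed").read_text().strip()

print(f"negotiated: {current}")   # e.g. "8.0 GT/s PCIe" in a Gen3 slot
print(f"drive max:  {maximum}")   # e.g. "16.0 GT/s PCIe" for a Gen4 drive
```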


Thank you, @notsimar, very useful info. Interested to learn: what do people usually run in Docker on their NAS?

I run a Pi-hole container, a Prometheus container, and a Grafana instance, in addition to an Ubuntu VM.
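For anyone curious how a stack like that looks outside the Container Manager UI, here is a rough sketch using the Docker SDK for Python (`pip install docker`). The image tags, port mappings, `/volume1/docker/...` paths and timezone are my assumptions, not a recommendation; on DSM you would more typically do the same thing with a compose file.

```python
# Sketch: start Pi-hole, Prometheus and Grafana containers via the Docker SDK.
import docker

client = docker.from_env()

# Pi-hole: network-wide DNS ad blocking (DNS on 53, web UI mapped to 8080 here)
client.containers.run(
    "pihole/pihole:latest",
    name="pihole",
    detach=True,
    restart_policy={"Name": "unless-stopped"},
    ports={"53/tcp": 53, "53/udp": 53, "80/tcp": 8080},
    environment={"TZ": "Europe/Berlin"},  # placeholder timezone
    volumes={"/volume1/docker/pihole": {"bind": "/etc/pihole", "mode": "rw"}},
)

# Prometheus: metrics collection
client.containers.run(
    "prom/prometheus:latest",
    name="prometheus",
    detach=True,
    restart_policy={"Name": "unless-stopped"},
    ports={"9090/tcp": 9090},
    volumes={"/volume1/docker/prometheus": {"bind": "/etc/prometheus", "mode": "rw"}},
)

# Grafana: dashboards on top of Prometheus
client.containers.run(
    "grafana/grafana-oss:latest",
    name="grafana",
    detach=True,
    restart_policy={"Name": "unless-stopped"},
    ports={"3000/tcp": 3000},
)
```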


We use Synology NAS mainly for backups with ABB (Active Backup for Business).
The backup platforms are usually maxed out with RAM to the official limit, as we verify each backup through the VMM startup verification tool. EXTREMELY NICE FEATURE to check whether your backups are actually working.

The DS1522+ is an excellent platform for mid-sized companies, but due to its single 10GbE slot it can only be used for HA clusters where one of the links stays on 1GbE.

The brand-new DS1823xs+ is our new go-to platform for HA clusters, as we can use the built-in 10GbE for network access and a 10GbE NIC in the PCIe slot for the heartbeat connectivity. So far we have had no issues with different CPU types (Xeon on the server, Ryzen on the NAS) when restoring the VMs.

We are currently abusing/testing a DS1523xs+ with:

HDD volume (46TB SHR, migrated from a DS415+)
  • 1 x ST6000DM003 Seagate Barracuda (SMR)
  • 3 x WD60EFAX WD Red (SMR)

SSD volume (1 x 500GB)
  • CT500MX500 Crucial MX500

NVMe volume
  • Kingston SNV2S1000G

NVMe cache for the HDD volume
  • Kingston SNV2S1000G

We did the usual hack to get the disks into the drive database and to create the NVMe volume.
So we are completely out of support, but so far the system is stable and performing nicely.
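For context, the “hack” amounts to editing the JSON drive-compatibility database that DSM keeps per model. A heavily simplified, hypothetical sketch is below: the path, filename pattern, top-level key and donor model are assumptions that vary by DSM version (inspect your own .db files first), this keeps you out of support, and DSM updates can overwrite the files, so keep backups.

```python
# Hypothetical sketch: mark a drive model as known in DSM's per-model drive db
# by copying the entry of a model that is already listed. Verify paths and keys
# on your own unit before touching anything.
import glob
import json
import shutil

DB_GLOB = "/var/lib/disk-compatibility/*_host*.db"  # assumed location on DSM 7
DONOR_MODEL = "ST20000NE000"   # assumed to already exist in the db
NEW_MODEL = "ST20000NT001"     # IronWolf Pro 20TB, NT series

for path in glob.glob(DB_GLOB):
    shutil.copy(path, path + ".bak")                 # keep a backup of the original db
    with open(path) as f:
        db = json.load(f)
    drives = db.get("disk_compatbility_info", {})    # top-level key as seen in DSM 7 files (assumption)
    if DONOR_MODEL in drives and NEW_MODEL not in drives:
        drives[NEW_MODEL] = drives[DONOR_MODEL]      # reuse the donor's compatibility entry
        with open(path, "w") as f:
            json.dump(db, f)
        print(f"added {NEW_MODEL} to {path}")
```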

The next step is to add 4 x Seagate Exos 16TB in RAID 5 (non-SHR) and replicate snapshots from the SMR HDD volume to it as a local backup. Subsequently, we will replace the 6TB drives one by one with 4 x Seagate Exos 16TB in SHR-1 (Synology's RAID 5).

In the end we plan to have two volumes with the same content but different RAID layouts (SHR-1 vs. RAID 5), which we will test for performance differences.

Let’s see if the system crashes or remains stable the whole way
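If it stays up, a quick way to get a first number for the SHR-1 vs. RAID 5 comparison is a plain sequential write/read timing on each volume. The mount points below are assumptions, and for serious numbers fio is the better tool; this is just a quick-and-dirty sketch.

```python
# Rough sketch: compare sequential throughput of two volumes by timing a 4 GiB
# test file. Note: the read pass will mostly hit the page cache right after the
# write, so drop caches first (echo 3 > /proc/sys/vm/drop_caches) if you want
# raw disk numbers.
import os
import time

BLOCK = 4 * 1024 * 1024            # 4 MiB per write
TOTAL = 4 * 1024 * 1024 * 1024     # 4 GiB test file

def seq_write_read(mount_point):
    test_file = os.path.join(mount_point, "bench.tmp")
    data = os.urandom(BLOCK)

    start = time.time()
    with open(test_file, "wb") as f:
        for _ in range(TOTAL // BLOCK):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())                 # make sure the data really hit the disks
    write_mbps = TOTAL / (time.time() - start) / 1e6

    start = time.time()
    with open(test_file, "rb") as f:
        while f.read(BLOCK):
            pass
    read_mbps = TOTAL / (time.time() - start) / 1e6

    os.remove(test_file)
    return write_mbps, read_mbps

# assumed mount points: /volume1 = SHR-1, /volume2 = classic RAID 5
for vol in ("/volume1", "/volume2"):
    w, r = seq_write_read(vol)
    print(f"{vol}: write {w:.0f} MB/s, read {r:.0f} MB/s")
```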
