Does QNAP Offer a Faster Backup Than Synology Hyper Backup?

I’ve come to realize how unprepared Synology’s solutions are for companies that do a lot of day-to-day data movement. Unfortunately, since we sometimes change TBs of data a day, Hyper Backup to the cloud seems unequipped for the task.

Can anyone advise if QNAP offers a faster backup solution?

What part is ‘unprepared’?

Where in the Cloud are you backing up to? Synology’s Cloud or your own? What is your link performance?

I’ve managed a client that would have about 40-50TB of new/changed data a day, and their site-to-site nightly backups would take a couple of hours over a 10Gb link.

Also, I’ve come to Synology from QNAP. I could never go back.

Ah, your client must have them magic fairy dust Synologies, lol. I’m kidding, I’ve just never seen HB move like that.

My HB moves more like your kid’s dying goldfish floating in the water because they forgot to feed him for a few weeks. It’s floating on its side and you think it’s dead, but every few minutes it’ll kick its tail. Poor thing.

It’s not like we’re using crap hardware; I’ve got an RS2818RP+.

We’ve got millions of files that have to be sorted through each night and backed up to Synology C2. Our current backup is taking over a week, and there is nothing wrong with our network speeds that would justify 1TB of data taking over a week to back up. That’s crazy slow and certainly nowhere near the max our dedicated 500/500 business fiber can deliver. I could hand deliver those files faster.

I could set up some kind of experiment; maybe it’s a latency or firewall problem. You mentioned they had a site-to-site connection. Probably low latency there.

So you’ve got literally millions of files with changes or that are new each night?

That would explain it: there’s a lot of processing power and time required to compare exactly what’s changed. My client, which I should’ve mentioned, maybe had a few thousand files at most changed or new per day, but their files were huge since they were a video production house shooting in 4/8K, so Hyper Backup didn’t have to do as much work calculating what changed.

What kind of files are they? Just wondering if you could break up the tasks a bit or something. You’re welcome to DM me if there are more confidential details that might help but that you’d prefer not to post for everyone to see.

Performed some more testing today and tried a Hyper Backup of just a single large file to the cloud. Performance was still really low.

I tried again with an external SSD plugged in, and performance was great.

Something about backing up to the cloud is causing the problems. I’ve tried a few different services with HB, and they all have this issue.

I have found Hyper Backup to be very latency sensitive, especially when you have tons of files. Going to a cloud server can take a very long time when you have millions of small files. I have not messed much with QNAP backups, but it tends to be an issue with any differential backup that has to walk tons of files.
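
To see why latency hurts so much more than raw bandwidth here, a rough back-of-envelope helps. All the numbers below (file count, round trips per file, RTT) are illustrative assumptions, not measurements of Hyper Backup itself:

```python
# Rough back-of-envelope: why per-file round trips dominate with many small files.
# Numbers are illustrative assumptions, not measured Hyper Backup behavior.

rtt_s = 0.040              # assumed 40 ms round-trip time to the cloud endpoint
round_trips_per_file = 1   # assume at least one request/response per changed file
n_files = 2_000_000        # assumed number of changed/new small files per night

latency_overhead_h = n_files * round_trips_per_file * rtt_s / 3600
print(f"Latency overhead alone: ~{latency_overhead_h:.0f} hours")  # ~22 hours

# Compare with pushing the same 1 TB as one continuous stream on a 500 Mbit/s uplink:
bytes_total = 1e12
uplink_bps = 500e6 / 8
print(f"Pure transfer time: ~{bytes_total / uplink_bps / 3600:.1f} hours")  # ~4.4 hours
```

With those assumptions, the per-file chatter alone costs far more time than actually moving the data, which matches the “over a week for 1TB” experience above.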

What will work best in pretty much any case is BTRFS snapshot replication (Synology) or ZFS send (QNAP / TrueNAS). The beauty of these is that the receiver already knows everything it needs, because both sides share a common snapshot. That means sending a million small files should perform about the same as sending one single large file, even over a high-latency connection.
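
To illustrate the mechanism (not Synology’s actual SSR service, which wraps all of this in its own GUI): the sender only has to stream the block-level delta between two snapshots, and the receiver applies it without walking files. A minimal sketch, assuming root SSH access, a BTRFS volume, and placeholder snapshot paths and host names:

```python
# Minimal sketch of the idea behind snapshot replication: send only the delta
# between two read-only snapshots to a remote host. Paths, the "backup-nas" host,
# and the use of raw `btrfs send/receive` over SSH are assumptions for illustration;
# Synology's Snapshot Replication package handles this workflow for you.
import subprocess

PARENT = "/volume1/@snapshots/share/snap_2024-01-01"   # snapshot both sides already have
CURRENT = "/volume1/@snapshots/share/snap_2024-01-02"  # new read-only snapshot
REMOTE = "backup-nas"                                  # hypothetical SSH host
DEST = "/volume1/replica/share"                        # destination path on the remote box

# `btrfs send -p PARENT CURRENT` emits only the blocks changed since PARENT,
# so the cost depends on the size of the delta, not on how many files it touches.
send = subprocess.Popen(
    ["btrfs", "send", "-p", PARENT, CURRENT],
    stdout=subprocess.PIPE,
)
subprocess.run(
    ["ssh", REMOTE, "btrfs", "receive", DEST],
    stdin=send.stdout,
    check=True,
)
send.stdout.close()
send.wait()
```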

Hm, my understanding is that you have to have both NAS units on the same network to run a snapshot replication? I remember there was some reason it was less than optimal for remote backups…

So ideally you set up a site-to-site VPN connection between the two units, or just have one NAS VPN back to the other. It works really well and will be faster than Hyper Backup for sure.

What if you had a 2nd Synology you kept on-site and used as a Hyper Backup vault, then uploaded that completed vault to the cloud with Cloud Sync? You could eliminate the latency issues with Hyper Backup while still protecting data in the case of fire/water/destruction. Can you copy and reuse a Hyper Backup vault taken like that?

Hi

We regularly have/had a similar problem. Due to cost considerations, our customers usually prefer off-site backups over cloud backups.

When working with remote locations, Hyper Backup was not stable, as it often aborted jobs (and did not automatically resume).
Snapshot Replication (SSR) is very bandwidth efficient, as it only copies the new increments (at block level), and it also seems more stable over poor connections.

Our workaround:

Preliminary checks:

  1. Check the performance monitor for swap usage. A value significantly above 0% indicates that your system would benefit from more memory (see the sketch after this list).
  2. Check your HDDs (not only SMART but also the HDD logs) for a degrading disk that keeps your RAID busy.
  3. Check that you do not have SMR disks (see “List of WD CMR and SMR hard drives (HDD)” on NAS Compares); SMR has a MASSIVE impact on disk performance.
  4. Important: Check that you have [Settings->File Services->Advanced->Enable file fast clone] enabled.
  5. Consider keeping the remote NAS as isolated as possible: no Active Directory, no Azure Active Directory, no Synology cloud apps, …
  6. Use different user names (e.g. AdminNAS01 vs AdminNAS02) and different passwords.
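
For point 1, you can also check swap from the command line instead of the GUI. A small sketch, assuming SSH access to the NAS (DSM is Linux-based, so /proc/meminfo is available); the 5% threshold is an arbitrary illustrative cut-off, not a Synology recommendation:

```python
# Read swap totals from /proc/meminfo and report how much is in use.
def swap_usage_percent(meminfo_path="/proc/meminfo"):
    values = {}
    with open(meminfo_path) as f:
        for line in f:
            key, rest = line.split(":", 1)
            values[key] = int(rest.strip().split()[0])  # value in kB
    total, free = values["SwapTotal"], values["SwapFree"]
    return 0.0 if total == 0 else 100.0 * (total - free) / total

usage = swap_usage_percent()
print(f"Swap usage: {usage:.1f}%")
if usage > 5.0:  # arbitrary threshold for illustration
    print("Noticeable swap usage - the NAS would likely benefit from more RAM.")
```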

VPN setup

  1. Set up different VPN connections to an off-site location to compare performance:
  • Try the integrated VPN Server of Synology
  • Use a VPN appliance such as a pfSense router or similar
  • Use a VPN service such as Tailscale

The Synology apps might not automatically find the remote NAS, but if you have a route defined in your DNS/VPN/router you can use the remote IP address to connect to the remote device. If possible, set up IP reservations on the on-prem and remote DHCP servers.
With our setup, both HB and SSR work with the off-site NAS being in a different IP range.
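
Before pointing HB or SSR at the remote IP, it is worth confirming the VPN route actually reaches the remote NAS. A quick sketch; the IP address is a placeholder, and 5000/5001 are DSM’s default HTTP/HTTPS ports, which may differ on your units:

```python
# Verify the remote NAS is reachable over the VPN on its DSM ports.
import socket

REMOTE_NAS_IP = "192.168.50.10"   # hypothetical address in the remote IP range

for port in (5000, 5001):         # DSM default HTTP/HTTPS ports (adjust if changed)
    try:
        with socket.create_connection((REMOTE_NAS_IP, port), timeout=5):
            print(f"{REMOTE_NAS_IP}:{port} reachable over the VPN")
    except OSError as err:
        print(f"{REMOTE_NAS_IP}:{port} NOT reachable: {err}")
```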

  1. Centralized Management
    Consider setting up Centralized Management. Add the off-site NAS using the IP address of the remote location. This helps to monitor the device and, very handy, also displays the remote IP address.
    You can also set a common time server, which prevents blocked login pages due to SSL certificate errors.

  2. Snapshot Replication (SSR) (requires BTRFS)

  • [Optional] Set up local snapshots to your liking, e.g. with high frequency (once an hour).
  • Set up the remote replications to your liking, e.g. with low frequency. Since you can set up SSR per shared folder, you can apply different sync frequencies (folder A: once a day; folder B: once a week). Most likely you do not want to replicate local snapshots (last page of the SSR assistant).
    You can also apply different retention policies between local (keep all very recent snapshots, no old ones) and remote (few recent snapshots but a long backup chain into the past).

In particular, when setting up the remote snapshots you might consider the immutable snapshots that come with the new DSM 7.2. I have not tested them yet, but they are extremely high priority on my agenda.

Do the initial replication run on your local network. Once done, move the off-site NAS to the remote location and change the IP address in each SSR job.

  3. Hyper Backup
    Set up Hyper Backup to copy the NAS applications & settings to the local NAS and use SSR to forward that whole folder to the off-site device. (This does not require installing Hyper Backup or Hyper Backup Vault on the remote device.)
    This allows you to easily roll back DSM settings on the local device. When disaster strikes, the replicated HB jobs also allow for a very fast & easy conversion of the remote NAS into a stand-in for the unavailable local device.

  4. Test for bandwidth
    You will probably see that performance differs significantly, as

  • some ISPs apply traffic shaping that can result in low bandwidth for specific traffic types (e.g. limiting bandwidth to x Mbit/s for all traffic going through provider ABC)
  • some routers/modems cannot provide the performance to saturate the bandwidth provided by the ISP. Try disabling all kinds of logging/packet inspection/intrusion prevention, etc.

This last point is very annoying because you just cannot tell beforehand which hardware/VPN/ISP configuration gives the best result. In general we have seen that using the same beefy firewall appliances on both ends gives the best result, as the ISP’s traffic shaping rules do not seem to apply. Consider that enterprise-grade firewalls for 1 Gbit/s uploads easily come at price points of >3,000 USD (hardware + license fees). pfSense or MikroTik are interesting options, even more so when 10 Gbit/s uploads are to be considered.
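
One way to compare configurations is to measure the raw throughput of the tunnel itself, independent of HB/SSR and of disk speed. A crude sketch, assuming SSH access to the remote NAS (the host name is a placeholder); it streams incompressible data to /dev/null on the far side and times it:

```python
# Crude raw-throughput test through the VPN tunnel via SSH.
import os
import subprocess
import time

REMOTE = "backup-nas"              # hypothetical SSH host reachable over the VPN
TEST_BYTES = 200 * 1024 * 1024     # 200 MiB test payload

proc = subprocess.Popen(["ssh", REMOTE, "cat > /dev/null"], stdin=subprocess.PIPE)
start = time.monotonic()
sent = 0
chunk = os.urandom(1024 * 1024)    # incompressible data, so compression can't flatter the result
while sent < TEST_BYTES:
    proc.stdin.write(chunk)
    sent += len(chunk)
proc.stdin.close()
proc.wait()
elapsed = time.monotonic() - start
print(f"{sent / 1e6:.0f} MB in {elapsed:.1f} s -> {sent * 8 / elapsed / 1e6:.0f} Mbit/s")
```

Running the same test over each VPN/router combination makes it easier to spot which hop (ISP shaping, modem, or firewall) is eating the bandwidth.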

The above approach has proven to be stable and secure, but I am always open to improvements.


I like these snapshot replication ideas, I just don’t have a second location to house a NAS.

BTW, my idea to back up to a 2nd NAS and then use Cloud Sync to send that backup online doesn’t seem to work. Every time there’s a new HB backup, Cloud Sync has to replace all the files, so it never finishes.