For scenarios where you are not using `rsync`, and you are just accessing your server share over SMB from your file manager, and that file manager doesn't auto-verify copied files, you can build the hashing into a custom file-copy workflow on the client side.
Option #1
(1) Create a keyboard shortcut, e.g. CTRL-CMD-C, and use it to trigger either an AppleScript-based workflow/Quick Action (if you use Finder) or a shell script, if your file manager supports direct shell-script execution (e.g. Nimble Commander).
(2) The script receives the full paths to the local files that you have selected in the file manager on your macOS client.
(3) The script then opens a macOS file selection window pointing to your SMB share or Synology Drive shared folder etc., where you can select the destination directory for the file copy.
(4) Before starting any copy process, the script uses the `cintch` CLI to hash all the files you have selected for copying… or all the enclosed files, if you have selected one or more directories.
(5) Only when the hashing is complete does the script copy the files over to the NAS/server share you selected earlier, using a simple `cp` command, which on macOS preserves extended attributes (metadata) as long as you don't pass any special arguments.
(6) After the copy is complete, the script runs the `cintch` CLI again, this time on the copies on your SMB share, to compare the hash stored in each copy's metadata with the newly calculated hash.
(7) Play a success sound if there are no errors… or play an error sound if `cintch` has found a hash mismatch, and save a log file next to the source that lists the path(s) of the file(s) that produced the error.
With such a workflow, you would have to train yourself to refrain from using drag-and-drop operations. What could help in that regard is to auto-mount the SMB shares hidden. (I think that’s possible on macOS.)
However, you could probably also make this work for drag-and-drop, but you’d need a more complex workflow, which would be a fairly hacky workaround to “reroute” copy operations.
Option #2
(1) Ensure that the main server share is SMB-auto-mounted when you log in as your main local macOS user, whether hidden or visible.
(2) For the workaround you'd need a local sparsebundle that should also be auto-mounted at login, and definitely mounted as visible. It's prudent to delete the old sparsebundle and create it anew at every login, because a sparsebundle doesn't shrink automatically when files are deleted from it, so it would otherwise keep growing. The mounted volume of this sparsebundle is your dummy share.
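Step (2) could be scripted at login roughly like this (the image name, size, and filesystem are placeholders; `hdiutil` attaches the image under /Volumes by default):

```shell
# Remove yesterday's image and create a fresh one (placeholder name/size)
rm -rf ~/DummyShare.sparsebundle
hdiutil create -type SPARSEBUNDLE -size 50g -fs APFS \
  -volname DummyShare ~/DummyShare.sparsebundle
# Attach it; the volume appears at /Volumes/DummyShare
hdiutil attach ~/DummyShare.sparsebundle
```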
(3) Create a regular LaunchAgent, running at load and then e.g. once every hour, that recreates/updates on the dummy share the complete directory structure present on the main server share:
rsync -a -f"+ */" -f"- *" /path/to/mainServerShare/ /path/to/localDummyShare/
This syncs only the directories and excludes all other file types. (Note the trailing slash on the source path, so the share's contents, rather than the share folder itself, get mirrored.) After mounting the dummy share (sparsebundle) at login, load that LaunchAgent with `launchctl`; it will run immediately if you have enabled the RunAtLoad key, and afterwards at the regular interval you specify in the agent's plist.
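For reference, the LaunchAgent plist for this step could look like the sketch below (label and paths are placeholders; save it under ~/Library/LaunchAgents and load it with `launchctl`). StartInterval 3600 gives the hourly cadence:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.dirsync</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/rsync</string>
        <string>-a</string>
        <string>-f+ */</string>
        <string>-f- *</string>
        <string>/path/to/mainServerShare/</string>
        <string>/path/to/localDummyShare/</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>StartInterval</key>
    <integer>3600</integer>
</dict>
</plist>
```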
(4) Create a second LaunchAgent running a watcher script that triggers `auxscript.sh` whenever files are created on the dummy-share volume. You can't use the WatchPaths key that macOS `launchd` offers, because sadly it ignores file changes happening in subdirectories, so you need `fswatch`, which you can install with Homebrew:
fswatch -0 -r --event Created /path/to/localDummyShare | xargs -0 -n 1 -I {} /path/to/auxscript.sh
(5) Now, whenever you want to copy a file or folder to your Synology, don’t copy it onto the proper SMB share, but copy it onto your local dummy share into the relevant subfolder.
(6) `fswatch` will trigger `auxscript.sh`, which performs essentially all of the operations described above under Option #1: hash the new file on the dummy share with `cintch` and store the hash as metadata; copy the file, including metadata, with `cp` from the dummy share to your regular SMB share on the Synology; then run `cintch` on the destination file to hash it again and compare the new hash with the previously stored one. If the hashes match, delete (unlink/`rm -f`) the file from the dummy share. If there's a hash mismatch, don't unlink the file, but store an error log next to it.
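A hedged sketch of what `auxscript.sh` could look like, with `shasum` + `xattr` again standing in for the `cintch` steps, and with placeholder roots and attribute name:

```shell
#!/bin/sh
# Hypothetical auxscript.sh: fswatch hands us one created path per invocation.

DUMMY=${DUMMY:-/path/to/localDummyShare}   # placeholder roots
SHARE=${SHARE:-/path/to/mainServerShare}
ATTR=com.example.sha256                    # made-up xattr name

handle_path() {
  src=$1
  rel=${src#"$DUMMY"/}                     # path relative to the dummy share
  dst="$SHARE/$rel"

  # Ignore Finder droppings entirely.
  if [ "$(basename "$src")" = ".DS_Store" ]; then
    return 0
  fi

  # Directories only need to exist on the server share: no hashing, no copy.
  if [ -d "$src" ]; then
    mkdir -p "$dst"
    return 0
  fi

  # Hash the new file and stash the digest as metadata.
  xattr -w "$ATTR" "$(shasum -a 256 "$src" | awk '{print $1}')" "$src"

  # Copy with metadata (plain cp keeps xattrs on macOS), then re-hash and compare.
  mkdir -p "$(dirname "$dst")"
  cp "$src" "$dst"
  if [ "$(shasum -a 256 "$dst" | awk '{print $1}')" = "$(xattr -p "$ATTR" "$dst")" ]; then
    rm -f "$src"                           # verified: remove from the dummy share
  else
    echo "hash mismatch: $dst" > "$src.copyerror.log"
  fi
}

if [ $# -ge 1 ]; then
  handle_path "$1"
fi
```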
PS (edit): `auxscript.sh` should of course ignore `.DS_Store` files, which are created by Finder and certain macOS processes, and not copy those over to your server. If the path received by `auxscript.sh` is a directory, no hashing or copying is necessary, of course; `auxscript.sh` should just run `mkdir` on the main SMB share.
PPS (edit): of course, if your file manager auto-verifies file copies and your server uses btrfs or ZFS, you don't really need any of the above solutions. Even then, though, it would still be nice to have the file hash stored as metadata on your server, just in case something happens on your server's file system that even ZFS or btrfs is unable to detect.