Samba Ransomware Protection

Ransomware-resilient Linux Samba file server

Assume the following scenario:

The files are stored on a Linux server, and clients access them over the Samba protocol.
In this scenario, one of the client computers becomes infected with ransomware, which encrypts anything within the reach of the logged-on user, including files on the server’s shared folders.
I’m curious whether any solutions, apart from regular backups, would enable complete and easy recovery from such an attack.

As a starting point for my own design, I had some form of snapshotting/versioning system in mind:

Whenever a file is modified or deleted, make a backup copy of the previous version and preserve it somewhere safe (see the sketch after this list).
Ordinary users would be unable to modify these prior versions since they would be read-only, safeguarding them from alteration by any type of ransomware running on client machines.
Prior versions of a file would be accessible through a particular path, for example, a previous version of /home/john/path/to/file.odt would be accessible through /home/john/path/to/snapshot/20161120 163242/file.odt.
There should be some form of procedure in place to restore entire shares to their previous state.
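
To make the idea concrete, here is a rough sketch of the kind of server-side hook I have in mind. The paths, the trigger mechanism (for example an inotify watcher or a Samba VFS module) and the permission scheme are placeholders for illustration, not an existing tool:

```python
import os
import shutil
from datetime import datetime

# Previous versions live under <directory>/snapshot/<timestamp>/ and are made
# read-only; for real protection the snapshot tree would also have to be owned
# by a separate account or exported read-only via Samba.
SNAPSHOT_DIRNAME = "snapshot"

def preserve_previous_version(path: str) -> str:
    """Before `path` is overwritten or deleted, copy its current contents
    into a timestamped snapshot directory next to it."""
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    snap_dir = os.path.join(os.path.dirname(path), SNAPSHOT_DIRNAME, timestamp)
    os.makedirs(snap_dir, exist_ok=True)
    backup_path = os.path.join(snap_dir, os.path.basename(path))
    shutil.copy2(path, backup_path)   # preserve contents and metadata
    os.chmod(backup_path, 0o444)      # read-only copy
    return backup_path

# e.g. preserve_previous_version("/home/john/path/to/file.odt")
# ->   "/home/john/path/to/snapshot/20161120_163242/file.odt"
```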
A second option would be to use heuristics to detect suspicious activity (for example, enormous amounts of data being read from files, new files of the same size being created and the old files being destroyed, or large-scale modification of files) and to take necessary action in response (alerting the admin, blocking write access).
Protection should not rely purely on specific characteristics of known ransomware; such signatures may catch that specific trojan, but they may be completely ineffective against future ransomware that works differently.
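
To illustrate the kind of behaviour-based heuristic I mean: the thresholds, window and response below are placeholders, and how to hook something like this into Samba (for example via its audit log) is exactly what I am unsure about:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60    # look at the last minute of activity per user
MAX_CHANGES = 200      # more modifications/deletions than this looks suspicious

recent_changes = defaultdict(deque)   # username -> timestamps of recent events

def record_change(user, now=None):
    """Record one modify/delete event for `user` and return True if the
    user's activity within the window exceeds the threshold."""
    now = time.time() if now is None else now
    events = recent_changes[user]
    events.append(now)
    while events and now - events[0] > WINDOW_SECONDS:
        events.popleft()              # drop events outside the window
    if len(events) > MAX_CHANGES:
        # Placeholder response: in practice, alert the admin and/or
        # revoke the user's write access on the share.
        print(f"ALERT: {user} changed {len(events)} files in {WINDOW_SECONDS}s")
        return True
    return False
```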
Is there any software that performs this type of function, operates on Linux, and is free and open-source?

linux ransomware
asked Nov 20, 2016 at 16:33 by user149408, edited Nov 20, 2016 at 20:07
Take a look at FreeNAS, which is powered by ZFS. ZFS allows you to take periodic snapshots of the filesystem, which is useful for disaster recovery. In the event of a ransomware attack, you can simply restore your system to a prior snapshot. – tlng05, Nov 20, 2016 at 20:15
Thank you for the advice; it was very helpful. I’d be concerned about a lot of redundancy with periodic snapshots, but digging a little deeper, I found that the copy-on-write approach used by ZFS and Btrfs is exactly what I had in mind. – user149408, Nov 20, 2016 at 20:55
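
For reference, the periodic-snapshot approach suggested above could be scripted roughly as follows; the dataset name tank/share is an example, and in practice a cron job or an existing tool such as zfs-auto-snapshot would normally be used rather than an ad-hoc script:

```python
import subprocess
from datetime import datetime

DATASET = "tank/share"   # example ZFS dataset backing the Samba share

def take_snapshot():
    """Create a read-only, timestamped snapshot of the dataset."""
    name = f"{DATASET}@auto-{datetime.now():%Y%m%d-%H%M%S}"
    subprocess.run(["zfs", "snapshot", name], check=True)
    return name

def rollback(snapshot):
    """Roll the dataset back to a known-good snapshot after an attack.
    Note: -r also destroys any snapshots taken after the target."""
    subprocess.run(["zfs", "rollback", "-r", snapshot], check=True)
```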
I’ve had some serious performance issues with btrfs in the past. Before putting it on a production machine, make sure it meets your quality standards. – Out of Band, Nov 21, 2016 at 19:15
@Pascal, could you please elaborate (for example, which kinds of operations are slow, and how badly performance degrades)? For a file server, the bottleneck is almost always the network connection, so a minor drop in filesystem performance may not even have a noticeable effect overall. – user149408, Nov 22, 2016 at 14:50
I didn’t take any precise measurements. I was drawn in by the copy-on-write features of the filesystem, just like you. An older Ubuntu version used it to handle recovery from failed system upgrades (taking a snapshot beforehand and rolling back if necessary), and the system became extremely sluggish as a result, significantly more so than a comparable system on ext4 or a slightly older one on ReiserFS, both of which were much faster. I traced it back to btrfs not coping well under high load, but that was a while ago (around 18 to 24 months), so it may have been an issue with a particular btrfs version at the time. – Out of Band, Nov 22, 2016 at 15:39
3 Answers

Backups are, in my opinion, your only safe bet. What you propose essentially amounts to a special kind of backup; instead, I recommend simply keeping several (at least two) complete backups on hand. The problem then reduces to detecting that ransomware is encrypting your files and restoring from the most recent reliable backup. Deduplicating backups can be used to save space. Going this route has two advantages:

You reduce the complexity you introduce (no additional system just for ransomware).
You improve your backup process – once you have automated warnings and recovery, you have the infrastructure in place to make restoring from backup painless. Beyond ransomware, this helps with a much broader class of failures.
Detecting ransomware is the easy part: because encrypted files look like random data, you can run simple statistical tests on each file (is the byte distribution roughly uniform? does it fail to compress with zip? and so on) and count how many files appear to be random.
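
A minimal sketch of such a test follows; the thresholds are rough guesses for illustration, and already-compressed formats (.jpg, .zip and so on) also look random, so this is only useful as a bulk indicator across many files between backup runs:

```python
import math
import zlib
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte (8.0 = uniformly random)."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_encrypted(path: str, sample_size: int = 1 << 20) -> bool:
    """Heuristic: high entropy and poor compressibility in the first MiB."""
    with open(path, "rb") as f:
        data = f.read(sample_size)
    if not data:
        return False
    entropy = shannon_entropy(data)
    ratio = len(zlib.compress(data)) / len(data)
    return entropy > 7.9 and ratio > 0.98   # rough thresholds, tune in practice

# A backup job could count how many files trip this test and alert the admin
# if the fraction jumps compared with the previous run.
```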