NFS or SSHFS or ? - Which one shall I use?

Amitz Member
edited September 2016 in Help

Cheers my dears,

I would like to extend the storage space of an existing server by mounting a directory on another server (relatively close by, good network connection (600 Mbps+), latency <20 ms) for backup purposes.
It is important that file/user permissions are preserved and that it works well with rsync. Both servers run Debian 8.

I have thought of either an NFS share or SSHFS to accomplish the task. I am aware that the two are not exactly in the same league. NFS is probably way better in a shared environment with several users and all the bells & whistles, but I will be the only user in the given scenario. The shared directory will not be accessed by other users or servers at the same time.

However, NFS transfers its data unencrypted (at least before NFSv4), and it seems that I would either have to send the data through an SSH tunnel or VPN, or mess with Kerberos auth, which I would like to avoid.
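
For completeness, the SSH-tunnel variant I would like to avoid would look roughly like this (hostname, export and mount paths are placeholders, and the export on the server has to allow connections from localhost):

    # Forward local port 2049 to the NFS server's nfsd over SSH (run as root,
    # since 2049 is a privileged port); storage.example.com is a placeholder.
    ssh -fNT -L 2049:127.0.0.1:2049 root@storage.example.com

    # Mount the export through the tunnel. /etc/exports on the server would need
    # something like:  /srv/backup 127.0.0.1(rw,no_root_squash,insecure)
    mount -t nfs4 -o port=2049 127.0.0.1:/srv/backup /mnt/backup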

SSHFS, on the other hand, is encrypted out of the box, even slightly easier to configure, and does the job quite well for me. It just seems to need noticeably more resources (CPU) while transferring data, due to the encryption, but that is okay too. Maybe I will even use a weaker cipher to lower the load, as I am not transferring state secrets. Going fully unencrypted just seems unnecessarily negligent to me.
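
Something like this is what I have in mind; host and paths are placeholders, and aes128-ctr is only an example of a cipher that stays cheap on CPUs with AES instructions:

    # List the ciphers the local OpenSSH build supports:
    ssh -Q cipher

    # Mount with an explicitly chosen cipher; unmount again with: fusermount -u /mnt/backup
    sshfs -o Ciphers=aes128-ctr,reconnect,ServerAliveInterval=15 \
        root@storage.example.com:/srv/backup /mnt/backup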

I still have to test both for rsync compatibility though. SSHFS could probably cause some trouble here. No idea about NFS yet; I have to do some reading.
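
Concretely, the rsync run would be something like this (paths are placeholders; -A/-X can be dropped if the mounted filesystem does not support ACLs/xattrs):

    # Archive mode keeps permissions, owners and timestamps; -H preserves hard links
    # and --numeric-ids avoids UID/GID remapping on the backup side.
    rsync -aHAX --numeric-ids --delete /data/ /mnt/backup/data/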

I just wonder: Which of the two will be more reliable for the given task in the long run? Is there anyone here who has used both for some time and could give some insight? Maybe another alternative? I am open to any good solution besides the two mentioned above.

Thanks in advance!

Kind regards
Amitz

P.S.: Yes, I have thought of Samba too, but there is no Windows system involved and I see no advantage in using Samba compared to the two other solutions.

Comments

  • mailcheap Member, Host Rep

    I've used both NFS and SSHFS; SSHFS is really good for short-term, low-latency jobs; otherwise it's a nightmare to manage, with I/O errors cropping up every now and then. NFS locally is the best option.

    Thanked by Amitz and FlamesRunner
  • Amitz Member
    edited September 2016

    mailcheap said: NFS locally is the best option.

    Thank you! But: The servers involved will never be on the same local network. It will always be a remote thing.

    a nightmare to manage with I/O errors cropping up every now and then

    sounds scary though...

  • Well, if you are only using rsync: Why wouldn't you just use it over SSH and skip the mounting part?

    Thanked by Amitz and geekalot
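
    A rough sketch of what Bochi suggests, with placeholder host and paths; ownership is only preserved if the remote user may chown, i.e. root here:

        # Push straight over SSH, no mount involved; archive mode plus --numeric-ids
        # keeps permissions and ownership intact on the backup side.
        rsync -aHAX --numeric-ids --delete -e ssh /data/ root@storage.example.com:/srv/backup/data/
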
  • In my experience NFS is much faster than SSHFS; SSHFS has its fair share of problems

    Thanked by Amitz
  • Amitz Member
    edited September 2016

    Bochi said: Well, if you are only using rsync: Why wouldn't you just use it over SSH and skip the mounting part?

    You are right, I worded that poorly. Let's say that being able to use rsync would be additional icing on the cake, for a secondary task that I would like to accomplish.

    @Yoda said:
    In my experience NFS is much faster than SSHFS; SSHFS has its fair share of problems

    I have read that very often and it is probably true, but I have now tested both within my concrete scenario and they perform nearly identically for the task. As said, SSHFS consumes more resources, but they both max out the 1 Gbit line of both servers. What are the other problems (compared to those that NFS also has) that you refer to?

    Thanked by FlamesRunner
  • Neoon Community Contributor, Veteran
  • Someone read the tags... ;)

    Thanked by yomero
  • My server hates it when I move large files via SSHFS; it can sometimes hang until the transfer is done.

    Samba will not cause you these issues; in my experience it actually works better.

    Thanked by Amitz
  • mailcheap Member, Host Rep
    edited September 2016

    @Amitz said:

    mailcheap said: NFS locally is the best option.

    Thank you! But: The servers involved will never be on the same local network. It will always be a remote thing.

    a nightmare to manage with I/O errors cropping up every now and then

    sounds scary though...

    In this case, SSHFS with a few workarounds is a viable enough option. Enable arcfour in Debian 8 for better performance. SSHFS: sshfs -o reconnect -o nonempty -o allow_other -o ServerAliveInterval=15 -o cache=yes -o kernel_cache -o Ciphers=arcfour, with the -C flag for compression at the end; performance is on par with NFS.

    Now comes the hard part: a script to monitor the mount for I/O errors and, whenever one is encountered, killall sshfs, unmount and remount, and restart the affected services. Sigh!

    Thanked by Amitz
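
    A minimal sketch of the kind of watchdog mailcheap describes, run from cron every minute or so; mount point, remote target and the restarted service are placeholders:

        #!/bin/sh
        MNT=/mnt/backup
        REMOTE=root@storage.example.com:/srv/backup

        # Probe the mount; a dead SSHFS mount typically makes this fail with an I/O error.
        if ! ls "$MNT" >/dev/null 2>&1; then
            killall -q sshfs
            fusermount -u -z "$MNT" 2>/dev/null   # lazy-unmount the dead FUSE mount
            sshfs -o reconnect,allow_other,ServerAliveInterval=15 "$REMOTE" "$MNT" \
                && systemctl restart backup-sync.service   # hypothetical dependent service
        fi
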
  • @Amitz said:
    Someone read the tags... ;)

    Sure we do! At least lately yours always contain an easter egg. :P

    Thanked by Amitz
  • mailcheap said: Now comes the hard part: a script to monitor the mount for I/O errors and, whenever one is encountered, killall sshfs, unmount and remount, and restart the affected services. Sigh!

    Meeep. Indeed. Monitoring for I/O errors myself does not seem suitable as a long-term solution, not least because I have no ad hoc idea of how to do this hassle-free.

    So, NFS then. Mmmh. Even more reading. Okay. It's Sunday and I have time.
    @FlamesRunner - Do you see any advantage that Samba would have compared to NFS in my scenario?

  • @Amitz said:
    What are the other problems (compared to those that also NFS has) that you refer to?

    @FlamesRunner said:
    My server hates it when I move large files via SSHFS; it can sometimes hang until the transfer is done.

    Had this behavior & mounts getting stuck sometimes

  • @Yoda said:

    @Amitz said:
    What are the other problems (compared to those that also NFS has) that you refer to?

    @FlamesRunner said:
    My server hates it when I move large files via SSHFS; it can sometimes hang until the transfer is done.

    Had this behavior & mounts getting stuck sometimes

    I use autofs for my SSHFS mounts, so they get mounted on access and unmounted after a set period. Maybe you can use this to "fix" the problem.

    Thanked by Tom and mailcheap
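
    For reference, an autofs setup along the lines Bochi describes might look like this (mount points, host and path are placeholders; it needs sshfs installed and root SSH keys in place):

        # /etc/auto.master: hand /mnt/remote over to an sshfs map, auto-unmount after 60s idle
        /mnt/remote  /etc/auto.sshfs  --timeout=60 --ghost

        # /etc/auto.sshfs: "backup" gets mounted as /mnt/remote/backup on first access
        backup  -fstype=fuse,rw,allow_other,reconnect  :sshfs\#root@storage.example.com\:/srv/backup
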
  • mailcheap Member, Host Rep

    The DO article recommends using static mounts like fstab for SSHFS; use something like autofs (+1 @Bochi) instead. Static mounts and SSHFS are a very bad combo. Trust me on this one!

    Pavin.

  • I'd vouch for SSHFS. More resource usage due to the encryption, maybe, but changing the cipher to arcfour may not help that much nowadays, as a lot of CPUs have AES instruction sets built in. After all, this should not be mission critical.

    It works out of the box, with much less hassle than NFS, and you probably would not want your data going unencrypted from one server to another?

    rsync onto an SSHFS-mounted drive isn't a problem at all if the user connecting has the right permissions on the remote server (it most probably needs to be root, though).

    If you connect to a server via any protocol as an unprivileged user, it may become hard to maintain attributes and permissions when syncing or even copying.

    To avoid that while not using (or not being able to use) root, you could put an image file in your destination dir (however it is mounted) and loop-mount it into your local system with whatever filesystem on it you like ;-)

    (e.g. speaking of Hetzner storage boxes, where permissions and ownership are handled on Hetzner's side)
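
    Roughly, the image-file trick described above; the remote mount point, the size and the filesystem are only examples:

        # With the remote share already mounted at /mnt/remote (sshfs, storage box, ...):
        truncate -s 200G /mnt/remote/backup.img   # sparse image file
        mkfs.ext4 -F /mnt/remote/backup.img       # -F because it is a file, not a block device
        mkdir -p /mnt/backup
        mount -o loop /mnt/remote/backup.img /mnt/backup
        # /mnt/backup now behaves like any local ext4 filesystem: ownership, permissions
        # and ACLs are handled locally, no matter how the underlying share treats them.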

  • Adding storage space to one server sounds more like a task for iSCSI than NFS/SSHFS.

  • jar Patron Provider, Top Host, Veteran

    +1 for NFS. I've experienced far fewer oddities with it over time in comparison. That said, I still use SSHFS regularly when I don't give a shit ;)

    Thanked by vimalware
  • @DBA said:
    Adding storage space to one server sounds more like a task for iSCSI than NFS/SSHFS.

    I'd say it heavily depends on what should be achieved. iSCSI is more about exporting a whole block device over the net, especially locally.

    Yet I agree you could use it either way, as it should support using a disk image as a fake block device instead. You can achieve something similar with xnbd, for instance...

    If you export/import block devices this way, be aware that heavy load on that connection and latency issues may show up as high load due to I/O waits on that device...

    If you want your data on the destination server encrypted, such a block device is definitely a good way to go.

    For smaller purposes, and to simply mount a directory from one server to another, I'd say SSHFS is hard to beat.
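
    As a sketch of that last point: once the remote disk image is attached locally as a block device (via iSCSI, nbd, xnbd or similar; /dev/nbd0 here is only an example), the encryption can be layered on the client side so the storage server only ever sees ciphertext:

        cryptsetup luksFormat /dev/nbd0            # encrypt the imported block device
        cryptsetup luksOpen /dev/nbd0 backup_crypt
        mkfs.ext4 /dev/mapper/backup_crypt
        mount /dev/mapper/backup_crypt /mnt/backup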

  • NFS is Need For Speed, right? ... I am sorry, I just had to.

  • @jarland said:
    +1 for NFS. I've experienced far fewer oddities with it over time in comparison. That said, I still use SSHFS regularly when I don't give a shit ;)

    Hopefully you don't transfer client data like that haha :)

    Thanked by MikeA
  • jar Patron Provider, Top Host, Veteran
    edited September 2016

    doghouch said: Hopefully you don't transfer client data like that haha :)

    Is that weird? I transfer all client data over that. Well, what data I don't send over port 25 to my inbox daily on cron (backing up via inbox).

    I kid. I use it for toying around. I've got some really badly made scripts that push static data to HTML; the scripts are hosted at {undisclosed location} and the static data on a server. The static data is useless to anyone but me, but the scripts should be nowhere near a public server (although they are equally useless to anyone else, they are still vulnerable by design). I rigged sshfs because I was lazy.

  • BrianHarrison Member, Patron Provider

    +1 for NFS. I've used it in a number of backup storage deployments and it has always worked great -- even on high-latency connections.
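
    For a WAN mount like that, an fstab entry might look roughly like this (host, paths and the exact timeouts are placeholders, not a recommendation):

        # 'hard' plus a generous timeo avoids silent data loss when the link hiccups;
        # large rsize/wsize help throughput on a fat, higher-latency pipe.
        storage.example.com:/srv/backup  /mnt/backup  nfs4  hard,timeo=600,retrans=2,rsize=1048576,wsize=1048576,_netdev  0  0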

  • doghouch Member
    edited September 2016

    @jarland UNENCRYPTED MAGIC PONY; at least it's sent over TLS/encrypted relays

    Thanked by 1jar