Remote mounting storage VPS

Hi,

I've been following the forum for a while, even nibbling on some offers here and there. So, after the crazy BF/CF threads, I decided to register and try to contribute whenever I can offer my help.

First let me thank @Nekki, @WSS, @Harambe and all the providers for the great fun and offers during BF/CF. It was very entertaining.

Now to my question. I was running Nextcloud on a ZXHost CEPH VM and, due to the problems everyone is aware of, I unfortunately need to move to a new server. I snatched the UltraVPS 2TB deal and also couldn't resist netcup's 8 CPU / 8 GB offer (the 8am deal!). I've also pre-ordered the VirMach 1TB offer.

Given the netcup VM's performance, I was thinking of installing Nextcloud on that server and mounting the UltraVPS 2TB server as storage. I'd then use 1TB for Nextcloud and 1TB for personal backups. The Nextcloud files would in turn be backed up on the VirMach 1TB VM.

Despite the need for large storage, the Nextcloud server will be used mostly by 2 users and I don't expect a lot of traffic. I tested the connection between netcup (FRA) and UltraVPS (NL) and it seems reasonable, with a 10ms ping.

My key concern is how to safely and reliably mount the storage on netcup's VM. I've read that SSHFS performance would be poor, so I'm leaning towards NFS over an SSH tunnel.
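Roughly what I have in mind is something like this (an untested sketch; hostnames and paths are placeholders, and since NFSv4 only needs TCP 2049 a single forwarded port should do):

    # on the UltraVPS storage box: export only to localhost, since the SSH
    # tunnel terminates there -- /etc/exports
    /srv/nextcloud-data  127.0.0.1(rw,sync,no_subtree_check)
    # reload the export table
    exportfs -ra

    # on the netcup box: forward a local port to the storage box's NFS port
    ssh -f -N -L 2049:127.0.0.1:2049 root@storage.example.com

    # mount the export through the tunnel
    mount -t nfs4 -o port=2049,proto=tcp 127.0.0.1:/srv/nextcloud-data /mnt/storage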

I would appreciate your thoughts and experiences with a similar solution.

Thanked by 1 greentea

Comments

  • For my light usage of Nextcloud, SSHFS is enough to handle my needs.

    Give Samba a try.

    Thanked by 1 beagle
  • sibaper said: Give Samba a try.

    If you decide to use Samba, DO NOT run it over the open internet; make sure you use a VPN. Samba is one of the most insecure protocols to leave open on a public address.

    NFS would be a much better choice, whether over VPN, SSH or direct.

    SSHFS can handle decent speed given the low latency of your setup, but it does use CPU for encryption, so your throughput is bound by the CPU available on both sides of the connection; you are limited by the server with the fewest cores. You could be surprised and it could work rather efficiently. I would say you should at least test it; it isn't like it takes a lot of work to implement.
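    For a quick test it is literally one command on the client (host and paths below are just examples; the reconnect options keep the mount usable after hiccups):

        # quick SSHFS test mount; key-based login assumed
        sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3 \
            root@storage.example.com:/srv/data /mnt/storage

        # unmount when done
        fusermount -u /mnt/storage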

    my 2 cents.

    Cheers!

  • Ikoula Member, Host Rep

    You could also set up a VPN tunnel between the two servers.

    What's the OS of your servers?

  • Maounique Host Rep, Veteran

    NFS can fail badly on a connection cut, albeit 10 ms is very good...
    I suggest you disconnect and reconnect from time to time to make sure. Best do it like dial-up, on an as-needed basis (establish tunnel, connect, transfer, disconnect, cut tunnel).
    If you need it permanently I suggest iSCSI.
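    Something along these lines, as a rough sketch (host, paths and the transferred data are placeholders):

        #!/bin/bash
        # "dial-up" style session: open tunnel, mount, transfer, tear down
        set -e

        # open an SSH tunnel with a control socket so it can be closed cleanly
        ssh -f -N -M -S /tmp/storage.sock -L 2049:127.0.0.1:2049 root@storage.example.com

        mount -t nfs4 -o port=2049 127.0.0.1:/srv/backup /mnt/backup
        rsync -a /var/backups/ /mnt/backup/    # the actual transfer
        umount /mnt/backup

        # close the tunnel via the control socket
        ssh -S /tmp/storage.sock -O exit root@storage.example.com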

    Thanked by 1 uptime
  • @Ikoula said:
    You could also set up a VPN tunnel between the two servers.

    What's the OS of your servers?

    I'm running CentOS 7 on both.

  • You could try tinc and use NFS over the tinc interface, but I think SSHFS is the easiest to test out and see if it satisfies you.

    With SSHFS, there are plenty of tweaks possible (including reducing encryption via SSH options if you find there's too much CPU load, though that's very unlikely). It's much more secure, and with a key-based setup you could even automount it on demand.
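    For example (host and paths are illustrative; sshfs hands unknown -o options straight to ssh):

        # cheaper cipher / no compression if CPU ever becomes the bottleneck
        sshfs -o Ciphers=aes128-ctr,Compression=no \
            root@storage.example.com:/srv/data /mnt/storage

        # on-demand automount via /etc/fstab (key-based login, systemd automount):
        # root@storage.example.com:/srv/data /mnt/storage fuse.sshfs noauto,x-systemd.automount,_netdev,reconnect,IdentityFile=/root/.ssh/id_ed25519,allow_other 0 0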

    TheLinuxBug said: You could be surprised and it could work rather efficiently. I would say you should at least test it; it isn't like it takes a lot of work to implement.

    +1

  • Ikoula Member, Host Rep

    Here is a KB explaining how to set up a VPN on a Linux server:

    https://en.ikoula.wiki/index.php/Establish_a_L2TP/IPSEC_VPN

    Ignore the introduction; the rest of the page is translated.

    Thanked by 2 beagle, uptime
  • @Maounique said:
    NFS can fail badly on a connection cut, albeit 10 ms is very good...
    I suggest you disconnect and reconnect from time to time to make sure. Best do it like dial-up, on an as-needed basis (establish tunnel, connect, transfer, disconnect, cut tunnel).
    If you need it permanently I suggest iSCSI.

    Although you can set up external storage in Nextcloud, that feature has some limitations (e.g. it doesn't automatically scan for new files or file changes). Therefore, I'm looking to use the remote server as the main storage, permanently mounted.

    Indeed, one of my main concerns about this setup is the reliability of the remote connection. Simply mounting the NFS share won't remount it automatically if the connection drops, so I may need to use autofs/automount.
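    Something like this autofs setup is what I have in mind (paths are placeholders and assume the export is reachable on a local tunnel port; soft/timeo make a dead link return errors instead of hanging, though hard is the safer choice for writes):

        # /etc/auto.master -- let autofs manage /mnt/remote, unmount after 60s idle
        /mnt/remote  /etc/auto.storage  --timeout=60

        # /etc/auto.storage -- mount /mnt/remote/storage on first access
        storage  -fstype=nfs4,port=2049,soft,timeo=100,retrans=3  127.0.0.1:/srv/nextcloud-data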

  • TheLinuxBug Member
    edited November 2017

    Maounique said: NFS can fail badly on a connection cut, albeit 10 ms is very good...

    I suggest you disconnect and reconnect from time to time to make sure. Best do it like dial-up, on an as-needed basis (establish tunnel, connect, transfer, disconnect, cut tunnel).
    If you need it permanently I suggest iSCSI.

    Hey Mao,

    This response kind of confused me, as my experience has always been the opposite: connection drop + iSCSI = data loss (my use being over a local network). With NFS, other than ending up with a stale mount if one side goes down permanently, I have never seen any real data loss. Yes, it's annoying that if your mount point doesn't come back online you may have to reboot the node with the stale mount, whereas iSCSI is a bit nicer about this, but I have always been apprehensive about using iSCSI over the internet.

    In your experience is iSCSI really that much more stable? What use cases have you tested?

    Thanks for your time in response.

    Cheers!

  • Maybe look into GlusterFS, which can also be accessed over NFS but can more easily be set up for encryption and the like. Or use something like https://bitbucket.org/hirofuchi/xnbd/wiki/Home to export a whole block device; that even lets you set up encryption of the block device on the client side...
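    A replicated two-node Gluster volume only takes a few commands if you want to try it (node names and brick paths are placeholders):

        # on node1, after installing glusterfs-server on both boxes
        gluster peer probe node2.example.com

        # 2-way replicated volume, one brick per node (newer releases warn that
        # replica 2 can split-brain; an arbiter / replica 3 setup avoids that)
        gluster volume create gv0 replica 2 \
            node1.example.com:/bricks/gv0 node2.example.com:/bricks/gv0
        gluster volume start gv0

        # mount it with the FUSE client on whichever box runs Nextcloud
        mount -t glusterfs node1.example.com:/gv0 /mnt/gv0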

    Thanked by 2 uptime, beagle
  • Maounique Host Rep, Veteran

    Data loss can happen only when you are actually writing something. As I read the requirements, he needs it always available, but mostly for reads rather than writes.
    I have never lost data over iSCSI in this exact usage scenario (write rarely, read often), and I use the iSCSI "drive" as a holder for encrypted containers that I mount remotely and write into, which is extremely dangerous in principle, but I only use it on reliable connections, which should be the case between 2 datacenters, and with a journaled FS. In the event of a stall, iSCSI will reconnect more easily and you can tweak it for safety.
    I am a fan of iSCSI as it provides everything I want for my particular usage scenario. I know NFS is the de facto standard, but IMO it is a bit outdated. I have not used NFS over the internet as I think it is not fit for the purpose and would not serve me well. I have yet to see data loss over iSCSI, but, again, I know from the start that it runs over the internet, I play for safety over speed, and I have a super-stable connection at 500/1000 Mbps.

    In theory, yes, iSCSI is a block device exporter and is theoretically more prone to instability and data loss over unstable connections (it is like your HDD repeatedly losing its connection to the motherboard), but:
    1. It can be tweaked to be safer (see the sketch below);
    2. If the connection is unstable, you should use something else. SSHFS provides good encryption and tolerance to errors, and NFS is also better in that case, but if a connection this short between datacenters, which probably peer somehow, is not stable enough for iSCSI, then something is wrong with your provider.
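    A rough client-side sketch with open-iscsi on CentOS 7 (the IQN and portal are placeholders, and the storage box first needs a target exported with targetcli or tgt):

        yum install -y iscsi-initiator-utils

        # discover and log in to the target on the storage box
        iscsiadm -m discovery -t sendtargets -p storage.example.com
        iscsiadm -m node -T iqn.2017-11.com.example:storage.backup -p storage.example.com --login

        # the LUN shows up as a normal block device (e.g. /dev/sdb); put a
        # journaled FS (or an encrypted container) on it and mount it
        mkfs.xfs /dev/sdb
        mount /dev/sdb /mnt/iscsi

        # safety tweak in /etc/iscsi/iscsid.conf: how long I/O is queued while
        # the session reconnects before errors are returned
        # node.session.timeo.replacement_timeout = 120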

    If the usage is for backup (write often, read rarely), then you only need to establish the connection when you are actually sending data; make sure the dataset is complete before merging it with the rest, or use a specialized solution which does that and more.

    In the end, keep backups. In the rare event that data loss occurs, you have the backup; data loss can occur even if you use FTP or pigeons, so you need to keep backups anyway. In that case, why not use something that is more flexible and more feature-rich?

    Thanked by 1 beagle
  • @beagle said:
    Hi,

    I've been following the forum for a while, even nibbling on some offers here and there. So, after the crazy BF/CF threads, I decided to register and try to contribute whenever I can offer my help.

    You signed up because of BF/CM? I thought that would have scared folks off :-0

    I think this is a clear example of THE POWER OF EMMA

    Thanked by 2 beagle, WSS
  • @beagle: for what it's worth, you should be able to run the Nextcloud instance on the UltraVPS VM itself just fine. After all, it's just a website, and more importantly it would avoid the risk of the remotely mounted storage altogether.

    On the other hand, if you'd like a backup/copy on the second 1TB VM anyway, GlusterFS could take care of live replication and also keep your data available even if one of the servers drops out. Latency may slow things down though.

    Thanked by 1 beagle
  • @Falzo said:
    @beagle: for what it's worth, you should be able to run the Nextcloud instance on the UltraVPS VM itself just fine. After all, it's just a website, and more importantly it would avoid the risk of the remotely mounted storage altogether.

    Indeed, I used to run it on one of ZXHost's 1TB VMs, which had similar specs, so it should work fine. I just thought the netcup VM would be a good fit for it with its beefier specs.

    On the other hand, if you'd like a backup/copy on the second 1TB VM anyway, GlusterFS could take care of live replication and also keep your data available even if one of the servers drops out. Latency may slow things down though.

    Apologies if I'm mixing things up a bit here, but after all the heartache ZXHost is having with CEPH, I'm a bit reluctant about some of these distributed file systems. Do you have a good experience with GlusterFS?

  • @beagle said:
    Hi, ....

    My key concern is how to safely and reliably mount the storage on netcup's VM. I've read that SSHFS performance would be poor, so I'm leaning towards NFS over an SSH tunnel.

    NFS over IPsec - ...

  • beagle said: Do you have a good experience with GlusterFS?

    Yes, I have. I won't say it's failproof or anything (most things are not), but as it copies the files transparently into a folder on the underlying filesystems, you'll still be able to access all the files without GlusterFS running. So if shit hits the fan and your gfs setup somehow gets f*cked up, you could still access the files directly in the folder which Gluster uses to store them. If you use replication, you would end up having all files in each folder on two different machines.

    Though you should be willing to read up a bit on it and how it works.
    SSHFS would be the far easier way to deal with a remote mount ;-)

    Thanked by 1 beagle
  • VirMach 1TB? Was that a flash special?

  • @elofty said:
    VirMach 1TB? Was that a flash special?

    Pre-order:

    https://www.lowendtalk.com/discussion/comment/2525456/#Comment_2525456

  • Found it. Will give it a try.

  • 2 more suggestions: NBD and S3QL (which can be used over SSHFS or your own S3 using MinIO). S3QL is pretty efficient on a slow/unstable network as there is a built-in caching mechanism, but it's definitely not as fast as NFS or NBD in a low-latency environment.
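    With MinIO as the backend it looks roughly like this (endpoint, bucket and credentials are placeholders; S3QL ships its own mkfs/mount/umount tools):

        # ~/.s3ql/authinfo2 -- credentials and encryption passphrase, e.g.:
        # [minio]
        # storage-url: s3c://minio.example.com:9000/nextcloud-fs
        # backend-login: ACCESSKEY
        # backend-password: SECRETKEY
        # fs-passphrase: something-long-and-random

        mkfs.s3ql s3c://minio.example.com:9000/nextcloud-fs

        # a generous local cache (in KiB) smooths over the WAN latency
        mount.s3ql --cachesize 2097152 s3c://minio.example.com:9000/nextcloud-fs /mnt/s3ql

        umount.s3ql /mnt/s3ql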

    Thanked by 1 beagle
  • Weblogics Member
    edited November 2017

    @TheLinuxBug said:

    sibaper said: Give Samba a try.

    If you decide to use Samba, DO NOT run it over the open internet; make sure you use a VPN. Samba is one of the most insecure protocols to leave open on a public address.

    NFS would be a much better choice, whether over VPN, SSH or direct.

    SSHFS can handle decent speed given the low latency of your setup, but it does use CPU for encryption, so your throughput is bound by the CPU available on both sides of the connection; you are limited by the server with the fewest cores. You could be surprised and it could work rather efficiently. I would say you should at least test it; it isn't like it takes a lot of work to implement.

    One nitpick on this: Samba is not a protocol; it is a collection of programs and services that implement the SMB/CIFS protocol.

    I agree though, SSHFS is a better option.
