GlusterFS Over WAN
So let me bounce this off the knowledge here.
Is GlusterFS really slow over WANs?
Is there anything that can be done to improve the speeds?
Should I just build more concentrated network spots instead of diversifying heavily, i.e. just NY and SF instead of NY, LA, SF, and Dallas?
Any other suggestions?
I personally don't think I'd need more than 4MB/s, but it is a tad slow for what I am used to.
Comments
For higher-latency links, use the geo-replication settings: http://www.jamescoyle.net/how-to/1037-synchronise-a-glusterfs-volume-to-a-remote-site-using-geo-replication
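For reference, the basic flow from that kind of setup looks roughly like the sketch below. The volume name `gvol0` and host `remote.example.com` are placeholders, not from the thread, and the exact flags vary by GlusterFS version, so check your own version's docs:

```shell
# On the master site, after setting up passwordless SSH to the slave:
# create the geo-replication session ("gvol0" / "remote.example.com"
# are placeholder names)
gluster volume geo-replication gvol0 remote.example.com::gvol0 create push-pem

# Start the asynchronous replication session
gluster volume geo-replication gvol0 remote.example.com::gvol0 start

# Verify that it is actually syncing
gluster volume geo-replication gvol0 remote.example.com::gvol0 status
```

Note that this is one-way, master-to-slave replication, which is exactly why the concurrency concern raised below applies.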
Between Dallas and Seattle I get ~150mbit (18MB/sec) single thread speeds, so I've been thinking of testing it for one of my non latency sensitive hobby projects.
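As a quick sanity check on those numbers, 150 Mbit/s is really about 18.75 MB/s, which matches the "18MB/sec" quoted. This is pure arithmetic, nothing GlusterFS-specific:

```python
def mbit_to_mbyte_per_s(mbit_per_s: float) -> float:
    """Convert link speed in megabits/s to megabytes/s (8 bits per byte)."""
    return mbit_per_s / 8

def sync_seconds(file_mb: float, link_mbit: float) -> float:
    """Rough best-case time to push a file over the link, ignoring latency."""
    return file_mb / mbit_to_mbyte_per_s(link_mbit)

print(mbit_to_mbyte_per_s(150))  # 18.75 MB/s
print(sync_seconds(1000, 150))   # ~53 s best case to move a 1 GB file
```

So raw bandwidth isn't the problem on a link like that; it's the per-operation round trips that kill synchronous replication over WAN.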
From what I read, geo-replication works like a timed rsync, so file concurrency is lost.
I was thinking of running it in an environment where files would need to change somewhat frequently. It wouldn't matter most of the time, but I'm afraid of situations where a file ends up with two new versions and conflicts arise.
This is another one I've been wanting to try; http://www.xtreemfs.org/all_features.php
This one's designed partly for the WAN case -- http://xtreemfs.com/
Still experimental, though.
The only issue I see is the need for a metadata server, which I was trying to avoid. I wanted everything to be in a master-master setup, allowing a quick pull of any node.
Well, you could run a metadata server on every node -- it supports replication of the metadata server. Not sure how that impacts performance, though. I also ran into issues with some attributes freezing during writes; not sure if that was only because of options like preferring local copies when writing from a storage node.
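For anyone wanting to try that, the rough XtreemFS flow is sketched below. The hostnames are placeholders, and the `xtfsutil` replication flags are from memory of the XtreemFS user guide, so treat them as assumptions; replication of the DIR/MRC services themselves is configured in the service config files (BabuDB replication), not on the command line:

```shell
# Create a volume on the MRC (placeholder host "mrc.example.com")
mkfs.xtreemfs mrc.example.com/myvolume

# Mount it via the DIR service (placeholder host "dir.example.com")
mount.xtreemfs dir.example.com/myvolume /mnt/xtreemfs

# Assumed flags: set a default read/write replication policy so new files
# get replicas on multiple OSDs (check `xtfsutil --help` for your version)
xtfsutil --set-drp --replication-policy WqRq --replication-factor 2 /mnt/xtreemfs
```

The WqRq (write-quorum/read-quorum) policy is the one the XtreemFS docs suggest for read/write files, which fits the "files change somewhat frequently" case described above.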