New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Faster Rsync?
Any faster way to send files over the network? The connection between both nodes can achieve 1Gbps, so the network is not an issue (tested using iperf).
| | Origin | Destination |
|---|---|---|
| HDD Speed | 100Mbps | 200Mbps |
| CPU Usage | 6% | 5% |
| IOWait | 3% | 0% |
| RAM Usage | 2% | 20% |
I am currently using rsync with the following command:
rsync -axvW --progress -e "ssh -T -o Compression=no -x" ./ 127.0.0.1:/some/remote/folder/
Although it should theoretically reach 100Mbps, the speed averages only about 12Mbps. Any idea why? I tried running several rsync processes in parallel, but the speeds still sum up to 12Mbps.
MiB (TX Bytes/second)
12.15 .......|..........|......|...|......|......|..............|.
10.13 .......|..........|......|...|......|......|..............|.
8.10 .......|...|......|......|...|......|......|......|.......|.
6.08 |......|...|......|......|...|......|......|......|.......|.
4.05 |......|..||......|......|...|......|......|......||......|.
2.03 |::::::|::||::::::|::::::|:::|::::::|::::::|::::::||::::::|:
1 5 10 15 20 25 30 35 40 45 50 55 60
(Network graph using bmon)
Comments
Maybe small files, or a bandwidth cap / bad routing.
All of them are big files of 2-30GB, and the latency between both nodes is only about 10ms.
Seems like my host is the one limiting the speed. I have switched to HTTPS and the speed fluctuates a lot, between 10Mbps and 100Mbps. CPU usage and IO usage are still low.
Not sure; maybe some free alternative like Aspera could solve that.
Parallel rsync or Syncthing?
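For the parallel-rsync route, a minimal sketch using xargs -P to fan out one rsync per file (the host, paths, and concurrency level are placeholders, not from this thread); the remote command is commented out, and a local demo of the same fan-out follows:

```shell
# One rsync per file, 4 at a time (host and paths are placeholders):
# ls /data | xargs -n1 -P4 -I{} rsync -a -e ssh /data/{} user@host:/backup/
# Local demo of the same xargs -P fan-out:
rm -f /tmp/fanout.txt
printf '%s\n' a b c d | xargs -n1 -P4 -I{} sh -c 'echo {} >> /tmp/fanout.txt'
```

This only helps if the limit is per connection; if the host throttles total egress, the parallel streams just split the same cap.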
https://github.com/facebook/wdt
Looks like 12MiB/s to me, which is roughly equal to 100Mbps.
It is, but it pauses for 5-6 seconds between each peak for unknown reasons (as shown on the graph), so the average is divided by 7-8.
Afaik, rsync stops for a brief moment at the end of syncing each separate file. If you have a lot of small files, you might achieve faster speeds with tar piped over SSH.
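A sketch of the tar-over-SSH pipe (DEST and the paths are placeholders); the remote half is commented out, with a local pipe below showing the same mechanics:

```shell
# Remote version (DEST and paths are placeholders):
# tar -C /data -cf - . | ssh -o Compression=no DEST 'tar -C /backup -xf -'
# The same pipe demonstrated locally:
rm -rf /tmp/tar_src /tmp/tar_dst
mkdir -p /tmp/tar_src /tmp/tar_dst
printf 'payload' > /tmp/tar_src/file.bin
tar -C /tmp/tar_src -cf - . | tar -C /tmp/tar_dst -xf -
```

One long stream avoids rsync's per-file end-of-transfer pause, at the cost of losing resume and delta transfer.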
They are big files from 2 to 30GB each, so I don't think that's the case. I suspect it is more of a limitation by the host: if I cool down for a minute and continue, it can achieve the max speed. Both hosts advertise 1Gbps shared.
This project looks promising, thanks for pointing it out, will try it out soon.
Aren't you confusing Mb and MiB?
12 MiB = 96 Mb, so roughly your 100 Mbps.
1 byte (B) is made of 8 bits (b).
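The conversion, spelled out in shell arithmetic (MiB/s to decimal megabits per second):

```shell
# 12 MiB/s -> bytes -> bits -> decimal megabits per second
echo $(( 12 * 1024 * 1024 * 8 / 1000000 ))   # prints 100
```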
Maybe, but the speed I mentioned (12Mbps) is the average speed. It reaches 100Mbps during peaks, but a peak only happens every 6 seconds. The speed drops to a few Kbps during this "idle" period.
The graph looks more like a square wave than a sine wave (when viewed at 1-second precision).
FAT32 why are you in such a hurry?
The server has several unknown hardware issues pending to be fixed (random hard reboots with no record in the logs). I need to back it up as soon as possible so that the provider can arrange for someone to replace the components.
EDIT: It is unlikely to be a memory issue, as it still reboots when the node is idle. I suspect the power supply or the motherboard.
Did you try to boot into rescue and do the same rsync? Do you get a similar outcome? You'll have to be patient and move out.
It is a budget dedicated server, so no IPMI is available. Currently HTTPS gives the fastest speed compared to rsync over SSH and NFS.
The best thing is that the HTTPS download client comes with resume functionality, so even if the server reboots mid-transfer it still works perfectly. (In fact, the server just had another hard reboot; it happens 1-3 times a day.)
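The usual resume flags are wget's -c and curl's -C - (the URL below is a placeholder). The dd lines afterwards just demonstrate the byte-offset mechanics a resumed download relies on: continue writing at the current file length:

```shell
# Resume an interrupted HTTPS download (URL is a placeholder):
# wget -c https://server.example/backup.tar.gz
# curl -C - -O https://server.example/backup.tar.gz
# Resume mechanics: pick up at the partial file's current length.
printf 'HelloWorld' > /tmp/full.bin
dd if=/tmp/full.bin of=/tmp/part.bin bs=1 count=5 2>/dev/null   # "interrupted" transfer
SZ=$(wc -c < /tmp/part.bin)
dd if=/tmp/full.bin of=/tmp/part.bin bs=1 skip="$SZ" seek="$SZ" conv=notrunc 2>/dev/null   # resumed
```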
If I was in this situation I would use http://duplicity.nongnu.org/
There was something else with WebDAV, can't recall now..
Relax and good luck.
Maybe an I/O problem? NVM. I recently had a similar problem with a server that had RAID10, where one of the disks in one RAID1 pair was failing. Read speed from the RAID1 with the failing disk was 1 Mbps; reading from anywhere else, the speed was 360-400 Mbps.
I decided to give up on compiling this thing. Compiling things on Red Hat servers usually causes me headaches.
Be sure to meet the dependencies.
EDIT:
Typos.
last idea:
1- tar.gz your dirs
2- use axel (yum install axel); you may need to open up port 80 (use caddy etc.)
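A sketch of that flow; the serve and download halves are commented out since they need two machines, and the host, port, and paths are placeholders. The tar step is shown runnable:

```shell
# On the source: pack the directory, then serve it over HTTP (caddy is one option):
rm -rf /tmp/axel_src
mkdir -p /tmp/axel_src
printf 'data' > /tmp/axel_src/f
tar -czf /tmp/backup.tar.gz -C /tmp/axel_src .
# caddy file-server --root /tmp --listen :80
# On the destination: download with multiple connections:
# axel -n 8 http://source.example/backup.tar.gz
```

axel's multiple connections can help when a per-connection throttle is the bottleneck, the same idea as parallel rsync but with HTTP range requests.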
1) Read the dependencies requirements
2) Attempt to install the dependencies
3) Realise it requires manual build of the dependencies
4) Repeat step 1
Worst case: recompile the whole system.
If those are actually megabits per second, then the HDDs are your bottleneck. But where did you find HDDs from the 80s?
EDIT: NVM, I should read more carefully.
You are right, my mistake here. It should be MBps instead of Mbps for the HDD speed. In that case, the max speed should be even higher.
Compiling Facebook libraries is a headache. Maybe these steps will help (extracted from my personal install script for some other FB libs).
BTW, the Arch Linux AUR has wdt and wdt-git install targets, though I have never attempted them.
That folly part may be outdated, I think now it is
It's been a while since I recompiled folly. Just have the old static libs lying around.
Don't know what other dependencies WDT needs, but here is a link to my FB install script https://pastebin.com/5wMbsXfL
You might need to remove/adapt some parts as needed.