Dropbox memory usage on Linux
I set up duplicity with Dropbox to do encrypted backups from my VPS to the cloud. It all seems to work fine, but holy crap is Dropbox a memory hog. top output:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
...
1584 root 15 0 146m 21m 4656 S 0 4.1 0:01.93 dropbox
...
The dropbox daemon by itself, with almost no files stored, uses about as much RAM as the entire rest of the system - OS, Apache, postfix, dovecot, yadda yadda. Is there anything that can be done to reduce/limit this? I'm open to "use xyz instead of Dropbox" if there's some other free/extremely cheap cloud storage with an easy-to-use Linux client or duplicity backend module.
Comments
This command will reduce your RAM usage to 0:
shutdown -h now
Just kidding. :P
On a serious note, why not just use rsync to another server?
You can also stop the LAN broadcasting, which helps.
Other than that, ^^ rsync instead.
Is backing up to the cloud all you want to do? If so, you could use Amazon S3 instead of Dropbox.
Yep, I haven't used it for a few years, but s3fs seemed to work pretty well with S3 buckets.
It's really too bad there isn't a provider that offers a backup-focused VPS service where you could run an rsync server.
Dropbox, SpiderOak, CrashPlan, etc. really are not LEB-friendly. They assume you're working with a current desktop or non-LEB server.
Here's mine, on a Linux desktop:
13629 someuser 20 0 1513m 53m 7040 S 0.0 2.7 9:33.67 dropbox
Sheesh. I suppose you could start/stop dropbox to run it in a sort of batch mode, though I don't know how you'd know when it's synced up. SpiderOak does support a --batch-mode parameter...2GB free, $10 for 100GB. But it is a memory pig, too - 275MB is not unusual and it can run at 99% CPU for an hour.
s3cmd provides interaction with Amazon S3 in a nice way. Otherwise, Hostigation and/or SecureDragon for roll-your-own backup.
It really depends on what you mean by "cheap". That isn't generally one of the design goals of cloud services.
Your dropbox is using 21MB. Of course, if you run this in OpenVZ, then it uses a lot.
In Windows it uses 50MB, which sucks.
For Dropbox you could try http://www.andreafabrizi.it/?dropbox_uploader; SpiderOak has a few interesting options as well.
I don't currently have another server. I guess I could get one just for backups, but that would cost another cup of coffee per month...
Might end up using S3... for the low volume of data it might be pretty cheap. I'll have to crunch the numbers. A bonus with that is that no separate client is needed since duplicity can use it directly.
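Since duplicity speaks S3 natively, the invocation would be along these lines (bucket name, GPG key ID, and credentials are all placeholders; the final command is echoed rather than executed so the sketch is safe to paste):

```shell
#!/bin/sh
# duplicity's S3 backend takes an s3+http://bucket/prefix URL and reads
# credentials from the AWS_* environment variables. Everything below is
# a placeholder -- substitute your own values.
export AWS_ACCESS_KEY_ID="replace-me"
export AWS_SECRET_ACCESS_KEY="replace-me"

SRC="/home/me/important"
DEST="s3+http://my-backup-bucket/vps1"

# Echoed instead of run, so pasting this doesn't fire off a real backup:
echo duplicity --encrypt-key MYGPGKEYID "$SRC" "$DEST"
```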
In Windows it uses 50MB, which sucks.
Now that's an interesting comment. Maybe I don't understand the top output vs. memory usage. I'm not sure how to calculate exactly what is being used, but it seems a lot more like the total size (VIRT) than resident (RES). For example, another memory hog that was running on this box was bzr/loggerhead. VPS console showed 311 MB in use. Top showed this:
When I shut down bzr and loggerhead, the console showed 209 MB in use. I can't figure out how to make those "top" numbers add up to the 102 MB that was freed up. But 102 MB is certainly more than 10M + 15M (RSS numbers).
Here is mine, on a 96MB LEB:
It's not really that bad
For reduced redundancy (which honestly is still more redundancy than a lot of VPS providers, no offense), it's 10 cents a GB per month. Inbound bandwidth is free.
However, there is no way to run rsync without either an EC2 server or a third party service, so you chew up bandwidth on the other end. The EC2 server doesn't need to run all the time (only when you need it), but you either have to pay for a reserved IP (which costs per hour) or have a dynamic IP for the EC2 that you discover when you launch it.
I wrote about this a little while ago:
http://www.raindog308.com/blog/2012/rolling-your-own-s3-rsync/
http://www.raindog308.com/blog/2012/minor-correction-on-amazon-rsync/
Having access to FUSE (KVM), one could try http://code.google.com/p/s3fs/ and then use rsync on top of it. I haven't tested it, so I have no idea how well it performs.
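The shape of that setup would be roughly the following (untested, as noted above; the bucket name, mountpoint, and source path are placeholders, and the commands are echoed rather than executed since the real thing needs FUSE and credentials):

```shell
#!/bin/sh
# Mount the bucket with s3fs, then rsync onto it like a normal directory.
# Bucket, mountpoint, and source path are all placeholders.
BUCKET="my-backup-bucket"
MNT="/mnt/s3"

# Echoed rather than executed -- the real commands need FUSE support and
# S3 credentials in /etc/passwd-s3fs:
echo "s3fs $BUCKET $MNT -o passwd_file=/etc/passwd-s3fs"
echo "rsync -a /var/backups/ $MNT/"
```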
Yeah, ahm... OpenVZ memory management is a piece of crap. In general your processes will be charged roughly the VIRT figure. On other virtualization platforms and real machines, processes use roughly RES (OK, memory accounting is more complex than this, but it's a handy quick reference).
Sometimes what you can do is reduce the maximum stack size allowed for a process.
You do it with
ulimit -s n
where n is a number in KB (I'm not very well informed about all this stuff). By default it's 10240 on Debian, but for this purpose, and on OpenVZ, you can set it to... let's say 192, and it will be fine. So put that line near the start of the init script for Dropbox (I've never used it, so it's something like /etc/init.d/dropbox?). Then restart it and see how it goes. If you want to make this change permanent for all your processes, google the /etc/security/limits.conf file.
Of course, all this is for OpenVZ, and I'm a Debian-exclusive user, but I guess it's the same on any distro.
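The effect of the ulimit trick is easy to see in a subshell (safe to run anywhere; the daemon path in the comment is a guess):

```shell
#!/bin/sh
# Lower the soft stack limit inside a subshell and read it back; any
# daemon exec'd after the ulimit line would inherit the reduced limit.
NEWLIMIT=$(
    ulimit -s 192   # KB; Debian's default soft limit is 10240
    ulimit -s       # children started from here inherit 192 KB stacks
)
# In a real init script the next line would be something like:
#   exec /root/.dropbox-dist/dropboxd   (path is a guess)
echo "$NEWLIMIT"
```

Lowering the soft limit is always allowed for an unprivileged process, which is why this needs no root inside the container.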
@vedran You are not using OpenVZ... you cheat xD
Oh yes, I forgot to mention it's Xen PV
Was about to get started on my own OAuth based Dropbox uploader but this looks like it will do the job nicely!
http://linuxatemyram.com
No buddy, it isn't about that.
In other words... help, OpenVZ sucks and ate my RAM, literally.
Yeah it sounds more like an OpenVZ thing. On some kind of machine with swap, my dropbox process would only be using the 21M of real memory which I would be happy with.
I have a 256/512 so I'm OK for now anyway... just felt like complaining
I'm also using this to back up some personal stuff from my Rackspace VPSs and it works like a charm on my 256MB VPS, no memory issues at all.
Do what I told you n_n
It will reduce your ram usage.
Seems like artificially limiting the stack would just make it more likely to crash if it hits the new limit, no? Is this a fixed amount allocated per thread whether or not the process uses it? If so, how can you determine what size is "safe" - what is the maximum stack size that any thread in this particular program could possibly use?
I could see maybe limiting the heap - the program could conceivably detect and deal with heap allocation errors and keep working. But if it hits the stack limit it's game over.
In "normal" circumstances, yes, that happens and the process crashes (Xen, KVM, physical). But in OpenVZ it works fine n_n I do it all the time. How does it work? Dunno.
Google "fake-swap.sh" and start Dropbox after running it. Prepare to be confused. It won't crash either.
@yomero Coming from my Solaris background, your suggestion seemed a little insane but after doing some more research it does seem to be the way to go for OpenVZ. Thank you. I added "ulimit -s 256" to .dropbox-dist/dropboxd and now my usage is
which is obviously a huge difference. Total memory reported "in use" by OpenVZ is now ~90 MB less.
@jarland I will take a look at that too. Thanks for the suggestion.
@jarland
seems like the fake swap stops /proc/meminfo from updating. Use vzfree instead of free to check your memory use and see if it has any benefits.
Only seemed to have an effect on applications launched afterward for me. It's a pretty screwy workaround, that much is certain.
Yes, I know these tweaks will likely cause crashes, but I told you. It's... OpenVZ... it's... weird n_n, but it just works. Now you can apply it globally for the whole system :P