VPS that won't do any work
I have had a collection of low end boxes for a few years. I understand when they are underpowered. One box took 12 hours to compile Ruby, for example. But this box has me puzzled.
Over a year ago, I loaded a Rails app from a production site to do some testing and further development, and easily moved 9GB of data onto the box, which has a 20GB disk and 1GB of memory under OpenVZ. The Rails app does work, maybe a bit sluggishly. Sometimes ssh takes well over 30 seconds to get a login prompt. I decided to back up the data and release the machine, but I can't get this VPS to work long enough to do a backup.
rsync starts up fine and runs for a minute or two, and then stops/freezes. tar czf runs for a minute or two and then stops. I can restart it with "fg 1", and it runs a while longer, but it always just stops, long, long before it is complete. With top, I don't see anything putting much load on this box, but just to move forward I stopped nginx and mysqld and tried again. Here is free:
root:/data# free
             total       used       free     shared    buffers     cached
Mem:       1048576      15564    1033012          0          0          0
-/+ buffers/cache:       15564    1033012
Swap:            0          0          0
tar czf still always stops after a minute or two, and often sooner if it has just stopped. ulimit shows "unlimited". The gzip and tar processes show 0.1% memory use. gzip builds up to about 30 seconds of run time after being restarted 15 or 20 times.
I'm going to submit a ticket for this, but I was wondering: is this some kind of standard sleazy practice where the provider starves the VPS of resources until I notice it and complain? Or do I have a super hot VM neighbor sharing this node? Is there any kind of logging to tell what part of my system is stopping my tar process?
If I can't get this issue resolved, I don't know how I'll retrieve the data from this VPS.
Comments
I hope not. That would be very unethical and disgusting. It could be neighbours. It could be, say, a failed HDD in a RAID array; it could be anything. Ask them.
Could be just poor I/O.
It could be the neighbours, or the host itself overselling and causing problems with I/O / disks.
Although I don't have an answer for all the OP's questions, the slow SSH login is fairly common, and there is an easy workaround: disable DNS reverse lookups.
Edit /etc/ssh/sshd_config to add the line:
UseDNS no
Save the file, then restart ssh.
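For example (a minimal sketch assuming a Debian/Ubuntu-style setup; the service may be called sshd on other distros, and restarting it does not drop existing sessions):
echo 'UseDNS no' >> /etc/ssh/sshd_config   # or add the line with your editor of choice
service ssh restart                        # service name is "sshd" on RHEL/CentOS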
You mean unethical, right?
Sounds like that, or something network related. I'm just going from a few experiences long ago, but it sounds like it's hardware related.
Yeah, fixed. Typo.
Thanks for the advice. I'll submit a ticket and let you know what turns up.
Sounds like that stupid OOM killer. I was trying to run HandBrake CLI on 128MB but the process kept getting killed; it turned out I needed more memory.
You're probably in the same boat - your memory might be screwed or something. Try adding more swap space.
Read more here:
http://www.gentooexperimental.org/~patrick/weblog/archives/2009-11.html#e2009-11-13T13_20_59.txt
Try it out - add this line to /etc/sysctl.conf: vm.swappiness=20
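A quick way to check for OOM kills and apply that setting (a sketch; inside an OpenVZ container dmesg may be restricted, and swap usually can't be added from the guest side):
dmesg | grep -i 'killed process'        # the OOM killer logs which process it killed
echo 'vm.swappiness=20' >> /etc/sysctl.conf
sysctl -p                               # reload the setting without rebooting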
As concerto49 intimated, it could be a failing HDD that has yet to be dropped from the RAID array.
This sounds like a memory issue - either you don't have enough or the node has a memory problem.
I have had rsync problems in the past related to memory shortage.
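If memory limits are the culprit on OpenVZ, the beancounters should show it (a sketch; failcnt is the last column and non-zero values mean the container has hit a limit):
cat /proc/user_beancounters                    # look at rows like privvmpages and kmemsize
awk 'NR>2 && $NF>0' /proc/user_beancounters    # print only rows where failcnt is non-zero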