What would you consider a "Normal" load average for a server of a VPS provider?
sturdyvps Member
edited February 2012 in Providers

As per the discussion title, what would you consider a "normal" load average for a server of a VPS provider?

Comments

  • Depends on the number of cores in your system: on a single core, a load of 1.0 is considered healthy; a dual core, around 2.0-3.0; a quad core, 4.0; and a dual quad core, 7.0-8.0.

    Here is an example of a dual quad core server in a live production state:
    top - 04:12:41 up 2 days, 19:50, 2 users, load average: 2.06, 2.22, 2.29

    The above system is a dual quad core with 47 containers, partially full.
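Those per-core thresholds can be sanity-checked straight from /proc/loadavg; a minimal sketch (the formatting is my own, nothing from the thread):

```shell
#!/bin/sh
# Report the 1-minute load average relative to the number of CPU cores.
cores=$(nproc)
load1=$(cut -d' ' -f1 /proc/loadavg)
# awk handles the floating-point division that plain sh cannot.
per_core=$(awk -v l="$load1" -v c="$cores" 'BEGIN { printf "%.2f", l / c }')
echo "load ${load1} over ${cores} cores = ${per_core} per core"
```

A per-core value near 1.00 corresponds to the "fully utilized" mark described above.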

  • Are we talking about for a single VPS here, or for the entire node? By itself, load average is a poor metric to base anything on - a high load doesn't necessarily mean that the CPU is maxed out. Assuming that other parameters are within normal bounds, load average should always be less than the number of cores available.

  • A high load means there is a processing queue. If the load exceeds, say, 1.50 on a single-core processor, it is time to upgrade or make some adjustments.

    On a multi-processor system, the load is relative to the number of processor cores available. The "100% utilization" mark is 1.00 on a single-core system, 2.00 on a dual-core, 4.00 on a quad-core, etc.

    http://blog.scoutapp.com/articles/2009/07/31/understanding-load-averages
    It's an extremely useful guide.

    @NickM said: Are we talking about for a single VPS here, or for the entire node? By itself, load average is a poor metric to base anything on - a high load doesn't necessarily mean that the CPU is maxed out. Assuming that other parameters are within normal bounds, load average should always be less than the number of cores available.

    Thanked by 1djvdorp
  • My servers all have 16 cores - so in that case then having a load of 16 (or less) would be considered healthy?

    I'm just asking to make sure that my opinions agree with everyone else's opinions.

  • @Jacob yes, I agree that guide is extremely useful.

  • Not necessarily, since if each core is processing at a load of 1.0, the machine is fully utilized. Are these 8-core machines with Hyper-Threading or 16 physical cores?

    I also recommend running htop, as it shows the percentage for each specific core and can track those annoying Perl scripts; I found two yesterday.
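Without htop, plain ps can surface the same offenders; a quick sketch:

```shell
# List the five processes using the most CPU, highest first.
# pcpu is the CPU percentage over the process's lifetime, as ps reports it.
ps -eo pid,pcpu,comm --sort=-pcpu | head -n 6
```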

    @sturdyvps said: My servers all have 16 cores - so in that case then having a load of 16 (or less) would be considered healthy?

  • @Jacob It's 8-core machines with Hyper-Threading. I've got vtop installed on all servers; I'll also look into htop.

  • @sturdyvps said: My servers all have 16 cores - so in that case then having a load of 16 (or less) would be considered healthy?

    Since it seems that we are talking about a VPS node here, I would not consider a sustained (longer than 5 minutes) load average of 16 to be healthy, unless the node is full and all of your clients are pushing the CPU hard constantly. You need some overhead to account for the fact that your customers are NOT all using a lot of CPU cycles at one time.

    Thanked by 1sturdyvps
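The sustained-load point above can be turned into a simple cron check; a sketch, where the 80% headroom threshold is my own assumption rather than anything from this thread:

```shell
#!/bin/sh
# Warn when the 5-minute load average exceeds 80% of the core count,
# leaving the overhead for customer bursts described above.
cores=$(nproc)
load5=$(cut -d' ' -f2 /proc/loadavg)
over=$(awk -v l="$load5" -v c="$cores" \
    'BEGIN { if (l > 0.8 * c) print 1; else print 0 }')
if [ "$over" -eq 1 ]; then
    echo "WARNING: 5-min load ${load5} exceeds 80% of ${cores} cores"
fi
```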
  • @NickM yes, I agree with that.

    kiloserve Member
    edited February 2012

    Server CPU load usually isn't an issue on VPS nodes.

    Typically on our 48-core nodes, we run a load of maybe 12.
    On a 24-core node, around 8 most of the time.
    A 16-core node, maybe around 4 to 6 load.

    It takes a lot of users and overselling to create a constant high load.

    You'll find that CPU load usually isn't maxed out. It's the I/O that will usually max out before the CPU starts to even break a sweat.

    If you find your VPS to be slow/unresponsive, check the IO wait times, that's usually a better indicator. The exception is if the host is using very old processors.
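One way to check that without extra tooling is to read the cumulative iowait share straight from /proc/stat (iostat -x from the sysstat package gives the richer per-device view); a minimal sketch:

```shell
#!/bin/sh
# The first line of /proc/stat holds cumulative jiffies since boot:
# cpu user nice system idle iowait irq softirq steal ...
read -r _ user nice system idle iowait rest < /proc/stat
total=$((user + nice + system + idle + iowait))
# Note this is an average since boot, not a live reading.
awk -v w="$iowait" -v t="$total" \
    'BEGIN { printf "cumulative iowait: %.2f%%\n", 100 * w / t }'
```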

    Damian Member
    edited February 2012

    A good way to observe what @kiloserve mentioned is to install your distro's sar package (part of sysstat). This gives you CPU tracking at regular intervals, every 10 minutes here. For example:

    
    11:40:01 PM       CPU     %user     %nice   %system   %iowait    %steal     %idle
    11:50:01 PM       all      1.15      0.01      1.45      0.12      0.00     97.28
    12:00:01 AM       all      3.63      0.92      1.86      3.88      0.00     89.71
    12:10:01 AM       all      1.29      0.15      1.21      0.03      0.00     97.32
    12:20:01 AM       all      5.79      0.01      1.51      3.54      0.00     89.15
    12:30:01 AM       all      5.66      0.10      1.60      3.67      0.00     88.97
    12:40:01 AM       all      5.48      0.01      1.69      4.34      0.00     88.48
    12:50:01 AM       all      5.29      0.01      1.72      4.24      0.00     88.74
    01:00:01 AM       all      5.91      0.01      1.58      3.73      0.00     88.77
    01:10:01 AM       all      4.63      1.52      1.79     12.08      0.00     79.99
    01:20:01 AM       all      4.75      1.67      1.95     10.98      0.00     80.64
    01:30:01 AM       all      4.58      3.16      1.99     10.08      0.00     80.19
    01:40:01 AM       all      3.91      0.01      1.45     13.80      0.00     80.83
    01:50:01 AM       all      4.54      0.01      1.57     13.35      0.00     80.53
    

    As you can see in the columns, %iowait starts creeping up at around 1:00 AM, because daily backups have started. Even at this % of iowait the system is still responsive, because all of the backup things that are running are being nice'd by ionice.
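The ionice treatment described above looks roughly like this; a sketch in which the tar arguments and paths are placeholder assumptions:

```shell
# Class 3 (idle) means the backup only gets disk time when no other
# process wants it; nice -n 19 keeps its CPU priority low as well.
nice -n 19 ionice -c3 tar czf /backup/daily.tar.gz /var/www
```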

    For reference, here's a listing during the daytime:

    04:30:01 PM       CPU     %user     %nice   %system   %iowait    %steal     %idle
    04:40:01 PM       all      1.52      0.01      1.46      0.15      0.00     96.86
    04:50:01 PM       all      1.08      0.01      1.28      0.15      0.00     97.48
    05:00:01 PM       all      1.03      0.01      1.06      0.08      0.00     97.83
    05:10:01 PM       all      1.25      0.02      1.29      0.25      0.00     97.19
    05:20:01 PM       all      1.34      0.01      1.34      0.17      0.00     97.15
    05:30:01 PM       all      1.21      0.05      1.17      0.21      0.00     97.36
    05:40:01 PM       all      1.27      0.01      1.07      0.08      0.00     97.57
    05:50:01 PM       all      1.15      0.01      1.27      0.10      0.00     97.47
    06:00:01 PM       all      1.17      0.02      1.06      0.13      0.00     97.63
    06:10:01 PM       all      1.26      0.11      1.32      0.22      0.00     97.08
    06:20:01 PM       all      1.35      0.01      1.18      0.21      0.00     97.26
    06:30:01 PM       all      1.16      0.04      1.19      0.14      0.00     97.47
    06:40:01 PM       all      1.14      0.01      1.13      0.08      0.00     97.64
    06:50:01 PM       all      1.14      0.01      1.31      0.14      0.00     97.40
    07:00:01 PM       all      1.09      0.01      1.06      0.09      0.00     97.75
    07:10:01 PM       all      1.15      0.01      1.31      0.13      0.00     97.40
    07:20:01 PM       all      1.04      0.01      1.03      0.05      0.00     97.87
    07:30:01 PM       all      1.14      0.04      1.16      0.12      0.00     97.53
    07:40:01 PM       all      1.08      0.01      1.09      0.08      0.00     97.74
    07:50:01 PM       all      1.85      0.01      1.97      0.33      0.00     95.84
    
    Thanked by 1Mon5t3r
  • @Jacob said: I also recommend running htop as it tells you the percentage on the specific cores

    Standard top can do the same, press 1 to toggle between showing the percentages averaged between cores, and showing the percentages per core.
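Those per-core figures come from the cpuN lines of /proc/stat; counting them is a quick way to confirm how many cores top will show after pressing 1:

```shell
# Each cpu0, cpu1, ... line in /proc/stat is one logical core.
grep -c '^cpu[0-9]' /proc/stat
```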

    Thanked by 1Amfy
  • @Damian said: As you can see in the columns, %iowait starts creeping up at around 1:00 AM, because daily backups have started. Even at this % of iowait the system is still responsive, because all of the backup things that are running are being nice'd by ionice.

    Haha, that's pretty funny, I always run my backups at like 4-5am. I wonder how many users are doing backups on that node at 1am.

  • Standard top doesn't have the colours, though. It's all about those bright colours...

    @Kuro said: Standard top can do the same, press 1 to toggle between showing the percentages averaged between cores, and showing the percentages per core.

  • @Damian said: A good way to affect what @kiloserve mentioned is to install your distro's sar package. This gives you tracking every 5 minutes. For example:

    Agreed. I always use the sysstat package for monitoring our node servers; iostat and sar are still the best options for me.

  • @Kairus said: Haha, that's pretty funny, I always run my backups at like 4-5am. I wonder how many users are doing backups on that node at 1am.

    I think that really depends on where the admin is located and what timezone the target audience is in.
