Observium optimization


gsrdgrdghd Member
edited January 2013 in Help

Hello,
I've recently set up Observium to monitor my servers, but it's kinda slow: the graphs load one by one and slowly appear on the pages.

It's running on a 128/256MB BuyVM with apache2, PHP 5.4 + APC and MySQL. The server also runs Smokeping, with RAM usage currently around 140MB.

Are there any good ways to optimize the setup and make Observium load faster? I know rrdcached is supposed to help, but I can't get it to work (and it takes up lots of RAM).
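
For reference, a minimal low-memory rrdcached setup usually looks something like this (all paths here are assumptions — adjust them to your actual install):

```
# Sketch of a low-memory rrdcached setup (paths are assumptions):
# -w 1800  write each RRD to disk at most every 30 minutes
# -z 900   spread flushes with up to 15 minutes of jitter
# -j ...   journal directory, so queued updates survive a crash
# -b/-B    restrict the daemon to the RRD base directory
rrdcached -l unix:/var/run/rrdcached.sock \
  -w 1800 -z 900 \
  -j /var/lib/rrdcached/journal \
  -b /opt/observium/rrd -B
```

Observium would then be pointed at the socket via `$config['rrdcached'] = "unix:/var/run/rrdcached.sock";` in config.php. The large `-w`/`-z` values are what save RAM and I/O: updates are batched in memory instead of hitting every RRD file every 5 minutes.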

Comments

  • I think it's most likely the RAM and CPU. We're running it on a 1GB VM with no problems, after experiencing similar issues on a smaller VM.

  • I am running it on Prometeus 256MB SSD Plan and it is smooth so far.
    Monitoring 8 servers, graphs are loading fast.

  • @MartinD said: I think it's most likely the RAM and CPU.

    Load and I/O are fine, so I don't think that's the issue. Apache is set to 5 MaxClients, and that could become a bottleneck when opening a page with 50 graphs plus JS and CSS, since each of these files generates an HTTP request.
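
    For a 50-request page on a 256MB box, the usual levers are a slightly higher MaxClients plus KeepAlive, so one worker can serve many graph requests over a single connection. A sketch for the prefork MPM — the numbers are guesses and should be tuned against actual per-process memory:

    ```
    # Sketch for a ~256MB box (values are assumptions, tune to your RAM)
    <IfModule mpm_prefork_module>
        StartServers          2
        MinSpareServers       2
        MaxSpareServers       4
        MaxClients            8
        MaxRequestsPerChild 500
    </IfModule>

    # Reuse connections so graph requests don't each need a new worker slot
    KeepAlive            On
    MaxKeepAliveRequests 100
    KeepAliveTimeout     2
    ```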

  • Adduc Member
    edited January 2013

    If you'd be willing to switch to Nginx, it'd likely reduce the overhead (both memory and I/O) enough to speed things up.

    What's the memory usage at with Apache set to a larger MaxClients count?

    DevOp based out of Chicago. Somewhat knowledgeable about PHP.
  • gsrdgrdghd Member
    edited January 2013

    @Zen said: You're essentially already in swap, which isn't great for running a website - especially not something quite so heavy

    Well, it's only for me and I only use it every couple of days, so that shouldn't be too much of a problem.

    I don't think I can switch to Nginx, because that might break Smokeping, which is also running on the server.

    @Adduc said: What's the memory usage at with Apache set to a larger MaxClients count?

    In the >256MB region.

    Anyway, I'm having another problem:

    http://i.imgur.com/q0iqvYR.png (why can't we even use IMG tags anymore?)

    My graphs have gaps. Take, for example, the one right before 16:00. The cronjobs that pull the data are supposed to run every 5 minutes. Some analysis revealed that:

    - The cronjob started at 15:50 finished at 16:03
    - The cronjob started at 15:55 finished at 15:56
    - The cronjob started at 16:00 finished at 16:13
    - The cronjob started at 16:05 finished at 16:06

    Normally they complete in under 2 minutes. What could be the reason they sometimes take so long?
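
    One thing worth ruling out is overlapping runs: when a poller run takes longer than 5 minutes, the next one starts on top of it and both slow down. Wrapping the cron entry in flock makes a run skip instead of stacking up. A sketch, assuming Observium lives in /opt/observium (path and script name are assumptions):

    ```
    # /etc/cron.d sketch: flock -n skips this run if the previous
    # poller still holds the lock, so slow runs can't pile up
    */5 * * * * root flock -n /tmp/observium-poller.lock /opt/observium/poller.php -h all >> /dev/null 2>&1
    ```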

  • The gaps are because it can't connect to the server. I would say get a better VPS!

  • @Spencer said: The gaps are because it can't connect to the server.

    Actually, it can connect to the server (and also stores the data in the database); I've got the logs of all the cronjobs. The problem is that they sometimes take 10+ minutes to finish, completely ruining the timing.

  • You might want to read my tutorial on optimizing Munin: https://raymii.org/s/tutorials/Munin_optimalization_on_Debian.html

    The tricks with tmpfs, rrdcached and nice can be applied to any RRD-based monitoring system. Those optimizations make Munin run perfectly on my Raspberry Pi, with an average load of about 0.8, and that's because of Tor.
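
    The tmpfs trick from that tutorial boils down to keeping the RRDs in RAM and periodically syncing them back to disk, which turns thousands of small random writes into one sequential copy. A sketch (paths and sizes are assumptions):

    ```
    # /etc/fstab sketch: keep the RRDs on a RAM-backed filesystem
    tmpfs /opt/observium/rrd tmpfs size=64m,mode=0755 0 0

    # cron sketch: copy the RRDs back to disk every hour, so a reboot
    # loses at most an hour of data (restore them at boot the same way)
    0 * * * * root rsync -a /opt/observium/rrd/ /var/backups/observium-rrd/
    ```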

    Quis custodiet ipsos custodes?
    https://raymii.org