PHP Query + Page-Load.

eastonch Member
edited January 2013 in General

Hi there,

Working with a client who's after some remote-server monitoring: essentially a web page that fetches the server load, uptime, etc. from each remote location and shows it on the main page.

But they don't just want a list of servers; they also want visitors to be able to search for a node and get its current status.

Now I've thought of a couple of ways to do this. One is gathering the stats every 5 minutes via cron scripts and storing the results in a MySQL database. Then I thought of just retrieving the output of a "status.php" on each remote node; something that displays, in simple form, "$UPTIME \n $LOAD", which I can explode into an array and handle that way.

I'm just wondering: at what point would 'file_get_contents' start to lag? I mean, is it safe to assume that fetching a ~2KB .php page reporting the load average and uptime from ~50 servers at once would be acceptable without causing too much of a problem? Obviously the page needs to show little to no decrease in speed compared to the rest of the site.
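The explode-a-two-line-response idea could look something like this; a minimal sketch, where `parse_status` and the example hostname are made up for illustration:

```php
<?php
// Sketch of the "status.php" approach: each node serves a tiny page
// printing "$UPTIME\n$LOAD", and the main site splits the response.

// Turn the two-line response into a keyed array.
function parse_status(string $raw): array
{
    list($uptime, $load) = explode("\n", trim($raw), 2);
    return ['uptime' => $uptime, 'load' => (float) $load];
}

// On the monitoring side (node.example.com is a placeholder):
// $raw = file_get_contents('http://node.example.com/status.php');
// $status = parse_status($raw);
```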

Would anybody have any suggestions?

Security Consultant

Comments

  • The problem comes when a remote host is under load or isn't responding: each such request has to time out before the PHP loop can continue, so you may hit the max execution time on the server doing the monitoring.

    The monitoring server wouldn't push out all the HTTP requests at the same time; it would do them one at a time, one after another.

    I would have the script fetch JSON or XML instead of a file with \n.

    Also, I would use cURL instead of file_get_contents.

    I could be recommending the wrong things; I haven't touched PHP for a year and I'm not an expert. Someone more well versed in PHP would be able to provide extra help on this.

  • I would fetch the data in the background and store it somehow. Doesn't necessarily need to be a mysql database, but making 50 requests to remote servers is going to take quite some time, and the page WILL be slow. And since you can't guarantee that any single server is going to actually reply quickly (what if the server is overloaded?), you should probably look into some kind of threading model, so that you can still get updates from the other servers if one is taking too long to respond.
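    One slow host stalling the rest is exactly what `curl_multi` avoids: it issues the requests in parallel with a per-host timeout. A rough sketch; `fetch_all`, the URL list, and the 3-second timeout are all illustrative values, not anything from this thread:

    ```php
    <?php
    // Fetch several status URLs in parallel; slow or dead hosts come
    // back as false instead of serialising the whole run.
    function fetch_all(array $urls, int $timeout = 3): array
    {
        $mh = curl_multi_init();
        $handles = [];
        foreach ($urls as $name => $url) {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
            curl_setopt($ch, CURLOPT_TIMEOUT, $timeout);
            curl_multi_add_handle($mh, $ch);
            $handles[$name] = $ch;
        }
        do {
            $status = curl_multi_exec($mh, $running);
            if ($running) {
                curl_multi_select($mh); // wait for activity instead of busy-looping
            }
        } while ($running && $status === CURLM_OK);
        $results = [];
        foreach ($handles as $name => $ch) {
            // false for hosts that timed out or refused the connection
            $results[$name] = curl_errno($ch) ? false : curl_multi_getcontent($ch);
            curl_multi_remove_handle($mh, $ch);
            curl_close($ch);
        }
        curl_multi_close($mh);
        return $results;
    }
    ```

    This needs the standard curl extension; the page itself would then just render whatever the last completed run stored.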

  • Guess I could just make it write to a DB every five minutes, either replacing the stats each time or keeping a log and outputting the most recent entry.

    Seems like a good system; it could also pull misc data for graphing in the future (such as a latency test, active processes, disk usage, etc.).

    I'm just scared I'll make this waaay too messy and it won't be modular, the way you'd want it to be.


  • Get the remote monitored servers to generate a remotely accessible XML file with the stats, write a small cronjob on the master server to retrieve the XML file every * minutes for processing locally and then SimpleXML will be your best friend.
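    With that layout, the cron job on the master boils down to a fetch plus a SimpleXML parse. A sketch, with an invented XML layout; the real one is whatever the slaves actually generate:

    ```php
    <?php
    // Example XML a monitored node might publish (structure is invented):
    $xmlString = '<status><uptime>86400</uptime><load>0.42</load></status>';

    // In the cron job the string would come from the node instead, e.g.:
    // $xmlString = file_get_contents('http://node.example.com/status.xml');

    $xml    = simplexml_load_string($xmlString);
    $uptime = (int) $xml->uptime;   // seconds of uptime
    $load   = (float) $xml->load;   // 1-minute load average
    ```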

    Need to reach me quickly? Ping me on Discord

  • You could even cache the page for 30 seconds to make sure it's not getting pounded with uptime/status requests
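    A file-based version of that cache might look like this; `cached_status` and the 30-second TTL are illustrative, not a fixed recipe:

    ```php
    <?php
    // Serve a stored copy of the status page unless it is older than
    // $ttl seconds; only then re-run the expensive node polling.
    function cached_status(string $cacheFile, int $ttl, callable $build): string
    {
        clearstatcache(true, $cacheFile);
        if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
            return file_get_contents($cacheFile); // fresh enough, reuse it
        }
        $page = $build();                         // expensive: polls the nodes
        file_put_contents($cacheFile, $page);
        return $page;
    }
    ```

    With a 30-second TTL, at most one visitor every 30 seconds pays the cost of polling the nodes; everyone else gets the cached copy.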

  • Use @Raymii's https://raymii.org/s/software/Bash_PHP_Server_Status_Monitor.html and just add the search feature you need.

  • @zen you mean, make the slave write to the master? Seems a bit odd; logically you'd expect the master to make requests and retrieve the data from the slave.


  • RaymiiRaymii Member
    edited January 2013

    @eastonch said: you mean, make the slave write to the master? Seems a bit odd; logically you'd expect the master to make requests and retrieve the data from the slave.

    Nagios can work both ways. With NSCA the clients submit check results to the Nagios server, and with NRPE (or check_by_ssh) you can let the Nagios server actively check hosts and host services.

    If you use NSCA checks, make sure you set up the freshness check; that way you'll be notified if a check is not submitted.

    Using NSCA you can also set up distributed Nagios: a few Nagios servers do the actual checking and submit the results to the Nagios master via NSCA. This also reduces the load on the master.

    Quis custodiet ipsos custodes?
    https://raymii.org - https://cipherli.st
  • Take a look at http://status.nodedeploy.com/

    This is our custom system; it pulls the data from our servers as JSON, example below:
    http://helios.nodedeploy.com/?json=true

    It then stores everything in a database so we can pull history, etc., and graphs, which we currently have in our internal monitoring system.
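    Consuming an endpoint like that is mostly `json_decode` plus an INSERT. A sketch; the field names and table layout below are guesses for illustration, not NodeDeploy's actual schema:

    ```php
    <?php
    // Stand-in for the HTTP fetch; the real call would be
    // json_decode(file_get_contents('http://helios.nodedeploy.com/?json=true'), true);
    $raw  = '{"uptime": 86400, "load": 0.42}';
    $data = json_decode($raw, true);

    // Storing a history row (hypothetical table/columns), e.g. with PDO:
    // $pdo->prepare('INSERT INTO node_history (uptime, load_avg, polled_at)
    //                VALUES (?, ?, NOW())')
    //     ->execute([$data['uptime'], $data['load']]);
    ```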

    ---- NOT WITH NODEDEPLOY ANYMORE ----
