vHWINFO - Get information from your virtual (or physical) server easily
Hi, what's new?
How many times have you googled to get info about your server?
Probably many, so that's why the **vhwinfo** script sees the light.
It shows, through SSH/telnet access, info about: hostname, public IP, operating system, kernel version, virtualization, CPU, vcpus, RAM, HD and bandwidth speed.
Compatible with many OSes that have wget installed, including Debian, CentOS, MacOS, etc. Root access is not required, and getting the info is as simple as:
wget --no-check-certificate https://github.com/rafa3d/vHWINFO/raw/master/vhwinfo.sh -O - -o /dev/null|bash
So the tool is very useful, though not definitive. You can suggest improvements, or even implement them, on GitHub:
https://github.com/rafa3d/vHWINFO
Web page of the script:
https://vhwinfo.com
Well, that's all for now.
Comments
Tip: probably bump the Cachefly DL to the 100 MB or at least the 10 MB file... 1 MB is just too small to grab an accurate reading.
I'm sorry, OP. It's not accurate. On an OnApp VM (XEN PV), your script shows "is dedicated".
It's using the 1 MB file, as it downloads it every time, I believe.
What you say makes sense, but being "light" in resources and less intrusive, the results are similar.
Let's run the numbers:
cachefly 1MB:
2014-10-08 14:24:07 (89,7 MB/s)
2014-10-08 14:24:10 (98,5 MB/s)
2014-10-08 14:24:11 (98,0 MB/s)
2014-10-08 14:24:13 (94,3 MB/s)
2014-10-08 14:24:16 (97,5 MB/s)
2014-10-08 14:24:17 (97,1 MB/s)
2014-10-08 14:24:19 (96,7 MB/s)
2014-10-08 14:24:21 (93,5 MB/s)
2014-10-08 14:24:23 (97,1 MB/s)
2014-10-08 14:24:25 (74,1 MB/s)
Total average 1 MB = 93,65 MB/s
cachefly 10MB:
2014-10-08 14:31:22 (85,9 MB/s)
2014-10-08 14:31:24 (82,9 MB/s)
2014-10-08 14:31:26 (93,1 MB/s)
2014-10-08 14:31:27 (95,1 MB/s)
2014-10-08 14:31:29 (98,0 MB/s)
2014-10-08 14:31:31 (99,3 MB/s)
2014-10-08 14:31:32 (93,2 MB/s)
2014-10-08 14:31:34 (98,6 MB/s)
2014-10-08 14:31:36 (93,4 MB/s)
2014-10-08 14:31:38 (95,6 MB/s)
Total average 10 MB = 93,51 MB/s
cachefly 100MB:
2014-10-08 14:24:47 (88,7 MB/s)
2014-10-08 14:24:56 (88,7 MB/s)
2014-10-08 14:24:58 (93,8 MB/s)
2014-10-08 14:25:09 (68,6 MB/s)
2014-10-08 14:25:14 (68,4 MB/s)
2014-10-08 14:25:17 (72,2 MB/s)
2014-10-08 14:25:20 (77,0 MB/s)
2014-10-08 14:25:21 (79,1 MB/s)
2014-10-08 14:25:23 (75,4 MB/s)
2014-10-08 14:25:24 (73,3 MB/s)
Total average 100 MB = 78,52 MB/s
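As a side note, the per-file averages above can be recomputed straight from the raw wget timing lines with a short pipeline. This is only a sketch: the `/tmp/speeds.log` filename and the two embedded sample lines are illustrative, and `tr` converts the locale's decimal comma (as shown in the post) to a dot before awk averages the values.

```shell
# Recompute the average from wget timing lines like
# "2014-10-08 14:24:07 (89,7 MB/s)" (decimal comma as in the post).
printf '%s\n' \
  '2014-10-08 14:24:07 (89,7 MB/s)' \
  '2014-10-08 14:24:10 (98,5 MB/s)' > /tmp/speeds.log

tr ',' '.' < /tmp/speeds.log | awk -F'[()]' '
  { split($2, a, " "); sum += a[1]; n++ }
  END { printf "%.2f MB/s\n", sum / n }'
# prints "94.10 MB/s"
```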
Fixed. It now correctly shows XEN virtualization. Thanks DalekOfSkaro
You bet'cha!
A VirtualBox VM also shows up as dedicated, and it displays the HDD size incorrectly.
Here's the partition layout and vhwinfo output.
http://pastebin.com/AmxBSXHR
Fixed, it now correctly shows VirtualBox virtualization and the total HDD size. Thanks Makenai
@rafa have you considered having your script use dmidecode as a dependency?
It's often pretty accurate.
Why not execute the 100 MB one just once, then store that value for a while so that you don't keep running it?
I have considered the dmidecode command, but it is not available by default in OpenVZ, KVM or MacOS.
It would be nice not to require dependencies, and to stay as light and "base" as possible.
But I will keep an eye on it. Thank you for the details.
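For reference, a dependency-free hint is possible without dmidecode: on most hypervisors the guest CPU advertises a `hypervisor` flag in `/proc/cpuinfo` (and OpenVZ containers additionally expose `/proc/user_beancounters`). This is only a sketch, not vhwinfo's actual logic; `detect_virt` is a hypothetical helper that takes the cpuinfo path as a parameter so it can be tried against sample data.

```shell
#!/bin/sh
# Hypothetical helper: guess virtualization without dmidecode.
# (OpenVZ containers can also be spotted via /proc/user_beancounters.)
detect_virt() {
    cpuinfo=$1
    if grep -qw hypervisor "$cpuinfo" 2>/dev/null; then
        echo "virtual (hypervisor flag set)"
    else
        echo "dedicated or bare metal"
    fi
}

detect_virt /proc/cpuinfo
```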
I completely forgot about OpenVZ! Good point.
Right now it just runs a 1 MB Cachefly test. I think that's enough for a quick glance.
It's also important not to leave tracking or residual files on the server, to keep it 100% clean. Thanks.
But it runs every time someone logs into the server, right?
It runs every time you invoke the vhwinfo.sh script. It's not a bad idea, and it's just 1 MB; that way you take the "pulse" of your server's biorhythms.
If you mean the server where the script is hosted, on GitHub, it is probably logged as a visit to that script.
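The caching idea raised above could be sketched like this (all names are hypothetical, and `speed_test` is a stand-in for the real Cachefly download):

```shell
#!/bin/sh
# Cache the bandwidth result and reuse it for up to MAX_AGE_MIN
# minutes, so repeated logins do not re-download the test file.
CACHE=/tmp/vhwinfo_speed.cache
MAX_AGE_MIN=60

speed_test() {
    # Placeholder for the real Cachefly wget; returns a fixed value here.
    echo "93.65 MB/s"
}

cached_speed() {
    if [ -f "$CACHE" ] && [ -n "$(find "$CACHE" -mmin -"$MAX_AGE_MIN" 2>/dev/null)" ]; then
        cat "$CACHE"                  # fresh enough: reuse the stored value
    else
        speed_test | tee "$CACHE"     # stale or missing: measure and store
    fi
}

cached_speed
```

This keeps the "light" behavior discussed above while still refreshing the number once the cache goes stale.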
Seems like a good piece of code
The HD figure is not very accurate: I have 2x 500 GB while it's showing only 86 GB.
edit: Better to change the Cachefly test from the 1 MB to the 100 MB file.
Umm... the script needs to be more specific when there is more than one hard disk, or when the partitioning is intricate. You are right. Can you paste a "df -h"?
About changing Cachefly to 100 MB: as I said, it's just "a glance". Think of those of us with low bandwidth, around 300 KB/s; it can be a tedious wait each time. Thanks for your opinion.
Here you go :
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root 50G 27G 21G 57% /
tmpfs 16G 0 16G 0% /dev/shm
/dev/sda1 485M 62M 398M 14% /boot
/dev/mapper/VolGroup-home 20G 5.5G 14G 30% /home
The rest of the GBs are un-partitioned, for SolusVM KVM LVM. It seems that the script is fetching its info from df -h.
Yes, right. I think the way forward goes through "fdisk -l" and working from its output.
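Summing per-disk totals from `fdisk -l` could look roughly like this. A sketch only: the sample output mimics the era's util-linux fdisk header format, and the byte counts are illustrative.

```shell
# fdisk -l prints one header per disk, e.g.
#   Disk /dev/sda: 500.1 GB, 500107862016 bytes
# Sum the byte field ($5) across all disks.
sample='Disk /dev/sda: 500.1 GB, 500107862016 bytes
Disk /dev/sdb: 500.1 GB, 500107862016 bytes'

echo "$sample" | awk '/^Disk \/dev\//{ sum += $5 }
  END { printf "total: %.1f GB\n", sum / 1e9 }'
# prints "total: 1000.2 GB"
```

Unlike df -h, this would count un-partitioned space too, which is exactly the case reported above.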
Nice work. I put it in my .bashrc (where I use Bash and not something else); works OK.
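If you do put it in .bashrc like that, it may be worth guarding it so it only fires in interactive shells; otherwise scp/sftp and other non-interactive logins can misbehave. A sketch, reusing the wget one-liner from the opening post:

```shell
# ~/.bashrc -- run vhwinfo only for interactive logins
case $- in
  *i*)
    wget --no-check-certificate \
      https://github.com/rafa3d/vHWINFO/raw/master/vhwinfo.sh \
      -O - -o /dev/null | bash
    ;;
esac
```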
maybe a windows version too?
Great, thanks. It's just compiling searches, commands and tips all-in-one. Time saving.
Umm... interesting. I will put it on the roadmap. Anyway, it seems that Windows users are more aware of the machine they have. Thanks
+1 for this
BGInfo : http://technet.microsoft.com/en-us/sysinternals/bb897557.aspx
yeah, that's what I currently use for templates, but it's not exactly reliable.
Information shown on a MacOS
hell yeah processor from 2006
[vHWINFO ASCII-art banner]
vHWINFO 1.0 Oct 2014 | https://vhwinfo.com
hostname: robin-MS-7366. (public xxxxxxx)
SO: Ubuntu 14.04.1 LTS 64 bits
kernel: 3.13.0-37-generic
virtual: It is not virtual, is dedicated
cpu: Pentium(R) Dual-Core CPU E5200 @ 2.50GHz
vcpu: 2 cores / 5251.54 bogomips
RAM: 3952 MB (43% used) / swap 4093 MB (0% used)
HD: 932G (3% used)
cachefly 1MB: 1,27 MB/s
robin@robin-MS-7366:~$ ^C
On MacOS, the wget command is usually not installed, but Mac OS X 10.9 Mavericks comes with curl, which does almost the same:
curl -s https://vhwinfo.com/vhwinfo.sh | bash
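To cover both worlds, a small wrapper could pick whichever downloader exists. A sketch only: `fetch` is a hypothetical helper, and both invocations stream the script to stdout so it can be piped into bash.

```shell
#!/bin/sh
# Use wget when present, otherwise fall back to curl.
fetch() {
    url=$1
    if command -v wget >/dev/null 2>&1; then
        wget --no-check-certificate "$url" -O - -o /dev/null
    else
        curl -fsSL "$url"
    fi
}
# usage: fetch https://vhwinfo.com/vhwinfo.sh | bash
```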