
latest proxmox kvm + observium on latest debian kernel panic?

emreemre Member, LIR
edited May 2014 in Help

--- Update --- Never mind, this problem is not related to Proxmox or Observium. ---

Hey,

I've set up my Observium KVM guest on a fully up-to-date Proxmox 3.2 server.

The Debian KVM guest is running the latest Wheezy, 7.5.

I've tried this on 2 different Proxmox nodes. Whenever the Observium KVM guest is active, it randomly crashes the whole server with a kernel panic.

The nodes are different: the 1st one is an AMD 8-core CPU + Gigabyte board + 16 GB RAM.

The 2nd one is an Intel C229 board with a Xeon 1225v3 and 24 GB RAM.

Does anyone have any experience with something like this?

An Observium KVM Debian guest crashing the whole Proxmox node...

Comments

  • AlexanderMAlexanderM Member, Top Host, Host Rep

    What do the logs in /var/log/messages say?
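
    For reference, a couple of commands along these lines (paths assume stock Debian Wheezy logging, adjust as needed):

    # scan the syslog and kernel log for panic / oops traces
    grep -iE 'panic|oops|bug' /var/log/messages /var/log/kern.log
    # and watch the kernel ring buffer while trying to reproduce it
    dmesg | tail -n 50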

  • emreemre Member, LIR

    Hey, I tested again and again, and it looks like this is not about the Observium KVM at all. So basically this post is useless now. Sorry.

  • MaouniqueMaounique Host Rep, Veteran

    Well, please don't forget to tell us what was wrong; I am using Wheezy with Proxmox on quite a few machines.

  • rds100rds100 Member

    Update the proxmox?
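
    In case it helps, the usual way on PVE 3.x is a plain dist-upgrade; a rough sketch assuming the no-subscription repo (use the enterprise repo instead if you have a subscription):

    # repo line, e.g. in /etc/apt/sources.list.d/pve.list (no-subscription)
    # deb http://download.proxmox.com/debian wheezy pve-no-subscription
    apt-get update && apt-get dist-upgrade
    # reboot afterwards so the updated pve kernel is actually the running one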

  • emreemre Member, LIR

    There is something going on with this server. Whenever I boot my Observium KVM guest, the server kernel panics. But interestingly, one time the server panicked again without anything running.

    I ran memtest for 3 hours (twice); no errors on the memory.

    This is interesting stuff. The server is a Dell tower, all Dell parts. Whether this is something about the latest Proxmox kernel, I don't know yet.

    I've got 1 more identical Dell server waiting for disks. When my disks are ready I will check again with this identical server.

    for the record:

    root@vm14:~# pveversion -v
    proxmox-ve-2.6.32: 3.2-126 (running kernel: 2.6.32-29-pve)
    pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
    pve-kernel-2.6.32-29-pve: 2.6.32-126
    lvm2: 2.02.98-pve4
    clvm: 2.02.98-pve4
    corosync-pve: 1.4.5-1
    openais-pve: 1.1.4-3
    libqb0: 0.11.1-2
    redhat-cluster-pve: 3.2.0-2
    resource-agents-pve: 3.9.2-4
    fence-agents-pve: 4.0.5-1
    pve-cluster: 3.0-12
    qemu-server: 3.1-16
    pve-firmware: 1.1-3
    libpve-common-perl: 3.0-18
    libpve-access-control: 3.0-11
    libpve-storage-perl: 3.0-19
    pve-libspice-server1: 0.12.4-3
    vncterm: 1.1-6
    vzctl: 4.0-1pve5
    vzprocps: 2.0.11-2
    vzquota: 3.1-2
    pve-qemu-kvm: 1.7-8
    ksm-control-daemon: 1.1-1
    glusterfs-client: 3.4.2-1
    root@vm14:~#
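
    Since a panic that takes the whole node down usually never reaches the on-disk logs, one option is to ship the kernel console to another box with netconsole before reproducing; a rough sketch, with made-up IPs and interface names that you would substitute with your own:

    # on the crashing node: send kernel messages over UDP to 192.168.1.10:6666 via eth0
    modprobe netconsole netconsole=@192.168.1.5/eth0,6666@192.168.1.10/
    # on the receiving box: collect them to a file (netcat flags vary by flavour)
    nc -u -l -p 6666 | tee netconsole.log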

  • emreemre Member, LIR
    edited May 2014

    4x 1TB Seagate Enterprise drives in software RAID

    root@vm14:~# pveperf

    CPU BOGOMIPS: 25542.04
    REGEX/SECOND: 2700779
    HD SIZE: 19.69 GB (/dev/mapper/pve-root)
    BUFFERED READS: 271.95 MB/sec
    AVERAGE SEEK TIME: 7.64 ms
    FSYNCS/SECOND: 1022.60
    DNS EXT: 90.53 ms
    DNS INT: 0.77 ms (medyabim.com)
    root@vm14:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 3.88457 s, 276 MB/s
    root@vm14:~#

  • MaouniqueMaounique Host Rep, Veteran
    edited May 2014

    I also had a weird problem with some AMD servers which refused to run XenServer in dual mode (PV and HVM at the same time). They simply rebooted; it was a problem with XCP 1.6 I think, and XenServer proper had the same issue. It might be a weird combination: my servers simply rebooted, and the crash was so severe that nothing was written to disk, so no dump at all, it simply died. PV or HVM alone ran fine, but not both at the same time. It also didn't happen immediately at boot, only 10-20 minutes after starting the second type of Xen guest (as long as there were only PV or only HVM guests, it was stable).
    Lost 2 weeks troubleshooting to no avail, so I separated the VMs onto different servers in the end :P
