
HP ProLiant Gen8 and Debian/Proxmox issue!

cociu Member
edited January 2019 in General

Hello guys,

I'll spend a few minutes explaining a strange issue we had over the last month; it was really difficult to find the resolution. Thanks to Miguel from this forum, we have at least sorted it out.
Maybe some of you have had a similar problem, so this is the solution we found:

So, my personal case.

The server config:

2x E5-2450L
384 GB RAM
14x 10 TB HDD
OS: any version of Proxmox 4.x or later; in my case 5.3 and 4.4

When we created more than 10 VMs on these nodes, the load went up like crazy (compared with the old G6 servers: with the same number of clients, the G8 server cannot handle even half of them).

Some of the storage nodes had a load of 200+ at peak time.

So the solution was this:

1) Change the disk images to raw format (thanks to @MikePT); see the sketch below.
2) You must upgrade all firmware on the bare metal, including the P420 controller; otherwise, after the first change, the server will reboot every few hours due to a mis-reported temperature in the PCI slot.
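
For step 1, a minimal sketch of the conversion, assuming the images are qcow2 files on the default directory storage (the paths and the VM ID 101 are just examples, adjust to your setup):

    # Convert an existing qcow2 image to a raw file
    qemu-img convert -p -O raw /var/lib/vz/images/101/vm-101-disk-0.qcow2 /var/lib/vz/images/101/vm-101-disk-0.raw
    # Then point the guest config at the raw file (verify with "qm config 101") and remove the old qcow2 once it boots.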

I have read a lot on proxmox.com these days, and it seems a huge number of people are complaining about the same thing, but I have tested almost all the options and this is the correct way for me.

To mention:

the same issue occurs on these server models:

HP DL360e G8
HP DL360p G8
HP DL380e G8
HP DL380p G8

I will also test on a DL160 tomorrow to make this post complete.

Thanked by: pike, MikePT

Comments

  • FHR Member, Host Rep
    edited January 2019

    How are you using the P420i controller? Which RAID? Or did you switch it to HBA mode? What are your caching settings?
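
    If you are not sure, something like this will dump the controller, cache and logical drive settings (a sketch only, assuming HPE's ssacli utility is installed; on older systems the tool is called hpssacli):

        # Controller, cache ratio and logical drive configuration for the P420i
        ssacli ctrl all show config detail
        # Cache and battery/capacitor status at a glance
        ssacli ctrl all show status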

    I have no such issues with pure QEMU/libvirt on an HP DL360p G8 (not using Proxmox).

  • Don't forget to update your NIC ROMs and support drivers while you're at it. https://support.hpe.com/hpesc/public/home/driverHome?sp4ts.oid=5317159
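
    To check what the NICs currently report before and after, a small sketch (the interface name eno1 is just an example):

        # Driver and firmware version for a given interface
        ethtool -i eno1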

    Thanked by: cociu
  • FHR said: How are you using the P420i controller? Which RAID? Or did you switch it to HBA mode? What are your caching settings?

    Any RAID level causes the same problems; I have tested 0, 1, 5 and 10, and it is the same.

  • FHR said: (not using Proxmox)

    This happens only with Debian and Proxmox installations; CentOS runs perfectly, for example.

  • I am frustrated anyway, because we have populated more than 50 nodes... so fixing all of them will be hard work...

  • Buy a Proxmox support ticket for this. It would be cheaper than dealing with crashes and complaints across those 50 nodes. Possibly a bug with Proxmox on Debian.

    Thanked by: cociu
  • With my DL360 G8 I also had two issues:

    • VT-x / Intel virtualization wasn't enabled in the BIOS, and SpeedStep slowed all my VMs down

    • I once had a bad RAID cache and battery, and the entire Proxmox node went slow when adding a new VM. Did you configure RAID with or without RAID-card caching? I'd try to just use it as an HBA and let Debian do the RAID config (a quick check for both is sketched below)
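
    A quick way to verify those two things from the node (only a sketch; the second command assumes HPE's ssacli tool is installed):

        # Non-zero output means VT-x/AMD-V is exposed to the OS (it still has to be enabled in the BIOS for KVM)
        grep -cE 'vmx|svm' /proc/cpuinfo
        # Cache module and battery/capacitor health on the Smart Array controller
        ssacli ctrl all show status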

  • LTniger said: Buy a Proxmox support ticket for this. It would be cheaper than dealing with crashes and complaints across those 50 nodes. Possibly a bug with Proxmox on Debian.

    I paid for 2 years of Proxmox support, and in the end we always resolved our issues ourselves. Anyway, if you take a look at the Proxmox forum you will see it is full of this problem; at least 10 versions have been released and they do not seem to have bothered to fix it.

  • cociu Member
    edited January 2019

    FoxelVox said: I once had a bad RAID cache and battery, and the entire Proxmox node went slow when adding a new VM. Did you configure RAID with or without RAID-card caching? I'd try to just use it as an HBA and let Debian do the RAID config

    We will update the NIC ROMs today at 23:00 Romanian time, as @LTniger suggested, because I also see huge temperatures in the iLO event log, so to remove all possible issues I will do this as well.

    Once it is all finished, if you want, I can create a guide on how to do all of this.

    I repeat, this sucks for Proxmox, because the whole Proxmox forum is full of HP problems related to this issue. I spent 3 weeks reading and testing all the solutions from there, and none of them worked for us.

  • FoxelVox said: FoxelVox

    here is the high temperature: https://ufile.io/o9l0o
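
    For anyone who wants to read those temperatures from the OS instead of the iLO page, a small sketch (assumes ipmitool is installed and the kernel IPMI drivers are loaded):

        # Dump all temperature sensors exposed by the iLO/BMC
        ipmitool sdr type Temperature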

  • letbox Member, Patron Provider

    @cociu said:

    FoxelVox said: FoxelVox

    here is the high temperature: https://ufile.io/o9l0o

    Does your issue occur with KVM or LXC? Or did you mix both on your server?

  • key900 said: Does your issue occur with KVM or LXC? Or did you mix both on your server?

    I sell only LXC in these offers, so I never tried with KVM... interesting to know anyway.

  • letbox Member, Patron Provider

    @cociu said:

    key900 said: Does your issue occur with KVM or LXC? Or did you mix both on your server?

    I sell only LXC in these offers, so I never tried with KVM... interesting to know anyway.

    Don't use LXC with Proxmox; your server will keep breaking, every 3 days at most.

  • cociu Member
    edited January 2019

    key900 said: Don't use LXC with Proxmox; your server will keep breaking, every 3 days at most.

    This is resolved if you follow this thread. Did you have this problem on an HP server? If yes, please read my whole explanation; you will see it can be resolved.

  • letbox Member, Patron Provider

    @cociu said:

    key900 said: Don't use LXC with Proxmox; your server will keep breaking, every 3 days at most.

    This is resolved if you follow this thread. Did you have this problem on an HP server? If yes, please read my whole explanation; you will see it can be resolved.

    How do you figure this is fixed? It could come back at any time; LXC is really buggy and not ready for production.

    Note: I don't have any HP servers; I'm using Supermicro.

  • key900 said: Note: I don't have any HP servers; I'm using Supermicro

    In that case it is not the same problem.

  • FoxelVox Member
    edited January 2019

    @cociu said:

    FoxelVox said: FoxelVox

    here is the high temperature: https://ufile.io/o9l0o

    This was also the case with a functioning node of mine; here it is 85°C, but it doesn't affect this issue, since that node was running 40 VMs fine back then (some hard-hitting VMs too, same CPUs as you).

    See:

  • FoxelVox said: FoxelVox

    Do these 2 steps:
    1) Change the disk images to raw format (thanks to @MikePT).
    2) Upgrade all firmware on the bare metal, including the P420 controller; otherwise, after the first change, the server will reboot every few hours due to a mis-reported temperature in the PCI slot.

    After this, as a bonus, the temperature issue is fixed by the firmware update.

    Thanked by: vpsGOD, MikePT
  • @cociu said:

    FoxelVox said: FoxelVox

    Do these 2 steps:
    1) Change the disk images to raw format (thanks to @MikePT).
    2) Upgrade all firmware on the bare metal, including the P420 controller; otherwise, after the first change, the server will reboot every few hours due to a mis-reported temperature in the PCI slot.

    After this, as a bonus, the temperature issue is fixed by the firmware update.

    Thanks for the tip. Unfortunately I don't have any of these servers anymore, since I'm now an employee at a managed hosting company instead of running my own company :hushed:

    It will be very helpful to other people who might google this in the future, though, friend.

  • HaendlerIT Member, Host Rep

    We also use DL360p Gen8 servers and I never noticed any problems, but we are only using KVM, so it's possible that we would get the same problems with LXC.

  • FoxelVox said: FoxelVox

    After the firmware update the controller temperature is at 50°C, so I strongly suggest an update.

  • AnthonySmith Member, Patron Provider

    As I said to you, don't use HP RAID controllers in HP servers, especially old ones and especially the 4xx series. Even HP does not use HP RAID controllers in production. Swap one out for an LSI and try again.

    Thanked by: vimalware
  • For your information, Proxmox does not support hardware RAID. Kick out the RAID and go with ZFS instead.
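
    If you go that route, the usual pattern is to put the controller in HBA mode (or use a plain HBA) so ZFS sees the raw disks. A minimal sketch only; the disk names and pool layout are examples:

        # Mirrored pool of two whole disks, 4K sector alignment; add more vdevs for a 14-drive box
        zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc
        zpool status tank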

    Thanked by: vimalware
  • FHR Member, Host Rep

    @AnthonySmith said:
    As I said to you, don't use HP RAID controllers in HP servers, especially old ones and especially the 4xx series. Even HP does not use HP RAID controllers in production. Swap one out for an LSI and try again.

    I use a P420i with SSDs in RAID 5; it works fine.
    With that said, there are definitely issues with the controller: low performance if switched to HBA mode, and disabling SmartPath actually improves IO performance.
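
    For anyone who wants to try that, SSD Smart Path is toggled per array (a sketch only, assuming ssacli and a controller in slot 0; check your slot with "ssacli ctrl all show"):

        # Disable SSD Smart Path on array A of the slot-0 controller
        ssacli ctrl slot=0 array A modify ssdsmartpath=disable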

  • AnthonySmith said: As I said to you, don't use HP RAID controllers in HP servers, especially old ones and especially the 4xx series. Even HP does not use HP RAID controllers in production. Swap one out for an LSI and try again.

    Sometimes I am stubborn, so I am still trying my solution. But yes, in the end I will buy a lot of LSI controllers... too many problems with this shit.

  • AnthonySmith Member, Patron Provider
    edited January 2019

    FHR said: I use a P420i with SSDs in RAID 5; it works fine.
    With that said, there are definitely issues with the controller: low performance if switched to HBA mode, and disabling SmartPath actually improves IO performance.

    I worked at HP doing DC/service roll-outs on a scale which most at LET could not even comprehend. I promise you, even HP does not trust its own budget RAID controllers in production; it is not a case of 'if' they fail, it is 'when'.

    I went against my own advice about 9 years ago, thinking it was just HP's engineering department being fussy, and ran 3 servers with P410s; every single one simultaneously kicked out 3 disks from a RAID 10 array within 24 months. I know for sure I am far from alone in this sort of experience, even in this little market segment.

    You can use cheap shit if you want, but please don't trust it. :)

    Thanked by: FHR, cociu, vimalware
  • AnthonySmith said: You can use cheap shit if you want, but please don't trust it.

    Good advice, really, thanks. I will look at buying some LSI controllers, but unfortunately the already-installed nodes will stay as they are, because there is a huge amount of data and moving it all would take months...

  • Anyway, for everyone else, these are the steps you need to do to avoid the high load + unexpected reboots :smile:

    1) Update all firmware; when I say all, it is ALL.

    2) Change the LVM disks to raw.

    This is the solution for LXC containers; KVM seems to be OK with the default settings, as other customers from here have said. A short checklist of the commands involved is sketched below.
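
    A rough checklist of where to look, under the same assumptions as earlier in the thread (HPE tools installed, default directory storage, example IDs and paths):

        # Current system ROM / BIOS level, to compare against the latest SPP
        dmidecode -s bios-version
        # Smart Array (P420) controller firmware level
        ssacli ctrl all show detail | grep -i firmware
        # Convert a remaining qcow2 image to a raw file
        qemu-img convert -p -O raw /var/lib/vz/images/101/vm-101-disk-0.qcow2 /var/lib/vz/images/101/vm-101-disk-0.raw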

  • @cociu said:
    Anyway, for everyone else, these are the steps you need to do to avoid the high load + unexpected reboots :smile:

    1) Update all firmware; when I say all, it is ALL.

    2) Change the LVM disks to raw.

    This is the solution for LXC containers; KVM seems to be OK with the default settings, as other customers from here have said.

    Can you repair your home panel? None of the systems can be installed. The dedicated server has always been unusable.

  • zhaoxi said: Can you repair your home panel? None of the systems can be installed. The dedicated server has always been unusable.

    Dedicated servers work perfectly if you know what you bought. On G6 servers, due to the old kernel, auto-provisioning may not work for newer systems like Proxmox or Debian 8/9; you need iLO and a manual install. If you have any problem related to this, please open a ticket.
