Need SolusVM Help

lele0108 Member
edited March 2012 in General

Today, after around three weeks of waiting, I finally got my dedicated server (2x Intel L5420, 8 GB ECC RAM). I installed SolusVM on it, and everything went well until I created a VM.

The VM refused to work, no matter which template I used or how many times I restarted it. I rebooted the server, created a new VM, and it started working. Yay.

I then downloaded OpenVZ templates from the OpenVZ site. When I try to run these templates, the VM REFUSES to start.

Can anybody help me? (I'm even willing to give admin details if somebody is that awesome!)

Comments

  • I tried starting it manually over SSH and got:

    Adding IP address(es): 205.134.xxx.xx 205.134.xxx.xx
    Setting CPU limit: 400
    Setting CPU units: 1000
    Setting CPUs: 4
    Unable to start init, probably incorrect template
    Container start failed
    Stopping container ...
    Container was stopped
    Can't umount /vz/root/101: Device or resource busy

    That's weird. I got these directly from the OpenVZ wiki.
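
    For anyone who lands here with the same "Unable to start init" error: one quick sanity check (a sketch only, using CTID 101 from the log above and assuming the default /vz/private layout) is to compare the host kernel architecture with the architecture of the container's init binary:

    uname -m                          # host kernel arch (x86_64 vs i686)
    file /vz/private/101/sbin/init    # arch the template was built for
    # A 64-bit init ("ELF 64-bit LSB executable, x86-64") cannot start under
    # a 32-bit (i686) host kernel, which produces exactly this error.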

  • Francisco Top Host, Host Rep, Veteran

    are you sure you're on the right kernel?

    Francisco

  • I am running SolusVM on CentOS release 6.2, 64-bit.

    I set all of the x64 templates as x64, and not as i386.

  • KuJoe Member, Host Rep

    I think he means did you boot into the OpenVZ kernel (uname -a).

  • Linux 191.cheetahhost.net 2.6.32-042stab049.6 #1 SMP Mon Feb 6 19:16:12 MSK 2012 i686 i686 i386 GNU/Linux

    Ahhhh. Er, how do we switch kernels?

  • prometeus Member, Host Rep

    You are using a 32-bit kernel....

  • Gah, did my host REALLY install a 32-bit kernel when I told them to install a 64-bit one?

  • @lele0108 you want to post your grub.conf here?
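
    For reference, a rough sketch of what to check there (assuming a stock CentOS 6 grub setup; these commands only inspect things, they change nothing):

    rpm -qa | grep vzkernel                          # which OpenVZ kernels are installed
    grep -E "^default|^title" /boot/grub/grub.conf   # which entry boots by default
    # Pick the x86_64 vzkernel entry (entries are numbered from 0), set
    # "default=" to that number and reboot. If only i686 packages are
    # installed, as the uname output above suggests, the box needs a 64-bit
    # reinstall rather than just a grub change.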

  • Francisco Top Host, Host Rep, Veteran

    Word of warning, if you're planning to sell any plans on these nodes, watch out for the .32 kernels :(

    Seriously consider RHEL 5 if you aren't using it for personal dev.

    Francisco

    Thanked by 1: Raymii
  • prometeus Member, Host Rep

    @Francisco what bad experiences have you had with .32 kernels?

  • Francisco Top Host, Host Rep, Veteran

    @prometeus said: @Francisco what bad experiences have you had with .32 kernels?

    For lack of a nicer way of saying it, it's unstable as fuck.

    Most nodes we put on it will go a couple days before it deadlocks.

    Check the OpenVZ bugzilla to see all of the reports of softlocks/deadlocks/panics on .32's.

    We tried to roll .32 and it was just a bad idea. We do .18's for now.

    Francisco

    Thanked by 1: Raymii
  • prometeus Member, Host Rep

    Mmmmm, I only have 3 nodes with OpenVZ, so maybe they aren't statistically relevant, but so far they are very stable (some weeks running). Did you see the deadlocks after some load pattern, or were they random?

  • Francisco Top Host, Host Rep, Veteran

    @prometeus said: Mmmmm, I only have 3 nodes with OpenVZ, so maybe they aren't statistically relevant, but so far they are very stable (some weeks running). Did you see the deadlocks after some load pattern, or were they random?

    100% random. We've got a few nodes (like 6) still on it that are stable, but most took a dive within a month. If we were OK with 30-day uptimes it'd be fine, but we, including the clients, are used to .18's 100-200+ day uptimes. :)

    Francisco

  • @Francisco said: For lack of a nicer way of saying it, it's unstable as fuck.

    Most nodes we put on it will go a couple days before it deadlocks.

    Check the OpenVZ bugzilla to see all of the reports of softlocks/deadlocks/panics on .32's.

    We tried to roll .32 and it was just a bad idea. We do .18's for now.

    Francisco

    Thanks for the heads up. I already talked to my host about switching me back to 64-bit; I don't know why they provisioned me with 32-bit. I am not selling this server, it's for personal use :P

    @BassHost said: you want to post your grub.conf here?

    Thanks for the help, but I think I figured it out.

  • prometeus Member, Host Rep

    @Francisco said: 100% random. We've got a few nodes (like 6) still on it that are stable, but most took a dive within a month. If we were OK with 30-day uptimes it'd be fine, but we, including the clients, are used to .18's 100-200+ day uptimes. :)

    OK, so I should cross my fingers:

    10:13:20 up 25 days, 17:41

  • @prometeus said: so I should cross my fingers

    That's what I'm wondering, haha; we just released OVZ's on a .32 as well.

  • @prometeus said: OK, so I should cross my fingers:

    Depends on how you use it.

    
    [root@e3clt03 ~]# uptime
     08:17:22 up 56 days,  6:30,  
    [root@e3clt03 ~]# uname -a
    Linux e3clt03.hostigation.com 2.6.32-042stab044.11 #1 SMP Wed Dec 14 16:02:00 MSK 2011 x86_64 x86_64 x86_64 GNU/Linux
    

    It's a backup node, so only vsftpd, rsync and dropbear are running in 99% of the containers. No vSwap. No VPNs. Typical use is where @Francisco is running into trouble, so YMMV.
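
    (For context, vSwap is the per-container RAM+swap limit model that the 042stab kernels added. A minimal sketch of how it is set, with hypothetical values:)

    # give container 101 512 MB of RAM plus 512 MB of vSwap
    vzctl set 101 --ram 512M --swap 512M --save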

  • prometeus Member, Host Rep

    @miTgiB said: vSwap. VPNs

    You named them all :)
    The nodes are in full production; one of the three (the first I installed, 25 days ago) is near the limit of VPSes I planned to put on these servers for now...

  • prometeus Member, Host Rep

    @Francisco said: Most nodes we put on it will go a couple days before it deadlocks.
    Check the OpenVZ bugzilla to see all of the reports of softlocks/deadlocks/panics on .32's.
    We tried to roll .32 and it was just a bad idea. We do .18's for now.


    I think I have found a pattern, at least with the softlocks. Today a guy on the forum reported a dd test with very low performance. I checked and noticed I had forgotten to lower vm.swappiness, which I usually set to 1. While I was at it I checked the other servers, and since I have a test VPS on one of them, I set swappiness=0 there to see if I could see any difference. After one minute the load spiked and I was flooded with softlock messages. Setting it back to 1 stopped the mess and everything went back to normal...
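
    (The swappiness tweak described above, for anyone who wants to check or reproduce it; the values are the ones from this post, not a general recommendation:)

    sysctl vm.swappiness                           # show the current value (distro default is 60)
    sysctl -w vm.swappiness=1                      # the value that behaved well here
    echo "vm.swappiness = 1" >> /etc/sysctl.conf   # make it stick across reboots
    # swappiness=0 is what triggered the softlockup flood described above.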
