Proxmox VE 4.0 - includes LXC (and removes OpenVZ)
Source: http://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_4.0_beta1
- based on Debian Jessie 8.1
- uses kernel 3.19.8
- new HA manager, see High_Availability_Cluster_4.x
- QEMU 2.3
- includes LXC (and removes OpenVZ), see Linux Container
- DRBD9
- countless bug fixes and package updates (for all details see bugtracker and GIT)
Comments
Wow, removed OpenVZ, eh? I wonder if they are providing a toolset for non-commercial-support users to migrate to LXC.
@ModulesGarden interested to hear your time scale for updating your module?
The way I read the announcement / original discussion, they're removing it until the OpenVZ kernel is rebased on the RHEL 7 kernel.
Good, I've had the Proxmox 3.4 ISO lying around meaning to try it sometime. Now I can delete it, download Proxmox 4.0, and have that lying around to try sometime instead.
Luckily migration from OpenVZ to LXC is very simple.
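For anyone planning the move, the documented path is roughly: dump the OpenVZ container on the old node, then restore the archive as an LXC container on the 4.0 node. A rough sketch (CTID 101 and the paths are placeholders, and the flags are from memory, so check the Proxmox wiki before relying on this):

```shell
# On the old (OpenVZ) node: back up container 101 (CTID is a placeholder)
vzdump 101 --dumpdir /var/lib/vz/dump

# Copy the resulting archive to the Proxmox 4.0 node, then restore it
# as an LXC container using the new pct tool
pct restore 101 /var/lib/vz/dump/vzdump-openvz-101-*.tar
```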
Today we released versions 1.5 of VPS and 1.3 of Cloud with noVNC console. Next serious update 1.6/1.4 will take place in August.
Nice one, will it cover LXC then?
It's a beta.
And the advantages of LXC over OVZ are...? I'm guessing it's mostly to do with the v3 kernel. Any other benefits?
Not yet, Anthony. Once version 4 is stable, we will do whatever we can to cover it.
@Bruce: Here:
http://security.stackexchange.com/questions/80532/security-of-lxc-compared-to-openvz
http://www.janoszen.com/2013/01/22/lxc-vs-openvz/
They are a little out of date, but give some top level differentiators.
How odd, I just installed 4.0 beta 16 and it's still OpenVZ?
Unless out-of-the-box security of LXC has improved since I last checked, this is not going to end well.
Originally, LXC containers were not as secure as other OS-level virtualization methods such as OpenVZ: in Linux kernels before 3.8, the root user of the guest system could run arbitrary code on the host system with root privileges, much like chroot jails.[4] Starting with the LXC 1.0 release, it is possible to run containers as regular users on the host using "unprivileged containers".[5] Unprivileged containers are more limited in that they cannot access hardware directly. Nevertheless, even privileged containers should provide adequate isolation in the LXC 1.0 security model, if properly configured.
Hrm. The "if properly configured" part there is what concerns me most of all.
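For what it's worth, the "proper configuration" for unprivileged containers mostly comes down to the user-namespace ID mapping: container root gets mapped to an unprivileged UID range on the host. Roughly like this (the ranges are illustrative, and the key name is the LXC 1.0-era `lxc.id_map`, so double-check against your LXC version's docs):

```
# On the host, grant youruser a subordinate ID range in
# /etc/subuid and /etc/subgid (one line in each file):
#   youruser:100000:65536

# In the container config, map container UIDs/GIDs 0-65535
# onto host IDs 100000 and up:
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536
```

With that in place, "root" inside the container is UID 100000 on the host, which is what limits the damage from a container escape.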
Any tutorials?
Is it easy to install on top of an already-installed Debian?
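For the record, Proxmox can be installed on top of an existing Debian Jessie by adding their repository. A rough sketch from memory (verify the repository line, key URL, and package name against the Proxmox "Install on Debian" wiki page before running anything):

```shell
# Add the Proxmox VE no-subscription repository (Jessie, for the 4.0 series)
echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-install.list

# Import the repository key, update, and install the meta-package
wget -O- http://download.proxmox.com/debian/key.asc | apt-key add -
apt-get update && apt-get dist-upgrade
apt-get install proxmox-ve
```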
This is exactly the way I read it as well.
LXC is nowhere near a replacement for OpenVZ, especially from a security perspective.
While there are so many options to choose from when it comes to hypervisors or application containers, even after all these years there's still only one real choice for OS containers: OpenVZ.
I don't expect the OpenVZ RHEL7-based kernel to arrive any time soon, but there's no rush, since RHEL6/CentOS6 is supported until 2020.
Well, weird news. Personally I don't see myself moving my servers to LXC anytime soon.
Not unless LXC gets the features, isolation, and stability that OpenVZ has for production use.
One problem with OpenVZ for me is that it's very difficult to enable L2TP inside a container due to kernel restrictions. I wonder whether LXC allows L2TP?
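In LXC this is mostly a host-side question: L2TP/PPP needs access to /dev/ppp, which is gated by the container's device cgroup, plus the relevant kernel modules loaded on the host. A speculative, untested sketch of the sort of config involved (device numbers and module names from memory):

```
# Host: load the L2TP/PPP modules the container will rely on
#   modprobe l2tp_ppp ppp_generic

# Container config: allow the /dev/ppp character device (major 108, minor 0)
lxc.cgroup.devices.allow = c 108:0 rwm
```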