
How-to: Run LXC containers inside OpenVZ VPS

Hello LET Community,

This is my first post here, and it is a tutorial on how to run LXC containers inside an LET-style OpenVZ VPS. It's a fun thing to toy with and sometimes useful, but due to some OpenVZ limitations, a how-to doesn't seem to be readily available on the internet. In this tutorial, I'll show how to run an Alpine Linux container inside an OpenVZ VPS.

Why not Docker? Although OpenVZ supports running Docker inside a CT, it requires the veth and bridge kernel modules, which most VPS providers do not make available. Besides, Docker is overhyped and consumes too many resources.
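
You can verify this on your own VPS before going further: on a typical OpenVZ CT the veth creation below fails with "Operation not supported", while the TUN device (which this tutorial relies on) is usually available, or can be enabled by the provider on request:

# fails on most OpenVZ CTs (no veth module):
ip link add veth-test type veth peer name veth-peer && ip link del veth-test
# TUN is what we will use instead:
ls -l /dev/net/tun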

Be aware that some providers do not allow "nested virtualization." Whether running LXC violates the AUP depends entirely on the definition of virtualization. Running LXC containers incurs very little overhead; it's one thing to run LXC and another thing entirely to run QEMU.


The following example assumes the distribution on your OpenVZ VPS is Arch Linux. Although most (if not all) OpenVZ providers don't offer this option, it takes only a few commands and a few minutes to convert almost any VPS to Arch.
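
For reference, this is usually done with a community "vps2arch"-style bootstrap script. I won't vouch for any particular one here, so the URL below is a placeholder; find, read, and audit the script yourself before running it on a box you care about:

# Hypothetical fetch of a vps2arch-style conversion script.
# The URL is a placeholder, not a real endpoint.
wget -O vps2arch https://example.com/vps2arch
chmod +x vps2arch && ./vps2arch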

Once we have Arch, install the following:

pacman -S lxc openvpn

Configure custom cgroups in systemd:

echo "JoinControllers=cpu,cpuacct,cpuset freezer,devices" >> /etc/systemd/system.conf

Create an LXC container; here we choose Alpine:

lxc-create -n alpine -t /usr/share/lxc/templates/lxc-alpine
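
To confirm the container was created, list it with lxc-ls (from the same lxc package); the rootfs and config live under /var/lib/lxc:

lxc-ls -f
ls /var/lib/lxc/alpine/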

Edit /usr/share/lxc/config/alpine.common.conf and comment out the following lines, because these capabilities cannot be dropped inside an OpenVZ container:

#lxc.cap.drop = syslog
#lxc.cap.drop = wake_alarm

Before we start this container, let's configure networking. Due to the absence of the veth and bridge modules, we'll make a pair of TUN interfaces with OpenVPN as a workaround:

cat > /etc/openvpn/server/lxc_tun0.conf << EOF
dev tun0
proto udp
lport 65500
local 127.0.0.1
ifconfig 10.0.0.1 10.0.0.2
auth none
cipher none
EOF

cat > /etc/openvpn/client/lxc_tun1.conf << EOF
dev tun1
proto udp
remote 127.0.0.1 65500
ifconfig 10.0.0.2 10.0.0.1
auth none
cipher none
EOF

The default systemd unit files for OpenVPN have a restriction that needs to be taken care of:

sed -i 's/LimitNPROC=10/LimitNPROC=100/' /usr/lib/systemd/system/[email protected]
sed -i 's/LimitNPROC=10/LimitNPROC=100/' /usr/lib/systemd/system/[email protected]
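
Note that files under /usr/lib/systemd are overwritten on package upgrades. If you prefer, a standard systemd drop-in achieves the same thing and survives updates; a sketch for the server instance (repeat for [email protected]):

mkdir -p /etc/systemd/system/[email protected]
cat > /etc/systemd/system/[email protected]/nproc.conf << EOF
[Service]
LimitNPROC=100
EOF
systemctl daemon-reload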

Now we can start these interfaces:

systemctl start openvpn-server@lxc_tun0
systemctl start openvpn-client@lxc_tun1
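
A quick sanity check that both endpoints came up (names as defined in the configs above):

systemctl status openvpn-server@lxc_tun0 openvpn-client@lxc_tun1
ip addr show tun0
ip addr show tun1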

Configure the firewall (and if necessary, sysctl) to allow forwarding and do NAT:

iptables -A FORWARD -i tun0 -j ACCEPT
iptables -A FORWARD -o tun0 -j ACCEPT
iptables -t nat -A POSTROUTING -s 10.0.0.2 -o venet0 -j MASQUERADE
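
If forwarding is off, the sysctl mentioned above is the usual one (persisting it is optional):

sysctl -w net.ipv4.ip_forward=1
# persist across reboots:
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/30-ipforward.conf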

If you need to forward ports from the outside interface to the container:

iptables -t nat -A PREROUTING -i venet0 -p tcp --dport 80 -j DNAT --to 10.0.0.2:80

Edit /var/lib/lxc/alpine/config and change the network type to make the container take over the interface tun1:

lxc.network.type = phys
lxc.network.link = tun1
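
Moving tun1 into the container's network namespace drops the addresses OpenVPN assigned to it, so they must be re-added inside the container once it is up (next step). A minimal sketch using the 10.0.0.x pair from above; Alpine's BusyBox ip should accept this syntax, though treat that as an assumption:

# run inside the container after it starts:
ip addr add 10.0.0.2 peer 10.0.0.1 dev tun1
ip link set tun1 up
ip route add default via 10.0.0.1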

Now start the container:

lxc-start -n alpine -F
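
If it boots properly, you'll be dropped at the Alpine console (the -F flag keeps the container in the foreground). From a second shell, lxc-info (shipped with the lxc package) shows its state:

lxc-info -n alpine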

Caveats:
It is probably necessary to stop the container with the --kill option:

lxc-stop -n alpine --kill

and don't run reboot or poweroff inside the container; otherwise it might reboot or shut down your host (i.e., the OpenVZ VPS).


Comments

  • AnthonySmith Member, Patron Provider

    oh dear god... here we go.

  • psb777 Member
    edited March 2017

    It appears that a longer post will be rejected by the LET board...

  • @AnthonySmith said:
    oh dear god... here we go.

    Why? Is it an entirely bad idea?

  • AnthonySmith Member, Patron Provider
    edited March 2017

    psb777 said: Why? Is it an entirely bad idea?

    No, it's a great idea in principle.

    Meanwhile in 4realsville, I look forward to having 400 arguments (tickets) this month with customers who eat up 2 full cores 24x7 and generate 4000 IOPS R+W requests 24x7 because they followed a guide and packed 50 containers into a container because they could :)

  • Ishaq Member

    Containerception.

  • Time to resell my LES NAT vps

    /not

  • AnthonySmith said: Meanwhile in 4realsville, I look forward to having 400 arguments (tickets) this month with customers who eat up 2 dedicated cores and generate 4000 IOPS R+W requests 24x7 because they followed a guide and packed 50 containers into a container because they could :)

    Yet in principle how customers use their CPU/RAM/IO resources should not be providers' concern, as long as their usage is within predefined limits. There is a myriad of ways to abuse resources, if that's what they want.

    If I'm to blame, should we blame the authors of OpenVZ because they made it too easy for providers to oversell?

  • ehab Member
    edited March 2017

    I know Anthony kills any wild LXC in his woods in an instant.

  • jackb Member, Host Rep
    edited March 2017

    @psb777 said:
    Yet in principle how customers use their CPU/RAM/IO resources should not be providers' concern, as long as their usage is within predefined limits. There is a myriad of ways to abuse resources, if that's what they want.

    If I'm to blame, should we blame the authors of OpenVZ because they made it too easy for providers to oversell?

    It isn't a matter of overselling; it's that OpenVZ (and by association, the panels most folk use) doesn't support proper limitations in terms of IOPS - and even the CPU limitations are pretty poor.

    While this is an interesting idea, it is definitely open to abuse.

    Edit: correction, OpenVZ introduced IOPS limits in 2013. I haven't looked into how well they work yet.

  • AnthonySmith Member, Patron Provider
    edited March 2017

    psb777 said: as long as their usage is within predefined limits.

    That is the issue :) I am suggesting it won't be, but it is what it is.

    I think the idea is great in principle; in practice... we will see.

    I have no objection at all to anyone doing this on my services. They are self-managed, and it would hardly be fair to prevent it; it's just going to be interesting, is all.

  • @AnthonySmith said:
    I have no objection at all to anyone doing this on my services. They are self-managed, and it would hardly be fair to prevent it; it's just going to be interesting, is all.

    I agree. What is the difference between "firing up an LXC for nginx" and "creating a system user named nginx and putting it in a chroot"? Yes, with LXC there will be another init, another crond, another syslogd in a different namespace. But, you know what, they barely make a footprint on your CPU, RAM, or IO.

    Sure, it opens up opportunities for abuse. But I cannot think of a type of abuse that could not be achieved without LXC. Do you honestly think that now, with LXC, you can finally resell your OpenVZ containers easily? No, read again, especially the last line of the post. For obvious reasons, it's inappropriate to discuss here the various ways of achieving that abuse without LXC.

    LXC is about isolation, resource limitation, and creating an exotic userspace environment without messing up the host. I posted this with the understanding that some providers may quickly (and probably quietly) update their ToS or AUP to forbid such usage, and soon I won't be able to use it myself. However, I don't think that's fair. Users should have the freedom to choose how to use their resources within the limits (of course, I can't emphasize that enough).

  • AnthonySmith Member, Patron Provider

    You might be right; we see this from 2 different perspectives.

    I see someone using it to set up 10 x TS3 servers, 10 x Minecraft servers, 10 x OpenVPN instances, 10 x srcds servers... then knocking on my door and asking for the toilet paper.

    time will tell.

  • angstrom Moderator

    I don't know much about nested virtualization, but could (e.g.) KVM_1 include KVM_2 which would include KVM_3 ... which would include KVM_n, for an arbitrary n, the only constraint being available resources? (This is a general technical question, not a question about what AnthonySmith would permit on one of his servers!)

  • @angstrom said:
    I don't know much about nested virtualization, but could (e.g.) KVM_1 include KVM_2 which would include KVM_3 ... which would include KVM_n, for an arbitrary n, the only constraint being available resources? (This is a general technical question, not a question about what AnthonySmith would permit on one of his servers!)

    For KVM, no. Linux's KVM has very limited support for nested KVM.
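
    For what it's worth, nested KVM on an Intel host is gated behind a kernel module parameter; a quick host-side check (not something you can do from inside a CT):

    cat /sys/module/kvm_intel/parameters/nested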

  • psb777 Member
    edited March 2017

    @AnthonySmith said:
    I see someone using it to set up 10 x TS3 servers, 10 x Minecraft servers, 10 x OpenVPN instances, 10 x srcds servers... then knocking on my door and asking for the toilet paper.

    I don't see why it's inconceivable to set up 10 x any of these without using LXC. If so, why bother?

  • AnthonySmith Member, Patron Provider

    psb777 said: why bother?

    Because you can?

    Why run LXC inside OpenVZ? Because you can.

  • xiyan Member

    How about running LXD on Ubuntu inside OpenVZ? I tried it, but it doesn't work. LXC has the advantage of separating different distributions for testing, and I would no longer worry about ruining the OpenVZ VPS.

  • @xiyan said:
    How about running LXD on Ubuntu inside OpenVZ? I tried it, but it doesn't work. LXC has the advantage of separating different distributions for testing, and I would no longer worry about ruining the OpenVZ VPS.

    I think it's best to leave that as an exercise for the readers. Besides, due to length restrictions on the LET board, some parts of the tutorial had to be redacted.

    In fact, I didn't invent the whole thing. Credit goes to the OpenVZ authors for enabling Docker inside CTs. Maybe you'll find the idea of using TUN devices as veth substitutes new; if so, that's my two cents.

    Although I truly hope providers can accept such usage (fair use only, of course), judging from the comments above, some providers apparently counted on the trickiness of running LXC inside OpenVZ to prevent abuse. Indeed, writing this tutorial might lower the bar for abusers, and I definitely anticipated such reactions. So I think the remaining pieces should be left to smart readers, who know better than to do bad things just "because they can."

  • I think HostUS supports Docker. I can't see the value of running bare LXC... Docker, yes, I can see the value of supporting it. I wish it were supported by more providers, as I started with OpenVZ services and am slowly switching over to KVM/dedi... but I still have a lot of VPS lying idle.

  • moonmartin Member
    edited March 2017

    @jackb said:
    It isn't a matter of overselling; it's that OpenVZ (and by association, the panels most folk use) doesn't support proper limitations in terms of IOPS - and even the CPU limitations are pretty poor.

    While this is an interesting idea, it is definitely open to abuse.

    Edit: correction, OpenVZ introduced IOPS limits in 2013. I haven't looked into how well they work yet.

    OVZ has supported CPU and network throttling (or whatever you want to call it) pretty well since changes that happened years ago; 2013 sounds about right. IOPS can still be a problem, unfortunately. I really wish I had more control over that.

    There is an IO priority setting for IOPS, but it's not that useful and causes wacky load values. I still haven't figured out whether those excessively high load values on the node, when reducing IO priority on one VPS, are misleading or not.

    I think it is taking the load on that one VPS with low IO priority and somehow applying it to the whole server to calculate load. Still not sure about that. I rarely use it except when I have no choice because someone is eating up all the IOPS.

  • Francisco Top Host, Host Rep, Veteran

    @AnthonySmith said:
    oh dear god... here we go.

    Your company name makes it that much more enjoyable.

    I'm waiting for some people to start doing this on your JPN boxes.

    Francisco

  • AnthonySmith Member, Patron Provider

    @Francisco you should never go 2 levels in...

  • @psb777 so this is pretty old, but I was wondering what version of OpenVZ was running on your VPS? 6 or 7?

    Personally, I find the possibility of running LXC systems on a VPS very useful, since I avoid making mistakes that take down the whole VPS or corrupt multiple apps. It's an easier way to isolate apps than Docker, IMHO. For WordPress you need LAMP/LEMP, and that's a lot of Docker stuff to go through; on LXD/LXC it's just known territory as soon as you get your additional LAMP system up. Need a Python news aggregator? Same thing. I won't put more load on the system, but I'll be safer from my own inexperience when changing WP without affecting other web apps.

  • Neoon Community Contributor, Veteran
    edited November 2017

    @Ishaq said:
    Containerception.

    But can you run an LXC in the LXC?

  • raindog308 Administrator, Veteran

    Neoon said: But can you run an LXC in the LXC?

    And then, within that...and then...cloud profits.


  • Neoon Community Contributor, Veteran

    @raindog308 said:

    Neoon said: But can you run an LXC in the LXC?

    And then, within that...and then...cloud profits.


    https://www.lowendtalk.com/discussion/131429/lxc-nat-containers-arround-the-world-2-y-anyone-interested

    People would even buy it, but a $1 profit margin is a bit low.

  • Neoon said: But can you run an LXC in the LXC?

    Easy, that should work out of the box, except when AppArmor (included in Proxmox) prevents your container from mounting the rootfs for nested containers.

    https://serverfault.com/questions/366575/is-it-possible-to-start-lxc-container-inside-lxc-container
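
    For reference, the workaround is usually relaxing the parent container's AppArmor profile. A sketch using the pre-2.1 LXC key name (on LXC >= 2.1 the key is lxc.apparmor.profile; treat version details as assumptions):

    # in the parent container's config, on the host:
    lxc.aa_profile = unconfined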

  • stefeman Member
    edited December 2017

    Glorious tutorial.

  • This is great, don't let anyone discourage you!

  • @dncpax said:
    @psb777 so this is pretty old, but I was wondering what version of OpenVZ was running on your VPS? 6 or 7?

    Sorry, I missed your question. It was OpenVZ 6 (kernel 2.6.32-042stab1xx).

    Personally, I find the possibility of running LXC systems on a VPS very useful, since I avoid making mistakes that take down the whole VPS or corrupt multiple apps.

    I agree, but in that case I would highly recommend a KVM VPS, as it saves lots of trouble. By the way, this tutorial is probably outdated. For starters, Arch Linux no longer works on OpenVZ 6, because glibc 2.26 from the package repository no longer supports kernel 2.6.32...
