Fix for CVE-2014-3153 in OpenVZ
Hello, we have applied the update 042stab090.3,
but the kernel version shown inside the containers is not the fixed one.
Do we need to do a stop/start on all containers to fix the bug?
The HOST NODE is fixed (after a reboot):
[root@mad1-ovz2 ~]# uname -a
Linux mad1-ovz2.ginernet.com 2.6.32-042stab090.3 #1 SMP Fri Jun 6 09:35:21 MSK 2014 x86_64 x86_64 x86_64 GNU/Linux
[root@mad1-ovz2 ~]#
The containers still show the old kernel...
[root@mad1-ovz2 ~]# vzctl enter 2267
entered into CT 2267
root@server [/]# uname -a
Linux server.xxxxx.com 2.6.32-042stab085.20 #1 SMP Fri Jun 6 09:35:21 MSK 2014 x86_64 x86_64 x86_64 GNU/Linux
root@server [/]# exit
logout
exited from CT 2267
[root@mad1-ovz2 ~]#
Thanks!
Comments
I had this issue; the way I fixed it was to restart the containers individually.
When containers are suspended rather than shut down, they keep the old kernel version number and uptime. You're fine though, as it's only cosmetic.
Are you saying that I don't need to reboot the containers?
You do not need to reboot your containers. Remember OpenVZ uses the host kernel. I presume that they keep the old kernel version, etc., when the container is suspended to keep things inside the container from going bonkers.
Restarting doesn't hurt though if it will make you sleep better at night.
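As noted above, the version string shown inside a CT is cosmetic; what matters is the host kernel. A minimal sketch of checking whether the running host kernel is at least the fixed 042stab090.3 build (the `kernel_has_fix` helper is my own illustration, and it assumes GNU coreutils' `sort -V` for version ordering):

```shell
#!/bin/sh
# Sketch: compare a kernel release string against the fixed build.
# Assumes GNU coreutils' sort -V is available for version ordering.
kernel_has_fix() {
    fixed="2.6.32-042stab090.3"
    # if the fixed version sorts first (lowest) of the pair, the given
    # kernel is the same or newer, i.e. it includes the fix
    [ "$(printf '%s\n' "$fixed" "$1" | sort -V | head -n 1)" = "$fixed" ]
}

if kernel_has_fix "$(uname -r)"; then
    echo "host kernel $(uname -r) includes the fix"
else
    echo "host kernel $(uname -r) predates 042stab090.3 - reboot needed"
fi
```

Run it on the host node, not inside a container, since a suspended CT may still display the old version string.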
@DigitalDuke : Script to restart all containers...
This is not going to be a fun day.
Use http://kernelcare.com if you don't want to go around rebooting.
Debian 3.2.54-2 i686 GNU/Linux is not fixed? But it didn't get an update.
So from what I understand, it's the same guys who are behind CloudLinux?
Yes
Damn, I wish I knew about that sooner. Any issues with it to date?
None, I've been running it on one node for a few weeks. Put it on all last night when OpenVZ tweeted about the vuln.
It has similar functionality to KSplice if not the same. You can also see the patches here:
http://patches.kernelcare.com
Is it only me that finds it odd that a vulnerability comes out shortly after KernelCare gets released?
Coincidence? Perhaps.
I thought it was quite old, no?
The dates on their website (in their FAQ) indicate May.
for vz in $(vzlist -H -o ctid); do vzctl stop $vz && vzctl start $vz; done
One liners win
You beat me to it :-)
Also, by using && after stop, you won't boot containers that are supposed to stay shut down.
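For anyone who wants that one-liner as a slightly more defensive script: a sketch only, where the `restart_all_cts` function name and the echo logging are my own additions:

```shell
#!/bin/sh
# Sketch of the restart loop as a reusable function. As in the
# one-liner, && ensures a container that fails to stop is never
# started by mistake. Call restart_all_cts on the host node.
restart_all_cts() {
    for ctid in $(vzlist -H -o ctid); do
        echo "restarting CT $ctid"
        vzctl stop "$ctid" && vzctl start "$ctid"
    done
}
```

`vzlist -H -o ctid` lists the IDs of running containers without a header line, so only containers that are actually up get cycled.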
Nothing showing in vzlist after yum update, and getting "container file does not exist" with vzctl start
CentOS 6 64bit, no control panel or anything. Booting into 2.6.32-042stab088.4 doesn't work either
https://openvz.org/User_Guide/Operations_on_Containers
I'm not sure if they started using other container IDs; however, this could be part of your problem.
Missed the extra 1 off the end, although it's the same issue with any IDs.
Have you tried turning the quota off?
vzquota off 101
Yeah, I've tried that; quota is not running, and turning it on gives "Native quota is already running for this partition."
This may sound like a silly question, but is your /vz partition mounted?
I can browse /vz, is that good enough?
Then it is mounted.
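To be precise, being able to browse /vz only proves the directory exists; it could still sit on the root filesystem. A small sketch (the `is_mounted` helper is my own, not an OpenVZ tool) that checks /proc/self/mounts for an actual mount on that path:

```shell
#!/bin/sh
# Sketch: report whether a path is a real mount target by scanning
# /proc/self/mounts (field 2 is the mount point).
is_mounted() {
    awk -v target="$1" '$2 == target { found = 1 } END { exit !found }' /proc/self/mounts
}

if is_mounted /vz; then
    echo "/vz is a mounted filesystem"
else
    echo "/vz is only a directory on the parent filesystem"
fi
```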
Could we be affected by this? I must contact management.
Saving before it's edited.
lol.
Wait until someone exploits your nodes; you'll notice.
@GVH_Rakesh has been banned for pretending to be a member of staff.