Comments
what exactly are you curious about?
live migrate stuff to backup nodes while you upgrade the main nodes?
Only summerhosts cannot afford this...
A bit of reduced performance isn't an issue anyways with almost everyone idling :)
Network engineer of PyongVPS
We patched our KVM service yesterday and are patching legacy OpenVZ today.
Did the update live in place with a reboot after. As our nodes aren't overloaded to begin with, not seeing any performance problems at all.
Backup nodes are overkill when it takes literally 10 minutes to apply the patch and reboot.
Sad to see my 2 years of uptime wiped away, but oh well.
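For anyone wondering what "apply the patch and reboot" amounts to on a RHEL/CentOS KVM node, a minimal sketch (the exact package set is my assumption based on the Red Hat errata discussed further down, so check the advisory for your own distribution):

yum update kernel qemu-kvm microcode_ctl linux-firmware   # pull the patched kernel, qemu-kvm and microcode packages
reboot                                                    # the new kernel and early-loaded microcode only take effect after a reboot
uname -r                                                  # confirm the patched kernel is actually the one running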
I would hope that providers will stagger reboots so that I do not have everything die at once ;)
Yes, that would be logical.
Did mine 1 machine at a time for that reason.
In before "I am a host and my server is not rebooting back up after some update!" threads.
The end is nigh. Why? Because the end is actually nigh.
Haha, you're 100% right. This will show everyone who the morons are claiming to be decent hosts.
Should be fun to watch.
@ramnet glad to hear you’ve patched. Still waiting to see the response from other providers.
Adam
As I understand it, there are three issues at play here..
CVE-2017-5753: Known as Variant 1, a bounds check bypass [Spectre]
CVE-2017-5715: Known as Variant 2, branch target injection [Spectre]
CVE-2017-5754: Known as Variant 3, rogue data cache load [Meltdown]
Variant 3 affects Linux hosts, but as KVM fully virtualizes the processor, it does not allow the variant to be exploited between guests. Variant 3 is a concern for LXC/Docker/OpenVZ hosts because it does allow side-channel attacks between instances. There is a patch available for this that hosts are applying. It is possible for processes on the hypervisor itself (i.e., dom0 space in Xen parlance) to exploit this vulnerability. This is patched via KPTI/KAISER.
Variants 1 and 2 are currently not mitigated for KVM hosts as I understand it, and require a hardware/microcode update plus support for the new IBRS/IBPB functionality in KVM to protect the guests.
Am I incorrect?
http://www.ionswitch.com - Seattle KVM SSD VPS - 512MB Annual VPS for $17.50
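For reference, a patched host kernel reports its own view of all three issues, which makes it easy to sanity-check claims like the above. A minimal sketch, assuming a kernel new enough (or with distro backports) to expose the status files:

grep . /sys/devices/system/cpu/vulnerabilities/*   # one line each for meltdown, spectre_v1 and spectre_v2, where the kernel provides them
dmesg | grep -i isolation                          # the KPTI patches log a "page tables isolation: enabled" style line; exact wording varies between distro backports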
Red Hat is claiming that the kernel patches they published already have mitigations for Variant 1 and Variant 3.
Furthermore, Red Hat is claiming that the qemu-kvm patches they published already have mitigations for Variant 2.
So, it is my understanding that for KVM, you are 100% patched now, unless Red Hat is lying.
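One way to verify Red Hat's claim on a given box is to look for the CVE numbers in the package changelogs. A sketch, assuming an RPM-based system with the errata packages installed:

rpm -q --changelog kernel | grep -E 'CVE-2017-57(15|53|54)' | head   # CVE references in the installed kernel's changelog
rpm -q --changelog qemu-kvm | grep 'CVE-2017-5715' | head            # and in qemu-kvm for Variant 2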
@IonSwitch_Stan Variant 3 (CVE-2017-5754) is Meltdown, not variant 1. This is patched by KPTI provided updates are installed. KVM is unaffected at this time.
Adam
I had the variants out of order and fixed them according to (https://www.kb.cert.org/vuls/id/584653).
http://www.ionswitch.com - Seattle KVM SSD VPS - 512MB Annual VPS for $17.50
Do the KVM guests also need to be patched, or is the host enough?
Guest VMs need to be patched too.
Spartan Host Ltd | DDoS Protected Seattle and Dallas Dedicated Servers, VPS and Colocation | Skype: spartan_host
Well, @davidgestiondbi is already on it..
I won't be back until @bsdguy is released.
Yes, you probably should patch your KVM too. I say probably since, as long as you are the only user and don't care about privilege escalation, you could take the risk of leaving it unpatched. As soon as any untrusted code/users enter the picture you don't have much choice though, imo.
uptimecalypse
So far, among my providers, only Hetzner, Netcup and Prometeus have issued alerts and/or reboots (Prometeus announced they may retire the XenPower product completely).
Three other providers I have VPS with (one of these is an OVZ and is vulnerable to a PoC, it seems) are probably still waiting a little bit for the dust to settle.
Nggg. Yeah.
I won't be back until @bsdguy is released.
If you are referencing https://www.qemu.org/2018/01/04/spectre/, I think what they are saying is that although kernel/microcode updates that are already available are sufficient to prevent guests from reading hypervisor memory, qemu/kvm updates are needed to prevent applications in the guest from being able to read the guest kernel memory, because qemu needs to expose some CPU functionality ("the new CPUID bits and MSRs") to the guest kernel for the guest kernel to protect itself.
I'm not sure if the qemu/kvm updates are available now, but when they do become available, the post says that although a reboot of the host will not be required, the guest does need to be rebooted even if the guest kernel is already updated (live migration is insufficient), because it needs to get the exposed CPU functionality on boot.
Edit: actually it sounds like an unofficial patch for kvm developed by Intel was already available before the qemu blog post (https://lists.nongnu.org/archive/html/qemu-devel/2018-01/txtfadLGhMEF6.txt); so distributions probably already have it although the blog post indicates that it is not merged in upstream; also the author of the blog post seems to suggest there are some problems with this unofficial patch? (https://lists.nongnu.org/archive/html/qemu-devel/2018-01/msg00867.html)
bearmon - free and simple server monitoring
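If you want to see whether a guest actually picked up those new CPUID bits after its reboot, a rough smoke test from inside the guest (a sketch; the exact flag names vary between vendor kernels, spec_ctrl/ibpb is what the RHEL-family kernels report):

grep -oE 'spec_ctrl|ibpb|stibp' /proc/cpuinfo | sort -u   # no output means the host/QEMU is not exposing the new speculation-control bits to this guest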
@perennate
What I read from that QEMU document is:
Without patches, Guests cannot read from the Host's memory.
Without patches, Guests cannot read from other Guests' memory.
Without patches, Guests can make side-channel reads from their own memory.
Without patches, Hosts can make side-channel reads from their own memory.
The kernel updates fix the last two items. My understanding currently is that Spectre requires microcode updates/new opcodes to be developed, that those do not currently exist, and that there is no Linux patch to protect against Spectre.
Again -- lots of information out there -- please correct/challenge me if my summary is wrong.
http://www.ionswitch.com - Seattle KVM SSD VPS - 512MB Annual VPS for $17.50
XVMLabs did reboots a few minutes back.
Neoon - Status - Wiki - Free Gameservers
Free KVM NAT in Romania and Finland
Heh. Hehehe.
Node.js code review, tutoring and advice | Custom Node.js module development | Donate
"professor 200 IQ" -YokedEgg
I think both Spectre and Meltdown are side-channel reads.
According to the qemu blog post, guests cannot exploit Meltdown to read hypervisor memory, but CAN use Spectre to read hypervisor memory ("CVE-2017-5715 is notable because it allows guests to read potentially sensitive data from hypervisor memory"). But AFAIK there is a Linux kernel update combined with microcode updates that prevents guests from reading hypervisor memory. The blog post indicates both kernel update and microcode update are necessary even for bare-metal: "Just like on bare-metal, the guest kernel will use the new functionality provided by the microcode or millicode updates".
However there is no official qemu or kvm patch that prevents guest applications from reading guest kernel memory; but it sounds like distributions are already releasing the unofficial patch which the Paolo person (who wrote the blog post) said was written by Intel. So e.g. in Red Hat's updates everything is probably already resolved, as ramnet said.
Edit: ok I think you're right about the microcode, but then I'm confused by the blog post wording o.o
Edit2: so http://metadata.ftp-master.debian.org/changelogs/non-free/i/intel-microcode/intel-microcode_3.20171215.1_changelog indicates Debian has microcode updates that resolve CVE-2017-5715, RHEL/others seem to have the same update
bearmon - free and simple server monitoring
Hope the performance penalty effect is not doubled then.
The kernels might contain the patches, however - the microcode has to be there, and it's currently not.
In another thread, someone linked to a Debian package which might contain the microcode for both Spectre variants; however, that microcode is yet to be seen on other operating systems and on Intel's own website.
So no. People are not fully patched until they've updated the microcode for their CPUs.
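A quick way to see what microcode a box is actually running, and to pull in the Debian package being discussed (a sketch; the package only helps if it ships a revision newer than what your BIOS already loads):

grep -m1 microcode /proc/cpuinfo   # microcode revision the CPU is currently running
dmesg | grep -i microcode          # shows whether the kernel applied an early/late microcode update at boot
apt-get install intel-microcode    # Debian/Ubuntu package; needs a reboot for early loading to take effect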
Someone said the Debian ones are the same as those already released by RHEL.
Edit: can't find the link now, but https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=886367#21 implies that the microcode updates were published by Intel yesterday (first message says expecting an update, next one saying the updates are pushed to Debian)
Edit2: I found the link, it also has more info in general -- https://github.com/hannob/meltdownspectre-patches#cpu-microcode
bearmon - free and simple server monitoring
In that case, they'll not fix spectre for a bunch of CPUs such as:
i7-6700k
E3-1230
E3-1230v6
E5-1630v2
E5-1630v3
E5-1630v4
E5-1650v3
E5-1650v4
E5-2620v3
D-1540
Because none of those CPUs, even after updated microcode and kernels, has IBPB and IBRS enabled:
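For reference, the check Zerpy is presumably referring to is the debugfs interface that the Red Hat-family patched kernels add (a sketch; these paths are specific to those kernels, and on RHEL 6 debugfs may need to be mounted first, as noted further down):

mount -t debugfs nodev /sys/kernel/debug   # only needed where debugfs is not already mounted (e.g. RHEL 6)
cat /sys/kernel/debug/x86/ibrs_enabled     # stays 0 when the running microcode does not provide IBRS
cat /sys/kernel/debug/x86/ibpb_enabled     # stays 0 when the running microcode does not provide IBPB
cat /sys/kernel/debug/x86/pti_enabled      # 1 when the Meltdown/KPTI page-table isolation is active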
Ah, this is with RHEL? The RHEL patch seemed more comprehensive than the Debian one actually since Debian hasn't updated kernel package yet AFAIK. So that is strange... hopefully get more information from Intel/others soon.
bearmon - free and simple server monitoring
Red Hat released microcode updates also.
Linux has long had the ability to patch the microcode during OS bootup, unlike certain other OSes which require BIOS updates to do that.
As noted, installing the microcode update for your hardware, if provided by the hardware vendor, is necessary to protect against variant 2. Please contact your hardware vendor for microcode updates.
Just curious, if you run the command that Zerpy wrote, do you see that these are enabled?
Edit: (on RHEL 6 need to do this first: mount -t debugfs nodev /sys/kernel/debug)
Edit2: and more information here https://access.redhat.com/articles/3311301
Edit3: more commentary here too https://serverfault.com/questions/890904/i-updated-my-centos-7-system-why-is-meltdown-spectre-only-partially-mitigated; at least one person said they are seeing IBRS/IBPB enabled, so I guess the microcode does implement it but only for a very restricted set of CPU models?
bearmon - free and simple server monitoring
Yeah, the example above is from RHEL (tested on actual Red Hat, CentOS and CloudLinux).
I've asked a bunch of other sysadmins working with RHEL-based systems, who see the same behaviour - so.. either we're all updating things incorrectly, or there's yet to be a new microcode release.
Online.net, which keeps their list up to date, also has most of them in "Pending" because they're waiting.
@ramnet said: Red Hat released microcode updates also.
Sure - but those microcode updates ain't really fixing Spectre; they're not enabling IBPB and IBRS, which is required.
@ramnet said: Linux has long had the ability to patch the microcode during OS bootup, unlike certain other OSes which require BIOS updates to do that.
Correct - but if the microcode is not there, you'll still have to reboot to get it applied once it becomes available.
Even when they reboot, it takes from 5 min to 1h+; what the fuck do they need so long for?
GestionDBI, Virtmach & BandwagonHost needed a long period of time to boot the servers back up.
I just stay with dedis, too much hassle.
Neoon - Status - Wiki - Free Gameservers
Free KVM NAT in Romania and Finland
Intel's press release says most of the microcode updates are coming next week, AFAIK.
* Centmin Mod LEMP Stack Quick Install Guide
Maybe people run fsck as well :-D
Yeap, that's what I'm counting on as well - so we all just have to sit tight :-D
But the fact that people believe they're all 100% safe now is fake news <3
D*uq. No nodes have been down for so long. I know you don't like us, but no need to bash and say bulls**t on forums...
Gestion DBI | IT consulting | OpenVZ, KVM VPS, Shared Hosting, Dedicated Servers with 24/7 Technical Support
DeepNet Solutions | Cheap and low cost VPS in 9 cities around the world! | Starting at $13CAD by year!
"Server Unity GestionDBI England went offline. Detected: 06.01.2018 19:50:10"
Just checked if my monitoring went to shit; it did not:
Neoon - Status - Wiki - Free Gameservers
Free KVM NAT in Romania and Finland
- Did you open a ticket? No.
- Did you try to log in to the SolusVM portal? Probably not.
- Why do you say we are down, WHEN it's your VPS that is down?
All nodes are up and running, excluding LAX-03, which has been rebooting for the last 5 minutes.
Gestion DBI | IT consulting | OpenVZ, KVM VPS, Shared Hosting, Dedicated Servers with 24/7 Technical Support
DeepNet Solutions | Cheap and low cost VPS in 9 cities around the world! | Starting at $13CAD by year!
So you go reboot the nodes and check that they come back up, but you do not care whether the customers' VMs are back up? Well, ok then.
I had 10 restarts today; this question was asked in general. I did list just one provider, which was maybe a bit unfair, but I have updated it.
At least half of them went down for about 1 hour. Should I open a ticket for each of these? No. I expect a provider to bring the VMs back up so that I don't have to log in to each panel and reboot them by hand.
Everyone got it working, except GestionDBI.
Neoon - Status - Wiki - Free Gameservers
Free KVM NAT in Romania and Finland
Oh, look, it's time for some @Neoon rage. Which project are you going to abandon now?
I won't be back until @bsdguy is released.
Neoon - Status - Wiki - Free Gameservers
Free KVM NAT in Romania and Finland
I guess to safely switch off all VMs?
Clouvider Leading UK Cloud Hosting solution provider || UK Dedicated Servers Sale || Tasty KVM Slices || Latest LET Offer
Web hosting in Cloud | SSD & SAS True Cloud VPS on OnApp | Private Cloud | Dedicated Servers | Colocation | Managed Services
How many containers do you need on a single node that you need 60min+ to reboot it?
Neoon - Status - Wiki - Free Gameservers
Free KVM NAT in Romania and Finland
They're OVZ tho. If they're simfs, just reboot now and it'll work itself out if you've got a journaling filesystem.
I won't be back until @bsdguy is released.
1 Windows KVM that refuses to budge. And then you have a choice. Downtime or potential loss of data?
Clouvider Leading UK Cloud Hosting solution provider || UK Dedicated Servers Sale || Tasty KVM Slices || Latest LET Offer
Web hosting in Cloud | SSD & SAS True Cloud VPS on OnApp | Private Cloud | Dedicated Servers | Colocation | Managed Services
Well, I said containers; most of them were OVZ boxes.
Neoon - Status - Wiki - Free Gameservers
Free KVM NAT in Romania and Finland
Take a snapshot, shutdown, restart using snapshot?
I won't be back until @bsdguy is released.
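For KVM guests at least, libvirt's managed save is one way to do that without trusting the guest to shut down cleanly. A minimal sketch with a hypothetical domain name (whether saved state survives the new kernel/QEMU cleanly is worth testing on one box first):

virsh managedsave win-guest-01   # hypothetical domain name; suspends the stubborn guest's state to disk instead of shutting it down
reboot                           # reboot the host into the patched kernel/microcode
virsh start win-guest-01         # start resumes from the managed save image automatically
# note: a resumed guest keeps its old virtual CPU, so it still needs a real guest reboot later to pick up the new CPUID bits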
Always an option. Depending on how many of those stubborn VMs you have at scale, you may still get to those 60 minutes mentioned.
Clouvider Leading UK Cloud Hosting solution provider || UK Dedicated Servers Sale || Tasty KVM Slices || Latest LET Offer
Web hosting in Cloud | SSD & SAS True Cloud VPS on OnApp | Private Cloud | Dedicated Servers | Colocation | Managed Services
Fun fact: the longest reboot time was MTL-02, at ~35 min.
Gestion DBI | IT consulting | OpenVZ, KVM VPS, Shared Hosting, Dedicated Servers with 24/7 Technical Support
DeepNet Solutions | Cheap and low cost VPS in 9 cities around the world! | Starting at $13CAD by year!
@davidgestiondbi Well, this is an outrage - I could (probably) be done pooping by then!
I won't be back until @bsdguy is released.
uh... ... ...... .. sudo (Holy shit that's half of it) apt-get (common sense part) update (omg that simple?) (hit enter) (only if I bought softlayer or theplanet!!)
Automate server mgmt w/ Runcloud - aff link will give +15 days if you go pro