One IP address corresponds to two different machines at MaxKVM.com
I have a VPS at MaxKVM.com, and I found that the same IP was jumping between two different machines: it connects to one for a while, then to the other for a while.
When I use SSH to connect to the VPS, the terminal outputs error messages like this:
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:jDfaVwFLNxOcaCPr--------.
Please contact your system administrator.
Add correct host key in /Users/name/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /Users/name/.ssh/known_hosts:1
ECDSA host key for [ip]:22 has changed and you have requested strict checking.
Host key verification failed.
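If the key change turns out to be benign (a reinstall, as in this thread, rather than an actual attack), the stale entry can be removed so SSH re-learns the key on the next connection. A sketch using OpenSSH's ssh-keygen and ssh-keyscan; the IP below is a placeholder, not the OP's redacted address:

```shell
# Placeholder IP; substitute your VPS address.
HOST=203.0.113.10
KNOWN_HOSTS=~/.ssh/known_hosts

# Drop the stale entry (ssh-keygen backs the file up as known_hosts.old).
[ -f "$KNOWN_HOSTS" ] && ssh-keygen -R "$HOST" -f "$KNOWN_HOSTS"

# Fetch the ECDSA key the server presents right now and show its SHA256
# fingerprint, to compare against the one in the warning above.
# Prints nothing if the host is unreachable.
ssh-keyscan -T 5 -t ecdsa "$HOST" 2>/dev/null | ssh-keygen -lf - 2>/dev/null || true
```

In the OP's situation this is only half the story: with two machines behind one IP, the fingerprint will keep flipping between two values, which is itself useful evidence when opening a ticket.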
Connecting to the VPS by the same IP, SSH warns me that the machine's host key is wrong, which indicates it is a different machine. After a while, I can connect correctly again.
The differences between files on the VPS also show that there really are two different machines. Fortunately, one holds content I had modified before and the other holds the current content, so I was not connected to someone else's machine.
Comments
Because the swap size changed: the previous instance had 1G of swap, and after reinstalling the system yesterday I adjusted it to 1.4G.
The hard disk size is also different. Looking at a server probe, you can see the swap and hard disk capacity jumping back and forth.
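The swap/disk comparison above can be scripted so each SSH login immediately tells you which copy you landed on. A minimal sketch for a standard Linux VPS (the exact values printed are assumptions; what matters is that they differ between the two copies):

```shell
#!/bin/sh
# Print a few values that should differ between the two copies described
# above: machine ID, swap size, and root filesystem size.
echo "machine-id: $(cat /etc/machine-id 2>/dev/null || echo unknown)"
free -m | awk '/^Swap:/ {print "swap_mb: " $2}'
df -m / | awk 'NR==2 {print "root_mb: " $2}'
```

Per the OP's numbers, one copy would report roughly swap_mb: 1024 and the other roughly swap_mb: 1433, so a single glance identifies the machine.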
Are you connecting using IP or host name?
Sounds like you might have two A records with the same name but different IPs?
I'm using the IP, not a domain name.
Do you own both of the nodes?
No, only one server, one IP, one 25G hard disk. But it seems two different hard drives are attached.
I guess the RAID 10 configuration is wrong, and the data on the mirror disk cannot be synchronized.
How are you able to access both servers without a password or key? The odds of both having exactly the same login credentials are ultra slim.
What did @MaxKVM support say when you opened a ticket?
Based on the description, the other machine he lands on seems to be running an older snapshot of his VM, which could explain why he can access both.
I have no idea what MaxKVM's panel is or how they handle snapshots (or whether they even offer snapshots), but one possibility is that snapshots create a new VM instance, and an old one was left running with exactly the same config minus the disk image.
But then again, this is all speculation until OP gets a reply to the ticket.
Maybe I wasn't clear.
VPS A is what I have been using; the VPS containing the key and the data is called A.
One day I reinstalled the system.
The same VPS, containing the same key but different data, is called B.
Now I use my key to connect to VPS B over SSH. (Of course I can also connect to VPS A, since the public key is the same; they are supposed to be one and the same VPS.)
But sometimes the data on VPS B (new data) turns into the data from VPS A (old data): for a while it is new data, and for a while it is old data. Hence "two different machines".
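One way to make the flip described above visible is to drop a marker file on the copy you intend to keep and check for it on every login. A minimal sketch; the marker path and contents are arbitrary choices, not anything MaxKVM provides:

```shell
#!/bin/sh
# Marker path is an arbitrary choice; any stable location works.
MARKER="$HOME/.vm-marker"

# Run once, on the copy you want to keep:
printf 'keep-this-vm %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" > "$MARKER"

# Then on every subsequent login; if the marker is missing,
# you have landed on the other (old) copy.
if [ -f "$MARKER" ]; then
    echo "marker present: $(cat "$MARKER")"
else
    echo "marker missing: this is probably the old copy"
fi
```

This is essentially what the OP did by accident with the 1G-vs-1.4G swap difference, just made deliberate.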
Nevermind, I just realized that if it were running a snapshot, the SSH host keys should be the same (unless the other VM was running a snapshot taken before a reinstall or some other operation that changed the host keys).
Yes, that's it. I guess the RAID 10 configuration is wrong.
MaxKVM has live migration between physical nodes, so that host kernel updates wouldn't cause VM downtime.
In this incident, it appears that the live migration had an error, somehow leaving two copies of the VM running.
stupid
By that, it looks like the Beta feature bugged out.
Why raise a ticket?
OP can simply keep the good VPS and discontinue the other.
It may be this problem. I hope MaxKVM can fix it as soon as possible.
Has any staff at MaxKVM.com seen this post and fixed the bug? If so, please give a clear reply. Thanks.
There has been NO such problem in recent days. I plan to continue observing for a few more days.
Have you opened a ticket? Please tell me you did and you're not just relying on the random chance that they see this thread.
No ticket has been submitted. Refer to this.
Yes, I'm already aware of that; then how do you expect them to fix it without knowing there's a problem?
Simple.
Problem solved.
No idea why you wouldn't get that idea yourself. Every sane admin would simply shut down the old instance.
fixes are not the option here on LET , DRAMA is all we want
Wow, I really didn't think of this approach before. Just reboot and never power off the old VPS. And I don't know whether the 'Boot' button on the control panel would turn the old VM back on.
Anyway, this problem seems to have been resolved now.
I thought about this solution but didn't recommend it, because it wouldn't solve the root cause in the live migration script, and could leave old instances behind indefinitely.
In case the old instance restarts somehow, it would cause more confusion down the road.
I should remember that the next time someone replies to an email, with it quoted below their reply, and asks a question that's answered at the top of the email they replied to and quoted. "Sir, your login has been deemed a liability due to incompetence."
(This is called a joke, for whoever quotes this as the reason I’m an asshole 3 years from now)
TIL opening a ticket with your provider can be a game of Russian roulette
That’s not entirely true. You’ve known me passively for like 9 years. 😂
I canceled my LUMPOFCOAL with this provider on 2021-04-04.
I wrote the following cancellation reason:
They refunded the last monthly payment, probably because they were happy to get rid of an unprofitable $1.50/mo 1GB service.
I kept the IPv4 monitored on UptimeRobot, and so far it hasn't come back online.
I hope Santa Claus leaves a lump of coal to the support agent on Christmas.
Just gotta catch you on Saturday night after 1 am.
Did MaxKVM dead pool?
Not yet, but if they close a customer's account whenever a ticket is opened, sooner or later there will be no customers left.