How to run 2 servers at the same time
I had a problem: I had a server from Hetzner with 100TB of storage set up as RAID 0 (to get one 100TB volume), and one disk failed, and I lost everything.
So I'm trying to find a solution to keep that from happening again (without using RAID).
I'm thinking I could get 2 servers (same specifications) and connect them in some way (I don't know how to do this) so that both servers are the same in everything (like a mirror), and both run at the same time for visitors (something like a load balancer), so visitor1 connects to server1 and visitor2 connects to server2, etc.
I've heard about vSwitch and failover IP, but I don't know how they work or how to configure them for my case.
Sorry I can't describe it well, but it's my first time dealing with these things. Thanks in advance.
Comments
Your post is very unclear, but you had (somehow) a 100TB disk which disappeared after a reboot? Did you mount it properly?
And your second question is about having the disk mounted on two servers at once, with a floating IP and something like an IP-handover system via heartbeat, for example?
It's a bit confusing; could you be a bit more specific about what information you need?
100TB. That's like 1/4 of @uptime 's porn collection.
Thanks for your reply, and sorry for being unclear.
About the 100TB: it's just background for my problem (I wasn't asking about it).
To make what I need clear:
Get 2 servers with the same specifications and use them for 1 website.
The 2 servers are the same in everything (data), like a mirror (I don't know how to keep them mirrored live).
Both are up and running, and both serve visitors.
So when a visitor goes to my website, he loads data from server1, and another visitor who logs in will load data from server2 (this will reduce the load on both servers).
All I mean by this idea is that I have a live backup of the main server1 on server2, and both servers work for customers at the same time.
I don't know if that's possible or not, so I'm asking the experts here.
I hope I'm clear now.
You have to set up a load balancer to distribute the requests to the two backend servers, like HAProxy. A failover IP alone won't work for HA.
Edit: when you say "data" you mean a database (MySQL, MariaDB, PostgreSQL etc)?
What do you have serving the data? Apache? Nginx? A CMS? NodeJS?
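For illustration, a minimal HAProxy round-robin sketch of what that load balancer could look like; the backend addresses 10.0.0.2/10.0.0.3 are placeholder internal IPs for the two web servers, not anything from this thread:

```
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80
    default_backend webservers

backend webservers
    balance roundrobin              # alternate requests between the two servers
    server web1 10.0.0.2:80 check   # placeholder addresses; 'check' enables health checks
    server web2 10.0.0.3:80 check
```

With `check` enabled, HAProxy stops sending traffic to a backend that fails its health check, which is what gives this setup its partial HA behavior.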
Ah, that's more clear! Let me try to explain a more ideal setup:
1x VM for HAProxy and for a Percona or Galera SQL master
2x VM with Apache or NGINX on it; firewall it so only the HAProxy node and the three VMs can reach each other on ports 80/443. Also run two SQL server instances (Percona or Galera) on the servers running Apache or NGINX.
You can, for example, mount a bigger disk on the single VM with HAProxy and use something like NFS or Ceph to load the data from. This is more a load balancer setup than an HA setup; it adds some HA capabilities, but it's not 100% highly available.
If you want true high availability, the best thing to do is to have two VMs in two separate data centers with HAProxy on them, plus a failover IP which a heartbeat or Pacemaker instance can manage. Then have at minimum two, or even better three, web servers behind the two HAProxy VMs, and a separate SQL cluster on an internal network. There are many options for setting up something like this.
The best way to distribute your data over multiple disks:
If there are only files, no database or anything like that, then rsync and cron will work.
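As a sketch, the rsync-over-cron approach could be a single crontab entry on the mirror server; the hostname, paths, and interval here are examples, not anything prescribed in this thread:

```
# Crontab entry on the mirror server: pull the file directory from server1
# every 5 minutes. -a preserves permissions/timestamps, --delete removes
# files on the mirror that were deleted at the source.
*/5 * * * * rsync -a --delete server1.example.com:/var/www/files/ /var/www/files/
```

Note the trailing slashes: `src/ dst/` syncs the *contents* of the source directory into the destination rather than nesting a copy of the directory inside it.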
What you want is a complex setup. You will have to mirror files and databases separately, use a third server as a load balancer with round robin, etc.
Your knowledge, as it seems from your post, is not only insufficient for something like this, but I'd guess barely enough even to run the first server. You should think about hiring a tech guy to do something like this.
Now, if you insist on doing it yourself, this is the way:
This is the simpler way to do mirroring with load balancing. There are other ways, but they are even more complex (server clustering etc.). But remember: given what your knowledge seems to be, even with round robin (if you manage to set it up, or hire someone to set it up for you), when it fails somewhere, the troubleshooting will be way out of your league...
Consider instead creating an identical mirrored server as a simple hot backup, not accessible to the public, just a ready-to-work server in case of a failure of the original one.
Hire someone to set it up for you so you can focus on your project.
If you prefer DIY and pulling hair, I respect that.
The key lies in how your server is specifically set up. This could be a relatively easy/inexpensive task, or a relatively complicated, time-consuming, and expensive one.
OR, return to the original problem:
And ask Hetzner to install additional drives so you can configure a proper RAID. You won't have HA, but at least you can mitigate the chance of losing everything to a drive failure.
You need to assess whether you really need high availability or not. Is your application mission-critical? Remember that two or more servers in the same data center do not form a reliable HA configuration. Sometimes it is better to adopt a simpler, cheaper RAID solution.
My 2 cents.
A true cloud-based solution should auto-migrate a failed server to another node. Likely the most cost-effective option?
First, thanks to everyone who replied, except shallownorthdakota; it's for hosting files, sir, not what you said.
To be more clear: I just need to mirror data (files only), no need for the database, as it's small and not very important.
My website hosts my own files for download purposes (like Mega, MediaFire, etc.), but it's not for clients. I use these files for another website (the main website uses the links for secure, encrypted downloads).
What's the best suggestion for this? I need to mirror the files on 2 servers and use both at the same time to reduce load while downloading (downloads are not direct links, they're encrypted links).
I really don't care what your "file" is. I was just stating that @uptime 's porn collection was quite impressive.
Lulz ...
A couple of other ways to go for distributed/replicated storage - just to give you some suggestions (ideas) rather than a "recommended" prescription:
Something old: https://www.cis.upenn.edu/~bcpierce/unison/
Something new-ish: https://github.com/minio/minio
+1 I agree. If you can barely manage 1 server, you shouldn't really think about managing multiple servers in such a complicated configuration. You'd be more likely to lose data and mess things up through lack of knowledge and experience with multi-server configuration management than to lose data through hardware failure.
Learn from your mistake: DO NOT use RAID 0 with no disk redundancy, and always have remote external backups. Your efforts are better spent getting a solid understanding of proper backup processes/policies, so that even losing the entire server won't mean you lose your data. Learning this will serve you well no matter the server or site you use in the future!
Just my AUD$0.02
He said he used RAID 0, so it's obvious that he lost all data when a single disk failed.
Is your 100TB of space fully utilized?
I mean, do you really need that much space?
You should use at least RAID 1 for the first server, and get a new one to use as a backup server.
It's the simplest solution.
@ezak
Considering what seems to be your level of expertise, I strongly recommend using either RAID 1 or RAID 5. As you've now learned the hard way, RAID 0 is not for safety but for speed.
As far as I understand the purpose of your server, your plan of two mirrored servers is not a good idea. It will drastically increase both cost and complexity.
If your need is simply to keep your disk space available even if 1 disk breaks/fails, you should simply use RAID 5 or RAID 6, which are an optimal compromise between cost, speed, and availability (in a case like yours).
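To make the trade-off concrete, here is a small shell sketch of the textbook capacity/redundancy formulas for the common RAID levels, assuming N equal-size disks (the function name and the example numbers are mine, chosen to match the 100TB RAID 0 array from the original post):

```shell
#!/bin/sh
# Usable capacity (TB) and tolerated disk failures for common RAID levels,
# given N equal disks of S terabytes each. Simple textbook formulas only.
raid_info() {
    level=$1; n=$2; s=$3
    case $level in
        0) echo "RAID 0: $((n * s))TB usable, tolerates 0 failed disks" ;;
        1) echo "RAID 1: $((s))TB usable, tolerates $((n - 1)) failed disks" ;;
        5) echo "RAID 5: $(((n - 1) * s))TB usable, tolerates 1 failed disk" ;;
        6) echo "RAID 6: $(((n - 2) * s))TB usable, tolerates 2 failed disks" ;;
    esac
}

# e.g. ten 10TB disks, as the original 100TB RAID 0 array might have been:
raid_info 0 10 10
raid_info 5 10 10
raid_info 6 10 10
```

The point of the numbers: RAID 5 costs you one disk of capacity and RAID 6 two, but only RAID 0 turns a single failed disk into total data loss.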
Pointless to have anything important online without RAID; simply use cloud from OVH, since they offer plans like the following, for example:
Cloud HPC cluster for AI:
for « only » 14k$/hour or 5M$/mo
Having both servers active at the same time is a lot more complexity and overhead than you want or need. It will work better to have one active, while the secondary just receives copies of the data and waits for the primary to go down.
To sync the data, rsync running out of crontab every minute could work for you. It's the simplest way, anyhow.
To automatically bring up the secondary and have it take over the traffic when the primary goes down, try Corosync and Pacemaker with a floating IP. That's not simple, but there are lots of guides out there on the internet that will walk you through getting a basic setup working.
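As a hedged sketch of the floating-IP piece: on an already-formed two-node Corosync/Pacemaker cluster, a single `pcs` resource definition tells Pacemaker which address to move to the surviving node. The address, netmask, and interface below are placeholders to adapt:

```
# Define a floating IP resource managed by Pacemaker. When the node holding
# it fails, Pacemaker brings the address up on the other node.
# 203.0.113.10 is a placeholder; adjust ip, cidr_netmask, and nic to yours.
pcs resource create FloatingIP ocf:heartbeat:IPaddr2 \
    ip=203.0.113.10 cidr_netmask=24 nic=eth0 op monitor interval=10s
```

Forming the cluster itself (corosync membership, fencing) is the hard part the guides cover; the resource definition is the easy last step.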
Put two motherboards in one cardboard box add two power connectors and two etherwebs cables (for redundancy) and there you go.
WHAT?! jesus christ, MARIA.
RAID copies the data to the other disks, yes; it does not validate anything.
So if you end up with a single corrupt file, which obviously gets replicated to the second disk, you are done.
There is no validation whatsoever, so the claim that without RAID it's pointless to have anything online is bullshit.