New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
The same as with a normal server. Low end doesn't mean low-end hardware.
Not all SSDs are created equal (or anywhere near it) - do you have a model in mind?
It cannot be predicted. I had a situation where our SSD went bad on the wedding day of the server. In general terms they last longer, as there are no physical moving parts in SSDs. But the write cycles of the flash are limited.
WTF?
anniversary, maybe?
He probably means anniversary.
In my experience, you'll find SSDs to be extremely reliable if they're Samsung, Intel, or Crucial. We use Crucial and Intel SSDs with no problems. I have a Samsung SSD (512 GB) in my laptop, which I just replaced a few months back. It lasted ~3 years, but my laptop can get very hot, so I'm sure that was a factor.
OCZ and PNY are pretty bad - they just randomly die - but I've never had a problem with Intel, Crucial, or Samsung, in both consumer and server environments.
Server provision date. They tested the drive and ran a benchmark for 24 hours, but after 7 hours of actual use it went down. So no time frame can be given.
Date of deployment would work here, for future use.
Forgot that we're in different time zones. It's 12.57 AM here, so I thought "wedding date" would be nice.
In addition to quality, it will also depend on how long the SSD has been in use (especially write cycles). Even the best one will go bad soon if it has been heavily written to. If the controller anticipates this, the drive will just go read-only.
Basically, if you have raid/replication/backup, just hope that your host will replace the drive fast enough if it goes bad. Other than that, it's safe to say that trying to predict that is futile.
Depends on the SSD make/model, how old the SSD is, how it was used before you, etc. - however, in general, IMO SSDs have a very low failure rate.
It's too many writes that kill SSDs, not age. So if you write terabytes of data per day, that will shorten your SSD's life. SSDs also have SMART attributes that show exactly how much has been written to them, and from that you can estimate the remaining life of the drive.
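To illustrate: a sketch of turning the SMART write counter into terabytes written. The attribute name and the 512-byte sector assumption vary by vendor (`Total_LBAs_Written` is common on Intel/Samsung drives but not universal), so treat this as an example calculation, not a universal rule.

```python
# Rough sketch: convert a raw SMART "Total_LBAs_Written" value into
# terabytes written. Assumes 512-byte logical sectors, which is typical
# but vendor-specific -- check the drive's datasheet.

SECTOR_BYTES = 512

def tb_written(total_lbas_written: int) -> float:
    """Terabytes written, given the raw SMART attribute value."""
    return total_lbas_written * SECTOR_BYTES / 1e12

# Example: a raw value of 7,812,500,000 LBAs is 4 TB written.
print(round(tb_written(7_812_500_000), 2))
```

Compare the result against the vendor's rated TBW (terabytes written) figure to see how much endurance is left.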
It'll depend very much on the usage you put it to and how much it was used before, which makes it practically impossible for anyone to estimate how long it'll last once you buy the server - could be 5 days, could be 5 years.
I would very much recommend using at least RAID1 if not RAID10
Sir, we have over 1,000 SSDs in our dedicated servers. Some tips:
1) always check SMART for reallocated sectors
2) never get 64 or 128 GB SSDs - get 256 GB+, even if you don't need it
3) check your storage usage profile with iostat; daily stats will tell you how many GB you write per day. Compare that with the already-written amount from SMART and the typical endurance of a server SSD. Check dmesg for the exact model you have
4) back up daily, and get a server with one SSD and one HDD
5) never use notebook SSDs like the Kingston V100, Samsung EVO and so on. Only the Samsung 840-950 Pro works well, along with other server-grade SSDs like the Intel S3500-S3700 and others
6) use disks separately, without RAID. Back up daily, offsite or to the HDD in the same server
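Tip 3 above can be sketched as a quick calculation: take the model's rated endurance, subtract what SMART says has already been written, and divide by your daily write volume from iostat. The 150 TBW rating below is an illustrative figure, not a real spec for any particular drive.

```python
# Naive linear estimate of how many days of writes remain before a
# drive hits its rated endurance. All inputs are in terabytes; the
# daily figure would come from iostat, the written figure from SMART.

def remaining_days(rated_tbw: float, written_tb: float, daily_write_tb: float) -> float:
    """Days until rated endurance is reached, assuming a steady write rate."""
    if daily_write_tb <= 0:
        raise ValueError("daily write volume must be positive")
    return (rated_tbw - written_tb) / daily_write_tb

# e.g. a drive rated for 150 TBW with 30 TB already written,
# writing 0.05 TB (50 GB) per day:
print(round(remaining_days(150, 30, 0.05)))  # 2400 days, roughly 6.5 years
```

Real wear isn't perfectly linear (write amplification and workload changes matter), so this is only a ballpark, but it's usually enough to tell a healthy drive from one being abused.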
You may find that the older ones are more reliable; the newer TLC drives and such are not really suitable.
TBH, I have a feeling reliability is directly proportional to age in most cases, at least lately. Of course, usage plays a huge role too.
It's largely irrelevant, because you'll be using RAID along with regular backups, right?
Also, as it's a dedi, if it does die then it's the provider's cost/problem to replace it.
Why? Newer controllers do TRIM on SSDs and work perfectly fine in HW RAID 0/1/5/6/10/50/60.
It's just not required. You could mount the disks at different points of the FS tree instead. RAID 5-6 will reduce disk life - every write affects 3-4 disks.
Actually, RAID 1 and any mirroring write more data than any RAID 5.
Using only one disk, though, or "RAID 0", has some advantages, but we use RAID because reading is way faster and most operations are reads. Not to mention the controller's cache, which does help a bit. Unless there are issues such as disk abuse (KVM/Xen out of RAM writing to swap like crazy, for example), reads outnumber writes at a 5:1 ratio on a regular server. If you go below 3:1, you have an abuser most of the time.
If you do not have IO-intensive operations, then one disk backed up at regular intervals might be a solution; likewise, if you have extremely intensive IO operations, cache everything in RAM and dump to disk at regular intervals.
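The "cache in RAM, dump at intervals" idea can be sketched as a small write buffer that only touches the disk periodically, trading some durability (a crash loses unflushed data) for far fewer SSD writes. Class and file names here are illustrative, not from any particular tool.

```python
# Minimal sketch: buffer log lines in memory and flush them to disk
# only every N seconds, so many application writes become one disk write.

import os
import tempfile
import time

class BufferedWriter:
    def __init__(self, path: str, flush_interval: float = 60.0):
        self.path = path
        self.flush_interval = flush_interval
        self.buffer = []                       # lines held in RAM
        self.last_flush = time.monotonic()

    def write(self, line: str) -> None:
        self.buffer.append(line)
        # Only hit the disk if the interval has elapsed.
        if time.monotonic() - self.last_flush >= self.flush_interval:
            self.flush()

    def flush(self) -> None:
        if self.buffer:
            with open(self.path, "a") as f:    # one append for the whole batch
                f.write("\n".join(self.buffer) + "\n")
            self.buffer.clear()
        self.last_flush = time.monotonic()

path = os.path.join(tempfile.gettempdir(), "buffered_demo.log")
w = BufferedWriter(path, flush_interval=60)
for i in range(3):
    w.write(f"event {i}")   # stays in RAM
w.flush()                   # one disk write for all three lines
```

In practice the OS page cache and tools like `rsyslog`'s queueing already do a version of this, but the principle is the same: coalesce writes before they reach the flash.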