Amazon S3 for server backups

drdrake Member

Hello,

I am using Hetzner's entry-level storage plan (100 GB) for some server backups. I am thinking of switching to Amazon S3 because I only want to pay for the resources I actually use.

My setup is this:

I back up /var/www to a tar archive in /mnt/files. I also back up databases using automysqlbackup to /mnt/databases. Files are automatically uploaded to my storage plan via WebDAV.

I am looking for the best way to back up these files to Amazon S3. I am new to S3 and have read a few articles on the subject, but I was wondering if any of you have a similar setup and would like to offer suggestions.

Comments

  • PepeSilvia Member
    edited April 12

    I use the AWS CLI tool in a shell script that's run via cron. The command is really simple: you just give it the name of the file you want to upload and the bucket to upload it to, and it does the rest.
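
    A minimal sketch of that kind of script (the bucket name and backup path are placeholders, and it assumes the AWS CLI is already configured):

    #!/bin/bash
    # upload-backups.sh - called from cron, e.g.:
    #   0 3 * * * /root/upload-backups.sh
    BUCKET="s3://my-backup-bucket"
    # push every archive in the backup directory to the bucket
    for f in /mnt/files/*.tar.gz; do
        aws s3 cp "$f" "$BUCKET/files/"
    done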

    However, you should consider Backblaze or OVH instead. They're far cheaper, and which one is better depends on your use case. They have similar CLI tools, I believe.

    I'd switch, but at my volume the money saved isn't worth the hassle.

  • BharatB Member, Provider
    edited April 12

    Install aws-cli on your Linux machine, configure your access key, secret key, and region, then use the following command:

    aws s3 cp filename.tar.gz s3://bucket-name/

    P.S. If you don't want to retain a local copy (to save space), you can change cp to mv, which will move the file directly to the bucket.
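
    For reference, the one-time key/region setup can be scripted as well (the values below are placeholders):

    aws configure set aws_access_key_id AKIAEXAMPLEKEY
    aws configure set aws_secret_access_key exampleSecretKey
    aws configure set default.region us-east-1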


  • akhfa Member

    Why don't you use AWS Glacier? I haven't used it yet, but I think it's worth a try.

  • @akhfa said: Why don't you use AWS Glacier? I haven't used it yet, but I think it's worth a try.

    You would want to kill yourself the day you need to retrieve any data from Glacier. Not only does it take a while, it costs a lot too.

  • drdrake Member

    @PepeSilvia said: I use the AWS CLI tool in a shell script that's run via cron. [...]

    @BharatB said: Install aws-cli on your Linux machine, configure your access key, secret key, and region, then use the following command:

    aws s3 cp filename.tar.gz s3://bucket-name/

    I forgot to mention: my script deletes all files older than 30 days in the WebDAV mount. I want something similar, with Amazon S3 working just like this scenario. Is that possible?

    I want to:

    1. Mount the storage as a local drive.

    2. Compress and upload the files directly to Amazon S3.

    3. Delete files older than 30 days.

  • BharatB Member, Provider

    I don't see any reason why you would want it as a local drive, and I'm not sure it's possible. As for deleting files: give your backups a naming structure like 2017-04-12.tar.gz, then use your same script and remove files with aws s3 rm s3://bucket-name/filename.tar.gz
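
    A rough sketch of that approach, assuming GNU date and date-named archives (the bucket name is a placeholder):

    # today's backup, named by date (YYYY-MM-DD.tar.gz)
    TODAY=$(date +%F)
    aws s3 cp "/mnt/files/$TODAY.tar.gz" s3://bucket-name/
    # compute the name of the 30-day-old backup and remove it
    OLD=$(date -d "30 days ago" +%F)
    aws s3 rm "s3://bucket-name/$OLD.tar.gz"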


  • drdrake Member

    @BharatB said: [...] give your backups a naming structure like 2017-04-12.tar.gz, then use your same script and remove files with aws s3 rm s3://bucket-name/filename.tar.gz

    How could I automate it?

  • BharatB Member, Provider
    edited April 12

    @drdrake said: How could I automate it?

    Write a bash script and cron it weekly.


  • drdrake Member

    @BharatB said:

    @drdrake said: How could I automate it?

    Write a bash script and cron it weekly.

    How can I tell which files are older than 30 days if the Amazon storage is not mounted?

  • akhfa Member

    @drdrake said: [...] I want to:

    1. Mount the storage as a local drive.

    2. Compress and upload the files directly to Amazon S3.

    3. Delete files older than 30 days.

    For free, you can use s3fs to mount S3. For a paid solution, you can use ObjectiveFS. I use ObjectiveFS in production for my website data.
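
    A minimal s3fs mount would look something like this (bucket name, keys, and mount point are placeholders):

    # s3fs reads credentials in ACCESS_KEY:SECRET_KEY format
    echo "AKIAEXAMPLEKEY:exampleSecretKey" > /etc/passwd-s3fs
    chmod 600 /etc/passwd-s3fs
    mkdir -p /mnt/s3
    s3fs my-backup-bucket /mnt/s3 -o passwd_file=/etc/passwd-s3fs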

  • BharatB Member, Provider

    @drdrake said: How can I tell which files are older than 30 days if the Amazon storage is not mounted?

    That's why you call a list command and iterate through the filenames, which are actually timestamps, lol.
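
    Something along these lines, assuming GNU date and backups named YYYY-MM-DD.tar.gz (the bucket name is a placeholder):

    CUTOFF=$(date -d "30 days ago" +%s)
    # the 4th column of `aws s3 ls` output is the object key
    aws s3 ls s3://bucket-name/ | awk '{print $4}' | while read -r key; do
        fdate=${key%.tar.gz}
        if [ "$(date -d "$fdate" +%s)" -lt "$CUTOFF" ]; then
            aws s3 rm "s3://bucket-name/$key"
        fi
    done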


  • I think you can set up Lifecycle Rules on S3 to have it automatically remove files older than 30 days (or any number of days you configure).

    https://aws.amazon.com/blogs/aws/amazon-s3-object-expiration/
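
    As a sketch, a 30-day expiration rule set from the CLI would look roughly like this (bucket name and rule ID are placeholders):

    aws s3api put-bucket-lifecycle-configuration \
      --bucket my-backup-bucket \
      --lifecycle-configuration '{
        "Rules": [{
          "ID": "expire-old-backups",
          "Status": "Enabled",
          "Filter": {"Prefix": ""},
          "Expiration": {"Days": 30}
        }]
      }'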

  • Let's wait for scaleway sis ;)

  • ihadp Member

    How much are you really going to save by moving to S3?


  • marrco Member

    rclone

  • drdrake Member

    @ihadp said: How much are you really going to save by moving to S3?

    Well, I will use more space and the price will be less than what I am paying. Sounds good to me.

  • I'd recommend Backblaze B2 and their CLI tool. Storage is far, far cheaper.
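
    With their CLI the flow is roughly this (account ID, application key, and bucket name are placeholders):

    b2 authorize-account ACCOUNT_ID APPLICATION_KEY
    b2 upload-file my-backup-bucket /mnt/files/2017-04-12.tar.gz 2017-04-12.tar.gz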

  • spammy Member

    A cheaper way is to create a Minio server (really simple!); then you'll have an S3-compatible interface at 1/10th of the cost of S3!
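
    A minimal sketch of standing one up (the download URL may have changed; see minio.io for the current one):

    wget https://dl.minio.io/server/minio/release/linux-amd64/minio
    chmod +x minio
    # serves an S3-compatible API on port 9000, storing objects under /data
    ./minio server /data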

  • drdrake Member

    @spammy said: A cheaper way is to create a Minio server (really simple!); then you'll have an S3-compatible interface at 1/10th of the cost of S3!

    Where do you host it?

  • drdrake Member

    @lukehebb said: I'd recommend Backblaze B2 and their CLI tool. Storage is far, far cheaper.

    What about operations?

  • drivex Member

    @drdrake said: What about operations?

    I guess there is no such thing on Backblaze. Take a look here, at the bottom: https://www.backblaze.com/b2/cloud-storage-pricing.html

  • @drivex said: I guess there is no such thing on Backblaze. Take a look here, at the bottom: https://www.backblaze.com/b2/cloud-storage-pricing.html

    This is correct - B2 is exceptionally cheap. It's built on the tech they use for their desktop backups. I've been a customer of theirs for a very long time, and I'm very happy :)

  • moonmartin Member
    edited April 12

    Why do cost-sensitive people still consider Amazon S3 for backup? It's way more expensive than other solutions when you run the numbers for anything other than minimal storage. If your storage requirements are minimal, then you can just use Google Drive for free. Are people bad at math?

    Color me confused.

  • eva2000 Member
    edited April 12

    moonmartin said: Why do cost-sensitive people still consider Amazon S3 for backup? It's way more expensive than other solutions when you run the numbers for anything other than minimal storage. If your storage requirements are minimal, then you can just use Google Drive for free. Are people bad at math?

    Color me confused.

    High availability, reliability, and a selection of S3 regions close to your servers. Part of my backup routine sends data to AWS S3; one AWS account has ~1,500+ GB at ~$23/month. AWS IAM also lets me assign a unique IAM user per S3 + server pair for security, and the convenience of managing and revoking IAM user permissions via the central AWS Console is great :)

    Speed is great too; one of my Centmin Mod users reports 100 MB/s backup speeds from OVH BHS to the AWS S3 Canada region (ca-central-1) :)

    @ranaldlys said: I think you can set up Lifecycle Rules on S3 to have it automatically remove files older than 30 days (or any number of days you configure).

    https://aws.amazon.com/blogs/aws/amazon-s3-object-expiration/

    Yup, S3 lifecycle management is what you need. I have it set to 30 days to move from the Standard to the Standard-IA storage class, then 60 or 90 days to move to Glacier, and then XX days for deletion.
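
    As a sketch, that policy expressed as a lifecycle configuration would look roughly like this (the rule ID is a placeholder, and the 365-day expiration stands in for the "XX days"):

    {
      "Rules": [{
        "ID": "tiered-backups",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},
        "Transitions": [
          {"Days": 30, "StorageClass": "STANDARD_IA"},
          {"Days": 90, "StorageClass": "GLACIER"}
        ],
        "Expiration": {"Days": 365}
      }]
    }

    Applied with: aws s3api put-bucket-lifecycle-configuration --bucket my-backup-bucket --lifecycle-configuration file://lifecycle.json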

  • drdrake said: What about operations?

    There's a button at the bottom, "Pricing Organized by API Calls".

  • @eva2000 said: High availability, reliability, and a selection of S3 regions close to your servers. [...]

    All those reasons are about things other than cost sensitivity. I get why enterprise people who want all that stuff, and are willing to pay extra for it, would use S3. If cost is the primary concern, then S3 is more expensive. Also, Google Drive and pretty much every other cloud storage service is highly available, so that's no reason to pick S3.

  • eva2000 Member

    moonmartin said: All those reasons are about things other than cost sensitivity. [...] If cost is the primary concern, then S3 is more expensive.

    Haven't really used Google Drive on the Linux side, besides working on the Centmin Mod rclone addon (https://community.centminmod.com/threads/addons-rclone-sh-client-for-syncing-to-remote-cloud-storage-providers.9299/), so I might look into it. I've been using AWS S3 for years with awscli, s3cmd, and various other Linux command line clients for concurrent data transfers, etc.

    I also find AWS S3's speed nice when you have 16+ geographic server locations and need a backup location close to each server.

  • spammy Member

    @drdrake said: Where do you host it?

    Any Linux machine? https://www.minio.io/

  • drdrake Member

    @spammy said: Any Linux machine? https://www.minio.io/

    Amazon S3 offers more for a better price.

  • moonmartin Member
    edited April 12

    @eva2000 said: Haven't really used Google Drive on the Linux side [...] I've been using AWS S3 for years with awscli, s3cmd, and various other Linux command line clients.

    All available for a price. There are primarily price-sensitive services, and then there are services like Amazon S3. Apples and oranges.

    I keep hearing about how S3 is cheap and gotta wonder if I am in some parallel universe or if these people can't do some basic math.

  • eva2000 Member

    moonmartin said: I keep hearing about how S3 is cheap and gotta wonder if I am in some parallel universe or if these people don't understand basic math.

    Cheap is always relative to the person's perspective and their alternatives based on their needs, i.e. ease of deployment, variety of command line and mounting options, and high availability/reliability, etc.

    AWS S3 Standard-IA storage class is $0.0125/GB so $12.50/TB per month.

  • moonmartin Member
    edited April 12

    @eva2000 said:

    moonmartin said: I keep hearing about how S3 is cheap and gotta wonder if I am in some parallel universe or if these people don't understand basic math.

    Cheap is always relative to the person's perspective and their alternatives based on their needs, i.e. ease of deployment, variety of command line and mounting options, and high availability/reliability, etc.

    AWS S3 Standard-IA storage class is $0.0125/GB so $12.50/TB per month.

    You 'forgot' to include data transfer. Sure, it was just a coincidence. Also, the fine print:

    Standard - Infrequent Access Storage has a minimum billable object size of 128KB. Smaller objects may be stored but will be charged for 128KB of storage. Standard – Infrequent Access Storage is charged for a minimum storage duration of 30 days. Objects that are deleted, overwritten, or transitioned to a different storage class before 30 days will incur the normal usage charge plus a pro-rated request charge for the remainder of the 30 day minimum. Objects stored 30 days or longer will not incur a 30-day minimum request charge.

    A more honest argument should probably use the Standard storage price and also include data transfer.

    Using the Amazon cost calculator, 1TB transfer in per month and 1TB inter-region transfer out is $225.

    With 1TB transfer in and 1TB transfer out using CloudFront, it's $64 per month.

    If you don't transfer that much then of course it is much less, but it's still way more than the cost you are implying - at least double, at a minimum.

  • eva2000 Member
    edited April 12

    Yeah, I know.. I use AWS lifecycle management to migrate AWS S3 storage classes automatically between Standard > Standard-IA and Glacier, with 1500GB at ~US$23/month. Also, AWS S3 is pay-as-you-go, so at some data set sizes alternatives like Google Drive might be more expensive, i.e. if your data set was 2100GB. As far as I can see, my Google Drive only has plan options for 1TB, 2TB, and 10TB. So for 2100GB, I'd be paying for a 10TB account! Hence why the term 'cheap' is relative to one's own alternatives :)

  • BradND Member

    S3 supports automatic retention in the console or CLI. It's literally a few clicks to set up your retention policy.

    Check the docs.

  • WSS Member

    Fuck Bezos.


  • moonmartin Member
    edited April 12

    @eva2000 said: Yeah, I know.. I use AWS lifecycle management to migrate AWS S3 storage classes automatically between Standard > Standard-IA and Glacier, with 1500GB at ~US$23/month. [...] Hence why the term 'cheap' is relative to one's own alternatives :)

    Or just don't use Amazon and their 'pay for every little thing and do everything our way' service that ends up costing much more than you think if you just use a little math.

    Up to you. Now you are saying "relatively". So I guess the goal posts moved. It's not cheap relative to anything. It's one of the more expensive cloud services out there. What it IS good at is making you think it's cheap (relative, absolute, in comparison to, take your pick) assuming you actually believed that $0.0125/GB price you used.

  • willie Member

    Saving money is the last reason to do anything on S3. It will definitely cost more than LET storage for anything of any size. That said, Hetzner's smaller storage plans are overpriced: they get better at 2TB and above.

    For a small amount like 100GB, I'd just get a storage VPS. How much storage do you actually want to use? wishosting.com has a 40GB NAT storage vps for something like 3 euro per YEAR. Maybe you could get a couple of those. Or @pbgben mentioned having KVMs at OVH Gravelines at $1 per 100GB + fees (whatever those are). Serverhub.com, 125GB for $15/year at many locations.

    Or if you want archive storage, I like C14 Intensive which is 0.005 euro/GB/month with free upload/download/delete. The non-intensive alternative is 0.002 euro/GB/month for storage, but upload/download/delete cost 0.01/GB each, so it only wins if you leave the stuff there without touching it for a long time. OVH Cloud Archive is similar except there's no deletion fee, which is good if you want to delete your backups after some specified retention period.

    Again the amounts involved are so piddly that it doesn't seem worth worrying about. This is more interesting if you have many TB to deal with.

  • @WSS said: Fuck Bezos.

    Stay classy, WSS...

    Valuable contributions such as this keep me more and more away from this forum.

  • WSS Member
    edited April 13

    @Weblogics said:

    @WSS said: Fuck Bezos.

    Stay classy, WSS...

    Valuable contributions such as this, keeps me more and more away from this forum.

    Bye, Jeff. Not sure if we can find someone else to post 8 things from their RSS feed in several years, but we'll manage.

  • reetwood Member
    edited April 13

    @WSS said: Fuck Bezos.

    u r vry smart....

    I've found Backblaze (B2) to be awfully slow for uploads. I wouldn't recommend it unless you are really cheap.

    I'll have to give OVH's block storage a try.

    S3 costs are peanuts considering what you get :)

  • eva2000 Member
    edited April 13

    moonmartin said: assuming you actually believed that $0.0125/GB price you used.

    Not believed - it's real. It's right there in my AWS S3 billing reports.

    Cheap will always be a 'relative' term for each person. My criteria are at least 50-100 MB/s transfer speeds for backups across 16+ geographic locations of my servers, and pay-as-you-go billing with secure per-server user access control/revocation from a central console.

    Those criteria rule out slow Kimsufi, capped at 100Mbps, and OVH, capped at 250-500Mbps with at least $105/month extra to go to 1Gbps (https://www.ovh.com/us/dedicated-servers/bandwidth-upgrade.xml), as well as any other dedicated or VPS host below 1Gbps. Cheap IS relative to one's needs!

  • eva2000 Member
    edited April 13

    moonmartin said: Using the Amazon cost calculator, 1TB transfer in per month and 1TB inter-region transfer out is $225. [...]

    AWS S3 out to the internet is US$90/TB for the first 10TB to US destinations (https://aws.amazon.com/s3/pricing/), and as I said, 'cheap' is relative, as I don't transfer out much - it's storage and transfer speeds that matter, plus the variety of SSH/command line clients and integrations available for the storage layer, for my specific needs.

    Also look at these per-region egress traffic examples:

    1TB egress out

    to US region

    • AWS S3 = US$90/TB for 1st 10TB
    • Google Cloud = US$120/TB for 1st 10TB

    to Sydney region

    • AWS S3 = US$140/TB for 1st 10TB
    • Google Cloud = US$190/TB for 1st 10TB

    to Singapore region

    • AWS S3 = US$120/TB for 1st 10TB
    • Google Cloud = US$120/TB for 1st 10TB

    to Hong Kong region

    • AWS S3 = US$120/TB for 1st 10TB connecting to Singapore
    • Google Cloud = US$230/TB for 1st 10TB

    So 1.5TB of data egress out to the US + Sydney + Singapore + Hong Kong regions would cost:

    • AWS S3 = 1.5x (90+140+120+120) = US$705
    • Google Cloud = 1.5x (120+190+120+230) = US$990

    Again 'cheap' is relative :)

  • godong Member

    @drdrake: AWS S3 has versioning and lifecycle rules, which I think are much faster, consume fewer resources (and are also cheaper) than uploading full compressed backups on a daily basis. I've been using rclone (https://rclone.org/s3/) to do a daily sync from the server to S3, with versioning and lifecycle rules enabled in S3 to keep and remove older versions of the files. For point-in-time recovery/restore from the backup files, you can use this: https://github.com/madisoft/s3-pit-restore. For me, this is incremental backup on steroids ;).
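
    A sketch of that daily sync ("s3" is a remote configured beforehand via rclone config; the bucket name is a placeholder):

    # mirror the web root into the bucket; S3 versioning keeps the history
    rclone sync /var/www s3:my-backup-bucket/www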

  • drdrake Member

    @godong said: AWS S3 has versioning and lifecycle rules [...] I've been using rclone (https://rclone.org/s3/) to do a daily sync from the server to S3, with versioning and lifecycle rules enabled to keep and remove older versions of the files.

    I cannot seem to find the option to remove files older than 30 days. Any help?

  • eva2000 Member

    drdrake said: I cannot seem to find the option to remove files older than 30 days. Any help?

    http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html

  • xyz Member
    edited April 13

    eva2000 said: Cheap IS relative to ones needs!

    No, you're just redefining 'cheap' to mean whatever you want it to in your reality distortion field.

    Yes, if you cherry-pick features that only Amazon can provide, or have extremely niche requirements (unlikely), or are just aboard the Amazon hype train (likely), then Amazon will be the best solution for you. But for 99.99% of others, Amazon isn't cheap by any stretch of the (sane) imagination.

    I get that people like to justify extravagant purchases by convincing themselves that they "need" 500+Mbps bandwidth for backups, or "need" a 4-server cluster to run a simple WordPress blog with 10 visits a day, or whatever gives them a hard-on, and then pride themselves on how cheaply they can meet their requirements. But I think the LET mindset is generally able to think more minimalistically and separate the needs from the nice-to-haves. By which token, Amazon is not cheap.

  • Damian Member

    +1 times a million for Backblaze B2. Last October I sat down with them to discuss managing their new Phoenix datacenter and got a lot of insight into their operation. B2 is cheaper, to boot.

  • JTR Member

    @reetwood said: I've found Backblaze (B2) to be awfully slow for uploads. I wouldn't recommend it unless you are really cheap.

    At work we recently started using it to back up the office server, and it's quite speedy for uploads - we have a shared gigabit fiber connection, and B2 was able to burst to full line speed at times, although it typically fluctuated across a fairly wide range (I'd say typical upload speed was 300-800 Mbps). It wasn't even particularly optimized for performance in the settings; if the connection thread count had been set higher, it would have been faster (we ran it at two different thread count settings and observed a significant upload performance improvement with the higher setting).

    At home I use Backblaze on my personal computer and it easily saturates my line's upload. I'm quite happy with it.

    When we were discussing backup options prior to choosing B2, our #1 alternative pick was Hetzner. I don't think we had a firm #2 option though... Our requirements ruled out a number of other candidates that would have been fine for personal use or a smaller amount of data.

  • eva2000 Member

    xyz said: Yes, if you cherry-pick features that only Amazon can provide, or have extremely niche requirements (unlikely), or are just aboard the Amazon hype train (likely), then Amazon will be the best solution for you. But for 99.99% of others, Amazon isn't cheap by any stretch of the (sane) imagination.

    You're still defining 'cheap' relatively, even by the 'LET mindset', as that mindset is just another set of lower, minimalist criteria as opposed to my specific criteria. If you don't need all the high-end features that AWS offers, then yes, alternatives relative to your needs are cheaper.

  • eva2000 Member
    edited April 13

    Damian said: +1 times a million for Backblaze B2. [...] B2 is cheaper, to boot.

    Do they still only have one datacenter, in the USA? The last time I evaluated B2 it was very slow for Asian and Australian transfers.
