Comments
Someone's getting coal for Christmas.
A slab, you say?
I could see using that for a nice self-contained WebDAV or OwnCloud store...
Just asking: RAID 1 or 10?
Also, will it be available in both your DCs?
Haven't decided which RAID level; that's part of what the testing will be for. I'd like to go with RAID 6 or similar, with a lot of cache to help smooth it out.
Francisco
Nice. One more question: are the disks attachable? E.g. can they be migrated from VM to VM when required?
Yep, via Stallion or the API.
Francisco
Not sure if it's been asked (did a quick read). Are the blocks expandable or a fixed size? Would be great if I could expand an existing block on-demand, but don't know if the logistics would be a problem on the back-end.
I guess you would have to do a backup of your backup then...
Growing is possible, shrinking isn't.
Francisco
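Growing a volume usually means the provider enlarges the block device and you then expand the filesystem on it. A minimal sketch of that flow, simulated on a file-backed ext4 image so it runs without root (on a real slab you'd point resize2fs at the attached device instead of an image file; tool names here are standard e2fsprogs, but the exact workflow on this platform is an assumption):

```shell
# Simulate a volume grow: a file-backed ext4 image stands in for the block device.
IMG=$(mktemp)
truncate -s 64M "$IMG"        # "volume" starts at 64M
mkfs.ext4 -q -F "$IMG"        # format it
truncate -s 128M "$IMG"       # provider grows the underlying volume to 128M
e2fsck -f -y "$IMG" >/dev/null   # resize2fs wants a recently checked filesystem
resize2fs "$IMG"              # expand the filesystem to fill the new size
```

On a mounted ext4 filesystem the same `resize2fs` call can grow it online; only shrinking requires an unmount, which is moot here since shrinking isn't offered.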
Though presumably one could allocate a smaller volume, copy over, and cancel the first.
What's the billing cycle btw? Monthly?
It'll be billed monthly.
I don't think I'd want to try to keep up with hourly-billed 100TB volumes and things like that; it'd be too costly for a non-VC group.
Francisco
Can this scale to, say, 1.5PB & sustain 4-7Gbps constant write?
The initial deployment is around 1.5PB, and we can plug in more nodes as demand shows. As for write performance, each node should have no problem sustaining 2-3 GB/s of reads/writes. The big issue is always going to be IOPS on random workloads, but given what my current storage nodes are showing, we're smack dab around a 30/70 write/read split.
This platform isn't based on ceph, rather it's something we're putting together ourselves. A lot of our work so far has been on paper and going off past experiences on our current storage product and other things like that.
Ideally the beta will get a lot of disgustingly abusive customers that will really push the platform and show us what a single pod can handle.
The beta will likely last around 2 - 3 months. By the end of it, it'll either be a total failure and one of my staffers is getting a new Linux ISO box, or it'll work well and we try to take over the world!
Francisco
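For anyone curious where their own workload sits on that write/read split, a fio job file along these lines reproduces a 70% read / 30% write random mix at 4k blocks (the filename and sizes are hypothetical; point it at a scratch file on the volume you want to test, never at a raw device with data on it):

```ini
; Hypothetical fio job: 70/30 random read/write mix, 4k blocks, 60s run.
[global]
ioengine=libaio
direct=1
bs=4k
size=1g
runtime=60
time_based=1

[randmix]
rw=randrw
rwmixread=70
iodepth=16
; assumed mount point for the attached slab
filename=/mnt/slab/fio.test
```

Run it with `fio randmix.fio` and compare the reported read and write IOPS against the 100 IOPS figure being debated above.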
This sounds great for my purposes (cold archives) and I'm up for it, probably about 2TB to start. LV is my preferred location.
You do realize what will happen to your outbound bandwidth bills once most of LET launches $3.50 slices as seedboxes backed by these things. 100 IOPS will probably also draw some complaints, the way time4vps does now.
I'd also like to see some SSD block storage if that's of any interest. DO has that for $0.10/GB, and Online.net (RPN SAN storage) has it for 30 euro/256GB in high availability (replicated across 2 data centers), so about 7 cents per GB for a single copy.
Yes I think raid 6 or a Ceph cluster is the way to go, since multi-drive failures are a thing and will become more of one.
It would be great to have some serious compute power near the storage too, either in the form of a cheap dedi somewhere in Fiberhub, or if you ever break down and do shorter term rentals (daily or 48h or weekly would be fine if hourly is too "cloudy") I could see occasionally wanting to spin up a max-sized slice for a big compute task.
We have fully unmetered ports on our network so that's not a big concern.
As for the 100 IOPS, isn't time4vps a lot lower than that? With slower CPUs and things like that as well?
Remember, the storage is plugged into a Slice, so you've got all the CPU power you want to buy.
As I mentioned, I'm not sure I'd want to try to handle hourly billing. It can end up with me having to spin up mountains of gear that isn't making me money 24/7, like I'd want.
At this point we're looking to pick a few winners we want to put our time & effort behind and focus on that. If we end up doing block storage, slices, hourly billing, and who knows what else, we'll be blowing cash out the door like mad and have a hard time grasping what's really moving and what's lagging behind.
Francisco
I think time4vps claims to supply 200 IOPS. I don't know if they live up to it, though.
Yes I understand that slices have unmetered bw, but you can do that while keeping speed tolerable because not that many people are hosing the port (I hope). All I'm saying is that storage slabs will probably change the demand profile.
Hourly billing for storage slabs would be crazy, but for CPU it would be very useful if the economics can work out.
The issue with using slices for CPU tasks is that I can't afford to keep a 32GB slice idle most of the time just for the occasional 24h burst of computation (that's basically how I use my Hetzner dedi). I'm happy to have an idle 1GB slice (I can afford that), but it would be great to be able to spin up a bunch more cores once in a while. Again, yeah, I take your point that you'd need a ton more gear for that, but a guy can dream...
We'll keep an eye on it. We do reserve the right to rate-limit people to 100mbit, and that's more likely to happen with the Slabs.
I dunno, I don't care what time4vps or scamsolutions is doing with their storage. I have plenty of people that do tons of work on their slices and don't come near 100 IOPS, so I don't see it being too big of an issue for most.
We're in uncharted waters with this. All I can hope is that the research & work we've done so far goes like we're wanting.
Francisco
What locations?
Vegas to start, with the others depending on how that goes.
Francisco
If that comes to your Lux location I will be on it like a tramp on chips, especially as I am just sorting offsite backups (internal backups sorted, need that extra layer of 'safe'!)
That aside, any plans to have high-disk-space slices now that this is coming? Only asking as I am looking at setting up an ES node that may need 10TB or more of storage. Currently using GC and DO, but volumes at that scale are hurting the pocket!
Nothing in Europe?
Do you need 10TB of SSD?
Francisco
Will take this to PM to keep it out of your thread
This doesn't particularly sound like a 'feedback' thread but seems like a subtle way of advertising.
For a product he has said several times will be months away? And in any case, he would be welcome to advertise it here anyway.
I think the price point is more than competitive with other available storage options on the market, but the tipping point for me is that it comes from a reputable vendor that I already do business with.
The only thing I'm not too enthusiastic about is that I only have OVZ service from BuyVM so I'd have to switch over to a slice in order to get access to this product line. For me that potentially means having to change IPs on a running mailserver. Maybe not since my existing service is also in LV.
I've moved IPs for people many times; that's not a concern.
Francisco
I wouldn't need to market in this way. I have no problem coming on here to hawk my wares. How else do you think the "Get a slice" slogan came about?
Anyway, I'm just getting some public input on what the popular sizes may be so I can plan accordingly. I can't really use my own stats for this since our storage nodes have far less space.
Francisco
This plan sounds very interesting to me, thank you. However, looking at your KVM plans, they don't seem suitable for a media streaming server, especially with such a small number of CPU cores (I'm pointing this out because you were pitching this plan for a Plex server). I'd want 1-2 dedicated cores so as not to upset my neighbors with encoding and other activity. 4TB RAID 6 + 2 CPU cores for $50 is of course a great deal, but it's not too difficult to find a better dedi deal in this price range, especially for media streaming. Do you have any plans for a more CPU-intensive VPS? Sorry for lowballing. I'd still be very interested in this plan for offsite backup.