
Sizing for storage VPS?

Comments

  • VPNsh Member, Host Rep

    @pubcrawler said: $15 a month for 500GB?

    Yeah that will sell. Ask BuyVM. Permanently out of stock.

    Difference being BuyVM's is RAID-protected.

  • @pubcrawler said: $15 a month for 500GB?

    Yeah that will sell. Ask BuyVM. Permanently out of stock.

    BuyVM's 500GB for $15 is on a RAID50 array though, not the non-RAID @liamwithers was speaking of.

  • Oh doh!

    So better drop that $15 price then :) BuyVM wins, they just don't have stock.

  • VPNsh Member, Host Rep

    @pubcrawler said: So better drop that $15 price then :) BuyVM wins, just don't have stock.

    Which is why I'm asking here. For non-RAID storage of 500GB, what would you consider to be reasonable pricing?

  • $10 a month would be great. $15 would be acceptable obviously.

    The $15 price point might only exist with BuyVM.

    Perhaps one competitor exists at that price point.

  • joepie91 Member, Patron Provider

    @liamwithers said: Which is why I'm asking here. For non-RAID storage of 500GB, what would you consider to be reasonable pricing?

    I'd be more than okay with $8-$10/month for 500GB non-RAID. With the caveat that it should be possible to install custom software for utilizing said storage (like Tahoe-LAFS), so not just limited to rsync :P

  • FUSE support would be an excellent feature for any storage VPS. I'm unsure how widely FUSE is enabled by VPS providers.

  • Personally, I prefer a plan with about 25-50GB storage, and of course a good network. Bandwidth needs to be, umm, maybe more than 1TB. And RAM could be dropped down; 48 or 64MB with about the same burst should be enough.

  • Non-RAID storage would lower the initial cost point a bit, since a $450 RAID card wouldn't be needed. Could also start with one hard drive and add more as time goes on, extending the array with LVM.
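
    A rough sketch of that growth path with plain LVM (the device, VG, and LV names here are made up for illustration):

    pvcreate /dev/sdb                          # first data disk becomes a physical volume
    vgcreate vg_storage /dev/sdb               # the volume group starts with one disk
    lvcreate -L 400G -n lv_data vg_storage     # carve out a logical volume
    mkfs.ext4 /dev/vg_storage/lv_data

    pvcreate /dev/sdc                          # later: another bare disk arrives
    vgextend vg_storage /dev/sdc               # the pool grows, no RAID involved
    lvextend -L +400G /dev/vg_storage/lv_data
    resize2fs /dev/vg_storage/lv_data          # ext4 can grow online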

  • @joepie91 said: I'd be more than okay with $8-$10/month for 500GB non-RAID. With the caveat that it should be possible to install custom software for utilizing said storage (like Tahoe-LAFS), so not just limited to rsync :P

    Ditto - @damian - non-RAID and tahoe-lafs storage server software installed (heck, happy if you install it and provide that as a service as it's non-sensitive)

    I'd definitely buy.

  • rds100 Member
    edited October 2012

    Well, normally you would want at least RAID1 for the volume where the node filesystem and the virtual private servers' root file systems live. Software RAID1 would be enough for this.
    Then you can add/mount second volumes to the VPSes, this time on HDDs without RAID.

  • @rds100 said: Well, normally you would want at least RAID1 for the volume where the node filesystem and the virtual private servers' root file systems live. Software RAID1 would be enough for this.

    Then you can add/mount second volumes to the VPSes, this time on HDDs without RAID.

    Yeah, that's what I was planning. Could get a cheapy BR10 RAID card or something similar and put the HN filesystem on that, then link in storage from the LVM's mountpoint to everyone's VPS.
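
    As a rough sketch of that split (all device names hypothetical): software RAID1 for the node and container root filesystems, with bare disks feeding the storage pool.

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # mirrored system volume
    mkfs.ext4 /dev/md0                           # node root / container roots live here

    pvcreate /dev/sdc /dev/sdd                   # the bulk disks stay un-mirrored
    vgcreate vg_bulk /dev/sdc /dev/sdd           # and simply feed the storage VG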

  • I'm all over the non-RAID protected storage! Like some others, I only use it as a mirror for an already redundant solution I have setup.

    Since it's used as an incremental backup mirror for me, I don't need gobs and gobs of bandwidth. A couple dozen GB per month will suit me fine after an initial push.

    By the same token, I don't need super fast incoming data rates. I don't need to push the incremental backup fast. However, I do want reasonably fast outgoing data rates in the event I need to restore from it.

  • joepie91 Member, Patron Provider

    @pubcrawler said: FUSE support would be an excellent feature for any storage VPS. I'm unsure how widely FUSE is enabled by VPS providers.

    As far as I know, it's even possible on OpenVZ by loading the appropriate kernel module. Then again, I have not ever done anything with this in practice, so I may very well be dead wrong :)

  • Damian Member
    edited October 2012

    @rajprakash said: By the same token, I don't need super fast incoming data rates. I don't need to push the incremental backup fast. However, I do want reasonably fast outgoing data rates in the event I need to restore from it.

    My original plan was going to be your choice of either a specific allocated transfer amount per month, or unmetered incoming/10Mbit outgoing, or something like that.

    In the event of needing all your data (catastrophic system failure, you're leaving our service, etc), open a support ticket and we'll un-restrict your transfer speed/monthly allocation. We're not going to unnecessarily delay you restoring your data, and we're not going to hold it hostage either.

    Keeping track of these requests in support tickets will allow us to determine if an individual is abusing this; if the user habitually needs their VPS un-restricted every week, we won't be able to support them further.

    @joepie91 said: As far as I know, it's even possible on OpenVZ by loading the appropriate kernel module. Then again, I have not ever done anything with this in practice, so I may very well be dead wrong :)

    FUSE is pretty simple, it's set up a lot like TUN/TAP: load a kernel module, set the container to be able to use it, and that's about it. The rest of the config is handled by the user in their container.

    http://wiki.openvz.org/FUSE
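
    Roughly what that page describes, if I remember it right (container ID 101 is just an example):

    modprobe fuse                              # load the FUSE module on the hardware node
    vzctl set 101 --devnodes fuse:rw --save    # expose /dev/fuse to container 101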

  • What chassis would you put all of these disks in? Some sort of 4U chassis and all 2.5" drives?

  • Francisco Top Host, Host Rep, Veteran

    You're missing another point about BuyVM storage plans.

    They include Windows and are KVM-based, so they're used fully as VMs, not just 'backups only'.

    Our plans aren't perm out of stock. We have something like 150TB - 200TB worth of storage showing up in the coming few months.

    We're just working out power and such.

    Francisco

  • @pubcrawler said: I'd offer a slew of different sizes:

    25GB, 50GB, 100GB, 250GB, 500GB, 1TB

    Agreed. I would add 1.5TB and 2TB, if possible. The more choice you give people, the better, I think.

    @Corey said: What chassis would you put all of these disks in? Some sort of 4U chassis and all 2.5" drives?

    I'd probably do a 6 or 8-drive 3.5" 2U chassis of some sort. An un-striped LVM volume will lose whatever data is on that specific drive when the drive dies, so I'd rather keep the number of eggs in a basket to a minimum.

  • Francisco Top Host, Host Rep, Veteran

    @Damian said: An un-striped LVM volume will lose whatever data is on that specific drive when the drive dies,

    Not quite.

    It'll lose hunks of the files that pass over the boundaries.

    If you're using KVM for people, you'll end up with fubar'd VMs on both edges, as well as the contents of the initial drive.

    Francisco

  • You could of course just create separate VGs per each HDD. You don't have the benefit of a big storage pool this way and probably end up with unused bits of space, but at least the consequences of a failed disk are more limited this way.
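
    A minimal sketch of that layout (device and VG names are hypothetical):

    pvcreate /dev/sdb /dev/sdc
    vgcreate vg_disk1 /dev/sdb     # each VG is backed by exactly one disk,
    vgcreate vg_disk2 /dev/sdc     # so losing /dev/sdc only takes vg_disk2's volumes with it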

    @Francisco said: It'll lose hunks of the files that pass over the boundaries.

    Yes, which would indeed mean that those files are probably lost, since ext4 will mark holey files as bad.

    @Francisco said: If you're using KVM for people, you'll end up with fubar'd VMs on both edges, as well as the contents of the initial drive.

    I was planning on using OVZ, with an additional bind mount to a directory for the user on the LVM mountpoint, while keeping the container's root directory on the / partition. Although using KVM might be better, since I can arbitrarily mount things directly then. Hmmm....

  • @rds100 said: You could of course just create separate VGs per each HDD. You don't have the benefit of a big storage pool this way and probably end up with unused bits of space, but at least the consequences of a failed disk are more limited this way.

    Indeed, and ensuring the disks get filled could be implemented using stock control too.

  • Francisco Top Host, Host Rep, Veteran

    @rds100 said: You could of course just create separate VGs per each HDD.

    Not in Solus, anyway.

    If you used, say, Proxmox, you could do that, but you'd have to manually provision storage & the VM.

    Francisco

  • Damian Member
    edited October 2012

    @Francisco said: Not in Solus, anyway.

    If you used, say, Proxmox, you could do that, but you'd have to manually provision storage & the VM.

    I think he means to have each drive be its own volume group. As in:

    volume group #1/drive #1 = /lvm/disk001
    volume group #2/drive #2 = /lvm/disk002

    etc.

    If I did that, I'd end up having to manually manage each new container and what disk it should go on.

    At this price point, it will need to be automated, which can be effected with a monolithic VG by putting the bind mount in the container's config and having it auto-setup that way. I'm not sure if I can effect the same process with multiple volume groups easily.
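
    One way that auto-setup could work with a monolithic VG is an OpenVZ per-container mount script; this is only a rough sketch, with the container ID and paths made up for illustration:

    #!/bin/bash
    # hypothetical /etc/vz/conf/101.mount -- vzctl runs this when CT 101 starts
    . /etc/vz/vz.conf        # global defaults
    . ${VE_CONFFILE}         # per-container config; together these define VE_ROOT
    mkdir -p ${VE_ROOT}/storage
    mount -n --bind /lvm/storage/101 ${VE_ROOT}/storage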

    @Damian said: I'd probably do a 6 or 8-drive 3.5" 2U chassis of some sort. An un-striped LVM volume will lose whatever data is on that specific drive when the drive dies, so I'd rather keep the number of eggs in a basket to a minimum.

    I guess I don't understand what you're doing here... you are trying to do this without RAID?

  • Francisco Top Host, Host Rep, Veteran

    @Damian said: volume group #1/drive #1 = /lvm/disk001

    volume group #2/drive #2 = /lvm/disk002

    You're mixing up your techs here.

    A volume group is a container that holds all of your physical volumes (PVs). Logical volumes (LVs) are then slabs cut out of a VG.

    If you want 'true separation' of things you'd need to use dedicated VGs with only the correct PVs bound to them. Solus doesn't support this and probably never will, since it's a very screwy way of doing things.

    You could use a single VG and then carve LVs carefully using extent counts instead of MB and mark them so things don't overlap, but then you're just blowing a crapload of time on all of this.

    Francisco
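
    (For illustration, a rough sketch of how LVs can be kept on specific disks inside a single VG: lvcreate accepts a size in extents with -l and a list of PVs it may allocate from. Device and VG names are hypothetical.)

    vgcreate vg_storage /dev/sdb /dev/sdc
    lvcreate -l 100%PVS -n lv_disk1 vg_storage /dev/sdb   # this LV only uses extents on /dev/sdb
    lvcreate -l 100%PVS -n lv_disk2 vg_storage /dev/sdc   # this one only uses /dev/sdc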

  • @Francisco you are talking about KVM VPSes, we are talking about OpenVZ where you just mount a separate additional filesystem inside each container - at /vz/root/XXX/storage for instance.
    The separate file system could be a partition, a logical volume, a whole disk, whatever.

  • @Corey said: I guess I don't understand what you're doing here.... you are trying to do this without raid?

    Yes. RAID storage is already in the works, and will be sold. Multiple people have commented in this thread that they do not need RAID storage and do not need its associated price premium, so I'm looking into accommodating those individuals, because I like money. Read from http://www.lowendtalk.com/discussion/comment/145110#Comment_145110 onward.

    @Francisco said: You're mixing up your techs here.

    Yes, thank you for pointing that out and setting me straight. It's been a hell of a day and it's only 1 pm :)

  • Francisco Top Host, Host Rep, Veteran

    @rds100 said: at /vz/root/XXX/storage for instance.

    The separate file system could be a partition, a logical volume, a whole disk, whatever.

    That's an even screwier way of doing it but OK :).

    I hope he plans to only ever sell whole single drives. Quotas only apply per drive, and you can't, say, bind two folders out of a single device and still have each one's space limited.

    If he did something like:

    mount /dev/sdg1 /mnt/moarstorage                          # one big data disk
    mkdir /mnt/moarstorage/usr1 /mnt/moarstorage/usr2         # a folder per customer
    mount -o bind /mnt/moarstorage/usr1 /vz/root/100/storage  # expose them inside
    mount -o bind /mnt/moarstorage/usr2 /vz/root/101/storage  # two different containers

    There won't be any quotas, and users will be able to write to the full boundaries of the drive.

    Francisco
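
    (For illustration: that quota problem goes away if each customer gets their own logical volume and filesystem rather than a folder on a shared one, along the lines rds100 described. A rough sketch, with names made up:)

    lvcreate -L 500G -n ct100_storage vg_bulk                  # one LV per customer
    mkfs.ext4 /dev/vg_bulk/ct100_storage
    mount /dev/vg_bulk/ct100_storage /vz/root/100/storage      # the filesystem size itself is the cap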
