BuyVM Block Storage Slabs! $1.25/mo for 256GB! $5.00/mo for 1TB! CN2 GIA also available!


Comments

  • FHR Member, Host Rep

    @techhelper1 said:

    willie said: Slabs are a higher end product with raid 10 and lots of caching and block device interface etc.

    RAID 10 is required for redundancy and speed, what else would you expect?

    It's not. There are different methods as well.

    For media streaming and backups, do you really think NVMe and 10TB+ worth of caching on the front end is required? I certainly don't. I could maybe see 4TB being used, but for purposes other than the ones you have mentioned.

    Yes. If 50 people stream the same file, the data would theoretically be fetched into the cache, and read from the slow disk storage only once.

    Great for media servers, download sites, torrents etc…

    I have personally done tests with ZFS where I can have 20 spinning disks and only have the ARC and L2ARC PCI-E caching. I don't see a need for a big cache in write events when the data is regularly flushed out to the disks every 5 seconds or so (which is about 50GB of data with a redundant 40Gbit setup).

    This is most likely not a ZFS setup - so caching algorithms will work totally differently.

    What does it change if it's a block device or accessible via the network connection? If you're running a media server over the Internet, you're still capped at the 1G speed or whatever is available on the server's bridge.

    It makes a huge difference, mainly in latency. Infiniband is superior to Ethernet, even if on paper they may have the same maximum speed.
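
    For context on the ARC/L2ARC setup described above: attaching a fast device as a read cache to an existing ZFS pool is a one-liner. A minimal sketch - the pool and device names here are placeholders, not anything from this thread:

        # Attach an NVMe device as L2ARC read cache to an existing pool
        zpool add tank cache /dev/nvme0n1
        # Watch the cache device warm up and start serving reads
        zpool iostat -v tank 5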

  • willie Member
    edited November 2018

    techhelper1 said: RAID 10 is required for redundancy and speed, what else would you expect?

    For basic cheap backup, raid-6 (like Hetzner Storagebox) or even no raid at all (like the now-scarce SYS ARM storage servers). What I'm inferring from slabs' use of raid-10 etc. is that they are built as a more versatile product than that. BuyVM's users have all kinds of applications in mind besides backup, so this is great for them.

  • letbox Member, Patron Provider
    edited November 2018

    @techhelper1 said:

    willie said: […]

    What does it change if it's a block device or accessible via the network connection? If you're running a media server over the Internet, you're still capped at the 1G speed or whatever is available on the server's bridge.

    You are wrong on that part - we get something like 21.98 GBytes in iperf tests on Block Storage, and since Fran is using 40Gbit Infiniband he should get something like 11 to 20 GBytes in iperf, depending on what kind of card it is.

  • Is there any way to get additional cores without the memory and other things? GlusterFS really likes threads. Do the servers support private networking? I'd like to separate that traffic from my external network.

  • @Francisco said:

    @varunchopra said:
    Great for setting up object storage!

    Yep :) No problem shoving minio on it and enjoying.

    Francisco

    So we can use it as a regular filesystem, right?
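
    On the MinIO suggestion quoted above: a minimal sketch of running it against a slab, assuming the volume is already formatted and mounted at /mnt/slab. The path and port are placeholders, and the download URL is MinIO's standard Linux amd64 build:

        # Grab the MinIO server binary and serve a directory on the mounted slab
        wget https://dl.min.io/server/minio/release/linux-amd64/minio -O /usr/local/bin/minio
        chmod +x /usr/local/bin/minio
        mkdir -p /mnt/slab/minio
        /usr/local/bin/minio server /mnt/slab/minio --address :9000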

  • Harambe Member, Host Rep
    edited November 2018

    @jlay said:
    Is there any way to get additional cores without the memory and other things? GlusterFS really likes threads. Do the servers support private networking? I'd like to separate that traffic from my external network.

    Don't think it's possible. Would love a 4GB plan with 50% of 2 cores, but I think the plans are pretty set in stone.

  • @ferri said:

    So we can use it as a regular filesystem, right?

    Correct. I have mine formatted as ext4 then set to mount on boot.
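
    A minimal sketch of that ext4-format-and-mount-on-boot step, assuming the slab shows up as /dev/sdb - the device name and mount point are placeholders, so check lsblk or the /dev/disk/by-id/ symlinks on your own VM first:

        # Format the slab and have it mounted automatically on boot
        mkfs.ext4 /dev/sdb
        mkdir -p /mnt/slab
        # (nofail keeps the VM bootable if the volume ever goes missing)
        echo '/dev/sdb  /mnt/slab  ext4  defaults,nofail  0  2' >> /etc/fstab
        mount /mnt/slab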

  • jlay Member
    edited November 2018

    Just ordered three VMs with nine slabs, going to do raid-z across three drives on three servers running GlusterFS, excited to see how it turns out :smile: Might even try an LVM stripe instead of RAID for kicks!

  • Francisco Top Host, Host Rep, Veteran

    @jlay said:
    Just ordered three VMs with nine slabs, going to do raid-z across three drives on three servers running GlusterFS, excited to see how it turns out :smile:

    You'll be limited to 1gbit or so since that's the network port.

    As mentioned I'm aiming to get the private lan over the infiniband fabric but will need some extra work before that's possible.

    Francisco

  • jlay Member
    edited November 2018

    @Francisco said:

    @jlay said:
    Just ordered three VMs with nine slabs, going to do raid-z across three drives on three servers running GlusterFS, excited to see how it turns out :smile:

    You'll be limited to 1gbit or so since that's the network port.

    As mentioned I'm aiming to get the private lan over the infiniband fabric but will need some extra work before that's possible.

    Francisco

    Ah, gotcha! No worries. Is it a separate gigabit allocation on another interface? If so, that'll be perfectly fine (basically what I run everywhere else).

    Cool to hear that's being looked into! We've done something similar where I work (not RDMA, something proprietary) but I understand how involved it can prove to be.

  • @Francisco

    Are you having trouble with orders and/or activating the slabs? I ordered another slab last night and it has been stuck in pending for ~12 hours now.

  • desperand Member
    edited November 2018

    key900 said: You are wrong on that part - we get something like 21.98 GBytes in iperf tests on Block Storage, and since Fran is using 40Gbit Infiniband he should get something like 11 to 20 GBytes in iperf, depending on what kind of card it is.

    This equipment is very pricey. I don't think that small-medium companies can afford this.
    I also don't see the economics of it compared to classic storage. Maybe renting the space there costs a lot, who knows... I don't understand the whole business model around this storage either; I don't think it's possible to sell it this cheap using such pricey technology right now. It looks like either cheaper equipment is being used, or in the long run someone will get pulled under and end up selling body parts to pay off the debts.

  • Francisco Top Host, Host Rep, Veteran

    @Weblogics said:
    @Francisco

    Are you having trouble with orders and or activating the slabs? I ordered another slab last night and has been stuck in pending now for ~ 12 hours.

    Billing is 9-5. Karen is on now and handling things, sorry about that!

    Francisco

    Thanked by: Weblogics, shell
  • FHR Member, Host Rep
    edited November 2018

    @desperand said:

    This equipment is very pricey. I don't think that small-medium companies can afford this.

    On the contrary, stuff for 40gig (QDR) Infiniband is very cheap, since it has been obsolete for quite some time - just look on eBay.

    100gig (EDR) Infiniband is the modern standard - and for that, the equipment is indeed pretty pricey.

  • Francisco Top Host, Host Rep, Veteran

    ferri said: So we can use it as a regular filesystem, right?

    In the eyes of the VPS it's just like any other hard drive.

    You can format it however you want with whatever you want. Or if you're a baller you can go write an application that interacts at the block level.

    Really, it's entirely up to you.

    If anyone needs help formatting/partitioning, just ticket and I'll sort you out.

    Francisco

  • FHR said: On the contrary, stuff for 40gig (QDR) Infiniband is very cheap, since it has been obsolete for quite some time - just look on eBay.

    You're right, I got confused, thank you for the clarification ^_~

  • Awesome! Will order asap when NY slice is available :-)

  • techhelper1 Member
    edited November 2018

    FHR said: It is a huge difference, mainly in latency. Infiniband is superiour to Ethernet, even if on paper they may have same maximum speed.

    Of course, it's the latency, but what I'm saying is that if your application requires a resource from the Internet (like Plex) then the latency argument is tossed.

    FHR said: It's not. There are different methods as well.

    Yep. Just depends on if you want parity and where you want to place that calculation in the pipeline.

    key900 said: You are wrong on that part - we get something like 21.98 GBytes in iperf tests on Block Storage, and since Fran is using 40Gbit Infiniband he should get something like 11 to 20 GBytes in iperf, depending on what kind of card it is.

    The conversion numbers are below, but since Fran said each block user is locked to 1G, you're limited to roughly the throughput of a mechanical hard drive with the head at the outer edge of the platter, just with NVMe seek times.

    (Edited the table below, forgot to factor in 8b/10b encoding.)

    1 Gbit/s = 125 MB/s
    10 Gbit/s (8 Gbit/s effective) = 1.25 GB/s (1 GB/s)
    40 Gbit/s (32 Gbit/s effective) = 5 GB/s (4 GB/s)
    56 Gbit/s (40 or 54 Gbit/s effective, depending on link type) = 7 GB/s (5 GB/s or 6.75 GB/s)
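
    The same arithmetic can be checked with a quick shell loop - a rough sketch that only models the 8b/10b overhead, so the 56Gbit FDR-class figures above, which use a different encoding, won't match exactly:

        # Link speed (Gbit/s) -> raw MB/s, and usable MB/s after 8b/10b encoding
        for gbit in 1 10 40 56; do
            awk -v g="$gbit" 'BEGIN {
                raw = g * 1000 / 8      # 1 Gbit/s = 125 MB/s raw
                enc = raw * 8 / 10      # 8b/10b costs 20% of the line rate
                printf "%2d Gbit/s: %7.1f MB/s raw, %7.1f MB/s after 8b/10b\n", g, raw, enc
            }'
        done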

  • Francisco Top Host, Host Rep, Veteran

    @sin said:
    Awesome! Will order asap when NY slice is available :-)

    NY/LU will be next year at this point.

    Waiting to get a good feel for how the platform works so we can make whatever improvements we need before I ship gear to remote locations.

    Francisco

    Thanked by: sin
  • Harambe Member, Host Rep

    techhelper1 said: but since Fran said each block user is locked to 1G

    Think that was in reference to the guy setting up a storage cluster that would use the public network between VMs. Block storage isn't limited to 1Gbps, easily pushing 500-700+MB/s

    Thanked by: FHR
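
    If you want to sanity-check numbers like that on your own slab, a quick sequential test in the spirit of the dd run jlay posts later in the thread works - the path and size here are placeholders, and oflag=direct bypasses the guest page cache:

        # Sequential write test against a slab mounted at /mnt/slab (example path)
        dd if=/dev/zero of=/mnt/slab/ddtest bs=1M count=4096 oflag=direct status=progress
        rm /mnt/slab/ddtest
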
  • Francisco said: NY/LU will be next year at this point.

    Next year as in January? :)

  • Harambe said: Think that was in reference to the guy setting up a storage cluster that would use the public network between VMs.

    That sounded pretty crazy though, because of the cross-site latency. Was it just a "prove it can be done" thing? Or does it make more sense than it sounds?

  • letbox Member, Patron Provider
    edited November 2018

    @desperand said:

    key900 said: […]

    This equipment is very pricey. I don't think that small-medium companies can afford this.

    Not really, since it's QDR - cheap in comparison to FDR and EDR.

    @FHR said:

    @desperand said:

    This equipment is very pricey. I don't think that small-medium companies can afford this.

    On the contrary, stuff for 40gig (QDR) Infiniband is very cheap, since it has been obsolete for quite some time - just look on eBay.

    100gig (EDR) Infiniband is the modern standard - and for that, the equipment is indeed pretty pricey.

    Yes, it's something like:

    QDR: 40 Gbit
    FDR: 56 Gbit
    EDR: 100 Gbit

  • Harambe Member, Host Rep

    @willie said:

    Harambe said: Think that was in reference to the guy setting up a storage cluster that would use the public network between VMs.

    That sounded pretty crazy though, because of the cross-site latency. Was it just a "prove it can be done" thing? Or does it make more sense than it sounds?

    Sounds like he's doing GlusterFS all within Vegas. Whether or not it makes sense: no clue. Probably more of a proof of concept thing than a production setup if I had to guess.

    Thanked by: vimalware
  • jlay Member
    edited November 2018

    Got my GlusterFS+ZFS cluster deployed! Three 2GB VMs with 3 1TB slabs each in RAID-Z. 4TB of usable space out of 9TB. I can lose up to one drive in each array, and one whole node before worrying about data loss! I can add more space and redundancy with more nodes/slabs too :wink:

    Now all I need is more network bandwidth :smiley:

    [root@s1 ~]# zpool status
      pool: pool
     state: ONLINE
      scan: none requested
    config:
    
            NAME                            STATE     READ WRITE CKSUM
            pool                            ONLINE       0     0     0
              raidz1-0                      ONLINE       0     0     0
                scsi-0BUYVM_SLAB_VOLUME-62  ONLINE       0     0     0
                scsi-0BUYVM_SLAB_VOLUME-65  ONLINE       0     0     0
                scsi-0BUYVM_SLAB_VOLUME-68  ONLINE       0     0     0
    
    errors: No known data errors
    [root@s1 ~]# gluster volume info gv-data
    
    Volume Name: gv-data
    Type: Disperse
    Volume ID: 495d8a03-824c-4a19-81db-843c68e2d5d3
    Status: Started
    Snapshot Count: 0
    Number of Bricks: 1 x (2 + 1) = 3
    Transport-type: tcp
    Bricks:
    Brick1: s1:/pool/gv-data
    Brick2: s2:/pool/gv-data
    Brick3: s3:/pool/gv-data
    Options Reconfigured:
    transport.address-family: inet
    nfs.disable: on
    [root@s1 ~]# mkdir /mnt/gv-data
    [root@s1 ~]# mount -t glusterfs localhost:/gv-data /mnt/gv-data/
    [root@s1 ~]# df -h /mnt/gv-data/
    Filesystem          Size  Used Avail Use% Mounted on
    localhost:/gv-data  3.9T   40G  3.9T   1% /mnt/gv-data
    [root@s1 ~]# dd if=/dev/zero of=/mnt/gv-data/testfile1 bs=1M count=6000 status=progress
    6217007104 bytes (6.2 GB) copied, 56.304366 s, 110 MB/s
    6000+0 records in
    6000+0 records out
    6291456000 bytes (6.3 GB) copied, 56.9905 s, 110 MB/s
    [root@s1 ~]# dd if=/dev/urandom of=/mnt/gv-data/testfile2 bs=1M count=6000 status=progress
    6249512960 bytes (6.2 GB) copied, 57.356790 s, 109 MB/s
    6000+0 records in
    6000+0 records out
    6291456000 bytes (6.3 GB) copied, 57.8013 s, 109 MB/s
    [root@s1 ~]#
    

    For those curious, setting this up across regions does make sense too. GlusterFS has geo-replication, which lets you do automatic transfers between zones - I believe either in near-realtime (with degraded performance) or on a schedule. I haven't deployed it yet, but I probably will when the slabs are available elsewhere.
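
    For anyone wanting to reproduce a similar layout, a rough sketch of the commands behind the output above - the by-id device names are the ones visible on s1 and will differ per node, and the gluster peers (s2, s3) are assumed to be probed already:

        # On each node: build a raidz pool from its three slabs and create a brick dir
        zpool create pool raidz \
            /dev/disk/by-id/scsi-0BUYVM_SLAB_VOLUME-62 \
            /dev/disk/by-id/scsi-0BUYVM_SLAB_VOLUME-65 \
            /dev/disk/by-id/scsi-0BUYVM_SLAB_VOLUME-68
        mkdir -p /pool/gv-data

        # On one node: create and start a 2+1 dispersed volume, then mount it
        gluster volume create gv-data disperse 3 redundancy 1 \
            s1:/pool/gv-data s2:/pool/gv-data s3:/pool/gv-data
        gluster volume start gv-data
        mkdir -p /mnt/gv-data
        mount -t glusterfs localhost:/gv-data /mnt/gv-data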

  • FHR Member, Host Rep

    jlay said: I can lose up to one drive in each array, and one whole node before worrying about data loss

    It doesn't work this way. All the drives are virtual - so all 9 of your slabs are served from the same server and same set of drives anyway…

    Thanked by: vimalware
  • Francisco Top Host, Host Rep, Veteran

    @jlay - so what's your opinion/feedback on the product so far? Liking it?

    Francisco

  • Francisco Top Host, Host Rep, Veteran

    FHR said: It doesn't work this way. All the drives are virtual - so all 9 of your slabs are served from the same server and same set of drives anyway…

    No.

    Our storage is clustered, so while volumes don't replicate between nodes, there are five large nodes, each with multiple arrays, that make up the cluster as a whole. Users are spread out over everything.

    Francisco

    Thanked by: vimalware
  • jlay Member
    edited November 2018

    @FHR said:

    jlay said: I can lose up to one drive in each array, and one whole node before worrying about data loss

    It doesn't work this way. All the drives are virtual - so all 9 of your slabs are served from the same server and same set of drives anyway…

    Fran mentioned a storage cluster, so I'd assume there are multiple nodes serving slabs - I don't have a way to confirm that, but it seems likely. Also, they're using RDMA, so it's not a far jump to believe there's multi-pathing involved somewhere.

    Edit:

    He confirmed it, see above. In this scenario, what I said holds true. Assuming all of my slabs are reasonably spread across their storage nodes and arrays, this setup is pretty resilient. I'd like to have a little more usable space, so I may switch to simply striping the drives - however, that would only really survive losing one VM out of three, or two out of five.

    @Francisco said:
    @jlay - so what's your opinion/feedback on the product so far? Liking it?

    Francisco

    It's working well! Will need to give it some time and see how ZFS likes the drives and how the internal network works over time :smile: So far it's going well, eager for the days where there's more internal bandwidth! :wink:

  • Francisco Top Host, Host Rep, Veteran

    jlay said: It's working well! Will need to give it some time and see how ZFS likes the drives and how the internal network works over time So far it's going well, eager for the days where there's more internal bandwidth!

    I'll keep everyone posted on how that goes ;)

    Francisco
