LET Storage - I'm Back!

Hello LET's,

After receiving quite a few PMs asking if our last offer was still available, and with the recent addition of some new hardware in our colo, it's back again for a limited time!

Setup may take up to 24 hours, as provisioning is currently still a manual task within Proxmox; however, full control of all VM operations is offered from within WHMCS.

These will be hosted on:
- HA CEPH Storage Back-end (SSD Caching soon to come online)
- Multiple KVM Nodes in HA setup
- Redundant network links (recently added AMS-IX and LINX now online)
- HP Enterprise Hardware & JUNIPER Network Equipment

Package:
- 1TB CEPH Redundant storage
- 2TB Outgoing Bandwidth
- Un-metered Incoming Bandwidth
- 1 Fair Share CPU Thread
- 2GB DDR4 ECC RAM
- Full Selection of ISOs & Remote Management within WHMCS
- 1 IPv4
- /64 IPv6 on Request
- Inbuilt firewall at VM Network level

$48/Year
OR
$96/3 years

ORDER

Multiple packages can be combined into a single VM on request for extra storage / resources.

Sale will end: Friday 17th, 23:59 GMT

T&C
AUP
Bandwidth LookingGlass

Thanked by: willie, telephone, Asim

Comments

  • Wow, this is great! I haven't used mine much yet but it feels like an incredibly solid VPS besides being an outstanding storage buy. One of the best so far. Very tempted to get another. Network to my Hetzner and Online dedis are both very fast, and who cares about anyplace else? :)

    Ashley, can I pay $48 to extend my existing 1 year instance to 3 years? Still not sure I want to do that, but have had some buyers remorse about not doing it in the first place.

  • joepie91joepie91 Member, Patron Provider
    edited February 2017

    So, reading through your AUP...

    The following activity is banned on all our services:

    [...]

    • The running or mirroring of a public file/mirror service

    [...]

    • The running of a publicly facing file or mirror service

    Question: Aside from the duplicate entry - does this extend to "use as a storage backend for a publicly facing CDN / file hosting service that is hosted elsewhere"? In other words: is the ban on "use for file serving" or on "publicly exposing a file service"?

    • The use of unlicensed software, with the intent to infringe the author's right to payment

    There is no such right.

    • Using hosting services in any manner that is illegal, including any pictures or content of anyone under or implied to be under the age of 18

    That... is not "illegal", in the manner described.

    • Collection of personal information without users permission

    That's basically anything including keeping access logs.

    • Promotion of illegal activities (info on hacking, cracking etc)

    Those are not the same thing.

    • Use of any p2p services with the intent to transfer any content or material that will infringe upon any trademarks

    What do trademarks have to do with P2P services? You probably mean copyright?


This AUP probably needs review by a lawyer. It sounds a lot like it was written by someone trying to prevent everything they disagree with from an ethical point of view, without taking the legal aspects into consideration.

Thanked by: sirrobin, Lunar
  • AshleyUkAshleyUk Member
    edited February 2017

    @joepie91 said:
    So, reading through your AUP...

    >

    1) Ban on publicly exposing file service

The rest I will have reviewed by a lawyer; however, they are all statements I have seen from other providers I have purchased services with myself (not word for word, but covering the same key areas).

  • @willie said:
    Wow, this is great! I haven't used mine much yet but it feels like an incredibly solid VPS besides being an outstanding storage buy. One of the best so far. Very tempted to get another. Network to my Hetzner and Online dedis are both very fast, and who cares about anyplace else? :)

    Ashley, can I pay $48 to extend my existing 1 year instance to 3 years? Still not sure I want to do that, but have had some buyers remorse about not doing it in the first place.

    Pop in a ticket.

  • joepie91joepie91 Member, Patron Provider
    edited February 2017

    @AshleyUk said:

    @joepie91 said:
    So, reading through your AUP...

    >

    1) Ban on publicly exposing file service

    Alright. So if I used it as a backend storage node for a CDN project (where the actual CDN nodes that serve up the content to users are hosted elsewhere), then this is fine, even if it uses 100% of the disk space available?

The rest I will have reviewed by a lawyer; however, they are all statements I have seen from other providers I have purchased services with myself (not word for word, but covering the same key areas).

Right. I'm definitely not saying you're the only one with these kinds of terms (I've criticized poorly written ToS/AUP from other providers in the past as well, here on LET), but they definitely need to be fixed. Good to hear that you'll have them reviewed :)

  • @joepie91 said:

    @AshleyUk said:

    @joepie91 said:
    So, reading through your AUP...

    >

    1) Ban on publicly exposing file service

    Alright. So if I used it as a backend storage node for a CDN project (where the actual CDN nodes that serve up the content to users are hosted elsewhere), then this is fine, even if it uses 100% of the disk space available?

    Correct, the space is yours to use.

  • I got one of these the last time they came up, and it's been performing quite well. TBH I only use it for backup purposes and some other small stuff that really doesn't use any resources at all but thought I'd give it my 2 cents regardless.

  • joepie91joepie91 Member, Patron Provider

    @AshleyUk said:

    @joepie91 said:

    @AshleyUk said:

    @joepie91 said:
    So, reading through your AUP...

    >

    1) Ban on publicly exposing file service

    Alright. So if I used it as a backend storage node for a CDN project (where the actual CDN nodes that serve up the content to users are hosted elsewhere), then this is fine, even if it uses 100% of the disk space available?

    Correct, the space is yours to use.

    Alright, thanks :) Will definitely have to consider it.

Thanked by: AshleyUk
  • Wow Wow Wow
    What?

    I think I will grab one more!
    Is it possible to hold a Plex server on it?
    Decoding may be considered as an intense CPU activity.
    (I will grab one anyway)

  • @Paleoft said:
    Wow Wow Wow
    What?

    I think I will grab one more!
    Is it possible to hold a Plex server on it?
    Decoding may be considered as an intense CPU activity.
    (I will grab one anyway)

Audio-only decoding will be fine, but if you're needing to do a full video transcode then you'll have issues.

  • @AshleyUk (and/or anyone who has the service/system setup) - couple of questions:

    1. Is the disk exposed as a single (virtual) disk into the KVM (to be partitioned as desired by the user)?

2. One of the issues I've generally had is that IO crawls when things like RAID rebuilds happen (at other providers), making the VM pretty much useless until things stabilize a bit. How is it with CEPH (no direct experience) during the equivalent "poor IO" times? Having lots of "small" files (i.e. inodes) (like in typical file backups) makes things really bad as well.

    3. Would using something(s) like borgbackup (etc.) be considered OK? When the backups run (or for verification/checking of the backups), there is likely to be some heavy CPU usage (for deduping/checksumming etc.). Having only 1 CPU (thread) could be limiting - I'm OK with the "slowness" on account of the CPU but I don't want to be tagged a "bad neighbour".

    Thanks in advance.

  • @nullnothere said:
    1. Is the disk exposed as a single (virtual) disk into the KVM (to be partitioned as desired by the user)?

    yes.

Thanked by: nullnothere
  • nice!

super provider, one of the very few gems in LET. if u need storage take this, i recommend ashley

Thanked by: Falzo
  • @nullnothere said:
    @AshleyUk (and/or anyone who has the service/system setup) - couple of questions:

    1. Is the disk exposed as a single (virtual) disk into the KVM (to be partitioned as desired by the user)?

    Yes, it's a single 1TiB disk that you can partition however you want it to be.

3. Would using something(s) like borgbackup (etc.) be considered OK? When the backups run (or for verification/checking of the backups), there is likely to be some heavy CPU usage (for deduping/checksumming etc.). Having only 1 CPU (thread) could be limiting - I'm OK with the "slowness" on account of the CPU but I don't want to be tagged a "bad neighbour".

That's perfectly fine, I'm using Borg Backup as well. During normal daily sync rounds, the only outbound traffic will be Borg's cache files getting updated to check whether there have been any new backups since the last run; Borg then starts checksumming/encrypting (and optionally compressing; I've set it to lz4 to reduce total CPU load and compression time, to avoid being a noisy neighbour on all the hosts I use it with) the new files it has detected since the previous run. All of this load is generated on the server/VPS initiating the backup, so the only thing the server/VPS holding your copy of the repository has to do is write the new files. That hardly generates any CPU load.

  • @nullnothere said:
    @AshleyUk (and/or anyone who has the service/system setup) - couple of questions:

2. One of the issues I've generally had is that IO crawls when things like RAID rebuilds happen (at other providers), making the VM pretty much useless until things stabilize a bit. How is it with CEPH (no direct experience) during the equivalent "poor IO" times? Having lots of "small" files (i.e. inodes) (like in typical file backups) makes things really bad as well.
      Thanks in advance.

Without going into too much detail about CEPH's backend:

CEPH spreads your data in 4MB chunks across multiple HDs into blocks called PGs (placement groups), so if there is, for example, a single disk failure, it will affect only a very small number of these PGs; the recovery may affect some of your VM's data or none at all, depending on which PGs hold your VM's data.

Either way, CEPH handles and prioritises client traffic during the rebuild, which is very different from a simple 4-drive RAID 10 setup.
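The spreading described above can be sketched in miniature. This is a toy illustration only: real CEPH hashes object names with rjenkins and uses CRUSH to map PGs to OSDs, so the hash function and RBD-style object names below are simplified stand-ins.

```python
import hashlib

def pg_for_object(object_name: str, pg_num: int) -> int:
    """Toy stand-in for CEPH's object-to-PG mapping: hash the
    object name and reduce it modulo the pool's PG count."""
    digest = hashlib.md5(object_name.encode()).digest()
    return int.from_bytes(digest[:4], "little") % pg_num

# An RBD disk image is striped into 4MB objects; each object lands in
# some PG, and PGs are spread across many disks.  A single failed disk
# therefore touches only the subset of PGs (and hence objects) it held.
pg_num = 128
objects = [f"rbd_data.vm01.{i:016x}" for i in range(1000)]  # hypothetical names
pgs_used = {pg_for_object(name, pg_num) for name in objects}
print(len(pgs_used))
```

With 1000 objects hashed over 128 PGs, essentially every PG gets used, which is the point: any one disk holds only some PGs, and so only a small slice of any given VM's objects.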

Thanked by: nullnothere
  • AshleyUk said:

Without going into too much detail about CEPH's backend:

CEPH spreads your data in 4MB chunks across multiple HDs into blocks called PGs...

    Thanks, this is very interesting and I'd love to hear more detail. Can you say the total number of drives and servers in the CEPH cluster, vs the amount of usable data? My picture is that it's basically a RAID-6 spread across several machines, but haven't understood the specifics. It sounds cool though.

  • @willie said:

    AshleyUk said:

Without going into too much detail about CEPH's backend:

CEPH spreads your data in 4MB chunks across multiple HDs into blocks called PGs...

    Thanks, this is very interesting and I'd love to hear more detail. Can you say the total number of drives and servers in the CEPH cluster, vs the amount of usable data? My picture is that it's basically a RAID-6 spread across several machines, but haven't understood the specifics. It sounds cool though.

    Sure:

4TB & 6TB HGST 7K6000 Ultrastar Drives
10 Drives per Physical Server
2 × 10Gbps Storage Uplinks per Physical Server (to separate 10Gbps Switches)

    CEPH Supports two methods:

1/ Replication: every piece of data is replicated as many times as required across separate HDs/hosts. This can be 2 or above, and allows for the failure of all but one HD/host. Data is only read from the primary replica, and any changes are replicated live internally within CEPH.

2/ Erasure Coding: this works on two values (split + parity). A chunk of data is split by the value set, each split chunk is saved on a separate HD/host, and then parity chunks are calculated. With one parity chunk this is the same as RAID5; however, within CEPH this value can be as high as required, for example a value of 5 for parity and a value of 10 for split.

A data chunk would then be split across 10 separate HDs/hosts, with a further 5 parity chunks across another 5 HDs/hosts; any 5 of the 15 could fail without data loss. Each time one fails, CEPH will automatically rebalance by creating the new chunk on another OSD or host.

As our cluster is built up of a growing number of disks, data is spread across multiple HDs, so a single disk failure will not affect all VMs' data but only a small subset, unlike a local 4-disk RAID10 array, where a single disk failure has a chance of affecting the performance of 50% of all data (and more, with the RAID card being busy during recovery).
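The 10+5 example works out as follows. This is just a quick sketch of the overhead and fault-tolerance arithmetic using the k-data/m-parity terminology CEPH's erasure-coded pool profiles use; the numbers are the example values from the post, not the cluster's actual profile.

```python
def ec_profile(k: int, m: int) -> tuple:
    """k data chunks + m parity chunks: any m of the k+m chunks can be
    lost without data loss, at a raw-space overhead of (k+m)/k."""
    return (k + m, m, (k + m) / k)

# The 10 split + 5 parity example from above:
total, tolerated, overhead = ec_profile(k=10, m=5)
print(total, tolerated, overhead)  # 15 chunks, any 5 may fail, 1.5x raw space

# For comparison: RAID6 is effectively m=2, and 3-way replication
# tolerates 2 losses at 3.0x raw-space overhead.
raid6 = ec_profile(k=10, m=2)  # (12, 2, 1.2)
```

So a 10+5 pool gets RAID6-beating fault tolerance (5 failures vs 2) for only 1.5x raw space, versus 3.0x for triple replication.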

can both ipv4 and ipv6 be reversed via rdns?

  • @cybersans said:
can both ipv4 and ipv6 be reversed via rdns?

Yes; it currently requires a ticket, but it will soon be user-configurable.

  • AshleyUk said: 2/ Erasure Coding, this works on two values (split + parity),

    Thanks! This is basically distributed raid6. I presume it's what you're using and the 10gbps local net undoubtedly helps.

  • @willie said:

    AshleyUk said: 2/ Erasure Coding, this works on two values (split + parity),

    Thanks! This is basically distributed raid6. I presume it's what you're using and the 10gbps local net undoubtedly helps.

    Yes you can look at it as RAID6 without the disadvantages of RAID6 :)

Thanks for the details about CEPH. I am a current customer and must say I'm satisfied with the performance of this package as a web and backup server. The only negatives so far are one unexpected reboot, apparently due to a hypervisor bug that was corrected, and that we couldn't get the inbuilt firewall at the VM network level to actually block anything. @AshleyUk is in dialogue with Proxmox over this issue and has been providing prompt and helpful support.

  • @simlev said:
Thanks for the details about CEPH. I am a current customer and must say I'm satisfied with the performance of this package as a web and backup server. The only negatives so far are one unexpected reboot, apparently due to a hypervisor bug that was corrected, and that we couldn't get the inbuilt firewall at the VM network level to actually block anything. @AshleyUk is in dialogue with Proxmox over this issue and has been providing prompt and helpful support.

This was due to a kernel panic on one node, whose running kernel had a known Ubuntu KVM bug. It has since been patched, and your VM was started on another node.

    The Firewall issue should be resolved shortly, will update your ticket once ready.

  • Hi, how's provisioning going for this offer, getting through them?

  • Is it just me, getting "502 Bad Gateway" from the order link?

  • @ulclepha said:
    Is it just me, getting "502 Bad Gateway" from the order link?

    It's down for me as well. Was flaky around 12 hours ago too, but came back up in an hour.

  • @advarisk said:

    @ulclepha said:
    Is it just me, getting "502 Bad Gateway" from the order link?

    It's down for me as well. Was flaky around 12 hours ago too, but came back up in an hour.

Can you please try now; the latest PHP7 FPM wasn't playing happily.

  • @AshleyUk said:
Can you please try now; the latest PHP7 FPM wasn't playing happily.

    Looks to be down still.

