
31mb/sec in the DD 'test': acceptable to you?


Comments

  • @Corey said: Dunno why they would replace the whole server - BUT that is a classic example of why it would take 'days'.

    It should never take days; the provider should have been proactively monitoring the situation and ordered new drives/servers/whatever was required to fix the problem before customers needed to start complaining.

  • @JoeMerit said: It should never take days; the provider should have been proactively monitoring the situation and ordered new drives/servers/whatever was required to fix the problem before customers needed to start complaining.

    Who's to say they didn't? Do you also think that they are required to order overnight shipping? Maybe they are trying out a new build on the next server and they ran into some problems and had to order more parts.

  • risharde Patron Provider, Veteran

    @Corey Yes, I have to tell you this has been the first major issue, which is probably why I can have some patience with them. Also, you are right again: they did provide responses to every message I sent them, so I must give them some kudos for that, because I've been with providers that answer with "We're working on it" and then, a day later when I message them back to find out what's going on, they say "It's been fixed", as if my human brain wouldn't want to know what was up ;)

    @JoeMerit you do make a valid point there, but I guess sometimes people make mistakes and don't do what they have to. Plus, this mistake or unforeseen event didn't cause me to lose my data, and I guess I would still be forgiving as long as my backup works. I'm not dismissing what you say... there should be a reasonable level of proactiveness.

  • 30mb/s was a normal value for a Virpus VPS when I was with them (over a year).

  • letbox Member, Patron Provider

    Not looking good.

  • @Corey said: Who's to say they didn't? Do you also think that they are required to order overnight shipping? Maybe they are trying out a new build on the next server and they ran into some problems and had to order more parts.

    Whatever, I'd be out of there. If a single drive failing causes days of slow-as-molasses disk I/O, then we've got someone cutting corners on hardware, and I don't want to be part of it and certainly don't want to risk being there for the next occurrence.

  • Corey Member
    edited November 2012

    @JoeMerit said: Whatever, I'd be out of there. If a single drive failing causes days of slow-as-molasses disk I/O, then we've got someone cutting corners on hardware, and I don't want to be part of it and certainly don't want to risk being there for the next occurrence.

    What do you mean, cutting corners on hardware? When you lose a disk out of any array, the array becomes slow as molasses. On top of that, they are probably a small business; when they initially bought the server they probably didn't have an unlimited budget like cvps-chris #winning. I thought this community welcomed small businesses and startups.

    You have to look at this in the big picture, not just from a simple client's point of view that doesn't care about their provider and just wants service 24/7, 365 days a year with no hiccups. (I know a lot of users here are pretty savvy, know there will be hiccups, and are rsyncing their VPS to other providers for when there are issues like this.)

  • @Corey said: What do you mean, cutting corners on hardware? When you lose a disk out of any array, the array becomes slow as molasses. On top of that, they are probably a small business; when they initially bought the server they probably didn't have an unlimited budget like cvps-chris #winning. I thought this community welcomed small businesses and startups.

    I'm all for supporting small businesses and startups. However, it doesn't take an "unlimited budget" to have a disk array setup that doesn't make a node dog slow if a disk is lost. I'm not going to stay with a host out of pity just because they are small if there is another provider doing it better for the same price.

  • @JoeMerit said: if a disk is lost. I'm not going to stay with a host out of pity just because they are small if there is another provider doing it better for the same price.

    I'm pretty sure a disk can be replaced in at most a couple of hours. If the data center or the provider can't organize this, something's wrong. Isn't a spare kept onsite?

  • JoeMerit Veteran
    edited November 2012

    @concerto49 said: I'm pretty sure a disk can be replaced in at most a couple of hours. If the data center or the provider can't organize this, something's wrong. Isn't a spare kept onsite?

    The issue here is the performance of the node after the drive failed, and the 3 days (and counting) of crappy performance that risharde is getting, which is presumably due to the array rebuilding... Corey said he thinks risharde's provider is "great" for how they are handling the situation and appears to be suggesting that mediocre providers should be given a pass if they are small businesses. I don't.

  • Maounique Host Rep, Veteran
    edited November 2012

    @risharde
    1. The dd test measures writes only, while most real-world IOPS (roughly 5:1) are reads (a rough read-side check is sketched below).
    2. dd tests sequential writes, which almost never happen at that speed in practice; only copying from one directory to another comes close, and even then...
    3. ioping shows the "responsiveness" of the storage, i.e. how long it takes to acknowledge a request and start serving it. It depends on more factors than just raw storage speed and is a much better indicator of how your app will behave if it needs frequent IOPS.
    Since most IOPS in a real-life scenario are neither sequential nor writes but random reads, a dd test can at most give a vague idea of how good the storage is. For example, in a SAN the sequential operations are not going to be blazing fast, yet response time is great for a big array on Fibre Channel. Still, nothing beats local SSD in any situation.
    Well, except a big SSD array :P
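
    For a rough look at the read side as well (which the usual dd write test doesn't cover), something like this should work on most Linux boxes with GNU dd and ioping installed; the file name and test sizes below are just examples:

    # write test (what everyone posts)
    dd if=/dev/zero of=testfile bs=1M count=1k conv=fdatasync
    # read the same file back with O_DIRECT so it bypasses the page cache and actually hits the disks
    dd if=testfile of=/dev/null bs=1M iflag=direct
    # latency / responsiveness of the current directory
    ioping -c 10 .
    rm testfile

    The direct read-back is the closest dd gets to the read side; ioping is still the better indicator for random access.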

  • That dd is acceptable, but the ioping is terribad.

  • @JoeMerit said: The issue here is the performance of the node after the drive failed, and the 3 days (and counting) of crappy performance that risharde is getting, which is presumably due to the array rebuilding... Corey said he thinks risharde's provider is "great" for how they are handling the situation and appears to be suggesting that mediocre providers should be given a pass if they are small businesses. I don't.

    I don't either. If the array of disks is going to be HUGE, at least get RAID10 or something, not RAID1. 3 days is a long time.

  • risharde Patron Provider, Veteran

    @Maounique Thank you for the explanation, much appreciated (I miss the thank button as well lol).

    @SimpleNode agreed

    @concerto49 hmm, I'm really not sure what they are using; I just assumed it was RAID10, since most (not all) of the providers I come across use RAID10.

    @JoeMerit I understand what you mean. You never know, maybe they might be kind to me later on... who knows. For now I'll wait and see; if it becomes a habit then I'll surely have to think about moving.

    I'm wondering if it was multiple disks that started to malfunction, but right now I'm just speculating. I think it's a good idea to use ioping more often now to check how the disk responds (something like the quick cron sketch below). I'll keep you guys posted if they say anything further.
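
    My rough plan is just to run ioping from cron and log the results; the interval, target path, and log file here are only placeholders I'd adjust:

    # crontab -e: check latency every 30 minutes and append to a log
    */30 * * * * ioping -c 5 /home >> /var/log/ioping.log 2>&1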

  • @risharde said: @concerto49 hmm, I'm really not sure what they are using; I just assumed it was RAID10, since most (not all) of the providers I come across use RAID10.

    Definitely seen hosts with RAID1. We don't do it, but seen it. SwitchVM did before they got sold.

  • @concerto49 said: Definitely seen hosts with RAID1. We don't do it, but seen it. SwitchVM did before they got sold.

    Our oldest node is running RAID1

  • SimpleNode Member
    edited November 2012
    [root@argon ~]# dd if=/dev/zero of=test bs=1M count=1k conv=fdatasync
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 9.91378 s, 108 MB/s
    

    ^ On RAID1 Node.

    [root@dysprosium ~]# dd if=/dev/zero of=test bs=1M count=1k conv=fdatasync
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 5.15639 s, 208 MB/s
    

    ^ On RAID10 Node.

  • Degraded arrays should not be left degraded for any length of time; a quick way to check from the node is sketched below.
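
    For Linux software RAID at least, spotting a degraded array takes a couple of seconds; the md device name below is just an example:

    # an array missing a member shows up with an underscore in the status, e.g. [U_]
    cat /proc/mdstat
    # more detail on a specific array
    mdadm --detail /dev/md2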

  • SimpleNode Member
    edited November 2012
    [root@dysprosium ~]# ioping -c 10 /vz/
    4096 bytes from /vz/ (ext4 /dev/mapper/vg_server3214-LogVol02): request=1 time=0.1 ms
    4096 bytes from /vz/ (ext4 /dev/mapper/vg_server3214-LogVol02): request=2 time=0.1 ms
    4096 bytes from /vz/ (ext4 /dev/mapper/vg_server3214-LogVol02): request=3 time=0.1 ms
    4096 bytes from /vz/ (ext4 /dev/mapper/vg_server3214-LogVol02): request=4 time=0.1 ms
    4096 bytes from /vz/ (ext4 /dev/mapper/vg_server3214-LogVol02): request=5 time=0.1 ms
    4096 bytes from /vz/ (ext4 /dev/mapper/vg_server3214-LogVol02): request=6 time=0.1 ms
    4096 bytes from /vz/ (ext4 /dev/mapper/vg_server3214-LogVol02): request=7 time=0.1 ms
    4096 bytes from /vz/ (ext4 /dev/mapper/vg_server3214-LogVol02): request=8 time=0.1 ms
    4096 bytes from /vz/ (ext4 /dev/mapper/vg_server3214-LogVol02): request=9 time=0.2 ms
    4096 bytes from /vz/ (ext4 /dev/mapper/vg_server3214-LogVol02): request=10 time=0.1 ms
    
    --- /vz/ ioping statistics ---
    10 requests completed in 9002.1 ms, 8489 iops, 33.2 mb/s
    min/avg/max/mdev = 0.1/0.1/0.2/0.0 ms
    

    ^ H/W RAID 10 Node

    [root@argon ~]# ioping -c 10 /vz/
    4096 bytes from /vz/ (ext4 /dev/mapper/vg_server2422-LogVol02): request=1 time=4.0 ms
    4096 bytes from /vz/ (ext4 /dev/mapper/vg_server2422-LogVol02): request=2 time=11.0 ms
    4096 bytes from /vz/ (ext4 /dev/mapper/vg_server2422-LogVol02): request=3 time=0.1 ms
    4096 bytes from /vz/ (ext4 /dev/mapper/vg_server2422-LogVol02): request=4 time=11.0 ms
    4096 bytes from /vz/ (ext4 /dev/mapper/vg_server2422-LogVol02): request=5 time=0.1 ms
    4096 bytes from /vz/ (ext4 /dev/mapper/vg_server2422-LogVol02): request=6 time=0.2 ms
    4096 bytes from /vz/ (ext4 /dev/mapper/vg_server2422-LogVol02): request=7 time=11.7 ms
    4096 bytes from /vz/ (ext4 /dev/mapper/vg_server2422-LogVol02): request=8 time=81.8 ms
    4096 bytes from /vz/ (ext4 /dev/mapper/vg_server2422-LogVol02): request=9 time=11.8 ms
    4096 bytes from /vz/ (ext4 /dev/mapper/vg_server2422-LogVol02): request=10 time=0.1 ms
    
    --- /vz/ ioping statistics ---
    10 requests completed in 9132.7 ms, 76 iops, 0.3 mb/s
    min/avg/max/mdev = 0.1/13.2/81.8/23.4 ms
    

    ^ S/W RAID 1 Node

    [root@beryllium ~]# ioping -c 10 /
    4096 bytes from / (ext4 /dev/md2): request=1 time=8.5 ms
    4096 bytes from / (ext4 /dev/md2): request=2 time=9.9 ms
    4096 bytes from / (ext4 /dev/md2): request=3 time=5.6 ms
    4096 bytes from / (ext4 /dev/md2): request=4 time=0.1 ms
    4096 bytes from / (ext4 /dev/md2): request=5 time=0.1 ms
    4096 bytes from / (ext4 /dev/md2): request=6 time=0.1 ms
    4096 bytes from / (ext4 /dev/md2): request=7 time=0.2 ms
    4096 bytes from / (ext4 /dev/md2): request=8 time=0.1 ms
    4096 bytes from / (ext4 /dev/md2): request=9 time=8.2 ms
    4096 bytes from / (ext4 /dev/md2): request=10 time=0.2 ms
    
    --- / (ext4 /dev/md2) ioping statistics ---
    10 requests completed in 9034.0 ms, 303 iops, 1.2 mb/s
    min/avg/max/mdev = 0.1/3.3/9.9/4.0 ms
    

    ^ S/W RAID10 Node

  • 31mb/s (megabits) is awful. 31MB/s (megabytes) is OK, depending on what I'm doing with the box.

  • @JoeMerit said: The issue here is the performance of the node after the drive failed, and the 3 days (and counting) of crappy performance that risharde is getting, which is presumably due to the array rebuilding... Corey said he thinks risharde's provider is "great" for how they are handling the situation and appears to be suggesting that mediocre providers should be given a pass if they are small businesses. I don't.

    I'm not sure how they are mediocre for losing disk I/O when a disk fails, which is what I'm getting at. Should every startup have 8-12 disk RAID10 arrays from the start? Are they stupid and mediocre for not doing this?

    What I'm getting at is that you don't know their prior experience with server administration when they first started their business. People generally learn from their mistakes. Startups will make mistakes. It's how they handle those mistakes that makes them 'great'. Should everyone have worked for another provider for X years before starting up, just to learn every detail of the industry?

    In this case they are ordering a brand new server because evidently they learned that their old build IS in fact mediocre.

    How does learning from their mistake and getting new hardware in for risharde make them mediocre? Of course, getting brand new hardware in after learning they made a mistake is going to take a few days.

    @concerto49 said: I don't either. If the array of disks is going to be HUGE, at least get RAID10 or something, not RAID1. 3 days is a long time.

    I'm not sure what you mean about a huge RAID1 array? RAID1 is only two disks, ever. If you've ever had a dedicated server, you know you need 30 days' advance notice to the provider to cancel. If you've ever paid for rack space, you'd know how big an investment that is. Who's to say they aren't stuck with a slow datacenter because they don't have the money to move away? Startups do not always have #winning budgets.

  • SimpleNode Member
    edited November 2012

    @Corey said: 8-12 disk RAID10 arrays

    err. err... err.... err...... ;(

    I think @lele0108 also uses RAID1 RE4s

  • Oh, folks and their love of RAID... Not sure how people make a dollar with these strands of drives and schmancy controllers ;) Actually, not sure how folks are packing that density of drives into so few rack units. 1U servers don't afford much in the way of drive bays.

    @Damian, the dd output stunk. 34 MB/s I'd call problematic. It depends on node load, though. If it typically lingers there, it might be tolerable for most folks.

    The ioping times were all over the place. I see those sorts of wild deviations on many providers' VPSes. Very common. Again, is that typical, and what happens under normal load from the other VPSes on the node?

    Could be active use, slow drives, or a drive failure in the RAID array; from the node, a quick iostat run (below) usually narrows it down.
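
    Something along these lines, assuming the sysstat package is installed; the sample count is arbitrary:

    # 5 samples, one per second, with extended per-device statistics
    iostat -x 1 5
    # watch %util and await: consistently high values on one member of the
    # array usually point at a slow or failing drive rather than plain load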

  • @Corey said: RAID1 is only two disks, ever

    Not quite

  • @ShardHost said: Not quite

    So mirrors are now 3 - 4 way? How?

  • Anyway, all of our newer nodes use 4 disks in hardware RAID10, as we can't fit any more into a 1U server.

  • @pubcrawler said: Oh, folks and their love of RAID... Not sure how people make a dollar with these strands of drives and schmancy controllers ;) Actually, not sure how folks are packing that density of drives into so few rack units. 1U servers don't afford much in the way of drive bays.

    Server density is how you make money in this market. You can't get as much density from 1U servers.

  • @Corey said: So mirrors are now 3 - 4 way? How?

    By mirroring it 3-4 ways?

  • edited November 2012

    @Corey said: So mirrors are now 3 - 4 way? How?

    By mirroring across 3-4 disks.
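
    With Linux software RAID, for example, a 3-way mirror is just a matter of passing three members to mdadm; the device names below are purely illustrative:

    # create a RAID1 array mirrored across three disks
    mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1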

  • Maounique Host Rep, Veteran
    edited November 2012

    @pubcrawler said: Actually, not sure how folks are packing that density of drives into so few rack units.

    http://www.supermicro.com/products/system/2U/2027/SYS-2027TR-HTRF_.cfm

    We have some of those: good wattage per unit, good drive density, very fast with LSI RAID and SAS2 10k drives.
    Downside? They cost an arm and a leg.
