High IO Wait Proxmox in OVH Raid 1 E5-1620v2

akhfa Member
edited June 2017 in Help

I have an OVH server with 2 x 2 TB disks in RAID 1. Recently I started using glances to look at system statistics.

I noticed that:

- the iowait value averages ~74% all the time.
- there is no high CPU usage on the host machine.
- most of the IO usage seen in glances and iotop is on dm-0 and md4 (the RAID array for data storage)
- I can't find the source of this iowait (some example commands for narrowing it down are sketched after this list)
- there is no VM with high disk usage. The most suspicious VMs are one private Seafile server with a single account that syncs rarely, and one WordPress site holding private notes (it doesn't get many visitors), but both only use a few IOPS (from iostat)
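
For reference, commands along these lines can help narrow down which process generates the IO (this is only a sketch; iostat and pidstat come from the sysstat package, and the 2-second interval is arbitrary):

# extended per-device stats in MB, refreshed every 2 seconds
iostat -xm 2
# per-process disk read/write rates
pidstat -d 2
# only show processes actually doing IO, with accumulated totals
iotop -oPa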

This is the RAID status of the data storage array:

/dev/md4:
        Version : 0.90
  Creation Time : Sun Nov 27 21:35:27 2016
     Raid Level : raid1
     Array Size : 1931980736 (1842.48 GiB 1978.35 GB)
  Used Dev Size : 1931980736 (1842.48 GiB 1978.35 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sun Jun 18 06:37:36 2017
          State : active 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : f4c9bf32:54999985:a4d2adc2:26fd5302
         Events : 0.40534

    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sda4
       1       8       20        1      active sync   /dev/sdb4

This is glances (screenshot):

This is iostat:


avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.38    0.10    0.74   18.58    0.00   78.19

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
loop0             0.00     0.00    0.88    0.81     0.04     0.01    53.96     0.01    7.93    0.25   16.28   6.84   1.16
loop1             0.00     0.00    0.68    5.36     0.02     0.06    27.35     0.12   19.24    4.05   21.18   3.76   2.27
loop2             0.00     0.00    0.02    1.36     0.00     0.01    14.87     0.02   15.66   11.19   15.72  13.64   1.87
sda               2.74    23.85   16.70   43.98     1.29     1.51    94.42     0.39    6.42    9.96    5.08   4.34  26.36
sdb               2.67    23.91   10.90   43.99     0.85     1.51    88.01     0.29    5.27   10.67    3.93   4.14  22.75
md2               0.00     0.00    0.01    2.99     0.00     0.01     9.49     0.00    0.00    0.00    0.00   0.00   0.00
md4               0.00     0.00    8.17   59.00     0.59     1.47    62.89     0.00    0.00    0.00    0.00   0.00   0.00
dm-0              0.00     0.00    8.04   58.06     0.59     1.47    63.91     0.12    1.84   10.20    0.68   3.91  25.87

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.69    0.00    0.31   74.06    0.00   24.94

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
loop0             0.00     0.00    0.00    0.50     0.00     0.00     8.00     0.00    0.00    0.00    0.00   0.00   0.00
loop1             0.00     0.00    0.00    4.00     0.00     0.02     8.00     0.11   28.00    0.00   28.00  28.00  11.20
loop2             0.00     0.00    0.00    2.00     0.00     0.01     8.00     0.11   56.00    0.00   56.00  56.00  11.20
sda               0.00     4.50    0.00   35.50     0.00     0.48    27.70     0.22    6.20    0.00    6.20   5.46  19.40
sdb               0.00     4.50    0.00   35.50     0.00     0.48    27.70     0.21    6.03    0.00    6.03   5.52  19.60
md2               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
md4               0.00     0.00    0.00   33.50     0.00     0.45    27.81     0.00    0.00    0.00    0.00   0.00   0.00
dm-0              0.00     0.00    0.00   32.50     0.00     0.45    28.66     0.46   14.22    0.00   14.22   5.72  18.60

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.50    0.00    0.38   74.28    0.00   24.84

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
loop0             0.00     0.00    0.00    7.00     0.00     0.03     8.00     0.08   10.86    0.00   10.86  10.86   7.60
loop1             0.00     0.00    0.00    3.00     0.00     0.01     8.00     0.00    0.00    0.00    0.00   0.00   0.00
loop2             0.00     0.00    0.00    0.50     0.00     0.01    32.00     0.00    0.00    0.00    0.00   0.00   0.00
sda               0.00     9.00    0.00   42.00     0.00     0.62    30.36     0.24    5.62    0.00    5.62   5.52  23.20
sdb               0.00     9.00    0.00   42.00     0.00     0.62    30.36     0.24    5.76    0.00    5.76   5.71  24.00
md2               0.00     0.00    0.00    6.00     0.00     0.02     8.00     0.00    0.00    0.00    0.00   0.00   0.00
md4               0.00     0.00    0.00   38.00     0.00     0.57    30.82     0.00    0.00    0.00    0.00   0.00   0.00
dm-0              0.00     0.00    0.00   37.00     0.00     0.57    31.65     0.28    7.68    0.00    7.68   5.30  19.60

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.50    0.00    0.44   74.20    0.00   24.86

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
loop0             0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
loop1             0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
loop2             0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sda               0.00     0.00    0.00    9.00     0.00     0.03     7.67     0.06    6.44    0.00    6.44   6.44   5.80
sdb               0.00     0.00    0.00    9.00     0.00     0.03     7.67     0.06    6.44    0.00    6.44   6.44   5.80
md2               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
md4               0.00     0.00    0.00    3.00     0.00     0.01     7.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-0              0.00     0.00    0.00    3.00     0.00     0.01     7.00     0.04   12.67    0.00   12.67  12.67   3.80

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.13    0.00    0.56   73.51    0.00   24.80

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
loop0             0.00     0.00    0.00    0.50     0.00     0.00     8.00     0.00    0.00    0.00    0.00   0.00   0.00
loop1             0.00     0.00    0.00    4.00     0.00     0.02     8.00     0.06   16.00    0.00   16.00  16.00   6.40
loop2             0.00     0.00    0.00    3.50     0.00     0.01     8.00     0.07   19.43    0.00   19.43  19.43   6.80
sda               0.00     7.50    0.00   56.00     0.00     0.61    22.22     0.43    7.68    0.00    7.68   6.07  34.00
sdb               0.00     7.50    0.00   56.00     0.00     0.61    22.22     0.39    7.00    0.00    7.00   5.75  32.20
md2               0.00     0.00    0.00    7.00     0.00     0.03     8.57     0.00    0.00    0.00    0.00   0.00   0.00
md4               0.00     0.00    0.00   44.50     0.00     0.53    24.46     0.00    0.00    0.00    0.00   0.00   0.00
dm-0              0.00     0.00    0.00   42.50     0.00     0.53    25.61     0.47   11.01    0.00   11.01   6.87  29.20

Some forums I searched say that high iowait without high CPU usage is fine. I want to know your opinion about this. Is this really a problem or not?

Thank you.

Comments

  • stefeman Member
    edited June 2017

    I had this too with too many VMs. Just enable writeback cache on all of them and it's fixed. You should also use VirtIO if it's a Linux server.
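
    As a rough sketch (the VM ID 100, the virtio0 bus and the volume name are placeholders here; adjust them to the actual VM and storage), the cache mode can be set per disk from the Proxmox CLI:

    # example only: re-declare an existing virtio disk of VM 100 with writeback cache
    qm set 100 --virtio0 local-lvm:vm-100-disk-1,cache=writeback

    The same setting is also available in the web UI under the VM's Hardware tab.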

    Here's my past thread about it: http://www.webhostingtalk.com/showthread.php?t=1597435

    Did you use Ext3, Ext4, or the ZFS beta when you installed from the OVH panel?
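
    (If in doubt, the filesystem type of the data mount is easy to check, for example:)

    # show the filesystem type of the Proxmox data directory
    df -T /var/lib/vz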

  • Ishaq Member

    What is iotop showing?

  • akhfa Member
    edited June 2017

    What is the relation between io wait and dynamic memory?

    @stefeman said:
    I had this too with too many VMs. Just enable writeback cache on all of them and it's fixed. You should also use VirtIO if it's a Linux server.

    Here's my past thread about it: http://www.webhostingtalk.com/showthread.php?t=1597435

    Did you use Ext3, Ext4, or the ZFS beta when you installed from the OVH panel?

    Thank you for the reference. I use the default Proxmox installation from OVH. I also use VirtIO for all VMs.

    This is my fstab

    # <file system>  <mount point>  <type>  <options>  <dump>  <pass>
    /dev/md2        /       ext3    errors=remount-ro       0       1
    /dev/pve/data   /var/lib/vz     ext3    defaults        1       2
    /dev/sda3       swap    swap    defaults        0       0
    /dev/sdb3       swap    swap    defaults        0       0
    proc            /proc   proc    defaults        0       0
    sysfs           /sys    sysfs   defaults        0       0
    

    Does nobarrier make much difference to disk speed? I think I need to reboot the server if I want to change the mount options of /dev/pve/data, don't I?
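
    (For illustration: on ext3 the barrier setting is usually written as barrier=0, and it can normally be toggled with a live remount rather than a reboot, assuming nothing blocks the remount:)

    # try it live first; no reboot needed for a plain remount
    mount -o remount,barrier=0 /var/lib/vz
    # if it helps, the matching fstab line would look like:
    # /dev/pve/data   /var/lib/vz     ext3    defaults,barrier=0      1       2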

    How much iowait do you have now with the improved mount option?

    @Ishaq said:
    What is iotop showing?

    Nothing strange. Just some jbd2 processes at the top. I think this is for the RAID process. The IO column in iotop still stays low, < 5% most of the time.

  • teamacc Member

    Could be your array is resyncing. Check cat /proc/mdstat

  • akhfaakhfa Member

    @teamacc said:
    Could be your array is resyncing. Check cat /proc/mdstat

    This is the output:

    root@host2:~# cat /proc/mdstat 
    Personalities : [raid1] 
    md4 : active raid1 sda4[0] sdb4[1]
          1931980736 blocks [2/2] [UU]
          bitmap: 10/15 pages [40KB], 65536KB chunk
    
    md2 : active raid1 sda2[0] sdb2[1]
          20478912 blocks [2/2] [UU]
          
    unused devices: <none>
    
  • AshleyUk Member

    Disabling the bitmap on md4 will improve your speed somewhat.

  • stefeman Member

    You're fucked anyway with Ext3. No matter what you do, the speeds will suck.

  • akhfa Member

    @AshleyUk said:
    Disabling the bitmap on md4 will improve your speed somewhat.

    With this command? mdadm --grow --bitmap=none /dev/md0
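
    (That is the right mdadm syntax, though presumably targeting the data array md4 rather than md0; the bitmap can also be added back later if wanted:)

    # remove the write-intent bitmap from the data array
    mdadm --grow --bitmap=none /dev/md4
    # re-add an internal bitmap later
    mdadm --grow --bitmap=internal /dev/md4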

    @stefeman said:
    You're fucked anyway with Ext3. No matter what you do, the speeds will suck.

    Why :D

    So what can I do to improve its speed, or convert it to ext4? :D
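
    (For reference, the commonly documented offline ext3-to-ext4 conversion is roughly the following; it assumes the filesystem can be unmounted, a backup exists, and the fstab entry is changed from ext3 to ext4 before remounting. Existing files keep their old block mapping; only new files get extents.)

    umount /var/lib/vz
    # enable the ext4 on-disk features on the existing filesystem
    tune2fs -O extents,uninit_bg,dir_index /dev/pve/data
    # a forced fsck is required after changing features
    e2fsck -fD /dev/pve/data
    mount /var/lib/vz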
