Rebalancing slow I/O.

Hi, Andrei.

Thanks for the tip, but there are still problems reading from some VMs.
Could the osd_recover_clone_overlap parameter be causing the RBD volumes to lock up?
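
(For reference, a quick way to check what value an OSD is actually running
with is its admin socket - just a sketch, assuming the default socket path
and that osd.0 runs on this node:

ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get osd_recover_clone_overlap
)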

Thanks!

2014-09-11 16:49 GMT+04:00 Andrei Mikhailovsky <andrei at arhont.com>:

> Irek,
>
> have you changed ceph.conf to adjust the recovery priority?
>
> Options like these might help with balancing repair/rebuild I/O against the
> client I/O:
>
> osd_recovery_max_chunk = 8388608
> osd_recovery_op_priority = 2
> osd_max_backfills = 1
> osd_recovery_max_active = 1
> osd_recovery_threads = 1
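>
> If you want to try these without restarting the OSDs, something along the
> following lines should work (just a sketch, assuming admin access from the
> node; values injected this way are not persistent, so mirror them in
> ceph.conf as well):
>
> ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 2'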
>
>
> Andrei
> ------------------------------
> *From: *"Irek Fasikhov" <malmyzh at gmail.com>
> *To: *ceph-users at lists.ceph.com
> *Sent: *Thursday, 11 September, 2014 1:07:06 PM
> *Subject: *Rebalancing slow I/O.
>
>
> Hi, all.
>
> 8 x Dell R720, 96 OSDs, network 2x10Gbit LACP.
>
> When one of the nodes crashes, I get very slow I/O on the virtual machines.
> The cluster map is the default:
> [ceph at ceph08 ~]$ ceph osd tree
> # id    weight  type name       up/down reweight
> -1      262.1   root defaults
> -2      32.76           host ceph01
> 0       2.73                    osd.0   up      1
> ...........................
> 11      2.73                    osd.11  up      1
> -3      32.76           host ceph02
> 13      2.73                    osd.13  up      1
> ..............................
> 12      2.73                    osd.12  up      1
> -4      32.76           host ceph03
> 24      2.73                    osd.24  up      1
> ............................
> 35      2.73                    osd.35  up      1
> -5      32.76           host ceph04
> 37      2.73                    osd.37  up      1
> .............................
> 47      2.73                    osd.47  up      1
> -6      32.76           host ceph05
> 48      2.73                    osd.48  up      1
> ...............................
> 59      2.73                    osd.59  up      1
> -7      32.76           host ceph06
> 60      2.73                    osd.60  down    0
> ...............................
> 71      2.73                    osd.71  down    0
> -8      32.76           host ceph07
> 72      2.73                    osd.72  up      1
> ................................
> 83      2.73                    osd.83  up      1
> -9      32.76           host ceph08
> 84      2.73                    osd.84  up      1
> ................................
> 95      2.73                    osd.95  up      1
>
>
> If I change the cluster map to the following:
> root---|
>           |
>           |-----rack1
>           |            |
>           |            host ceph01
>           |            host ceph02
>           |            host ceph03
>           |            host ceph04
>           |
>           |-------rack2
>                        |
>                       host ceph05
>                       host ceph06
>                       host ceph07
>                       host ceph08
> What will the cluster's behaviour be when one node fails? And how much will it
> affect performance?
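>
> (Roughly, a layout like that can be built with the CRUSH CLI - a sketch
> only; the rack bucket names here are made up, and moving hosts between
> buckets will trigger a large rebalance:
>
> ceph osd crush add-bucket rack1 rack
> ceph osd crush add-bucket rack2 rack
> ceph osd crush move rack1 root=defaults
> ceph osd crush move rack2 root=defaults
> ceph osd crush move ceph01 rack=rack1
> ceph osd crush move ceph05 rack=rack2
> ... and so on for the remaining hosts.)
>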
> Thank you
>
> --
> Regards, Irek Fasikhov
> Tel.: +79229045757
>
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 
Regards, Irek Fasikhov
Tel.: +79229045757

