Re: long blocking with writes on rbds


On Apr 10, 2015 8:10 AM, "Lionel Bouton" <lionel+ceph@xxxxxxxxxxx> wrote:
>
> On 04/10/15 15:41, Jeff Epstein wrote:
>>
>> [...]
>>
>> This seems highly unlikely. We get very good performance without ceph. Requisitioning and manipulating block devices through LVM happens instantaneously. We expect that ceph will be a bit slower due to its distributed nature, but we've seen operations block for up to an hour, which is clearly beyond the pale. Furthermore, as the performance measurements I posted show, read/write speed is not the bottleneck: ceph is simply waiting.
>>
>> So, does anyone else have any ideas why mkfs (and other operations) takes so long?
>
>
>
> As your use case is pretty unique and clearly not something Ceph was optimized for, if I were you I'd switch to a single pool with the appropriate number of PGs, based on your pool size (replication) and the number of OSDs you use (you should target roughly 100 PGs/OSD, which seems to be the sweet spot), and create/delete RBDs instead of whole pools. You would be in "known territory", and any remaining performance problem would be easier to debug.
>

Lionel is probably trying to say: get back to a basic Ceph config and find whatever the problem is, which is easier for us to help with. Once the root cause is taken care of, it would be better to venture off the beaten path. I would do it in steps: update the CRUSH map to split your clusters (I'm not sure of the reason you are doing this), and if everything looks good, then test with lots of pools. It is easier to find the needle when you break the haystack into smaller units.
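For reference, the single-pool approach Lionel describes can be sketched as below. The pool name, OSD count, and replication factor are placeholders for illustration; the PG calculation (OSDs × 100 / replicas, rounded up to a power of two) follows the 100 PGs/OSD target mentioned above, and the `ceph`/`rbd` commands are shown commented out since they need a live cluster:

```shell
#!/bin/bash
# Hypothetical cluster: 12 OSDs, replication factor 3, targeting ~100 PGs/OSD.
OSDS=12
REPLICAS=3
TARGET_PGS_PER_OSD=100

# Total PGs = (OSDs * target PGs per OSD) / replicas, rounded up to a power of 2.
RAW=$(( OSDS * TARGET_PGS_PER_OSD / REPLICAS ))
PG_NUM=1
while [ "$PG_NUM" -lt "$RAW" ]; do PG_NUM=$(( PG_NUM * 2 )); done
echo "pg_num: $PG_NUM"

# Create ONE pool with that PG count, then create/delete RBD images inside it
# rather than creating and deleting whole pools (pool/image names are examples):
# ceph osd pool create rbdpool $PG_NUM $PG_NUM
# rbd create rbdpool/scratch0 --size 10240   # 10 GiB image
# mkfs.ext4 /dev/rbd/rbdpool/scratch0        # after 'rbd map rbdpool/scratch0'
# rbd rm rbdpool/scratch0
```

Keeping the pool fixed avoids the PG creation/deletion churn that repeated pool operations cause on the OSDs and monitors.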

Robert LeBlanc

> Best regards,
>
> Lionel Bouton
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>