On Tue, 19 Sep 2017, Xiaoxi Chen wrote:
> Hi,
>      I just hit an OSD that cannot start due to insufficient aio_nr.
> Each OSD has a separate SSD partition as its block.db.

Can you paste the message you saw?  I'm not sure which check you mean.

>      Further checking showed that 6144 AIO contexts were required per
> OSD.  Could anyone explain a bit where the 6144 aio contexts go?
>
>      It looks to me like bdev_aio_max_queue_depth defaults to 1024,
> but how can we have 6 bdevs to get 6144?

I'm guessing this is fallout from the kernel's behavior.  When you set
up an IO queue you specify how many aios you want to allow (that's
where we use the max_queue_depth value), but the kernel rounds the
buffer up to a page boundary, so in reality it will use more.  That
can make you hit the host maximum sooner.

sage
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com