>> Any other ideas how to reduce ceph-osd CPU usage while doing randwrite?
>>
>> Randread gives me with 3 VMs: 60,000 IOPS
>> Randwrite gives me with 3 VMs: 25,000 IOPS

Great to see that reads scale!

For randwrite, what is the bottleneck now, with filestore xattr use omap = true? Is it still CPU?

----- Original Message -----

From: "Stefan Priebe" <s.priebe@xxxxxxxxxxxx>
To: "Sage Weil" <sage@xxxxxxxxxxx>
Cc: ceph-devel@xxxxxxxxxxxxxxx
Sent: Thursday, 15 November 2012 21:26:06
Subject: Re: ceph-osd cpu usage

On 15.11.2012 16:14, Sage Weil wrote:
> On Thu, 15 Nov 2012, Stefan Priebe - Profihost AG wrote:
> Hmm, the most significant time seems to be spent in the allocator and doing
> fsetxattr(2) (10%!). Also some path traversal stuff.

Yes, fsetxattr seems to be CPU-hungry.

> Can you try the wip-fd-simple-cache branch, which tries to spend less time
> closing and reopening files? I'm curious how much of a difference it will
> make for you, for both IOPS and CPU utilization.

It seems to give me around 1,000 additional IOPS across the 3 VMs.

> It is also possible to use leveldb for most attrs. If you set
> 'filestore xattr use omap = true', it should put most attrs in leveldb.

Tried this, but it raises CPU usage by 20%.

Any other ideas how to reduce ceph-osd CPU usage while doing randwrite?

Randread gives me with 3 VMs: 60,000 IOPS
Randwrite gives me with 3 VMs: 25,000 IOPS

Stefan
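
For anyone unfamiliar with the call dominating that profile: fsetxattr(2) attaches an extended attribute to an open file, which is how filestore keeps per-object metadata next to the object's data file. A minimal standalone sketch of the call follows; the file path, attribute name, and payload are made up for illustration and are not filestore's actual values.

    // Minimal illustration of fsetxattr(2), the call taking ~10% of
    // ceph-osd CPU in the profile above. Path, attribute name, and
    // payload are hypothetical, not filestore's real ones.
    #include <sys/xattr.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = open("/tmp/object", O_CREAT | O_WRONLY, 0644);
        if (fd < 0) { perror("open"); return 1; }

        const char value[] = "serialized object metadata";
        // One such syscall per object write adds up quickly at
        // tens of thousands of random-write IOPS.
        if (fsetxattr(fd, "user.ceph._", value, sizeof(value), 0) < 0)
            perror("fsetxattr");

        close(fd);
        return 0;
    }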
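
The wip-fd-simple-cache branch itself isn't shown in this thread; the idea its name suggests, keeping recently used file descriptors open instead of reopening object files on every op, can be sketched as a small LRU cache. Everything below (class name, structure, eviction policy) is a hypothetical illustration under that assumption, not Ceph's actual implementation.

    // Hypothetical sketch of an LRU file-descriptor cache of the kind
    // wip-fd-simple-cache suggests; illustrative only, not Ceph code.
    #include <fcntl.h>
    #include <unistd.h>
    #include <list>
    #include <string>
    #include <unordered_map>

    class FDCache {
        size_t capacity;
        // front = most recently used (path, fd) pair
        std::list<std::pair<std::string, int>> lru;
        std::unordered_map<std::string,
            std::list<std::pair<std::string, int>>::iterator> index;

    public:
        explicit FDCache(size_t cap) : capacity(cap) {}

        // Return a cached fd, or open and cache one, evicting the
        // least recently used entry when the cache is full.
        int get(const std::string& path) {
            auto it = index.find(path);
            if (it != index.end()) {
                lru.splice(lru.begin(), lru, it->second);  // mark as recent
                return it->second->second;
            }
            int fd = open(path.c_str(), O_RDWR);
            if (fd < 0)
                return -1;
            if (lru.size() >= capacity) {                  // evict oldest
                close(lru.back().second);
                index.erase(lru.back().first);
                lru.pop_back();
            }
            lru.emplace_front(path, fd);
            index[path] = lru.begin();
            return fd;
        }

        ~FDCache() { for (auto& e : lru) close(e.second); }
    };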
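
For reference, the 'filestore xattr use omap = true' option quoted above is set in the OSD section of ceph.conf. A minimal sketch:

    [osd]
        # Store most object xattrs in leveldb (omap) instead of
        # filesystem xattrs; in this thread it raised CPU by ~20%.
        filestore xattr use omap = true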
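
The thread doesn't say which benchmark produced the 60,000/25,000 IOPS figures. One common way to measure them from inside each VM is an fio job file along these lines; the device path, block size, queue depth, and runtime are assumptions, not taken from the thread.

    # Hypothetical fio job for numbers of the kind quoted above.
    [global]
    ioengine=libaio
    direct=1
    bs=4k
    iodepth=32
    runtime=60
    filename=/dev/vdb    # assumed guest block device

    [randread]
    rw=randread
    stonewall            # run jobs one after another

    [randwrite]
    rw=randwrite
    stonewall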