Write path changes

Hi Sage,
FYI, I have sent out a new PR addressing your earlier comments plus some more enhancements. Here it is:

https://github.com/ceph/ceph/pull/6670

I did some exhaustive comparisons against the latest Ceph master code base and found that up to 32 OSDs (4 OSD nodes, one OSD per 8TB SAS SSD), my changes give a ~35-40% improvement (the larger the scale, the bigger the delta). But at 48 OSDs (6 OSD nodes), they give a >2x improvement for small IO. I didn't dig down much to find out why, but my guess is the following.

1. While adding drives, I found that some of them perform slower than others under a Ceph workload, and the entire cluster throughput suffers because of it. The stock throttle scheme is very crude and probably not able to handle this. The new PR has some changes in the throttling specifically addressing this kind of scenario (see the throttle sketch after this list).

2. Since those drives are slow, the way the stock Ceph code writes to the device does not help much. Issuing a big syncfs() on those drives is especially harmful and lowers the throughput of those OSDs (see the sync sketch after this list). In fact, stock code performance degraded going from 32 OSDs to 48 OSDs...

3. Mixed workloads also do not scale well with the stock code; I am getting a ~2x benefit there as well.
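
To make point 1 concrete, here is a minimal, purely illustrative sketch of what a latency-aware throttle could look like: it shrinks the in-flight budget for a backend whose recent commit latency drifts well above its baseline, so one slow drive does not hog the queue. This is not the code in the PR; the class and field names are hypothetical.

    // Hypothetical sketch of a latency-aware write throttle (not the PR's code).
    // Idea: cut the number of in-flight ops allowed against a backend whose
    // recent commit latency is much worse than its baseline.
    #include <algorithm>
    #include <chrono>
    #include <mutex>

    class AdaptiveThrottle {
    public:
      AdaptiveThrottle(unsigned max_in_flight, double slow_factor)
        : max_in_flight_(max_in_flight), slow_factor_(slow_factor) {}

      // Returns true if a new write may be started right now.
      bool try_start() {
        std::lock_guard<std::mutex> l(lock_);
        if (in_flight_ >= current_limit()) return false;
        ++in_flight_;
        return true;
      }

      // Called when a write commits; feeds the latency estimate.
      void finish(std::chrono::microseconds commit_latency) {
        std::lock_guard<std::mutex> l(lock_);
        --in_flight_;
        // Exponentially weighted moving average of commit latency.
        ewma_us_ = 0.9 * ewma_us_ + 0.1 * commit_latency.count();
        if (baseline_us_ == 0) baseline_us_ = ewma_us_;
      }

    private:
      unsigned current_limit() const {
        // If the device is running slow_factor_ times slower than its
        // baseline, cut the in-flight budget roughly in proportion.
        if (baseline_us_ > 0 && ewma_us_ > slow_factor_ * baseline_us_) {
          double ratio = baseline_us_ / ewma_us_;
          return std::max(1u, static_cast<unsigned>(max_in_flight_ * ratio));
        }
        return max_in_flight_;
      }

      std::mutex lock_;
      unsigned max_in_flight_;
      double slow_factor_;
      unsigned in_flight_ = 0;
      double ewma_us_ = 0;
      double baseline_us_ = 0;
    };

The stock throttles are static byte/op counters, so a scheme along these lines (however the PR actually implements it) can stop a single slow SSD from backing up the whole cluster.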

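On point 2, the reason a big syncfs() hurts is that it flushes every dirty page on the whole filesystem, so one slow SSD stalls the OSD for everything written since the last sync, while a per-file fdatasync() only waits for the data that one object file actually dirtied. A small sketch of the contrast (standard Linux calls; the function and path names are made up for illustration):

    // Sketch contrasting a filesystem-wide sync with a per-file sync.
    // syncfs() is Linux-specific (may need _GNU_SOURCE with some compilers).
    #include <fcntl.h>
    #include <unistd.h>

    void flush_whole_filesystem(int any_fd_on_fs) {
      // Flushes ALL dirty data on the filesystem backing this fd.
      // On a slow drive this blocks until everything written since the
      // last sync has hit the media, stalling unrelated work.
      ::syncfs(any_fd_on_fs);
    }

    void flush_one_object(const char *path) {
      // Flushes only the data of this one file, so the wait is bounded
      // by what this write actually dirtied.
      int fd = ::open(path, O_WRONLY);
      if (fd < 0) return;
      ::fdatasync(fd);
      ::close(fd);
    }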

Thanks & Regards
Somnath