On 11/03/2016 11:01 AM, Igor Fedotov wrote:
Mark,
thanks for update.
I've just given it a brief try.
Performance is much better now, but the numbers I see with my old branch
are still out of reach: approximately 2x slower (it was ~10x slower
before) for random 4K r/w using fio against a standalone BlueStore
instance.
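For reference, a run like this can be driven with fio's ceph objectstore plugin (built from ceph's src/test/fio tree). This is only a sketch of a job file; the engine name, the conf option, and the paths here are assumptions, not the exact job Igor ran:

```ini
# Hypothetical fio job: 4K random writes against a standalone BlueStore
# instance via the ceph objectstore engine. Paths/options are assumptions.
[global]
ioengine=libfio_ceph_objectstore.so  ; plugin built from ceph src/test/fio
conf=ceph-bluestore.conf             ; ceph.conf selecting bluestore + data path
rw=randwrite
bs=4k
iodepth=16

[bluestore-4k-randwrite]
size=1g
```

The randread counterpart is the same job with rw=randread.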
Can you check and see if https://github.com/ceph/ceph/pull/11530 is
impacting you vs old defaults? So far this appears to be a win on our
NVMe setup, but it's one of the big changes that might impact small
random write performance.
If this is impacting you, it would be really interesting to save one or
more of the OSD logs in each case and run:
https://github.com/ceph/cbt/blob/master/tools/ceph_rocksdb_log_parser.py
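The cbt parser above digests the RocksDB activity embedded in the OSD log. As a rough illustration of the kind of analysis it does (this is not the cbt script, and the exact JSON field names in the event lines are assumptions), one can tally RocksDB flush/compaction events like so:

```python
import re

# RocksDB emits EVENT_LOG_v1 JSON lines into its LOG (captured in the
# OSD log at sufficient debug_rocksdb levels). This sketch counts the
# flush/compaction events and sums their reported output sizes; the
# field name "total_output_size" is an assumption for illustration.
EVENT_RE = re.compile(
    r'(flush_finished|compaction_finished).*"total_output_size": (\d+)')

def summarize_rocksdb_events(lines):
    """Return {event_type: (count, total_output_bytes)} for the given log lines."""
    summary = {}
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            event, size = m.group(1), int(m.group(2))
            count, total = summary.get(event, (0, 0))
            summary[event] = (count + 1, total + size)
    return summary
```

Comparing these totals between the two PRs' logs would show whether compaction volume changed along with the defaults.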
Mark
Thanks,
Igor
On 03.11.2016 17:59, Mark Nelson wrote:
Mostly this is for Igor and Somnath:
It looks like the regression was indeed caused by the lack of -O2
optimization for rocksdb in master after 418bfd7. The following PR
appears to resolve the performance regression in my tests:
https://github.com/ceph/ceph/pull/11767
Mark
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html