If you are using kernel rbd clients, crc is mandatory. We have to keep the crc on.

Varada

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Stefan Priebe - Profihost AG
> Sent: Monday, January 25, 2016 1:39 PM
> To: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>; ceph-users@xxxxxxxxxxxxxx
> Subject: Re: optimized SSD settings for hammer
>
> On 25.01.2016 at 08:54, Somnath Roy wrote:
> > The ms_nocrc option is changed to the following in Hammer:
> >
> > ms_crc_data = false
> > ms_crc_header = false
>
> If I add those, the OSDs and clients can't communicate any longer.
>
> > The rest looks good; you need to tune the shard/thread settings based on
> > your CPU complex and the total number of OSDs running on a box.
> >
> > BTW, with the latest Intel instruction sets the crc overhead is reduced
> > significantly, so you may want to turn it back on.
>
> Ah OK, so I can remove ms_nocrc in general and also skip the data and
> header settings you mentioned above?
>
> Stefan
>
> > Thanks & Regards
> > Somnath
> >
> > -----Original Message-----
> > From: Stefan Priebe - Profihost AG [mailto:s.priebe@xxxxxxxxxxxx]
> > Sent: Sunday, January 24, 2016 11:48 PM
> > To: ceph-users@xxxxxxxxxxxxxx
> > Cc: Somnath Roy
> > Subject: optimized SSD settings for hammer
> >
> > Hi,
> >
> > is there a guide or recommendation for optimized SSD settings for Hammer?
> >
> > We have:
> > CPU E5-1650 v3 @ 3.50GHz (12 cores incl. HT), 10x SSD per node,
> > journal and filesystem on the same SSD.
> >
> > Currently we're running:
> > - with auth disabled
> > - all debug settings set to 0
> >
> > and
> >
> > ms_nocrc = true
> > osd_op_num_threads_per_shard = 2
> > osd_op_num_shards = 12
> > filestore_fd_cache_size = 512
> > filestore_fd_cache_shards = 32
> > ms_dispatch_throttle_bytes = 0
> > osd_client_message_size_cap = 0
> > osd_client_message_cap = 0
> > osd_enable_op_tracker = false
> > filestore_op_threads = 8
> > filestore_min_sync_interval = 1
> > filestore_max_sync_interval = 10
> >
> > Thanks!
> >
> > Greets,
> > Stefan
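
For reference, a minimal ceph.conf sketch tying the two points above together. This is only an illustration built from the values quoted in this thread; the [global]/[osd] section placement is an assumption, not something stated by the posters:

[global]
# Hammer changes the old ms_nocrc switch to these two options.
# With kernel rbd clients crc must stay on, so leave both at their
# default of true; they are written out here only to show the new names.
ms_crc_data = true
ms_crc_header = true

[osd]
# Shard/thread counts should be tuned to the CPU complex and the number
# of OSDs per box; these are simply the values from the original mail.
osd_op_num_threads_per_shard = 2
osd_op_num_shards = 12
filestore_fd_cache_size = 512
filestore_fd_cache_shards = 32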