Re: optimized SSD settings for hammer

Yes, I think you should try with CRC enabled, as it is recommended for network-level corruption detection.
It will definitely add some CPU cost, but that cost is roughly 5x lower with the new Intel CPU instruction set.
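For reference, and assuming the Hammer option names quoted further down in this thread, keeping CRC enabled just means leaving these options at their defaults (to my knowledge, both default to true) or setting them explicitly in ceph.conf, roughly like this:

[global]
        # Hammer replaces ms_nocrc with these two options; removing ms_nocrc
        # and any ms_crc_* = false lines leaves CRC checking enabled.
        ms_crc_data = true
        ms_crc_header = true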

-----Original Message-----
From: Stefan Priebe - Profihost AG [mailto:s.priebe@xxxxxxxxxxxx] 
Sent: Monday, January 25, 2016 12:09 AM
To: Somnath Roy; ceph-users@xxxxxxxxxxxxxx
Subject: Re: optimized SSD settings for hammer


On 25.01.2016 at 08:54, Somnath Roy wrote:
> The ms_nocrc option is changed to the following in Hammer:
> 
>         ms_crc_data = false
>         ms_crc_header = false

If I add those, the OSDs and clients can't communicate any longer.

> The rest looks good; you need to tweak the shard/thread settings based on your CPU complex and the total number of OSDs running on a box.
> BTW, with the latest Intel instruction sets the CRC overhead is reduced significantly, so you may want to turn it back on.

Ah OK, so I can remove ms_nocrc in general and also skip the data and header settings you mentioned above?

Stefan

> Thanks & Regards
> Somnath
> 
> -----Original Message-----
> From: Stefan Priebe - Profihost AG [mailto:s.priebe@xxxxxxxxxxxx]
> Sent: Sunday, January 24, 2016 11:48 PM
> To: ceph-users@xxxxxxxxxxxxxx
> Cc: Somnath Roy
> Subject: optimized SSD settings for hammer
> 
> Hi,
> 
> is there a guide or recommendation for optimized SSD settings for Hammer?
> 
> We have:
> CPU: E5-1650 v3 @ 3.50GHz (12 cores incl. HT)
> 10x SSD per node, journal and filesystem on the same SSD
> 
> currently we're running:
> - with auth disabled
> - all debug settings to 0
> 
> and
> 
> ms_nocrc = true
> osd_op_num_threads_per_shard = 2
> osd_op_num_shards = 12
> filestore_fd_cache_size = 512
> filestore_fd_cache_shards = 32
> ms_dispatch_throttle_bytes = 0
> osd_client_message_size_cap = 0
> osd_client_message_cap = 0
> osd_enable_op_tracker = false
> filestore_op_threads = 8
> filestore_min_sync_interval = 1
> filestore_max_sync_interval = 10
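
For readers following along, here is a minimal annotated sketch of the same settings as an [osd] section in ceph.conf; the comments are descriptive notes on each option, not part of the original post:

[osd]
        # Disable messenger CRC (pre-Hammer name; split into ms_crc_data /
        # ms_crc_header in Hammer, as noted above)
        ms_nocrc = true
        # Op queue sharding: threads-per-shard x shards worker threads in total
        osd_op_num_threads_per_shard = 2
        osd_op_num_shards = 12
        # FileStore file-descriptor cache size and sharding
        filestore_fd_cache_size = 512
        filestore_fd_cache_shards = 32
        # 0 disables the dispatch throttle and the client message caps
        ms_dispatch_throttle_bytes = 0
        osd_client_message_size_cap = 0
        osd_client_message_cap = 0
        # Skip op-tracker bookkeeping to save CPU
        osd_enable_op_tracker = false
        # FileStore worker threads and sync interval bounds (seconds)
        filestore_op_threads = 8
        filestore_min_sync_interval = 1
        filestore_max_sync_interval = 10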
> 
> Thanks!
> 
> Greets,
> Stefan
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


