Re: thanks for a double check on ceph's config

Hello,

On Wed, 11 May 2016 15:16:14 +0800 Geocast Networks wrote:

> Hi,
> 
> We plan to create an image with this format, what do you think about it?
> Thanks.
>
Not really related to OSD formatting.
 
> rbd create myimage --size 102400 --order 25 --stripe-unit 4K
> --stripe-count 32 --image-feature layering --image-feature striping
>
I would leave the object size (--order) at the default of 4MB, otherwise
you'll wind up with an excessive number of objects, likely to impact
performance and resource requirements at some point.
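
(If I remember correctly, --order is the log2 of the object size in bytes,
so the default order of 22 gives 2^22 bytes = 4MB objects.)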

The stripe unit of 4K (which won't work as written, it needs to be
specified in bytes) feels way too small as well.
You will _really_ want to test this, but my feeling is something like 64
to 256KB will work better.

Which brings us to the stripe count: it depends on the number of servers
you have and should be matched to the stripe unit.
So if you have 32 servers (or 16 now and 32 later), your above setting of
32 would make sense with a stripe unit of 128KB or smaller.
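
Untested, but assuming 32 OSD hosts and a 128KB stripe unit, something
along these lines (leaving --order at its default) would be my starting
point:

rbd create myimage --size 102400 --stripe-unit 131072 --stripe-count 32 \
  --image-feature layering --image-feature striping

and then benchmark it against an image with default striping before
committing to anything.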
  

Christian

> 
> 
> 2016-05-10 21:19 GMT+08:00 <ulembke@xxxxxxxxxxxx>:
> 
> > Hi,
> >
> >
> > On 2016-05-10 05:48, Geocast wrote:
> >
> >> Hi members,
> >>
> >> We have 21 hosts as ceph OSD servers; each host has 12 SATA disks
> >> (4TB each) and 64GB memory.
> >> ceph version 10.2.0, Ubuntu 16.04 LTS
> >> The whole cluster is newly installed.
> >>
> >> Can you help check whether the arguments we put in ceph.conf are
> >> reasonable or not?
> >> Thanks.
> >>
> >> [osd]
> >> osd_data = /var/lib/ceph/osd/ceph-$id
> >> osd_journal_size = 20000
> >> osd_mkfs_type = xfs
> >> osd_mkfs_options_xfs = -f
> >> filestore_xattr_use_omap = true
> >> filestore_min_sync_interval = 10
> >> filestore_max_sync_interval = 15
> >> filestore_queue_max_ops = 25000
> >> filestore_queue_max_bytes = 10485760
> >> filestore_queue_committing_max_ops = 5000
> >> filestore_queue_committing_max_bytes = 10485760000
> >> journal_max_write_bytes = 1073714824
> >> journal_max_write_entries = 10000
> >> journal_queue_max_ops = 50000
> >> journal_queue_max_bytes = 10485760000
> >> osd_max_write_size = 512
> >> osd_client_message_size_cap = 2147483648
> >> osd_deep_scrub_stride = 131072
> >> osd_op_threads = 8
> >> osd_disk_threads = 4
> >> osd_map_cache_size = 1024
> >> osd_map_cache_bl_size = 128
> >> osd_mount_options_xfs = "rw,noexec,nodev,noatime,nodiratime,nobarrier"
> >>
> > I have these settings (to avoid fragmentation):
> >          osd mount options xfs =
> > "rw,noatime,inode64,logbufs=8,logbsize=256k,allocsize=4M"
> >          osd mkfs options xfs = "-f -i size=2048"
> >
> > Udo
> >


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


