On Mon, Aug 17, 2015 at 8:21 PM, Patrik Plank <patrik@xxxxxxxx> wrote:
> Hi,
>
> I have a Ceph cluster with three nodes and 32 OSDs.
> The three nodes have 16 GB of memory each, but only 5 GB is in use.
> The nodes are Dell PowerEdge R510s.
>
> My ceph.conf:
>
> [global]
> mon_initial_members = ceph01
> mon_host = 10.0.0.20,10.0.0.21,10.0.0.22
> auth_cluster_required = cephx
> auth_service_required = cephx
> auth_client_required = cephx
> filestore_xattr_use_omap = true
> filestore_op_threads = 32
> public_network = 10.0.0.0/24
> cluster_network = 10.0.1.0/24
> osd_pool_default_size = 3
> osd_pool_default_min_size = 1
> osd_pool_default_pg_num = 4096
> osd_pool_default_pgp_num = 4096
> osd_max_write_size = 200
> osd_map_cache_size = 1024
> osd_map_cache_bl_size = 128
> osd_recovery_op_priority = 1
> osd_recovery_max_active = 1
> osd_max_backfills = 1
> osd_op_threads = 32
> osd_disk_threads = 8
>
> Is that normal, or a bottleneck?

Any memory not used by the OSD processes directly will be used by Linux for page caching. That's what we want to have happen! So it's not a problem that it's using "only" 5 GB.

Keep in mind that memory usage might spike dramatically if the OSDs need to deal with an outage, though; your normal-state usage ought to be lower than our recommended values for that reason.

-Greg

> best regards,
> Patrik
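To see where the memory on one of the nodes is actually going, you can compare the kernel's own counters against the per-process numbers. A minimal sketch (assumes a standard Linux /proc and the procps "free" tool; on older procps versions use "free -m" instead of "free -h"):

    # Page cache shows up under "buffers"/"cached" (or the
    # combined "buff/cache" column), not under "used":
    free -h

    # The same counters, straight from the kernel:
    grep -E '^(MemTotal|MemFree|Buffers|Cached)' /proc/meminfo

If most of the roughly 11 GB that isn't "in use" shows up as Buffers/Cached, the box is behaving exactly as described above.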
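And if an outage does drive usage up and recovery traffic starts hurting clients, the throttles already present in the quoted ceph.conf can be tightened at runtime without restarting the OSDs. A sketch, assuming an admin keyring on the node and upstream Ceph option names (this is not from the original mail):

    # Lower recovery/backfill concurrency on all OSDs at once:
    ceph tell osd.* injectargs '--osd_recovery_max_active 1 --osd_max_backfills 1'

    # Confirm what a given daemon is actually running with,
    # via its local admin socket:
    ceph daemon osd.0 config get osd_recovery_max_active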