I use this to quickly check pool stats:
[root@ceph-mon01 ceph]# ceph osd dump | grep pool
pool 0 'data' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 1 'metadata' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
pool 6 'rcvtst' replicated size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 400 pgp_num 400 last_change 10879 flags hashpspool stripe_width 0
[root@ceph-mon01 ceph]#
Or to individually query a pool:
ceph osd pool get rbd pg_num
ceph osd pool get rbd pgp_num
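And if a pool's PG count ever needs raising, the same family of commands can change it. A rough sketch, assuming a pool named rbd (pg_num can only be increased, and pgp_num should be bumped to match afterwards):
ceph osd pool set rbd pg_num 128
ceph osd pool set rbd pgp_num 128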
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx]
On Behalf Of Giuseppe Civitella
Sent: Tuesday, April 14, 2015 9:53 AM
To: Saverio Proto
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: [ceph-users] Binding a pool to certain OSDs
Hi Saverio,
I first made a test on my staging lab, where I have only 4 OSDs.
On my mon servers (which run other services) I have 16GB RAM, 15GB used but 5GB of it cached. On the OSD servers I have 3GB RAM, 3GB used but 2GB cached.
"ceph -s" tells me nothing about the PGs; shouldn't I get an error message in its output?
2015-04-14 18:20 GMT+02:00 Saverio Proto <zioproto@xxxxxxxxx>:
You only have 4 OSDs?
How much RAM per server?
I think you already have too many PGs. Check your RAM usage.
Check the guidelines on the Ceph wiki for sizing the correct number of PGs.
Remember that every time you create a new pool you add PGs to the system.
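As a rough worked example (the usual rule of thumb, not an exact figure): with 4 OSDs and, say, 2 replicas per pool, the guideline works out to about (4 x 100) / 2 = 200 PGs in total across all pools, rounded up to a power of two, so 256. Your pgmap already shows 576 PGs on those 4 OSDs, and the new pool would add another 128.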
Saverio
2015-04-14 17:58 GMT+02:00 Giuseppe Civitella <giuseppe.civitella@xxxxxxxxx>:
> Hi all,
>
> I've been following this tutorial to realize my setup:
> http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
>
> I got this CRUSH map from my test lab:
> http://paste.openstack.org/show/203887/
>
> then I modified the map and uploaded it. This is the final version:
> http://paste.openstack.org/show/203888/
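> For reference, I followed the usual decompile/edit/recompile cycle from the
> tutorial, roughly along these lines:
> ceph osd getcrushmap -o crushmap.bin
> crushtool -d crushmap.bin -o crushmap.txt
> (edit crushmap.txt by hand)
> crushtool -c crushmap.txt -o crushmap.new
> ceph osd setcrushmap -i crushmap.new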
>
> When I applied the new CRUSH map, after some rebalancing, I got this health
> status:
> [-> avalon1 root@controller001 Ceph <-] # ceph -s
> cluster af09420b-4032-415e-93fc-6b60e9db064e
> health HEALTH_WARN crush map has legacy tunables; mon.controller001 low
> disk space; clock skew detected on mon.controller002
> monmap e1: 3 mons at
> {controller001=10.235.24.127:6789/0,controller002=10.235.24.128:6789/0,controller003=10.235.24.129:6789/0},
> election epoch 314, quorum 0,1,2 controller001,controller002,controller003
> osdmap e3092: 4 osds: 4 up, 4 in
> pgmap v785873: 576 pgs, 6 pools, 71548 MB data, 18095 objects
> 8842 MB used, 271 GB / 279 GB avail
> 576 active+clean
>
> and this osd tree:
> [-> avalon1 root@controller001 Ceph <-] # ceph osd tree
> # id weight type name up/down reweight
> -8 2 root sed
> -5 1 host ceph001-sed
> 2 1 osd.2 up 1
> -7 1 host ceph002-sed
> 3 1 osd.3 up 1
> -1 2 root default
> -4 1 host ceph001-sata
> 0 1 osd.0 up 1
> -6 1 host ceph002-sata
> 1 1 osd.1 up 1
>
> which does not seem like a bad situation. The problem arises when I try to
> create a new pool: the command "ceph osd pool create sed 128 128" gets stuck
> and never finishes. I also noticed that my Cinder installation is not able to
> create volumes anymore.
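> The plan, following the tutorial, was then to point the new pool at the sed
> ruleset with something like "ceph osd pool set sed crush_ruleset 1" (the
> ruleset id here is just a guess), but I never get that far because the create
> hangs.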
> I've been looking in the logs for errors and found nothing.
> Any hints on how to proceed to restore my Ceph cluster?
> Is there something wrong with the steps I took to update the CRUSH map? Is
> the problem related to Emperor?
>
> Regards,
> Giuseppe
>
>
>
>
> 2015-04-13 18:26 GMT+02:00 Giuseppe Civitella
> <giuseppe.civitella@xxxxxxxxx>:
>>
>> Hi all,
>>
>> I've got a Ceph cluster which serves volumes to a Cinder installation. It
>> runs Emperor.
>> I'd like to replace some of the disks with OPAL disks and create a new
>> pool which uses exclusively that kind of disk. I'd like to have a
>> "traditional" pool and a "secure" one coexisting on the same Ceph host. I'd
>> then use Cinder's multi-backend feature to serve them.
>> My question is: how is it possible to realize such a setup? How can I bind
>> a pool to certain OSDs?
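>> On the Cinder side I imagine something along these lines in cinder.conf (a
>> sketch only; backend names and pool names are placeholders):
>>
>> [DEFAULT]
>> enabled_backends = rbd-standard,rbd-secure
>>
>> [rbd-standard]
>> volume_driver = cinder.volume.drivers.rbd.RBDDriver
>> volume_backend_name = rbd-standard
>> rbd_pool = volumes
>>
>> [rbd-secure]
>> volume_driver = cinder.volume.drivers.rbd.RBDDriver
>> volume_backend_name = rbd-secure
>> rbd_pool = volumes-secure
>>
>> (plus the usual rbd_user / rbd_secret_uuid settings)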
>>
>> Thanks
>> Giuseppe
>
>
>