Re: Pool Count incrementing on each create even though I removed the pool each time

What you are seeing is expected behavior. Pool numbers do not get reused; they increment upward. Pool names can be reused once they are deleted. One note, though: if you delete and re-create the data pool and want to use CephFS, you'll need to run 'ceph mds newfs <metadata pool #> <data pool #> --yes-i-really-mean-it' before mounting it.
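A rough sequence for that looks like the following (a sketch, not a drop-in recipe: the PG count of 64 and the pool IDs are assumptions, and the exact confirmation flags can vary between Ceph releases):

    # Delete and re-create the data pool; the name is reused, the pool ID is not.
    ceph osd pool delete data data --yes-i-really-really-mean-it
    ceph osd pool create data 64

    # Check the new IDs -- the re-created 'data' pool gets a fresh, higher number.
    ceph osd lspools

    # Point CephFS at the new data pool before mounting again, e.g. with
    # metadata pool ID 1 and a new data pool ID of 14 (substitute your own):
    ceph mds newfs 1 14 --yes-i-really-mean-it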

Brad

-----Original Message-----
From: ceph-users-bounces@xxxxxxxxxxxxxx [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Matt.Latter@xxxxxxxx
Sent: Tuesday, March 18, 2014 11:53 AM
To: ceph-users@xxxxxxxxxxxxxx
Subject: Pool Count incrementing on each create even though I removed the pool each time


I am a novice Ceph user creating a simple 4-OSD default cluster (initially) and experimenting with rados bench to understand basic HDD (OSD) performance. For each iteration of 'rados bench -p data' I want the cluster OSDs back in their initial state, i.e. 0 objects. I assumed the easiest way was to remove and re-create the data pool each time.
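Concretely, each run looks something like this (a sketch; the PG count and the bench duration here are just example values):

    # Reset: drop and re-create the data pool so it starts with 0 objects.
    ceph osd pool delete data data --yes-i-really-really-mean-it
    ceph osd pool create data 64

    # Run the benchmark against the fresh pool (60-second write test).
    rados -p data bench 60 write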

While this appears to work, when I run 'ceph -s' it shows me that the pool count is incrementing each time:

matt@redstar9:~$ sudo ceph -s
    cluster c677f4c3-46a5-4ae1-b8aa-b070326c3b24
     health HEALTH_WARN clock skew detected on mon.redstar10, mon.redstar11
     monmap e1: 3 mons at {redstar10=192.168.5.40:6789/0,redstar11=192.168.5.41:6789/0,redstar9=192.168.5.39:6789/0}, election epoch 6, quorum 0,1,2 redstar10,redstar11,redstar9
     osdmap e52: 4 osds: 4 up, 4 in
      pgmap v5240: 136 pgs, 14 pools, 768 MB data, 194 objects
            1697 MB used, 14875 GB / 14876 GB avail
                 136 active+clean


even though lspools still only shows me the 3 default pools (metadata, rbd, data).

Is this a bug, and/or is there a better way to zero my cluster for these experiments?

Thanks,

Matt Latter

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

