Re: Pool Count incrementing on each create even though I removed the pool each time

On Tue, 18 Mar 2014, John Spray wrote:
> Hi Matt,
> 
> This is expected behaviour: pool IDs are not reused.

The IDs go up, but the 'count' shown there should not, i.e. 
num_pools != max_pool_id.  So probably a subtle bug; I expect it's in 
print_summary or a similar method in PGMonitor.cc?
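The distinction can be sketched with a small model (hypothetical, not Ceph's actual code): pool IDs are allocated from a monotonically increasing counter and never reused, so after delete/create cycles the highest pool ID diverges from the number of live pools, and a summary that reports the former instead of the latter overcounts.

```python
# Hypothetical model of pool-ID allocation: IDs are never reused, so
# max_pool_id grows past the live pool count after delete/create cycles.
class PoolMap:
    def __init__(self):
        self.next_id = 0
        self.pools = {}  # id -> name

    def create(self, name):
        pid = self.next_id      # IDs always come from the counter,
        self.next_id += 1       # which only ever moves forward
        self.pools[pid] = name
        return pid

    def delete(self, pid):
        del self.pools[pid]     # the ID is retired, not recycled

    def num_pools(self):
        # what the pool count in "ceph -s" should reflect
        return len(self.pools)

    def max_pool_id(self):
        # what a summary that tracks the allocator would show instead
        return self.next_id - 1

m = PoolMap()
for _ in range(10):             # ten delete/re-create cycles
    pid = m.create("data")
    m.delete(pid)
m.create("data")
print(m.num_pools())    # 1  -- one live pool
print(m.max_pool_id())  # 10 -- but the ID counter kept climbing
```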

sage

> 
> Cheers,
> John
> 
> On Tue, Mar 18, 2014 at 6:53 PM,  <Matt.Latter@xxxxxxxx> wrote:
> >
> > I am a novice Ceph user creating a simple 4-OSD default cluster (initially)
> > and experimenting with RADOS BENCH to understand basic HDD (OSD)
> > performance. For each iteration of rados bench -p data I want the cluster OSDs
> > in their initial state, i.e. 0 objects. I assumed the easiest way was to remove
> > and re-create the data pool each time.
> >
> > While this appears to work, when I run ceph -s it shows me the pool count
> > incrementing each time:
> >
> > matt@redstar9:~$ sudo ceph -s
> >     cluster c677f4c3-46a5-4ae1-b8aa-b070326c3b24
> >      health HEALTH_WARN clock skew detected on mon.redstar10, mon.redstar11
> >      monmap e1: 3 mons at
> > {redstar10=192.168.5.40:6789/0,redstar11=192.168.5.41:6789/0,redstar9=192.168.5.39:6789/0},
> >  election epoch 6, quorum 0,1,2 redstar10,redstar11,redstar9
> >      osdmap e52: 4 osds: 4 up, 4 in
> >       pgmap v5240: 136 pgs, 14 pools, 768 MB data, 194 objects
> >             1697 MB used, 14875 GB / 14876 GB avail
> >                  136 active+clean
> >
> >
> > even though lspools still only shows me the 3 default pools (metadata, rbd,
> > data).
> >
> > Is this a bug, AND/OR, is there a better way to zero my cluster for these
> > experiments?
> >
> > Thanks,
> >
> > Matt Latter
> >
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@xxxxxxxxxxxxxx
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> 