Re: PG active+clean+degraded, but not creating new replicas

On Tue, 4 Jun 2013, Wolfgang Hennerbichler wrote:
> On Mon, Jun 03, 2013 at 08:58:00PM -0700, Sage Weil wrote:
>  
> > My first guess is that you do not have the newer crush tunables set and 
> > some placements are not quite right.  If you are prepared for some data 
> > migration, and are not using an older kernel client, try
> > 
> >  ceph osd crush tunables optimal
> 
>  One thing I'm not quite sure about - the documentation says: "The ceph-osd and ceph-mon daemons will start requiring the feature bits of new connections as soon as they get the updated map. However, already-connected clients are effectively grandfathered in, and will misbehave if they do not support the new feature."
> 
> So: am I in danger if I set this to optimal on a production bobtail cluster where qemu-rbd is the only "client" around?

The tunables were added in v0.55 (just prior to bobtail), so you should be 
in good shape.
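
For reference, it can help to look at what the CRUSH map currently carries before flipping the switch. A minimal sketch (assuming crushtool is available on the admin node and using /tmp paths purely for illustration; the decompiled output varies by release):

 # dump and decompile the current CRUSH map, then inspect its tunables
 ceph osd getcrushmap -o /tmp/crushmap
 crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
 grep tunable /tmp/crushmap.txt

 # once you are prepared for the remapping, switch profiles
 ceph osd crush tunables optimal

Expect some PGs to move while the cluster rebalances after the change.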

sage
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



