Re: Reducing I/O when increasing number of PGs

Gregory Farnum <greg@...> writes:


> Yes, Ceph does all the heavy lifting. Multiple PGs with the same OSDs
> can happen (eg, if you only have two OSDs, all PGs will be on both),
> but it behaves about as well as is possible within the configuration
> you give it.
> -Greg
> Software Engineer #42  <at>  http://inktank.com | http://ceph.com
> 

Thanks, Greg. A few more questions, which again may be obvious.

If a pool defines its number of copies (size) as 6, does that mean each PG
will map to exactly 6 OSDs, no more and no fewer?
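For context, I'm assuming the "number of copies" here is the pool's `size` attribute, which I've been setting and checking like this (the pool name `data` is just my example):

```shell
# Assumption: "number of copies" means the pool's "size" attribute.
# Set the replication size for a pool (pool name "data" is hypothetical):
ceph osd pool set data size 6

# Check the current replication size:
ceph osd pool get data size
```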

How is the primary OSD determined per PG?
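For what it's worth, I've been inspecting PG mappings with `ceph pg map` (the PG id below is just an example). I assume the first OSD listed in the acting set is the primary, but I'd like to confirm that:

```shell
# Show the up set and acting set of OSDs for a PG (pgid "1.0" is an example):
ceph pg map 1.0
# My assumption: the first OSD in the acting set is the primary for that PG.
```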

When an OSD within a PG fails, presumably the PG is then operating outside
the pool's target diversity/redundancy. Does that PG start searching for
another OSD that can be brought in to satisfy the pool's diversity/redundancy
requirements? And is there a way to tell when a PG is in this degraded state?
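The closest I've found to answering that last question myself is the health and PG status output; here is a sketch of what I've been checking, assuming I'm reading the docs correctly:

```shell
# Overall cluster health, including details on degraded PGs:
ceph health detail

# One-line summary of PG states (active+clean vs. degraded, etc.):
ceph pg stat

# Full per-PG state dump, for drilling into individual PGs:
ceph pg dump
```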


Thanks






_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



