A few Ceph questions.

Hi all, I am new to this mailing list.
I have a few basic questions about Ceph; I hope someone can answer them.
Thanks in advance!


I would like to understand more about the placement policy, especially
on the failure path.

1. Machines going up and down is fairly common in a data center.
     How often does the cluster map change?
     Does every machine bounce cause an update and distribution of the
cluster map, and affect CRUSH? Does that make the cluster network too chatty?

2. Ceph mainly depends on the primary OSD in a given PG.
    What happens on the read/write path if that OSD is down at that moment?
    There can be cases where the OSD is down but the cluster map is not
yet up to date.
    When the write/read fails, does the client retry after
refreshing its cluster map?
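To make question 2 concrete, here is the retry behavior I have in mind, as a minimal Python sketch. All of the names here (`primary_osd_for`, `refresh_map`, `StaleMapError`) are invented for illustration; they are not Ceph's actual client interfaces:

```python
# Hypothetical sketch of a client retrying an op after refreshing its
# cluster map. These are NOT real Ceph APIs; the names are made up to
# illustrate the question.

class StaleMapError(Exception):
    """Raised when the target OSD rejects an op built from an old map."""

def write_object(client, pg, data, max_retries=3):
    for _attempt in range(max_retries):
        osd = client.primary_osd_for(pg)   # chosen from the current map
        try:
            return osd.write(pg, data)
        except StaleMapError:
            client.refresh_map()           # pull a newer map, then retry
    raise RuntimeError("write failed after repeated map refreshes")
```

The question is whether the real client follows roughly this refresh-and-retry shape, or whether the failure is surfaced differently.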

3. Whenever the cluster map changes, it may not propagate to the entire
cluster at once.
    Some clients may be running with an old map and may end up
contacting the wrong OSD.
    Do we depend on peering to take care of this situation?
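The way I imagine the stale-map case being caught (again a toy sketch with made-up names, not Ceph code) is a server-side epoch check: the OSD compares the client's map epoch against its own and bounces the op if the client is behind:

```python
def handle_op(osd_epoch, client_epoch, op):
    """Toy server-side guard, assuming each op carries the map epoch
    the client used. If the client's epoch is older, reject the op so
    the client fetches a newer map instead of being served stale data."""
    if client_epoch < osd_epoch:
        return ("EAGAIN", osd_epoch)   # tell client its map is stale
    return ("OK", op())
```

Is this epoch comparison roughly what happens, or does recovery rely purely on peering between OSDs?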

4. If an OSD becomes too full, it may still take reads but no more writes.
    Does CRUSH() take that into account? Does it generate different maps
for reads vs. writes,
    or is this case handled by distributing (moving) data off of the OSD?

5. Does CRUSH() take OSD size into consideration?
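What I mean by "size" in question 5: my understanding is that capacity would be expressed as a per-device weight, so a heavier (larger) OSD wins placement proportionally more often. A toy weighted pick in the spirit of a straw-draw selection (simplified, invented names, not real CRUSH):

```python
import hashlib
import math

def weighted_pick(object_name, osds):
    """Toy straw-style selection: each OSD draws a pseudo-random 'straw'
    scaled by its weight, and the longest straw wins. Weight would track
    capacity, so bigger OSDs are picked more often. Illustration only --
    this is not the real CRUSH algorithm.
    osds: list of (osd_id, weight) pairs."""
    best_id, best_straw = None, -math.inf
    for osd_id, weight in osds:
        h = hashlib.sha256(f"{object_name}:{osd_id}".encode()).digest()
        draw = (int.from_bytes(h[:8], "big") + 1) / 2**64  # in (0, 1]
        straw = math.log(draw) / weight  # log(draw) <= 0; larger weight
        if straw > best_straw:          # pushes the straw toward 0 (longer)
            best_id, best_straw = osd_id, straw
    return best_id
```

The pick is deterministic per object name, so every client with the same map computes the same placement without coordination.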

6. Does Ceph support quorum writes? (2 out of 3 is a success.)
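To clarify question 6, this is the generic majority-write semantics I am asking about (a sketch of the 2-of-3 idea, not a claim about how Ceph actually acknowledges writes):

```python
def quorum_write(replicas, data, quorum=2):
    """Generic quorum-write illustration: attempt the write on every
    replica and declare success once at least `quorum` of them ack.
    Not a statement about Ceph's actual replication protocol."""
    acks = 0
    for replica in replicas:        # each replica is a write callable
        try:
            replica(data)
            acks += 1
        except Exception:
            pass                    # a failed replica just loses its ack
    return acks >= quorum
```

Does Ceph ack to the client once a majority of replicas persist the write, or does it wait for all replicas in the PG?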

--
Jvrao
---
First they ignore you, then they laugh at you, then they fight you,
then you win. - Mahatma Gandhi
--