crushmap rules :: host selection

Hi! I'm new to Ceph and I'm struggling to map my existing storage
knowledge onto Ceph concepts...

I'll state my understanding of the context and then my questions,
so please correct me on anything I got wrong :)

So, files (or pieces of files) are placed into PGs, and each PG is
assigned to a set of OSDs. The CRUSH map determines which physical
OSDs are chosen for placement or access.
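
If I've understood the docs correctly, the resulting mapping can be
inspected per object; a sketch (pool and object names are just
placeholders I made up):

    # Show which PG a hypothetical object hashes to, and which OSDs
    # that PG is currently mapped to
    ceph osd map mypool myobject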

Pools are a logical name for a storage space, but how can I specify
which OSDs or hosts are part of a pool?
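
From what I've read, I suspect the answer is a CRUSH rule that
selects a subtree of the hierarchy, assigned to the pool; a sketch of
what I mean, assuming a bucket named "fast" exists in my CRUSH tree
(bucket and pool names are placeholders):

    # Create a replicated rule rooted at the "fast" bucket, placing
    # one replica per host
    ceph osd crush rule create-replicated fast_rule fast host

    # Point the pool at that rule
    ceph osd pool set mypool crush_rule fast_rule

Is that the right approach, or am I off track?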

For replication, how can I specify: if a replica is missing (for a
given amount of time), start rebuilding it on some available OSD?
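
If I've read the config reference correctly, the grace period here is
mon_osd_down_out_interval; a sketch, assuming a recent release with
the centralized config store:

    # Seconds a down OSD is tolerated before it is marked "out" and
    # recovery onto other OSDs begins (600 is, I believe, the default)
    ceph config set mon mon_osd_down_out_interval 600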

Is there a notion of a "spare", so that if an OSD goes missing in
action, the rebuild starts on another host, and when the old OSD is
back (the HDD is replaced, or the machine was repaired) it is
automatically cleaned up and used again?
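
I imagine the manual version of this would be marking the OSD out and
later back in; a sketch (osd.3 is a placeholder id):

    # Mark the OSD out so its data gets re-replicated elsewhere
    ceph osd out osd.3

    # Once the hardware is repaired, let it take data again
    ceph osd in osd.3

Does anything automate that lifecycle?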

I'm thinking about a 3-node cluster with replica=2 and
failure domain = host, such that if one node is down, the data
from it gets re-replicated onto the remaining nodes (with some drives
kept as spares...).
I am almost certain that from Ceph's point of view what I'm thinking
is wrong, so I would love to receive some advice :)
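
To be concrete, the pool settings I have in mind would be something
like the following, although I've seen warnings that size=2 with
min_size=1 risks data loss (pool name is a placeholder):

    # Two copies of each object, and allow I/O with only one copy left
    ceph osd pool set mypool size 2
    ceph osd pool set mypool min_size 1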

Thanks a lot!
Adrian
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


