Hi ceph users,
Ceph has very good documentation about technical usage, but quite a few conceptual things are missing (from my point of view).
It's not easy to understand everything at once, but little by little it's coming together.
Here are some questions about Ceph; I hope someone can take a little time to point me to where I can find answers:
- Backup:
Do you back up data from a Ceph cluster, or do you consider a replica to be a backup of that data?
Let's say I have a replica size of 3. My CRUSH map will keep 2 copies in my main rack and 1 copy in another rack in another datacenter.
Can I consider that third copy a backup? What would be your position?
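To illustrate, here is a rough sketch of the kind of CRUSH rule I have in mind (the bucket names "main-rack" and "remote-rack" are just placeholders from my setup, and I am not sure the rule is exactly right):

    rule two_local_one_remote {
        ruleset 1
        type replicated
        min_size 2
        max_size 3
        # two copies on different hosts in the main rack
        step take main-rack
        step chooseleaf firstn 2 type host
        step emit
        # one copy on a host in the remote rack (other datacenter)
        step take remote-rack
        step chooseleaf firstn 1 type host
        step emit
    }

So the third copy would live in the other datacenter, and my question is whether that copy can reasonably be treated as a backup.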
- Writing process of Ceph object storage using radosgw:
Simple question, but I'm not sure about it.
The more replicas I have, the slower my cluster will be? Does Ceph have to acknowledge the write on every replica before saying it's good?
From what I read, Ceph writes to and acknowledges from the primary OSD of the pool. So if that's the case, it doesn't matter how many replicas I want or how far away the other OSDs are; it would work the same.
Can I choose the primary OSD myself in my zone 1, have a copy in zone 2 (same rack), and a third copy in zone 3 in another datacenter that might have some latency?
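To make the question concrete, here is roughly how I imagine testing the acknowledgment behaviour from a client with python-rados (the pool name "mypool" and the object name are just placeholders, and I may be misreading the API):

    import rados

    # Connect using the local ceph.conf and the default keyring
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('mypool')

    def on_complete(completion):
        # Fired when the write is acknowledged (in memory on the replicas)
        print("write acknowledged")

    def on_safe(completion):
        # Fired when the write is committed to stable storage on the replicas
        print("write committed to disk")

    comp = ioctx.aio_write_full('test-object', b'hello ceph',
                                oncomplete=on_complete, onsafe=on_safe)
    comp.wait_for_safe()

    ioctx.close()
    cluster.shutdown()

For the primary placement part, I also saw there is a "ceph osd primary-affinity <osd-id> <weight>" command that seems to bias which OSD becomes primary, but I am not sure it is the right tool for pinning the primary to my zone 1.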
- Data persistence / availability:
If my CRUSH map distributes by host and I have 3 hosts with a replication size of 3,
this means I will have 1 copy on each host.
Does it mean I can lose 2 hosts and still have my cluster working, at least in read mode? And eventually in write mode too, if I set osd pool default min size = 1?
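Concretely, I mean something like this (the pool name "rbd" is only an example):

    # ceph.conf defaults applied to new pools
    [global]
    osd pool default size = 3       # one copy per host with my CRUSH map
    osd pool default min size = 1   # keep serving I/O with a single copy left

    # or per existing pool
    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 1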
Thanks for your help.
-
Benoît G