Re: Ceph cluster with single replica to be deployed

[adding ceph-devel]

On Thu, 1 Dec 2016, LIU, Fei wrote:
> Hi Sage,
> 
>    We have some workloads with latency and IOPS requirements that
> don’t care that much about data persistence, but we do care a lot about TCO.
> We are looking for a way to deploy Ceph with a single replica. We have
> tried it and took one OSD down repeatedly; the whole cluster stayed healthy as
> long as you don’t need to operate on any data stored on the down OSD. We need
> to set up the crushmap carefully within a pool. However, exactly which data
> is lost on the down OSD is very hard to track; any suggestions and advice
> will be appreciated.

It of course depends on your application, but from the ceph perspective, 
this is doable.  The key is to streamline the data-loss procedure.  If an 
OSD fails, you'll lose a random set of PGs, which will hold a random 
subset of the overall set of objects in the cluster.  You can tell the 
system to give up and recreate a PG as empty (acknowledging that its data 
was *permanently* lost and telling ceph to carry on without it) with

 ceph pg force_create_pg <pgid>

You'll want to have some automation to pick out which PGs are lost and 
recreate them.
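
For illustration only, a rough sketch of what that automation could look 
like, assuming the lost PGs show up as stale and that jq is available; the 
JSON layout of dump_stuck varies between releases, so check the parsing 
against your version before relying on it:

  #!/bin/bash
  # Sketch (not from the original mail): find PGs stuck stale after the
  # only OSD holding them died, and recreate them as empty.  This
  # permanently discards whatever objects lived in those PGs, so run it
  # only once you are sure the OSD (and its data) is really gone.
  set -euo pipefail

  # Assumption: dump_stuck emits a JSON array of entries with a "pgid"
  # field; adjust the jq filter for your Ceph release if it differs.
  stale_pgs=$(ceph pg dump_stuck stale --format json | jq -r '.[].pgid' || true)

  for pgid in $stale_pgs; do
      echo "giving up on PG $pgid: recreating it as empty (data lost)"
      ceph pg force_create_pg "$pgid"
  done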

Of course, your application has to tolerate that...

sage
