Re: Failures with Ceph without redundancy/replication

On Thu, Jul 16, 2015 at 11:58 AM, Vedran Furač <vedran.furac@xxxxxxxxx> wrote:
> Hello,
>
> I'm experimenting with Ceph for caching. It's configured with size=1 (so
> no redundancy/replication) and exported via CephFS to clients. Now I'm
> wondering what happens if an SSD dies and all of its data is lost. I'm
> seeing files stored as 4MB chunks in PGs; do we know whether a whole file
> saved through CephFS (all its chunks) lands in a single PG (or at least
> in multiple PGs within a single OSD), or whether it might be spread over
> multiple OSDs? In the latter case, an SSD failure would effectively mean
> losing more data than fits on a single drive, or even worse, massive
> corruption potentially affecting most of the content. Note that losing a
> single drive and all of its data (so 1% in the case of 100 drives) isn't
> an issue for me. However, losing much more, or files being silently
> corrupted with holes in them, is unacceptable. I would then have to go
> with some erasure coding.

Files are chunked into 4MB objects, as you've surmised. They are
deliberately stored *widely* across the cluster (i.e., not in the same
PG), so the loss of a PG will leave you with 4MB holes in most of your
files (or lots of empty files, or whatever, depending on file sizes).
Right now that loss will be silent, yes.
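
To make the failure mode concrete, here's a rough Python sketch of the
mapping (the inode number is made up, and md5 stands in for Ceph's
actual rjenkins hash and CRUSH placement, so treat it as an
illustration, not Ceph code):

    import hashlib

    OBJECT_SIZE = 4 * 1024 * 1024  # CephFS default object size: 4 MiB

    def objects_for_file(inode_hex: str, file_size: int) -> list[str]:
        # CephFS names the backing RADOS objects "<inode>.<stripe index>",
        # e.g. 10000000001.00000000, 10000000001.00000001, ...
        count = (file_size + OBJECT_SIZE - 1) // OBJECT_SIZE
        return [f"{inode_hex}.{i:08x}" for i in range(count)]

    def pg_for_object(name: str, pg_num: int) -> int:
        # Toy stand-in for the object->PG step (really an rjenkins hash
        # plus stable_mod); md5 here just gives a deterministic spread.
        h = int.from_bytes(hashlib.md5(name.encode()).digest()[:4], "little")
        return h % pg_num

    # A 100 MiB file becomes 25 objects scattered across the PGs, so with
    # size=1 a single dead OSD punches 4MB holes into many files at once.
    for obj in objects_for_file("10000000001", 100 * 1024 * 1024):
        print(obj, "-> pg", pg_for_object(obj, pg_num=128))

Since each object hashes to a PG independently, the objects of any
large file land in many different PGs, and with size=1 each PG lives on
exactly one OSD. That's why a single dead SSD shows up as holes across
a large fraction of your files rather than as the loss of a neat 1%
slice.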
-Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



