Re: Is Ceph feasible for storing a large no. of small files, with OSD failures (disk I/O errors...) handled so pending replication completes independently of the no. of files


 



On 01/02/2014 05:42 PM, upendrayadav.u wrote:
Hi,

1. Is Ceph feasible for storing a large number of small files in a Ceph
cluster, with OSD failure and the recovery process taken care of?

2. If we have a *4TB OSD (almost 85% full)* storing only small
files (500 KB to 1024 KB), and it fails (due to disk I/O errors...),
how much time will it take to complete all pending replication?
What factors will affect this replication process? Is the
total time to complete pending replication independent of the *number
of files* to replicate? In other words, does failure recovery depend only
on the size of the OSD, not on the number of files to replicate?

Please forget the concept of files; inside Ceph / RADOS we talk about objects :)

It's hard to predict how long it will take, but it depends on the number of PGs and the number of objects inside those PGs.

The more objects you have, the longer recovery will take.
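To see why the object count matters, here is a rough back-of-envelope sketch. The throughput figure is a hypothetical assumption, not a measured Ceph number: with small objects, recovery tends to be dominated by per-object overhead (seeks, metadata operations), so it is modeled as objects per second rather than raw bandwidth.

```python
def estimate_recovery_hours(osd_size_tb, fill_ratio, avg_object_kb,
                            objects_per_sec):
    """Rough estimate of the time to re-replicate one failed OSD's data,
    assuming recovery speed is limited by a per-object rate."""
    used_bytes = osd_size_tb * 1024**4 * fill_ratio
    num_objects = used_bytes / (avg_object_kb * 1024)
    return num_objects / objects_per_sec / 3600

# The 4 TB OSD at 85% full from the question, ~750 KB average objects,
# assuming the cluster sustains ~500 objects/s in aggregate (hypothetical):
hours = estimate_recovery_hours(4, 0.85, 750, 500)
print(round(hours, 1))
```

Note that halving the object size doubles the object count and, under this model, doubles the recovery time for the same amount of data, which is the point Wido is making.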

Btw, I wouldn't fill an OSD up to 85%; that's a bit too high. I'd stay below 80%.
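For reference, the monitors warn and block writes based on configurable fill ratios (defaults 0.85 nearfull, 0.95 full). A sketch of a ceph.conf fragment lowering the warning threshold to match the 80% advice; verify the option names against your release's config reference:

```ini
[mon]
; warn when any OSD crosses 80% utilization (default 0.85)
mon osd nearfull ratio = 0.80
; refuse writes when an OSD crosses 95% utilization (default)
mon osd full ratio = 0.95
```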

3. We have 64 disks (in a JBOD configuration) on one machine. Is
it necessary to run one OSD per disk? Or is it possible to
combine 8 disks into one OSD?


Run one OSD per disk; that gives you the best fault tolerance. You can run one OSD on top of multiple drives with something like RAID, but that reduces your fault tolerance.
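The fault-tolerance trade-off can be made concrete with a small worked comparison, under the simplifying assumption that data is spread evenly and the RAID variant is striping (RAID-0), where any single disk failure takes out the whole OSD:

```python
DISKS = 64
DISK_TB = 4  # per-disk capacity, matching the 4TB OSD from the question

# One OSD per disk: a single disk failure loses one disk's worth of data,
# and the cluster re-replicates only that OSD.
loss_per_disk_osd = DISK_TB

# One OSD over 8 disks in RAID-0: any one of those 8 disks failing takes
# the entire OSD (8 disks' worth of data) out of service at once.
loss_raid0_osd = 8 * DISK_TB

print(loss_per_disk_osd, loss_raid0_osd)  # 4 TB vs 32 TB to re-replicate
```

A RAID level with redundancy (e.g. RAID-6) avoids the bigger blast radius but spends disks on parity that Ceph's own replication already provides, which is why one OSD per disk is the usual recommendation.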

Wido

Thanks a lot for giving your precious time to this... I hope this time I will
get a response.

*:( My last 2 mails got no reply... :(*

*Regards,*
*Upendra Yadav*
*DFS*



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on



