Newbie Ceph Design Questions

Hello Ceph-Community,

we are considering using a Ceph cluster for serving VMs.
We need good performance and absolute stability.

Regarding Ceph I have a few questions.

Presently we use Solaris ZFS boxes as NFS storage for our VMs.

The ZFS boxes are very fast because they use all free RAM
for the read cache (ARC). With arcstat we can see that 90% of all read
operations are served from memory. The ZFS read cache is also very
intelligent about which blocks it keeps.
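(For what it's worth, the 90% figure above is the hit ratio arcstat derives from the raw ARC hit/miss counters. A minimal sketch of that calculation, with made-up counter values rather than numbers from a real box:)

```shell
# Illustrative ARC-style counters (made-up numbers, not from a real system):
hits=900000
misses=100000

# Hit ratio = hits / (hits + misses), the metric arcstat reports as "hit%".
ratio=$(awk -v h="$hits" -v m="$misses" 'BEGIN { printf "%d", 100 * h / (h + m) }')
echo "read hit ratio: ${ratio}%"
```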

From reading about Ceph, it seems that Ceph clusters don't have
such an optimized read cache. Do you think we can still perform
as well as the Solaris boxes?

Next question: I read that in Ceph an OSD is marked invalid as
soon as its journal disk fails. So what should I do? I don't
want to use one journal disk per OSD. I also don't want to share
one journal SSD among four OSDs, because then I would lose all four
OSDs if that SSD fails. And I am afraid that putting the journals on
the OSD disks themselves will be slow.
Again I am afraid of slow Ceph performance compared to ZFS, because
ZFS supports dedicated ZIL write-cache disks.
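(To make the trade-off concrete: with the FileStore OSD backend the journal location is just a path in ceph.conf, so both layouts I am weighing would be expressed the same way. A sketch, with illustrative device paths, not a recommendation:)

```
[osd]
; default: journal as a file on the OSD's own data disk
osd journal = /var/lib/ceph/osd/$cluster-$id/journal
osd journal size = 5120      ; journal size in MB

[osd.0]
; alternative: journal on a partition of a shared SSD
; (if this SSD dies, every OSD journaling on it goes down with it)
osd journal = /dev/sdb1
```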

Last question: someone told me Ceph snapshots are slow. Is this true?
I always thought making a snapshot is just moving some pointers
to data around.
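(To make the question concrete, here is the kind of workflow I have in mind on the RBD side, where creating a snapshot is supposed to be a metadata operation and any cost would come from the copy-on-write on later writes. Pool and image names are made up; protect/clone assume a format 2 image:)

```
# create an image, snapshot it, then clone from the snapshot
rbd create mypool/vm-disk --size 10240          # 10 GB image
rbd snap create mypool/vm-disk@before-upgrade   # metadata-only?
rbd snap protect mypool/vm-disk@before-upgrade
rbd clone mypool/vm-disk@before-upgrade mypool/vm-disk-test
```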

And a very last question: what about btrfs, is it still not recommended?

Thanks for helping

Christoph


