Re: Designing a cluster guide

On Wed, 23 May 2012, Gregory Farnum wrote:

On Wed, May 23, 2012 at 12:47 PM, Jerker Nyberg <jerker@xxxxxxxxxxxx> wrote:

 * Scratch file system for HPC. (kernel client)
 * Scratch file system for research groups. (SMB, NFS, SSH)
 * Backend for simple disk backup. (SSH/rsync, AFP, BackupPC)
 * Metropolitan cluster.
 * VDI backend. KVM with RBD.

Hmm. Sounds to me like scratch filesystems would get a lot out of not
having to hit disk on the commit, but not much out of having separate
caching locations versus just letting the OSD page cache handle it. :)
The others, I don't really see collaborative caching helping much either.

Oh, sorry, those were my use cases for Ceph in general. Yes, scratch is mostly of interest, but fast backup matters too. Currently IOPS is limiting our backup speed on a small cluster with many files but not much data: I have trouble scanning through and backing up all the changed files every night. Right now I am backing up to ZFS, but Ceph might help with scaling up performance and size. Another option is switching to SSDs instead of mechanical drives.
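To make the bottleneck concrete: a nightly change scan pays at least one stat() per file, so with millions of files the run time is bound by metadata IOPS rather than data throughput. A minimal sketch of such a scan (hypothetical helper, not from any backup tool mentioned above):

```python
import os


def changed_since(root, since_epoch):
    """Walk `root` and yield paths of files modified after `since_epoch`.

    Every file costs a stat() call (plus directory reads), so on a
    filesystem with millions of files this scan is limited by metadata
    IOPS, not by data throughput -- which is the problem described above.
    """
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_mtime > since_epoch:
                    yield path
            except OSError:
                continue  # file vanished mid-scan; skip it
```

This is why moving the metadata onto SSD, or onto a filesystem that can serve stat() traffic from many spindles in parallel, speeds up the backup even when little data has actually changed.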

Anyway, make a bug for it in the tracker (I don't think one exists
yet, though I could be wrong) and someday when we start work on the
filesystem again we should be able to get to it. :)

Thank you for your thoughts on this. I hope to be able to do that soon.

Regards,
Jerker Nyberg, Uppsala, Sweden.
