Re: ZFS on RBD?

On Fri, May 24, 2013 at 05:10:14PM +0200, Wido den Hollander wrote:
> On 05/23/2013 11:34 PM, Tim Bishop wrote:
> > I'm evaluating Ceph and one of my workloads is a server that provides
> > home directories to end users over both NFS and Samba. I'm looking at
> > whether this could be backed by Ceph provided storage.
> >
> > So to test this I built a single node Ceph instance (Ubuntu precise,
> > ceph.com packages) in a VM and popped a couple of OSDs on it. I then
> > built another VM and used it to mount an RBD from the Ceph node. No
> > problems... it all worked as described in the documentation.
> >
> > Then I started to look at the filesystem I was using on top of the RBD.
> > I'd tested ext4 without any problems. I'd been testing ZFS (from stable
> > zfs-native PPA) separately against local storage on the client VM too,
> > so I thought I'd try that on top of the RBD. This is when I hit
> > problems, and the VM paniced (trace at the end of this email).
> >
> > Now I am just experimenting, so this isn't a huge deal right now. But
> > I'm wondering if this is something that should work? Am I overlooking
> > something? Is it a silly idea to even try it?
> 
> It should work, but I'm not sure what is happening here. But I'm 
> wondering, what's the reasoning behind this? You can use ZFS on multiple 
> machines, so you are exporting via RBD from one machine to another.
> 
> Wouldn't it be easier to just use NBD or iSCSI in this case?
> 
> I can't find the use case here for using RBD, since that is designed 
> for distributed workloads.
> 
> Is this just a test you wanted to run or something you were thinking 
> about deploying?

Thank you for the reply. It's a bit of both: at this stage I'm just
testing, but it's something I might deploy if it works.

I'll briefly explain the scenario.

So I have various systems that I'd like to move on to Ceph, including
stuff like VM servers. But this particular workload is a set of home
directories that are mounted across a mixture of Unix-based servers,
some Linux, some Solaris, and also end user desktops using Windows and
MacOS.

Since I can't directly mount the filesystem on all the end user machines,
I thought a proxy host would be a good idea. It could mount the RBD
directly and then re-share it using NFS and Samba to the various other
machines. It could have 10Gbit networking to make full use of the
available storage from Ceph.
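Concretely, I'm imagining the proxy host doing something along these
lines (just a sketch; the pool/image names, network range, and share
name are made-up examples):

```shell
# Map the RBD image on the proxy host using the kernel client.
# "rbd" (pool) and "homedirs" (image) are example names.
rbd map homedirs --pool rbd      # shows up as e.g. /dev/rbd0

# Put a filesystem on it and mount it.
mkfs.ext4 /dev/rbd0
mkdir -p /export/home
mount /dev/rbd0 /export/home

# Re-share it over NFS...
echo '/export/home 10.0.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# ...and over Samba.
cat >> /etc/samba/smb.conf <<'EOF'
[home]
   path = /export/home
   read only = no
EOF
service smbd restart
```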

I could make the filesystem on the proxy host just ext4, but I pondered
ZFS for some of the extra features it offers. For example, creating a
file system per user and easy snapshots.
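With ZFS instead of ext4, the same mapped device would carry a zpool,
and the per-user filesystems and snapshots fall out naturally (again a
sketch; the pool and user names are just examples):

```shell
# Build a zpool directly on the mapped RBD device.
zpool create home /dev/rbd0

# One ZFS filesystem per user, each with its own quota.
zfs create home/alice
zfs create home/bob
zfs set quota=10G home/alice

# Cheap point-in-time snapshots, per user.
zfs snapshot home/alice@before-upgrade
zfs list -t snapshot
```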

The overall idea is to consolidate storage from various different
systems using locally attached storage arrays to a central storage pool
based on Ceph. It's just an idea at this stage, so I'm testing to see
what's feasible, and what works.

Please do let me know if I'm approaching this in the wrong way!

Thank you,

Tim.

(I submitted a bug report to the ZFS folk:
https://github.com/zfsonlinux/spl/issues/241 )

-- 
Tim Bishop
http://www.bishnet.net/tim/
PGP Key: 0x5AE7D984
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



