On 05/24/2013 06:22 PM, Tim Bishop wrote:
On Fri, May 24, 2013 at 05:10:14PM +0200, Wido den Hollander wrote:
On 05/23/2013 11:34 PM, Tim Bishop wrote:
I'm evaluating Ceph and one of my workloads is a server that provides
home directories to end users over both NFS and Samba. I'm looking at
whether this could be backed by Ceph-provided storage.
So to test this I built a single node Ceph instance (Ubuntu precise,
ceph.com packages) in a VM and popped a couple of OSDs on it. I then
built another VM and used it to mount an RBD from the Ceph node. No
problems... it all worked as described in the documentation.
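Roughly, the steps were along these lines (image name and size are just
examples):

  rbd create test --size 10240    # 10 GB image in the default 'rbd' pool
  rbd map test                    # on the client VM; needs the rbd kernel module
  mkfs.ext4 /dev/rbd0
  mount /dev/rbd0 /mnt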
Then I started to look at the filesystem I was using on top of the RBD.
I'd tested ext4 without any problems. I'd been testing ZFS (from stable
zfs-native PPA) separately against local storage on the client VM too,
so I thought I'd try that on top of the RBD. This is when I hit
problems, and the VM panicked (trace at the end of this email).
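The ZFS side was nothing more exotic than something like this (pool name
just an example):

  zpool create tank /dev/rbd0
  zfs create tank/test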
Now I am just experimenting, so this isn't a huge deal right now. But
I'm wondering if this is something that should work? Am I overlooking
something? Is it a silly idea to even try it?
It should work, but I'm not sure what is happening here. I'm wondering,
though, what the reasoning behind this is. You can't use ZFS on multiple
machines at the same time, so you are only exporting via RBD from one
machine to another. Wouldn't it be easier to just use NBD or iSCSI in
this case?
I can't see the use case for RBD here, since it is designed for
distributed workloads.
Is this just a test you wanted to run or something you were thinking
about deploying?
Thank you for the reply. It's a bit of both; at this stage I'm just
testing, but it's something I might deploy, if it works.
I'll briefly explain the scenario.
So I have various systems that I'd like to move on to Ceph, including
stuff like VM servers. But this particular workload is a set of home
directories that are mounted across a mixture of Unix-based servers,
some Linux, some Solaris, and also end user desktops using Windows and
MacOS.
Ah, I get it :) Sounds like a valid use-case.
Since I can't directly mount the filesystem on all the end-user machines,
I thought a proxy host would be a good idea. It could mount the RBD
directly and then reshare it over NFS and Samba to the various other
machines. It could have 10Gbit networking to make full use of the
storage bandwidth available from Ceph.
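As a rough sketch (paths, networks and share names are just placeholders),
the proxy would export something like:

  # /etc/exports
  /srv/homes  10.0.0.0/24(rw,sync,no_subtree_check)

  # smb.conf
  [userhomes]
     path = /srv/homes
     read only = no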
That will work just fine. Later you might want to switch to NFS via
Ganesha or the Samba integration with libcephfs, but that's for when
CephFS becomes production-ready.
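Just to give an idea (untested sketch from memory; export IDs, paths and
option names are placeholders and may differ per version), a Ganesha
export over libcephfs looks roughly like:

  EXPORT {
      Export_Id = 1;
      Path = "/";
      Pseudo = "/cephfs";
      Access_Type = RW;
      FSAL { Name = CEPH; }
  }

and the Samba integration is a VFS module, roughly:

  [cephfs]
     path = /
     vfs objects = ceph
     ceph:config_file = /etc/ceph/ceph.conf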
Wido
I could make the filesystem on the proxy host just ext4, but I pondered
ZFS for some of the extra features it offers, for example creating a
filesystem per user and taking easy snapshots.
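Concretely, something like this (pool and user names just placeholders):

  zfs create tank/home
  zfs create tank/home/alice
  zfs snapshot tank/home/alice@daily-2013-05-24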
The overall idea is to consolidate storage from various different
systems using locally attached storage arrays to a central storage pool
based on Ceph. It's just an idea at this stage, so I'm testing to see
what's feasible, and what works.
Please do let me know if I'm approaching this in the wrong way!
Thank you,
Tim.
(I submitted a bug report to the ZFS folk:
https://github.com/zfsonlinux/spl/issues/241 )
--
Wido den Hollander
42on B.V.
Phone: +31 (0)20 700 9902
Skype: contact42on
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com