ZFS on RBD?

Hi all,

I'm evaluating Ceph, and one of my workloads is a server that provides
home directories to end users over both NFS and Samba. I'm looking at
whether this could be backed by Ceph-provided storage.

So to test this I built a single-node Ceph instance (Ubuntu precise,
ceph.com packages) in a VM and put a couple of OSDs on it. I then built
another VM and used it to mount an RBD from the Ceph node. No
problems... it all worked as described in the documentation.
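
For reference, the RBD setup was just the documented approach, roughly
along these lines (the image name, size, and device node are
placeholders, and the exact options may vary with the tools version):

  rbd create test --size 10240        # 10 GB image in the default 'rbd' pool
  sudo rbd map test --pool rbd        # appears on the client as e.g. /dev/rbd0
  sudo mkfs.ext4 /dev/rbd0            # ext4 for the first round of testing
  sudo mount /dev/rbd0 /mnt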

Then I started to look at the filesystem I was using on top of the RBD.
I'd tested ext4 without any problems. I'd also been testing ZFS (from
the stable zfs-native PPA) separately against local storage on the
client VM, so I thought I'd try that on top of the RBD. This is when I
hit problems, and the VM panicked (trace at the end of this email).
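
The ZFS side was equally minimal: something like the following, with
the pool created directly on the mapped RBD device (pool and dataset
names are just placeholders):

  sudo zpool create tank /dev/rbd0    # pool directly on the mapped RBD
  sudo zfs create tank/home           # dataset for the home directories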

Now, I'm just experimenting, so this isn't a huge deal right now. But
I'm wondering whether this is something that should work. Am I
overlooking something? Is it a silly idea to even try it?

The trace looks to be in the ZFS code (and, if I'm reading the Code:
line right, the faulting bytes <48> f7 f7 decode to a 64-bit div by
%rdi, with RDI = 0 in the register dump, hence the divide error), so if
there's a bug that needs fixing it's probably over there rather than in
Ceph, but I thought here might be a good starting point for advice.

Thanks in advance everyone,

Tim.

[  504.644120] divide error: 0000 [#1] SMP
[  504.644298] Modules linked in: coretemp(F) ppdev(F) vmw_balloon(F) microcode(F) psmouse(F) serio_raw(F) parport_pc(F) vmwgfx(F) i2c_piix4(F) mac_hid(F) ttm(F) shpchp(F) drm(F) rbd(F) libceph(F) lp(F) parport(F) zfs(POF) zcommon(POF) znvpair(POF) zavl(POF) zunicode(POF) spl(OF) floppy(F) e1000(F) mptspi(F) mptscsih(F) mptbase(F) btrfs(F) zlib_deflate(F) libcrc32c(F)
[  504.646156] CPU 0
[  504.646234] Pid: 2281, comm: txg_sync Tainted: PF   B      O 3.8.0-21-generic #32~precise1-Ubuntu VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform
[  504.646550] RIP: 0010:[<ffffffffa0258092>]  [<ffffffffa0258092>] spa_history_write+0x82/0x1d0 [zfs]
[  504.646816] RSP: 0018:ffff88003ae3dab8  EFLAGS: 00010246
[  504.646940] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
[  504.647091] RDX: 0000000000000000 RSI: 0000000000000020 RDI: 0000000000000000
[  504.647242] RBP: ffff88003ae3db28 R08: ffff88003b2afc00 R09: 0000000000000002
[  504.647423] R10: ffff88003b9a4512 R11: 6d206b6e61742066 R12: ffff88003add6600
[  504.647600] R13: ffff88003cfc2000 R14: ffff88003d3c9000 R15: 0000000000000008
[  504.647778] FS:  0000000000000000(0000) GS:ffff88003fc00000(0000) knlGS:0000000000000000
[  504.647997] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[  504.648153] CR2: 00007fbc1ef54a38 CR3: 000000003bf3e000 CR4: 00000000000007f0
[  504.648380] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  504.648586] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[  504.648766] Process txg_sync (pid: 2281, threadinfo ffff88003ae3c000, task ffff88003b7c45c0)
[  504.648990] Stack:
[  504.649087]  0000000000000002 ffffffffa01e3360 ffff88003b2afc00 ffff88003ae3dba0
[  504.649461]  ffff88003d3c9000 0000000000000008 ffff88003cfc2000 000000005530ebc2
[  504.649835]  ffff88003d22ac40 ffff88003d22ac40 ffff88003cfc2000 ffff88003b2afc00
[  504.650209] Call Trace:
[  504.650351]  [<ffffffffa0258415>] spa_history_log_sync+0x235/0x650 [zfs]
[  504.650554]  [<ffffffffa023fdf3>] dsl_sync_task_group_sync+0x123/0x210 [zfs]
[  504.650760]  [<ffffffffa0237deb>] dsl_pool_sync+0x41b/0x530 [zfs]
[  504.650953]  [<ffffffffa024cfd8>] spa_sync+0x3a8/0xa50 [zfs]
[  504.651117]  [<ffffffff810ae6ac>] ? ktime_get_ts+0x4c/0xe0
[  504.651302]  [<ffffffffa025de3f>] txg_sync_thread+0x2df/0x540 [zfs]
[  504.651501]  [<ffffffffa025db60>] ? txg_init+0x250/0x250 [zfs]
[  504.651676]  [<ffffffffa0156c58>] thread_generic_wrapper+0x78/0x90 [spl]
[  504.651856]  [<ffffffffa0156be0>] ? __thread_create+0x310/0x310 [spl]
[  504.652029]  [<ffffffff8107f000>] kthread+0xc0/0xd0
[  504.652174]  [<ffffffff8107ef40>] ? flush_kthread_worker+0xb0/0xb0
[  504.652339]  [<ffffffff816facac>] ret_from_fork+0x7c/0xb0
[  504.652492]  [<ffffffff8107ef40>] ? flush_kthread_worker+0xb0/0xb0
[  504.652655] Code: 55 b0 48 89 fa 48 29 f2 48 01 c2 48 39 55 b8 0f 82 bc 00 00 00 4c 8b 75 b0 41 bf 08 00 00 00 48 29 c8 31 d2 49 8b b5 70 08 00 00 <48> f7 f7 4c 8d 45 c0 4c 89 f7 48 01 ca 48 29 d3 48 83 fb 08 49
[  504.659810] RIP  [<ffffffffa0258092>] spa_history_write+0x82/0x1d0 [zfs]
[  504.660045]  RSP <ffff88003ae3dab8>
[  504.660187] ---[ end trace e69c7eee3ba17773 ]---

-- 
Tim Bishop
http://www.bishnet.net/tim/
PGP Key: 0x5AE7D984