Re: [ceph-commit] Ceph Zfs

On Sat, 27 Oct 2012, Raghunandhan wrote:
> Hi Dan,
> 
> Yes, once a zpool is created, we can carve a volume out of it using
> "zfs create -V". The newly created volume shows up as a block device that is
> visible to fdisk. That device can then be formatted with ext4 and used with
> ceph-osd.
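> 
> A minimal sketch of that sequence, assuming a pool named "tank" and an osd
> data path of /var/lib/ceph/osd/ceph-0 (names and sizes are illustrative):
> 
>   zfs create -V 100G tank/osd0         # zvol, appears under /dev/zvol/tank/
>   mkfs.ext4 /dev/zvol/tank/osd0        # format the block device with ext4
>   mount -o user_xattr /dev/zvol/tank/osd0 /var/lib/ceph/osd/ceph-0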
> 
> I have also tried using a zfs filesystem in the zpool and mapping it to the
> osd. When I run mkcephfs I get "error creating empty object store /osd.0: (22)
> Invalid argument":
> 
> === osd.0 ===
> 2012-10-27 10:40:33.939961 7f6e6165d780 -1 filestore(/osd.0) mkjournal error
> creating journal on /osd.0/journal: (22) Invalid argument
> 2012-10-27 10:40:33.939981 7f6e6165d780 -1 OSD::mkfs: FileStore::mkfs failed
> with error -22
> 2012-10-27 10:40:33.940036 7f6e6165d780 -1  ** ERROR: error creating empty
> object store in /osd.0: (22) Invalid argument
> failed: '/sbin/mkcephfs -d /tmp/mkcephfs.3zqOx7Btvl --init-daemon osd.0'

Can you generate a log with 'debug filestore = 20' of this happening so we 
can see exactly which operation is failing with -EINVAL?  There is 
probably some ioctl or syscall that is going awry.
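
For example, something along these lines in ceph.conf before re-running
mkcephfs should capture it (the section and log path are just illustrative):

    [osd]
        debug filestore = 20
        debug osd = 20
        log file = /var/log/ceph/$name.log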

Thanks!
sage


> 
> ---
> Regards,
> Raghunandhan.G
> IIHT Cloud Solutions Pvt. Ltd.
> #15, 4th Floor, 'A' Wing, Sri Lakshmi Complex,
> St. Marks Road, Bangalore - 560 001, India
> 
> On 27-10-2012 02:08, Dan Mick wrote:
> > On 10/25/2012 09:46 PM, Raghunandhan wrote:
> > > Hi Sage,
> > > 
> > > Thanks for replying back. Once a zpool is created, if I mount it on
> > > /var/lib/ceph/osd/ceph-0, mkcephfs doesn't recognize a valid superblock
> > > there and hence it fails.
> > 
> > I assume you mean "once a zfs is created"?  One can't mount zpools, can one?
> > 
> > > I'm trying to build this on our cloud storage. Since btrfs has not been
> > > stable and does not yet have online dedup, I have no other choice for now
> > > but to work with zfs under ceph, which makes sense.
> > > 
> > > So what I did exactly was:
> > > 1. Created a zpool store.
> > > 2. Used the same store and made a block device from it using zfs create.
> > > 3. Once the zfs create was successful, formatted it with ext4 and xattr
> > >    enabled.
> > > 4. Ran ceph on top of that.
> > > 
> > > Following this process doesn't make sense because of the multiple layers
> > > on the storage, and ceph consumes a lot of RAM and CPU cycles, which ends
> > > up in kernel hung tasks. It would be great if there were a way I could use
> > > the zfs pool directly with ceph and make it work.
> > 
> > Have you actually tried making a zfs filesystem in the zpool, and
> > using that as backing store for the osd?
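> > 
> > A rough sketch of what I mean, assuming a pool named "tank" (the names and
> > paths are illustrative):
> > 
> >   zfs create -o mountpoint=/var/lib/ceph/osd/ceph-0 tank/osd0
> > 
> > and then run mkcephfs/ceph-osd with "osd data" pointed at that mountpoint.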
> > 
> > > 
> > > ---
> > > Regards,
> > > Raghunandhan.G
> > > IIHT Cloud Solutions Pvt. Ltd.
> > > #15, 4th Floor, 'A' Wing, Sri Lakshmi Complex,
> > > St. Marks Road, Bangalore - 560 001, India
> > > 
> > > On 25-10-2012 22:06, Sage Weil wrote:
> > > > [moved to ceph-devel]
> > > > 
> > > > On Thu, 25 Oct 2012, Raghunandhan wrote:
> > > > > Hi All,
> > > > > 
> > > > > I have been working with ceph for quite a while, trying to stitch zfs
> > > > > together with it. I was able to do it to a certain extent as follows:
> > > > > 1. zpool creation
> > > > > 2. set dedup
> > > > > 3. create a mountable volume of zfs (zfs create)
> > > > > 4. format the volume with ext4, enabling xattr
> > > > > 5. mkcephfs on the volume
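> > > > > 
> > > > > For reference, steps 1 and 2 are just (pool name and device are
> > > > > illustrative):
> > > > > 
> > > > >   zpool create tank /dev/sdb
> > > > >   zfs set dedup=on tank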
> > > > > 
> > > > > This actually works and dedup is perfect. But I need to avoid multiple
> > > > > layers on the storage, since performance is very slow and kernel
> > > > > timeouts occur often on an 8GB RAM machine. I want to test the
> > > > > performance of btrfs versus zfs. I want to avoid the above multiple
> > > > > layering on storage and make the ceph cluster aware of zfs. Let me know
> > > > > if anyone has a workaround for this.
> > > > 
> > > > I'm not familiar enough with zfs to know what 'mountable volume' means...
> > > > is that a block device/lun that you're putting ext4 on?  Probably the
> > > > best
> > > > results will come from creating a zfs *file system* (using the ZPL or
> > > > whatever it is) and running ceph-osd on top of that.
> > > > 
> > > > There is at least one open bug from someone having problems there, but
> > > > we'd very much like to sort out the problem.
> > > > 
> > > > sage
> > > 

