ZFS setup question

Ok,
Well, I set up my ZFS filesystems like this:

zfs create pool1/glusterfs
zfs create pool1/glusterfs/audio
zfs create pool1/glusterfs/video
zfs create pool1/glusterfs/documents


It all went well and I restarted gluster...I could read the files 
fine...however, I see these errors when trying to write (touch a file):

[2011-01-06 19:11:23] W [posix.c:331:posix_fstat_with_gen] posix1: 
Access to fd 13 (on dev 47775760) is crossing device (47775759)
[2011-01-06 19:11:23] E [posix.c:2267:posix_create] posix1: fstat on 13 
failed: Cross-device link
[2011-01-06 19:15:58] W [posix.c:331:posix_fstat_with_gen] posix1: 
Access to fd 13 (on dev 47775761) is crossing device (47775759)
[2011-01-06 19:15:58] E [posix.c:2267:posix_create] posix1: fstat on 13 
failed: Cross-device link


This makes me think I cannot put everything under pool1/glusterfs like 
that.
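
As far as I can tell, each nested ZFS dataset mounts as its own 
filesystem with its own device ID, which seems to be what the posix 
translator is complaining about. A quick way to check (just a sketch, 
using my paths from above):

stat -c '%d %n' /pool1/glusterfs /pool1/glusterfs/audio \
    /pool1/glusterfs/video /pool1/glusterfs/documents

If those device numbers differ from the export root, any file created 
in a child dataset shows up on a different device than gluster expects, 
which matches the "crossing device" messages above.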

So I thought I would do something like this:

zfs create pool1/glusterfs01/audio
zfs create pool1/glusterfs02/video
zfs create pool1/glusterfs03/documents

but the problem with that is that I then cannot just share out 
pool1/glusterfs in the .vol file like I wanted to.
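
(Those create lines assume the pool1/glusterfs01/02/03 parent datasets 
already exist; if they do not, my understanding is that 'zfs create -p' 
will create the missing parents in one go, e.g.:

zfs create -p pool1/glusterfs01/audio
zfs create -p pool1/glusterfs02/video
zfs create -p pool1/glusterfs03/documents
)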

Do I have any choice...other than a longer .vol file...or a single 
gluster ZFS filesystem named pool1/glusterfs?
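
For reference, my understanding is that the "longer .vol file" route 
would mean one storage/posix volume per dataset, roughly along these 
lines (the volume names here are just placeholders):

volume posix-audio
  type storage/posix
  option directory /pool1/glusterfs/audio
end-volume

volume posix-video
  type storage/posix
  option directory /pool1/glusterfs/video
end-volume

...and so on for documents, with each of those exported through the 
protocol/server volume instead of a single posix1.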

At this point the whole cluster is down...so I have to make a choice 
shortly.

Thanks in advance,

Shain



On 01/06/2011 05:12 PM, Jacob Shucart wrote:
> Shain,
>
> That's correct.  There really is no downside to doing separate ZFS
> filesystems unless you consider the process of creating them or managing
> them a downside.  ZFS is pretty easy to administer, so my overall
> recommendation would be scenario #2.
>
> -Jacob
>
> -----Original Message-----
> From: Shain Miley [mailto:smiley at npr.org]
> Sent: Thursday, January 06, 2011 2:10 PM
> To: Jacob Shucart
> Cc: 'Gluster General Discussion List'
> Subject: Re: ZFS setup question
>
> Jacob,
> Thanks for the input.  I did consider that, along with having the ability
> to set different properties on each (compression, dedup, etc.), none of
> which I plan on using right now...however, I would at least have the
> option in the future.
>
> The only other thing I was able to come up with was this:
>
> If one of the shares did get out of sync, for example (or if you simply
> wanted to know the size of the shares, for that matter)...it might be
> easier to tell which one using 'zfs list' or something like that...rather
> than having to do a 'du' on a several-TB folder.
>
>
> Shain
>
>
>
>
> On 01/06/2011 04:58 PM, Jacob Shucart wrote:
>> Shain,
>>
>> If you are planning on taking snapshots of the underlying filesystems
>> then #2 would be better.  If you are not planning on taking snapshots
>> then #1 and #2 are equal really, so I would say that #1 is fine because
>> there are fewer filesystems to manage.  I hope this clarifies things.
>> Since ZFS snapshots are done at the filesystem level, if you wanted to
>> take a snapshot of just music then you could not do that unless music
>> was on its own ZFS filesystem.
>>
>> -Jacob
>>
>> -----Original Message-----
>> From: gluster-users-bounces at gluster.org
>> [mailto:gluster-users-bounces at gluster.org] On Behalf Of Shain Miley
>> Sent: Thursday, January 06, 2011 1:47 PM
>> To: Gluster General Discussion List
>> Subject: ZFS setup question
>>
>> Hello,
>> I am in the process of setting up my Gluster shares, and I am looking at
>> the following two setup options.  I am wondering if anyone can speak
>> to the pros/cons of either:
>>
>> 1) Create one large zfs filesystem for gluster.
>>
>> eg:
>>
>> zfs create pool1/glusterfs
>>
>> and then create several folders with 'mkdir' inside '/pool1/glusterfs'
>> (music, videos, documents).
>>
>> 2) Create 1 zfs filesystem per share.
>>
>> eg:
>>
>> zfs create pool1/glusterfs
>> zfs create pool1/glusterfs/music
>> zfs create pool1/glusterfs/videos
>> zfs create pool1/glusterfs/documents
>>
>>
>> I would then share /pool1/glusterfs out with gluster (I do not want to
>> have an overly complicated .vol file with each share having its
>> own gluster volume).
>>
>>
>> Any thoughts would be great.
>>
>> Thanks,
>>
>> Shain
>>
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
>


