RE: Re-exporting NFS to vmware

Hi,
Thanks for your advice. It helps me a lot.
And now I know that nfsd cannot re-export a FUSE-mounted filesystem.
But the workflow of the Gluster native nfsd is not efficient, just as the white paper mentioned.
Gluster behaves poorly when the glusterfs protocol is not used:
1. A client asks server A for a file, but server A doesn't have it.
2. Server A finds that server B has the file.
3. Server B transfers the file to server A.
4. Server A transfers the file to the client.

So step 3 is a wasteful, time-consuming extra hop.
How do you solve this problem when you have to use the NFS protocol?
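
For reference, this is roughly the difference between the two access paths (just a sketch; the hostname "server1", the volume name "vmstore" and the mount points below are placeholders, not my real setup):

  # Native glusterfs protocol: the client talks directly to whichever server holds the file
  mount -t glusterfs server1:/vmstore /mnt/vmstore

  # NFS: the client only talks to the one server it mounted, so that server
  # first has to fetch the file from its peer (step 3 above)
  mount -t nfs -o vers=3,proto=tcp,nolock server1:/vmstore /mnt/vmstore
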
Thanks in advance.

Best Regards,
Sylar Shen
________________________________________
Hi,

The Linux kernel nfsd does work for re-exporting a FUSE-mounted Gluster filesystem. The Gluster native nfsd is clearly better/faster than unfsd, but in benchmarks we've done, both the Gluster and the kernel NFS servers have performance bottlenecks that limit throughput to around 500 MB/s with many concurrent sessions, even though the storage backend supports a much higher load.
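
For reference, a minimal /etc/exports sketch for that kind of re-export (the path, network and fsid value below are placeholders); FUSE mounts generally need an explicit fsid option to be exportable by the kernel nfsd:

  # /etc/exports on the re-exporting server
  /mnt/gluster  10.0.0.0/24(rw,no_root_squash,no_subtree_check,fsid=10)

  # reload the export table
  exportfs -ra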

Kind regards,
Fredrik Widlund


-----Original Message-----
From: gluster-devel-bounces+fredrik.widlund=qbrick.com@xxxxxxxxxx [mailto:gluster-devel-bounces+fredrik.widlund=qbrick.com@xxxxxxxxxx] On Behalf Of Gordan Bobic
Sent: 6 January 2011 13:15
To: 'gluster-devel@xxxxxxxxxx'
Subject: Re: [Gluster-devel] Re-exporting NFS to vmware

沈允中 wrote:
> Hi,
> Thanks for the advice.
> My problem is just like you said.
> But is there any alternative way that I can solve my problem?
> Because vmware really doesn't support mounting via the glusterfs protocol.
> I know that Gluster.com may publish their VMStor product to improve this.
> However, to tell the truth, I don't want to spend money.......:p
>
> If the problem cannot be solved now, does anyone know of other file systems similar to Gluster that I can mount via the NFS protocol without losing performance?
> Thanks in advance.
>
>
> Best Regards,
> Sylar Shen
> ________________________________________
>
>
> Sylar wrote:
>> Hi All:
>>
>> I wanted to use GlusterFS as shared storage for vmware.
>>
>> But the NFS protocol performed poorly as the setup scaled up (I have 20
>> servers in the GlusterFS cluster).
>>
>> So I figured out an approach when I saw the Ceph wiki:
>>
>> http://ceph.newdream.net/wiki/Re-exporting_NFS
>>
>>
>>
>> I think that I can add a middle-tier converter between vmware and GlusterFS.
>>
>> It would serve vmware over NFS and mount GlusterFS via the glusterfs protocol.
>>
>> Here is the architecture I had in mind...
>>
>> And then I ran into a problem. The middle tier connects to GlusterFS via
>> the glusterfs protocol without any issues.
>>
>> But errors occur when vmware connects to the middle tier via the NFS
>> protocol.
>>
>> vmware cannot mount the middle tier via NFS on the first attempt.
>>
>> Even if vmware can mount the middle tier via NFS, it cannot see the data
>> in GlusterFS.
>>
>> It can only see the local data (directories) on the middle tier itself.
>>
>>
>>
>> Does anyone have the same problem as me?
>>
>> How do you solve this thorny problem?
>
> Are you saying you are mounting GlusterFS on an interim node, and then
> re-exporting that via NFS? What are you using for the NFS export? Last I
> checked, the kernel nfsd didn't work with FUSE-based file systems, so you'd
> have to use something like unfsd (user-space) instead. You may, however,
> find that if you do that, the extra performance hit from unfsd will undo
> most of the speed-up you are hoping to achieve.

If the only problem you have is providing an NFS share to the client,
then you could either use unfsd (google it, I'm sure you'll find it), or
use GlusterFS's built-in NFS interface, which is supposed to be more efficient
than unfsd. Both of these were discussed here a while back, check the
archives.
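
Roughly, the unfsd route would look something like the sketch below (the server name, volume name, mount point and network are placeholders, and I'm assuming unfs3's unfsd binary and its -e option):

  # On the interim node: mount the volume over FUSE
  mount -t glusterfs server1:/vmstore /mnt/gluster

  # /etc/exports entry for the FUSE mount
  /mnt/gluster  10.0.0.0/24(rw,no_root_squash)

  # start the user-space NFS server (unfs3)
  unfsd -e /etc/exports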

Gordan

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel
