RE: mount r/w and r/o

Thanks for the reply.  Very interesting.  Could you explain how the bsd box read the raw device and built the internal lookup table?

The main reason I wrote "not GFS" is that I'm aware of it and know it would take a bit of work to implement.  I'm currently looking for a quick fix to buy me some time to implement a more robust solution.  Also, realizing I had some definite issues w/ my current config, I researched GFS a little while back.  It's my understanding that total storage in a GFS cluster cannot exceed 8TB, and we have > 12TB.  I didn't investigate much further for a work-around.

Andreas suggested Lustre, which on the surface appears to be viable.

-----Original Message-----
From: Jérôme Petazzoni [mailto:jp@xxxxxxxx] 
Sent: Friday, November 04, 2005 12:11 PM
To: Jeff Dinisco
Cc: Wolber, Richard C; Damian Menscher; ext3-users@xxxxxxxxxx
Subject: Re: mount r/w and r/o

[one r-w mount, multiple r-o mounts shared thru FC switch]

>>>should I use it?
>>>Am I going about this all wrong, is there a better way to do this 
>>>(other than GFS)?
I once heard about someone doing something like that for a video farm, 
intermixing solaris and freebsd servers (so as far as he, and I, knew, 
there was no easy sharing solution). He did the following:
- create the filesystem on the solaris box
- create many 1 GB files, each filled with a specific byte pattern 
(512-byte sectors, IIRC)
- the freebsd box would read the raw device, detect the byte patterns, 
and build an internal lookup table, so it knew that file F, offset O was 
located on physical sector S
- the solaris box would then write data into the 1 GB files, and the 
freebsd box could read the data back thanks to the previously built 
lookup table (the 1 GB files would only be overwritten in place, never 
truncated or recreated, AFAIK); a rough sketch of the idea follows below
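
For illustration, here is a minimal sketch (in Python) of how the 
stamping and scanning might work. Everything below is guesswork: the 
marker value, the per-sector header layout, and the function names are 
all invented for the example, not what he actually used.

import struct

SECTOR = 512           # assumed sector size, per the 512-byte sectors above
MAGIC = b"MARKER!!"    # hypothetical 8-byte marker stamped into every sector

def stamp_file(path, size_bytes, file_id):
    # Writer side: preallocate the file, stamping every sector with
    # (MAGIC, file_id, sector-within-file) so a raw scan can find it later.
    with open(path, "wb") as f:
        for seq in range(size_bytes // SECTOR):
            header = MAGIC + struct.pack("<II", file_id, seq)
            f.write(header.ljust(SECTOR, b"\0"))

def build_lookup_table(raw_device):
    # Reader side: scan the raw device once, after stamping but before
    # real data overwrites the markers.  The resulting table maps
    # (file_id, sector-within-file) -> physical sector number.
    table = {}
    with open(raw_device, "rb") as dev:
        phys = 0
        while True:
            sector = dev.read(SECTOR)
            if len(sector) < SECTOR:
                break
            if sector.startswith(MAGIC):
                file_id, seq = struct.unpack_from("<II", sector, len(MAGIC))
                table[(file_id, seq)] = phys
            phys += 1
    return table

def read_range(raw_device, table, file_id, offset, length):
    # Translate "file F, offset O" to physical sector S via the table
    # and read straight off the raw device, one sector at a time.
    out = bytearray()
    with open(raw_device, "rb") as dev:
        while length > 0:
            seq, within = divmod(offset, SECTOR)
            dev.seek(table[(file_id, seq)] * SECTOR)
            chunk = dev.read(SECTOR)[within:within + length]
            out += chunk
            offset += len(chunk)
            length -= len(chunk)
    return bytes(out)

The whole trick only holds because the filesystem never relocates those 
blocks once the files exist, which is why the files must never be 
truncated or recreated.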

IIRC, there were 2 solaris boxen using some HA solution, and many freebsd 
boxen accessing the data. This worked because the files were smaller 
than 1 GB (to be honest, I don't know the exact size he used), and the 
very impressive performance of the solution balanced the hassle involved 
in setting up the whole thing.

Now, I would not ask "why not NFS?", but "why not GFS?" (and please 
forgive me if the answer is obvious...)




_______________________________________________
Ext3-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ext3-users
