Fábio Jr. wrote:
> Rob Gardner wrote:
>> Fábio Jr. wrote:
>>> Rob Gardner wrote:
>>>> Fábio Jr. wrote:
>>>>> Rob Gardner wrote:
>>>>>> Fábio Jr. wrote:
>>>>>>> Hello all.
>>>>>>>
>>>>>>> Is there a way to export only some specific file extensions from
>>>>>>> a directory, like exporting only the files in /home/fabio with
>>>>>>> the .jpg extension?
>>>>>>>
>>>>>> You could do this pretty easily by exporting a 'fuse' filesystem
>>>>>> layered on top of your home directory.
>>>>>>
>>>>>> Rob Gardner
>>>>>> HP
>>>>>>
>>>>> Thanks Rob for the reply.
>>>>>
>>>>> Does this mean that if I use FUSE, I can export via NFS only the
>>>>> file extensions that I need? Let me explain my situation; it's not
>>>>> really a problem, but a doubt.
>>>>>
>>>>> I have one storage server that holds my application files. This
>>>>> storage is mounted via NFS on my 3 application servers. Now I will
>>>>> add another server to serve only static files (jpg, png, css, js
>>>>> basically), and I thought that maybe there is a way to export only
>>>>> these files from the storage. The problem is that these files are
>>>>> not organized into separate folders.
>>>>>
>>>>> Maybe my first question didn't explain my real need, but your
>>>>> answer will make me search a little more about FUSE.
>>>>>
>>>> I think your first question explained your need clearly. FUSE is a
>>>> filesystem layer that lets you filter requests through a user
>>>> program such as a Python script. So, for instance, every time a
>>>> process opens a file, a function is called in your script; you can
>>>> look at the name of the file being opened and decide there whether
>>>> or not to allow the open. I think this would solve your problem
>>>> easily. Though all the storage is "exported", only files with
>>>> certain names (i.e., *.jpg, etc.) could be opened. You could also
>>>> decide which files get enumerated with readdir, etc.
>>>>
>>>> Rob Gardner
>>>> HP
>>>>
>>> Oh yes, I think I didn't understand your answer, but now it is all
>>> clear in my mind. I'm afraid that using this solution may cause an
>>> increase in storage processor load, because for every request the
>>> script must be executed. I already have some issues with server
>>> availability, and perhaps the solution to one problem would become
>>> the worsening of another.
>>>
>>> Still, thanks for the reply and for helping to clear my mind.
>>>
>> It's a valid concern, but fuse does not "execute a script" for every
>> operation. The script is always running, sort of like a server for
>> the pseudo-filesystem. Each request causes a few lines in the script
>> to be executed. There is no process creation and dispatch for each
>> operation, only a process wakeup for each operation. It is
>> impressively lightweight and it's worth trying before dismissing it
>> as a resource drain.
>>
> Hmm, that's interesting, but how many requests can this script
> handle? Does this vary across different systems or configurations,
> or can it even be measured?
>
OK, now you're talking about throughput... that's different from
processor load. The fuse script is single threaded, I think, so it
probably won't scale too well. You could go into the kernel and hack a
filter into, say, the NFS lookup code, and cause lookup failures for
files named '*.jpg' on certain exports. That would scale, though it
would take quite a bit more work.
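For what it's worth, to make the fuse approach concrete, the kind of
filter script discussed above might look roughly like the following.
This is a rough, untested sketch assuming the third-party fusepy
Python bindings; the extension list, paths, and mount options are
illustrative only, not a finished tool:

#!/usr/bin/env python
# Sketch of a read-only FUSE layer that exposes only certain file
# extensions from an underlying directory tree (assumes fusepy).
import errno, os, sys
from fuse import FUSE, FuseOSError, Operations

ALLOWED = ('.jpg', '.png', '.css', '.js')   # illustrative list

class ExtensionFilter(Operations):
    def __init__(self, root):
        self.root = root

    def _real(self, path):
        # Map the FUSE path onto the underlying storage tree.
        return os.path.join(self.root, path.lstrip('/'))

    def _visible(self, path):
        # Directories stay visible so clients can traverse the tree;
        # files are visible only if their extension is allowed.
        return os.path.isdir(self._real(path)) or \
               path.lower().endswith(ALLOWED)

    def getattr(self, path, fh=None):
        if not self._visible(path):
            raise FuseOSError(errno.ENOENT)
        st = os.lstat(self._real(path))
        return dict((k, getattr(st, k)) for k in
                    ('st_mode', 'st_size', 'st_uid', 'st_gid',
                     'st_atime', 'st_mtime', 'st_ctime', 'st_nlink'))

    def readdir(self, path, fh):
        # Enumerate only the entries we are willing to expose.
        for name in ['.', '..'] + os.listdir(self._real(path)):
            if self._visible(os.path.join(path, name)):
                yield name

    def open(self, path, flags):
        if not self._visible(path):
            raise FuseOSError(errno.ENOENT)
        return os.open(self._real(path), flags)

    def read(self, path, size, offset, fh):
        os.lseek(fh, offset, os.SEEK_SET)
        return os.read(fh, size)

    def release(self, path, fh):
        return os.close(fh)

if __name__ == '__main__':
    # e.g.  python extfilter.py /home/fabio /export/static
    FUSE(ExtensionFilter(sys.argv[1]), sys.argv[2],
         foreground=True, ro=True)

Mounted over the storage tree and exported read-only, something like
that would hide everything except the static files, at the cost of the
single dispatch thread mentioned above.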
>> But anyway, here's another idea for you: export a new directory
>> that contains links to only the files you want to export.
>>
> Unfortunately, even if I wanted to, it's not possible to do this in
> my scenario, since I have approximately 4 million files in jpeg
> format alone, spread across different folders, not counting the
> other extensions. The major problem was the planning phase, which
> didn't anticipate so many files and didn't prepare the file
> organization for better performance. But for smaller systems, it's a
> very good solution.
>
I guess you're in a better position to judge the suitability of a
proposed solution... I can think of various reasons why creating links
might not be a good solution, but I don't see why having millions of
files presents any problem, as the process of creating links can be
automated rather easily.

Rob Gardner
HP
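P.S. For illustration, automating the link creation could be as simple
as a short script along these lines; the paths and extension list
below are placeholders, not a tested tool:

#!/usr/bin/env python
# Walk the storage tree and mirror it under a separate export root,
# creating symlinks only for the wanted extensions.
import os

SRC = '/storage/appfiles'     # hypothetical path to the real files
DST = '/export/static'        # hypothetical directory to be exported
WANTED = ('.jpg', '.png', '.css', '.js')

for dirpath, dirnames, filenames in os.walk(SRC):
    for name in filenames:
        if not name.lower().endswith(WANTED):
            continue
        source = os.path.join(dirpath, name)
        target = os.path.join(DST, os.path.relpath(source, SRC))
        d = os.path.dirname(target)
        if not os.path.isdir(d):
            os.makedirs(d)
        if not os.path.lexists(target):
            os.symlink(source, target)

One caveat with symbolic links over NFS: they are resolved on the
client, so the targets have to be reachable from the client too. Hard
links (same filesystem only) or a bind mount might be a better fit,
and the link tree would need to be refreshed whenever files are added.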