Directory tree layout turned out to be the most scalable approach of
all the models we considered. Its inode allocation and disk space
usage stay relatively small even for an extremely large storage
deployment.
Consider an alternative approach to this problem using gdbm (GNU
Database Manager): for a petabyte-scale (or even terabyte-scale)
deployment, hashing everything in memory becomes expensive both in
lookup time and in memory consumption.
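A rough back-of-the-envelope sketch of why, in Python (the ~150 bytes
per entry is an assumed figure for illustration, not a measurement):

    # Estimate the RAM needed to keep one hash entry per file in memory.
    def in_memory_index_bytes(num_files, bytes_per_entry=150):
        # bytes_per_entry is an assumption: key (path), hashed value and
        # hash-table/allocator overhead per file.
        return num_files * bytes_per_entry

    # A PB-scale deployment easily holds hundreds of millions of files.
    for num_files in (10**6, 10**8, 10**9):
        gib = in_memory_index_bytes(num_files) / 2.0**30
        print("%12d files -> ~%.1f GiB of RAM" % (num_files, gib))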
The Linux kernel's file system / block device buffering is as good as
a ramdisk. You may not get any better performance from a ramdisk
store; instead, you may end up losing performance re-building the
cache on every reboot.
--
Anand Babu Periasamy
James Porter writes:
Couldn't you make a ramdisk with a small block size? The only drawback I can
see with that method would be if you are only using unify on a few
bricks and a machine is powered off or crashes.
On 7/24/07, Amar S. Tumballi <amar@xxxxxxxxxxxxx> wrote:
Hi,
Sorry for the late reply.
The size of the namespace depends directly on the number of files, since
it contains an entry for every file and directory on the storage nodes:

Size of namespace = total number of files * size of one filesystem block

So it is useful to create the namespace directory (say, on its own
partition) with a very small block size.
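A quick worked example of that formula (the 10-million-file count is an
assumption for illustration; 4 KB is the usual ext3 default block size
and 1 KB its minimum):

    # namespace size = total number of files * size of one filesystem block
    def namespace_size_bytes(num_files, block_size):
        # Each file/directory on the storage nodes gets one (empty) entry
        # in the namespace, which still occupies one block.
        return num_files * block_size

    num_files = 10 * 1000 * 1000          # 10 million files (assumption)
    for block_size in (4096, 1024):
        gib = namespace_size_bytes(num_files, block_size) / 2.0**30
        print("block size %4d B -> namespace ~%.1f GiB" % (block_size, gib))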
-amar
On 7/24/07, James Porter <jameslporter@xxxxxxxxx> wrote:
>
> I don't know about the number of files, but you can certainly limit the
> size with the min-free-disk option in the rr scheduler. I assume you could
> also just use ulimit. Anyone else with suggestions / knowledge?
>
> On 7/16/07, Sebastien LELIEVRE <slelievre@xxxxxxxxxxxxxxxx> wrote:
> >
> > Hi everyone.
> >
> > I just have a little question:
> >
> > Would there be a way to define the namespace volume size with regard to
> > the bricks in use?
> >
> > To put it simply: is there any rule that would say:
> >
> > "I have X bricks of Y GB each, with Z thousand files on each,
> > so I need a namespace volume of *how-to define it* MB"
> >
> > Does anyone have a clue on this ?
> >
> > Cheers,
> >
> > Sebastien.
> >
> >
> >
--
Amar Tumballi
http://amar.80x25.org
[bulde on #gluster/irc.gnu.org]
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel