Re: performance question

Gerry,

If you are using xfs try this command:
mkfs.xfs -i attr=2 /dev/<hda>

If you are using ext3:
mkfs.ext3 -I 256 /dev/<hda>

This ensures that extended attributes are stored in the inode
structure itself. Otherwise an extra block is allocated for the
EAs, which takes more time. It dramatically improved file-creation
performance for me (4- to 5-fold). reiserfs was fast by default,
so it presumably stores EAs in-inode already.
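For example, on a spare partition you can verify the resulting inode
size with tune2fs (device names below are placeholders; mkfs destroys
the data on that device):

```
# Capital -I sets the inode size in bytes (lowercase -i is
# bytes-per-inode, a different thing entirely)
mkfs.ext3 -I 256 /dev/sdb1

# Confirm the inode size the filesystem was created with
tune2fs -l /dev/sdb1 | grep 'Inode size'

# On xfs, the attribute format shows up in xfs_info output
# after mounting, e.g. "attr=2" in the meta-data line
xfs_info /mnt/point | grep attr
```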

Regards
Krishna

On 7/5/07, Krishna Srinivas <krishna@xxxxxxxxxxxxx> wrote:
> Hi Gerry,
>
> Good observation. I was checking the performance with
> self-heal turned off and then with it turned on.
>
> On glusterfs mounted directory:
> With self-heal on:
> bash-3.1# time cp -r /etc/ .
>
> real    0m28.048s
> user    0m0.004s
> sys     0m0.060s
> bash-3.1# time rm -rf etc/
>
> real    0m28.666s
> user    0m0.000s
> sys     0m0.032s
> bash-3.1#
>
> With self-heal off:
> bash-3.1# time cp -r /etc/ .
>
> real    0m2.376s
> user    0m0.012s
> sys     0m0.060s
> bash-3.1# time rm -rf etc/
>
> real    0m3.639s
> user    0m0.000s
> sys     0m0.000s
> bash-3.1#
>
> So there is a significant difference. With self-heal off there is no
> extended-attribute management: AFR does getxattr/setxattr calls when
> self-heal is on, which introduces some overhead. There is also
> overhead in the backend filesystem code to manage the xattrs.
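(The xattrs AFR maintains can be inspected on the backend export with
getfattr; the attribute names vary by glusterfs version and the path
here is just the one from the example below, so treat this as
illustrative:)

```
# On the server, dump all extended attributes of a backend file
# (-e hex avoids mangling binary values)
getfattr -d -m . -e hex /export/dir1/etc/hosts
```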
>
> However, if I try to delete the etc directory directly on the backend
> (it was copied through glusterfs with self-heal on):
> bash-3.1# time rm -rf /export/dir1/etc/
>
> real    0m18.414s
> user    0m0.000s
> sys     0m0.132s
> bash-3.1#
>
> So there is significant overhead in the backend filesystem itself
> when xattrs are involved.
>
> Checking the overhead on open/close calls:
> (here a.out opens, writes a byte, closes)
>
> Self-heal on:
> bash-3.1# time find . -type f -exec /root/a.out {}  \;
>
> real    0m1.529s
> user    0m0.120s
> sys     0m0.284s
> bash-3.1#
>
> Self-heal off:
> bash-3.1# time find . -type f -exec /root/a.out {}  \;
>
> real    0m0.577s
> user    0m0.124s
> sys     0m0.260s
> bash-3.1#
>
>
>
> There is not much difference here. So setxattr/getxattr
> do not take much time if the xattrs already exist on
> the file. Hence the big overhead is only during create/unlink.
>
> We will see if we can optimize anything in the AFR code.
> Note that the backend filesystem takes a lot of time
> during create/delete, which we don't have control over.
> But it is still acceptable, as there is not much overhead
> on open/close/write calls.
>
> Regards
> Krishna
>
> On 7/5/07, Anand Avati <avati@xxxxxxxxxxxxx> wrote:
> >
> > Gerry,
> > please use the write-behind translator on the client side (above AFR)
> >
> > thanks,
> > avati
> >
> > 2007/7/4, Gerry Reno <greno@xxxxxxxxxxx>:
> > >
> > > In copying my /usr tree (4.9G) to a gluster client mount with a 4 brick
> > > AFR with no other translators I see it is taking about 1 hr. 45 min.  Is
> > > this normal performance?
> > > Now this is with the bricks all on the same machine and same ext3
> > > filesystem, but that seems like a long time even still.
> > >
> > > Gerry
> > >
> > >
> > >
> > >
> > > _______________________________________________
> > > Gluster-devel mailing list
> > > Gluster-devel@xxxxxxxxxx
> > > http://lists.nongnu.org/mailman/listinfo/gluster-devel
> > >
> >
> >
> >
> > --
> > Anand V. Avati
> >
>



