Re: Ceph Hackathon: More Memory Allocator Testing


 



We've frequently run fio + libosd (cohort ceph-osd linked as a library) with jemalloc preloaded, without problems.
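
For reference, the preload itself is just the usual LD_PRELOAD dance; a
minimal sketch (the jemalloc path and the fio job file below are
placeholders, not our exact invocation):

$ LD_PRELOAD=/usr/lib64/libjemalloc.so.1 fio ./libosd-job.fio

That resolves malloc/free from jemalloc ahead of glibc for the fio process
and everything it pulls in, including libosd.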

Matt

-- 
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-761-4689
fax.  734-769-8938
cel.  734-216-5309

----- Original Message -----
> From: "Daniel Gryniewicz" <dang@xxxxxxxxxx>
> To: "Ceph Development" <ceph-devel@xxxxxxxxxxxxxxx>
> Sent: Thursday, September 3, 2015 9:06:47 AM
> Subject: Re: Ceph Hackathon: More Memory Allocator Testing
> 
> I believe preloading should work fine.  It has been a common way to
> debug buffer overruns using electric fence and similar tools for
> years, and I have used it in large applications of similar size to
> Ceph.
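> 
> The mechanism is the same whether the preloaded library is a debugging
> allocator or jemalloc; a minimal sketch (library paths and sonames are
> assumptions and vary by distro, ./your_app is a placeholder):
> 
> $ LD_PRELOAD=/usr/lib/libefence.so.0.0 ./your_app       // efence intercepts malloc/free
> $ LD_PRELOAD=/usr/lib64/libjemalloc.so.1 ./your_app     // jemalloc does the same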
> 
> Daniel
> 
> On Thu, Sep 3, 2015 at 5:13 AM, Shinobu Kinjo <skinjo@xxxxxxxxxx> wrote:
> >
> > Preloading jemalloc after compiling against the default (glibc) malloc:
> >
> > $ cat hoge.c
> > #include <stdlib.h>
> >
> > int main()
> > {
> >     int *ptr = malloc(sizeof(int) * 10);
> >
> >     if (ptr == NULL)
> >         exit(EXIT_FAILURE);
> >     free(ptr);
> > }
> >
> >
> > $ gcc ./hoge.c
> >
> >
> > $ ldd ./a.out
> >         linux-vdso.so.1 (0x00007fffe17e5000)
> >         libc.so.6 => /lib64/libc.so.6 (0x00007fc989c5f000)
> >         /lib64/ld-linux-x86-64.so.2 (0x000055a718762000)
> >
> >
> > $ nm ./a.out | grep malloc
> >                  U malloc@@GLIBC_2.2.5                       // malloc loaded
> >
> >
> > $ LD_PRELOAD=/usr/lib64/libjemalloc.so.1 \
> > > ldd a.out
> >         linux-vdso.so.1 (0x00007fff7fd36000)
> >         /usr/lib64/libjemalloc.so.1 (0x00007fe6ffe39000)    // jemalloc loaded
> >         libc.so.6 => /lib64/libc.so.6 (0x00007fe6ffa61000)
> >         libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fe6ff844000)
> >         /lib64/ld-linux-x86-64.so.2 (0x0000560342ddf000)
> >
> >
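> > To double-check that jemalloc is actually servicing allocations at run
> > time (not just showing up in ldd), its statistics dump can be enabled via
> > the environment; a sketch, assuming a distro jemalloc built without a
> > symbol prefix so that it honours MALLOC_CONF:
> >
> > $ LD_PRELOAD=/usr/lib64/libjemalloc.so.1 \
> > > MALLOC_CONF=stats_print:true ./a.out                // jemalloc stats printed at exit
> >
> >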
> > Logically it could work, but in the real world I'm not 100% sure it works
> > for large-scale applications.
> >
> > Shinobu
> >
> > ----- Original Message -----
> > From: "Somnath Roy" <Somnath.Roy@xxxxxxxxxxx>
> > To: "Alexandre DERUMIER" <aderumier@xxxxxxxxx>
> > Cc: "Sage Weil" <sage@xxxxxxxxxxxx>, "Milosz Tanski" <milosz@xxxxxxxxx>,
> > "Shishir Gowda" <Shishir.Gowda@xxxxxxxxxxx>, "Stefan Priebe"
> > <s.priebe@xxxxxxxxxxxx>, "Mark Nelson" <mnelson@xxxxxxxxxx>, "ceph-devel"
> > <ceph-devel@xxxxxxxxxxxxxxx>
> > Sent: Sunday, August 23, 2015 2:03:41 AM
> > Subject: RE: Ceph Hackathon: More Memory Allocator Testing
> >
> > We need to see whether the client is overriding libraries that were built
> > with a different malloc, I guess..
> > In your case, I am not sure whether the benefit you are seeing is because
> > qemu itself is more efficient with tcmalloc/jemalloc, or the entire client
> > stack is?
> >
> > -----Original Message-----
> > From: Alexandre DERUMIER [mailto:aderumier@xxxxxxxxx]
> > Sent: Saturday, August 22, 2015 9:57 AM
> > To: Somnath Roy
> > Cc: Sage Weil; Milosz Tanski; Shishir Gowda; Stefan Priebe; Mark Nelson;
> > ceph-devel
> > Subject: Re: Ceph Hackathon: More Memory Allocator Testing
> >
> > >>Wanted to know: is there any reason we didn't link the client libraries
> > >>with tcmalloc in the first place (but linked only the OSDs/mon/RGW)?
> >
> > Do we need to link the client libraries?
> >
> > I'm building qemu with jemalloc, and it seems to be enough.
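> >
> > One way to get that linkage (an assumption on my side, not necessarily the
> > exact build used here) is to pass the linker flag through qemu's configure
> > and then confirm the result with ldd:
> >
> > $ ./configure --target-list=x86_64-softmmu --extra-ldflags="-ljemalloc"
> > $ make
> > $ ldd ./x86_64-softmmu/qemu-system-x86_64 | grep jemalloc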
> >
> >
> >
> > ----- Mail original -----
> > De: "Somnath Roy" <Somnath.Roy@xxxxxxxxxxx>
> > À: "Sage Weil" <sage@xxxxxxxxxxxx>, "Milosz Tanski" <milosz@xxxxxxxxx>
> > Cc: "Shishir Gowda" <Shishir.Gowda@xxxxxxxxxxx>, "Stefan Priebe"
> > <s.priebe@xxxxxxxxxxxx>, "aderumier" <aderumier@xxxxxxxxx>, "Mark Nelson"
> > <mnelson@xxxxxxxxxx>, "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>
> > Envoyé: Samedi 22 Août 2015 18:15:36
> > Objet: RE: Ceph Hackathon: More Memory Allocator Testing
> >
> > Yes, even today rocksdb is also linked with tcmalloc. That doesn't mean
> > every application using rocksdb needs to be built with tcmalloc.
> > Sage,
> > Wanted to know: is there any reason we didn't link the client libraries
> > with tcmalloc in the first place (but linked only the OSDs/mon/RGW)?
> >
> > Thanks & Regards
> > Somnath
> >
> > -----Original Message-----
> > From: Sage Weil [mailto:sage@xxxxxxxxxxxx]
> > Sent: Saturday, August 22, 2015 6:56 AM
> > To: Milosz Tanski
> > Cc: Shishir Gowda; Somnath Roy; Stefan Priebe; Alexandre DERUMIER; Mark
> > Nelson; ceph-devel
> > Subject: Re: Ceph Hackathon: More Memory Allocator Testing
> >
> > On Fri, 21 Aug 2015, Milosz Tanski wrote:
> > > On Fri, Aug 21, 2015 at 12:22 AM, Shishir Gowda
> > > <Shishir.Gowda@xxxxxxxxxxx> wrote:
> > > > Hi All,
> > > >
> > > > Have sent out a pull request which enables building librados/librbd
> > > > with either tcmalloc(as default) or jemalloc.
> > > >
> > > > Please find the pull request @
> > > > https://github.com/ceph/ceph/pull/5628
> > > >
> > > > With regards,
> > > > Shishir
> > >
> > > Unless I'm missing something here, this seems like the wrong thing to do.
> > > Libraries that will be linked in by other external applications should
> > > not have a 3rd-party malloc linked into them. That seems like an
> > > application choice. At the very least, the default should not be to
> > > link in a 3rd-party malloc.
> >
> > Yeah, I think you're right.
> >
> > Note that this isn't/wasn't always the case, though: on precise, for
> > instance, libleveldb links libtcmalloc. They stopped doing that sometime
> > before trusty.
> >
> > sage
> >
> 
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


