Re: Ceph Hackathon: More Memory Allocator Testing

>>I am not sure whether, in your case, the benefit you are seeing is because qemu itself is more efficient with tcmalloc/jemalloc or because the entire client stack is?

From my tests, qemu, fio, and "rados bench" are all more efficient with tcmalloc/jemalloc when using librbd.

For qemu, I don't see any difference with the other backends (iscsi, nfs, local); only the rbd backend shows a big difference when switching memory allocators.
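
For reference, a quick way to compare allocators without rebuilding anything is to preload them into the client process. A sketch (library paths, pool and image names are examples and vary by distro):

# baseline: glibc malloc
rados bench -p rbd 60 write

# jemalloc preloaded into the same client process
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1 rados bench -p rbd 60 write

# same idea for fio with the rbd engine, preloading tcmalloc
LD_PRELOAD=/usr/lib/libtcmalloc.so.4 fio --name=test --ioengine=rbd \
  --clientname=admin --pool=rbd --rbdname=testimg --rw=randread --bs=4k --iodepth=32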

Here are some qemu results (1 iothread per disk):

glibc malloc
------------

1 disk      29052
2 disks     55878
4 disks     127899
8 disks     240566
15 disks    269976

jemalloc
--------

1 disk      41278
2 disks     75781
4 disks     195351
8 disks     294241
15 disks    298199

tcmalloc, default cache (higher disk counts hit the tcmalloc bug)
-----------------------------------------------------------------

1 disk   37911
2 disks  67698
4 disks  41076
8 disks  43312
15 disks 37569

tcmalloc: 256M cache
--------------------

1 disk     33914
2 disks    58839
4 disks    148205
8 disks    213298
15 disks   218383
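
For reference, the 256M cache above is tcmalloc's aggregate thread-cache limit, which can be raised through an environment variable. A sketch, assuming qemu is started with the variable in its environment (the rest of the command line is unchanged):

# raise tcmalloc's thread-cache limit to 256M (268435456 bytes) before starting qemu
TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=268435456 qemu-system-x86_64 ...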


----- Original Message -----
From: "Somnath Roy" <Somnath.Roy@xxxxxxxxxxx>
To: "aderumier" <aderumier@xxxxxxxxx>
Cc: "Sage Weil" <sage@xxxxxxxxxxxx>, "Milosz Tanski" <milosz@xxxxxxxxx>, "Shishir Gowda" <Shishir.Gowda@xxxxxxxxxxx>, "Stefan Priebe" <s.priebe@xxxxxxxxxxxx>, "Mark Nelson" <mnelson@xxxxxxxxxx>, "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>
Sent: Saturday, 22 August 2015 19:03:41
Subject: RE: Ceph Hackathon: More Memory Allocator Testing

I guess we need to check whether the client overrides libraries that were built against a different malloc.
I am not sure whether, in your case, the benefit you are seeing is because qemu itself is more efficient with tcmalloc/jemalloc or because the entire client stack is?
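
One quick check is what the client library itself pulls in at link time. A sketch (the librbd path is an assumption):

# show whether librbd links an alternative allocator
ldd /usr/lib/librbd.so.1 | grep -E 'tcmalloc|jemalloc'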

-----Original Message----- 
From: Alexandre DERUMIER [mailto:aderumier@xxxxxxxxx] 
Sent: Saturday, August 22, 2015 9:57 AM 
To: Somnath Roy 
Cc: Sage Weil; Milosz Tanski; Shishir Gowda; Stefan Priebe; Mark Nelson; ceph-devel 
Subject: Re: Ceph Hackathon: More Memory Allocator Testing 

>>Wanted to know: is there any reason we didn't link the client libraries with tcmalloc in the first place (but linked only OSDs/mon/RGW)?

Do we need to link the client libraries?

I'm building qemu with jemalloc, and it seems to be enough.
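
A sketch of two ways to do this, assuming a qemu tree whose configure supports jemalloc; otherwise the link can be forced, since jemalloc takes over malloc/free via symbol interposition:

# if the qemu tree has a jemalloc switch (assumption, not present in all versions)
./configure --enable-jemalloc
# otherwise, force the link explicitly
./configure --extra-ldflags="-ljemalloc"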



----- Original Message -----
From: "Somnath Roy" <Somnath.Roy@xxxxxxxxxxx>
To: "Sage Weil" <sage@xxxxxxxxxxxx>, "Milosz Tanski" <milosz@xxxxxxxxx>
Cc: "Shishir Gowda" <Shishir.Gowda@xxxxxxxxxxx>, "Stefan Priebe" <s.priebe@xxxxxxxxxxxx>, "aderumier" <aderumier@xxxxxxxxx>, "Mark Nelson" <mnelson@xxxxxxxxxx>, "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>
Sent: Saturday, 22 August 2015 18:15:36
Subject: RE: Ceph Hackathon: More Memory Allocator Testing

Yes, even today rocksdb is also linked with tcmalloc. That doesn't mean every application using rocksdb needs to be built with tcmalloc (one way to check what a running process actually uses is sketched below).
Sage,
Wanted to know: is there any reason we didn't link the client libraries with tcmalloc in the first place (but linked only OSDs/mon/RGW)?

Thanks & Regards 
Somnath 
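
One way to see which allocator a running client process actually mapped, regardless of what its libraries were linked against. A sketch (the process name is an example):

# inspect the memory maps of a running qemu for allocator libraries
grep -E 'tcmalloc|jemalloc' /proc/$(pidof qemu-system-x86_64)/maps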

-----Original Message----- 
From: Sage Weil [mailto:sage@xxxxxxxxxxxx] 
Sent: Saturday, August 22, 2015 6:56 AM 
To: Milosz Tanski 
Cc: Shishir Gowda; Somnath Roy; Stefan Priebe; Alexandre DERUMIER; Mark Nelson; ceph-devel 
Subject: Re: Ceph Hackathon: More Memory Allocator Testing 

On Fri, 21 Aug 2015, Milosz Tanski wrote: 
> On Fri, Aug 21, 2015 at 12:22 AM, Shishir Gowda 
> <Shishir.Gowda@xxxxxxxxxxx> wrote: 
> > Hi All, 
> > 
> > Have sent out a pull request which enables building librados/librbd with either tcmalloc (the default) or jemalloc. 
> > 
> > Please find the pull request @ 
> > https://github.com/ceph/ceph/pull/5628 
> > 
> > With regards, 
> > Shishir 
> 
> Unless I'm missing something here, this seems like the wrong thing to do. 
> Libraries that will be linked in by other external applications should 
> not have a 3rd party malloc linked in there. That seems like an 
> application choice. At the very least the default should not be to 
> link in a 3rd party malloc. 

Yeah, I think you're right. 

Note that this wasn't always the case, though: on precise, for instance, libleveldb links libtcmalloc. They stopped doing this sometime before trusty. 

sage 
