> -----Original Message-----
> From: M Ranga Swami Reddy [mailto:swamireddy@xxxxxxxxx]
> Sent: 18 February 2016 13:44
> To: Nick Fisk <nick@xxxxxxxxxx>
> Subject: Re: OSD Journal size config
>
> > Hello All,
> > I have increased my cluster's OSD journal size from 2GB to 10GB,
> > but could NOT see much write/read performance improvement.
>
> > You probably won't unless your journal was getting to the point where
> > it was filling up. I think the filestore throttle stops the journal
> > getting too far ahead of the disks to avoid massively long sync periods.
>
> Thanks.
> Oh, so what are the options for getting better performance?

It's not as easy as that. If the OSD disks are at the point where they
cannot perform IO any faster and the journal starts filling up, the only
real solution is to add more OSDs. What performance issues are you seeing?

> Thanks
> Swami
>
> On Thu, Feb 18, 2016 at 6:32 PM, Nick Fisk <nick@xxxxxxxxxx> wrote:
> >
> >> -----Original Message-----
> >> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf
> >> Of M Ranga Swami Reddy
> >> Sent: 18 February 2016 12:09
> >> To: ceph-users <ceph-users@xxxxxxxx>
> >> Subject: OSD Journal size config
> >>
> >> Hello All,
> >> I have increased my cluster's OSD journal size from 2GB to 10GB,
> >> but could NOT see much write/read performance improvement.
> >
> > You probably won't unless your journal was getting to the point where
> > it was filling up. I think the filestore throttle stops the journal
> > getting too far ahead of the disks to avoid massively long sync periods.
> >
> >> (The cluster has 4 servers and 96 OSDs.)
> >>
> >> Am I missing anything here? Or do I need to update some more config
> >> variables related to journalling, like the following (all currently
> >> at their defaults)?
> >>
> >> ==
> >> journal max write bytes
> >> journal max write entries
> >> journal queue max ops
> >> journal queue max bytes
> >> journal max corrupt search
> >> ==
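
For context, the upstream Ceph documentation gives a sizing rule of thumb
for FileStore journals that suggests why simply growing the journal rarely
helps. The 100 MB/s figure below is an assumed example, not a measurement
from this cluster:

==
# Documented rule of thumb for FileStore journal sizing:
#   osd journal size = 2 * (expected throughput * filestore max sync interval)
# Example: a ~100 MB/s data disk with the default 5 s sync interval needs
#   2 * 100 MB/s * 5 s = 1000 MB, i.e. roughly a 1 GB journal,
# so going from a 2 GB to a 10 GB journal changes little unless the
# journal was actually filling up before each flush completed.
==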
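
One way to answer the "what performance issues are you seeing?" question is
to check whether the data disks, rather than the journal, are the bottleneck.
A minimal sketch using standard tools; osd.0 and the 5-second interval are
arbitrary examples:

==
# Per-OSD latency: consistently high fs_apply_latency alongside low
# fs_commit_latency points at the backing data disks, not the journal.
ceph osd perf

# Simple write benchmark against a single OSD (writes 1 GB by default);
# osd.0 is just an example - try a few OSDs on different hosts.
ceph tell osd.0 bench

# Watch disk utilisation on an OSD host; data disks pinned near 100 %util
# confirm they cannot perform IO any faster, whatever the journal size.
iostat -x 5
==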
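
For completeness, the journal throttle options listed above can be changed
at runtime with injectargs, or persistently in the [osd] section of
ceph.conf. The values below are purely illustrative, not recommendations,
and per the reply above they are unlikely to help once the data disks are
saturated:

==
# Runtime change (not persistent across OSD restarts):
ceph tell osd.* injectargs '--journal_max_write_bytes 10485760 --journal_max_write_entries 1000'

# A longer flush interval lets the journal absorb bigger write bursts, at
# the cost of the longer sync periods mentioned above (default is 5 s):
ceph tell osd.* injectargs '--filestore_max_sync_interval 10'

# Persistent equivalents in ceph.conf:
#   [osd]
#   osd journal size = 10240           # in MB, i.e. the 10 GB journal
#   journal max write bytes = 10485760
#   journal max write entries = 1000
==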