RE: chooseleaf may cause some unnecessary pg migrations

I just realized the measurement I mentioned last time is not precise: it should be the 'number of changed mappings' rather than the 'number of remapped PGs'.
For example, [2,1,7] -> [0,7,4] should be counted differently from [2,1,7] -> [1,7,4], since the first change moves two copies instead of one and therefore causes double the data transfer.
Could this be the reason for your test results?
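
To make the distinction concrete, here is a minimal sketch (Python; the function names are made up for illustration) of the two ways of counting. For the example above, both transitions count as one remapped PG, but the first moves two copies while the second moves only one:

def remapped_pgs(old, new):
    """Count PGs whose mapping changed at all (the 'remapped PGs' metric)."""
    return sum(1 for pg in old if old[pg] != new[pg])

def changed_mappings(old, new):
    """Count OSDs that newly appear in each PG's mapping, i.e. how many
    copies actually have to move (the 'changed mappings' metric)."""
    return sum(len(set(new[pg]) - set(old[pg])) for pg in old)

old = {0: [2, 1, 7]}
print(remapped_pgs(old, {0: [0, 7, 4]}), changed_mappings(old, {0: [0, 7, 4]}))  # -> 1 2
print(remapped_pgs(old, {0: [1, 7, 4]}), changed_mappings(old, {0: [1, 7, 4]}))  # -> 1 1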

> -----Original Message-----
> From: Chen, Xiaoxi [mailto:xiaoxi.chen@xxxxxxxxx]
> Sent: Monday, October 19, 2015 3:34 PM
> To: Sage Weil; xusangdi 11976 (RD)
> Cc: ceph-devel@xxxxxxxxxxxxxxx
> Subject: RE: chooseleaf may cause some unnecessary pg migrations
>
> Thanks sage.
>
> Retested using --test --weight ${rand} 0, still in the 40-OSD, 10-per-host case:
>
> The new code averages 204.31 while the old code averages 202.31
>
>
> > -----Original Message-----
> > From: Sage Weil [mailto:sage@xxxxxxxxxxxx]
> > Sent: Monday, October 19, 2015 10:18 AM
> > To: Xusangdi
> > Cc: Chen, Xiaoxi; ceph-devel@xxxxxxxxxxxxxxx
> > Subject: RE: chooseleaf may cause some unnecessary pg migrations
> >
> > On Mon, 19 Oct 2015, Xusangdi wrote:
> > >
> > > > -----Original Message-----
> > > > From: ceph-devel-owner@xxxxxxxxxxxxxxx
> > > > [mailto:ceph-devel-owner@xxxxxxxxxxxxxxx] On Behalf Of Chen,
> > > > Xiaoxi
> > > > Sent: Monday, October 19, 2015 9:11 AM
> > > > To: xusangdi 11976 (RD)
> > > > Cc: ceph-devel@xxxxxxxxxxxxxxx
> > > > Subject: RE: chooseleaf may cause some unnecessary pg migrations
> > > >
> > > > Sorry but not following...
> > > >
> > > > > then shut down one or more osds (please don't touch the
> > > > > crushmap, just stop the osd service or kill its process).
> > > >
> > > > In this case, the OSD is only down but not out, but it will be marked
> > > > out after 300s.
> > > >
> > > > So in what case does your patch help?
> > > >
> > > >       If you are saying your patch helps in the "down and out" case, then my
> > > > experiment is exactly that case,
> > > >
> > >
> > > I am afraid it is probably not. Could you tell me how you
> > > simulated the osd "down and out" situation using crushtool? If it
> > > was done with arguments such as '--remove-item' or '--reweight-item',
> > > that modifies the crushmap and is not what I'm aiming for.
> >
> > There is a --weight argument (noted in usage near --test, which is the
> > only piece that uses it).  The crush map is not modified--only the
> > weight vector that is passed in when a mapping is calculated (which is
> > the equivalent of the in/out state in Ceph's OSDMap).  This should let you simulate this case.
> >
> > When I'm debugging/understanding these issues I usually change the
> > dprintk #define at the top of crush/mapper.c and use crushtool or
> > osdmaptool to calculate a single mapping, comparing the log before and
> > after a particular change.
> >
> > sage
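
For reference, this kind of comparison can be scripted on top of crushtool output: run something like 'crushtool -i <map> --test --show-mappings' once as-is and once with '--weight <osd> 0' for the OSDs to be marked out (as discussed above), then diff the two runs. A rough sketch, assuming the usual 'CRUSH rule R x N [a,b,c]' line format from --show-mappings; the script and both metrics are my own illustration, not part of crushtool:

import re, sys

# Matches lines like "CRUSH rule 0 x 123 [2,1,7]" from --show-mappings output.
LINE = re.compile(r'CRUSH rule \d+ x (\d+) \[([\d,]*)\]')

def parse(path):
    """Parse crushtool --test --show-mappings output into {x: [osd, ...]}."""
    mappings = {}
    with open(path) as f:
        for line in f:
            m = LINE.search(line)
            if m:
                mappings[int(m.group(1))] = [int(o) for o in m.group(2).split(',') if o]
    return mappings

old = parse(sys.argv[1])  # run with the weights untouched
new = parse(sys.argv[2])  # run with --weight <osd> 0 for the "out" OSDs
remapped = sum(1 for x in old if old[x] != new.get(x, old[x]))
moved = sum(len(set(new.get(x, old[x])) - set(old[x])) for x in old)
print("remapped PGs: %d, changed mappings: %d" % (remapped, moved))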