Re: Quota issues of pool

Hi Jason and Sage,
  Thanks a lot for your input. We will follow up on the pull request and offer any help we can to fix this issue. In the meantime, we ran into another issue related to a full pool.
   We created two pools in the cluster: one is called rbd, and the other is poc-pool.
  
  $ ceph -s
      cluster 789bda76-218d-4c4f-bbf6-182fbe5e7970
       health HEALTH_WARN
              pool 'rbd' is full
       monmap e1: 1 mons at {mon1=11.238.224.6:6789/0}
              election epoch 4, quorum 0 mon1
       osdmap e6940: 32 osds: 32 up, 32 in
              flags sortbitwise,require_jewel_osds
        pgmap v380010: 1524 pgs, 2 pools, 64568 MB data, 4027 kobjects
              218 GB used, 59208 GB / 59427 GB avail
                  1524 active+clean
    client io 43274 kB/s wr, 0 op/s rd, 5409 op/s wr
  
   Once the rbd pool becomes full, the performance of the other pool, “poc-pool”, drops significantly even though poc-pool is far from full. Any advice?
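  
   A minimal way to inspect the per-pool quotas and the full flag with the Jewel tooling above (pool names are assumed from the status output):
  
   ceph df detail                    # per-pool usage
   ceph osd pool get-quota rbd       # configured max_objects / max_bytes
   ceph osd pool get-quota poc-pool
   ceph osd dump | grep pool         # a pool over quota carries the "full" flag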
  
  Regards,
  James
  
    

On 1/18/17, 11:33 AM, "Jason Dillaman" <jdillama@xxxxxxxxxx> wrote:

    For what it's worth, there is work in-progress right now to add the
    necessary hooks to librbd to support removing an image from a full
    cluster [1].
    
    [1] https://github.com/ceph/ceph/pull/12627
    
    On Tue, Jan 17, 2017 at 9:30 PM, LIU, Fei <james.liu@xxxxxxxxxxxxxxx> wrote:
    > Hi Sage,
    >   Thanks for your prompt response. The root cause of the delay, which makes the quota enforcement somewhat inaccurate, is clear to us. However, when we tried to remove the image that exceeded the quota, it could not be removed and the remove process hung.
    >
    > sudo rbd unmap /dev/rbd0
    > rbd rm quota/100
    >
    > We got a "FULL, paused modify" error.
    >
    > Is that normal? Thanks in advance for your advice.
    >
    > Regards,
    > James
    >
    >
    > ------------------Original Mail ------------------
    > From:Sage Weil <sweil@xxxxxxxxxx>
    > Date:2017-01-18 08:23:28
    > Recipient:LIU, Fei <james.liu@xxxxxxxxxxxxxxx>
    > CC:ceph-devel <ceph-devel@xxxxxxxxxxxxxxx>, Mark Nelson <mnelson@xxxxxxxxxx>
    > Subject:Re: Quota issues of pool
    > On Wed, 18 Jan 2017, LIU, Fei wrote:
    >> Hi ,
    >>
    >> We have tested the pool quota feature. We found that we can set a pool quota smaller than the size of an existing image, and we can also create an image larger than the pool quota. Sometimes, even after setting a quota on a pool, we found we could write more data than the quota allows. For example, we set a pool quota of 100M and created a 200M image inside the pool, but we were able to write well over 100M to the image.
    >>
    >> ceph osd pool set-quota quota max_bytes 100M
    >> rbd create -s 200M quota/100
    >> sudo rbd map quota/100
    >> sudo dd if=/dev/zero of=/dev/rbd0 bs=16k
    >>
    >> In the end we wrote 163M into the image.
    >>
    >> From our observation, it seems the pool quota function does not work. We would
    >> appreciate any input.
    >
    > Unlike a single storage server in which all IO passes through a single
    > server, Ceph is distributed and can't easily do perfect
    > quota enforcement. It's a feedback loop, but there is always some
    > delay:
    >
    > - osds report pg stats to mon
    > - mon sees pool usage exceed quota, sets full flag on pool
    > - new osdmap propagates to osds
    > - osds block new writes
    >
    > This is usually not a problem when limits are large, as we generally
    > expect them to be. (1 PB vs 1.00001 PB written doesn't really matter.)
    > It doesn't work well on very small pools (e.g., 100 MB), but you generally
    > don't want such small pools anyway or else you'll need to have millions of
    > them in a single cluster, and that isn't supported.
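    >
    > A minimal way to watch this feedback loop (assuming the Jewel CLI; the rbd
    > pool is just an example):
    >
    >   ceph -w                          # health changes to "pool 'rbd' is full"
    >   ceph osd dump | grep "'rbd'"     # the pool line gains the "full" flag
    >   ceph pg stat                     # pgmap version advancing as stats arrive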
    >
    > sage
    >
    >
    >
    
    
    
    -- 
    Jason
    





