Re: [PATCH 8/9] rbd: set req->r_abort_on_full in writing

On Fri, May 11, 2018 at 9:16 AM, Dongsheng Yang
<dongsheng.yang@xxxxxxxxxxxx> wrote:
>
> On 05/10/2018 10:56 PM, Ilya Dryomov wrote:
>> On Tue, May 8, 2018 at 8:13 AM, Dongsheng Yang
>> <dongsheng.yang@xxxxxxxxxxxx> wrote:
>>> To be more specific, the ENOSPC was introduced by
>>> a9d6ceb838755c24dde8a0ca02c3378926fc63db.
>>>
>>> Please check:
>>>
>>> https://github.com/torvalds/linux/commit/a9d6ceb838755c24dde8a0ca02c3378926fc63db
>>
>> This commit added an error string for -ENOSPC, so that "critical space
>> allocation error" is printed instead of "I/O error".  It didn't change
>> anything about generic error code propagation.
>>
>>> Or, maybe we can keep the default behavior of blocking I/O, but add an
>>> rbd option like abort_on_full to return ENOSPC in the cases where we
>>> really don't want any I/O blocking in our program.
>>
>> What is your use case?  Are you using raw block devices with pool
>> quotas enabled?
>
> Both raw block and xfs.
>
> One of the problem cases is this: we provide a container service for
> customers.  Each project has a pool with a quota, and customers in that
> project can create containers with a specified disk size, but the disks
> are thin provisioned.
>
> When they reach the quota, the processes in the container that are
> writing get blocked and go into D state.  We then can't kill those
> processes or stop the containers.
>
> Customers have complained many times:
>
> "Just tell me what happened, rather than suspending everything and
> forbidding me to cancel."
>
> So we want to just return an error to the user and allow them to clean
> up and recover their containers.

Right, so you are using pool quotas.  Support for pool quotas is
a relatively recent addition (since 4.7), so you are hitting all the
sharp edges.  This is exactly how it works in userspace -- no changes
were made.
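
For comparison, what an abort-on-full behavior would look like from
userspace can be seen with /dev/full, the standard Linux pseudo-device
whose writes always fail with ENOSPC -- an immediate error instead of an
uninterruptible hang:

```shell
# Writing to /dev/full fails right away with ENOSPC; this is the
# behavior a process would see if the write were aborted instead of
# blocked when the pool is full.
dd if=/dev/zero of=/dev/full bs=512 count=1 2>&1 | grep -o 'No space left on device'
# -> No space left on device
```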

Hanging processes in D state is definitely undesirable, but I'm curious
what the recovery would look like.  There is nothing that can be done on
the customer side once the pool is marked full.  It will remain full
until an administrative action is taken.
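
For the archives, that administrative action is raising or removing the
pool quota, along the lines of the following (the pool name "mypool" is
made up):

```shell
# Inspect current usage and the configured quota.
ceph df
ceph osd pool get-quota mypool

# Raise the byte quota (here to 20 GiB) so blocked writes can proceed,
ceph osd pool set-quota mypool max_bytes 21474836480

# ...or set it to 0 to remove the quota entirely.
ceph osd pool set-quota mypool max_bytes 0
```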

Thanks,

                Ilya
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


