On 05/11/2018 09:09 PM, Ilya Dryomov wrote:
On Fri, May 11, 2018 at 9:16 AM, Dongsheng Yang
<dongsheng.yang@xxxxxxxxxxxx> wrote:
On 05/10/2018 10:56 PM, Ilya Dryomov wrote:
On Tue, May 8, 2018 at 8:13 AM, Dongsheng Yang
<dongsheng.yang@xxxxxxxxxxxx> wrote:
...
Customers have complained several times:
"Just tell us what happened, rather than suspending everything with no
way to cancel."
So we want to return an error to userspace and let them clean up and
recover their containers.
Right, so you are using pool quotas. Support for pool quotas is
a relatively recent addition (since 4.7), so you are hitting all the
sharp edges. This is exactly how it works in userspace -- no changes
were made.
Hanging processes in D state is definitely undesirable, but I'm curious
what the recovery would look like. There is nothing that can be done on
the customer side once the pool is marked full. It will remain full
until an administrative action is taken.
We can remove some unused rbd images in this case, see
https://github.com/ceph/ceph/pull/12627
In our setup, the web UI has a reset button for each container. When
a user finds there is no space left, they can remove some unused
volumes and then click the reset button of the affected container,
which resets the container state -- actually, it just stops and starts
the container again. The container is then running again.
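The recovery workflow described above can be sketched with standard Ceph CLI commands; the pool name `rbd`, the image name `unused-volume`, and the container name `my-container` are placeholder examples, and this assumes the pool-quota-full case (not full OSDs):

```shell
# Check cluster usage and the pool's quota to confirm it is quota-full
ceph df
ceph osd pool get-quota rbd

# Free space by deleting an unused image; the pool's full flag clears
# once usage drops back below the quota
rbd rm rbd/unused-volume

# Then "reset" the affected container (stop and start it again) so its
# stuck I/O is retried against the now-writable pool
docker restart my-container
```

With the current in-kernel behavior, the `rbd rm` step is the part that may block if it has to write to the full pool, which is why returning an error to userspace instead of hanging matters for this workflow.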
Thanks
Thanks,
Ilya
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html