Re: Can I improve the performance of rbd rollback in this way?


 



I'm not sure whether the user_version, user_at_version and mtime stay
unchanged for the object. It seems to be OK for now; you may wait for
others to raise problems.
It does improve the case where you take a snapshot, none of the data in
the RBD image is modified, and then you call rollback.

It does not seem to improve the case where you take a snapshot and some
data in the image is modified but some is not: after calling rollback,
many objects (including ones that were never modified) still run clone()
and generate new objects. I have not investigated the reason; you may
try it yourself.


Regards
Ning Yao


2015-03-17 17:13 GMT+08:00 徐昕 <xuxinhfut@xxxxxxxxx>:
> Hi Ning,
>
> Do you mean that this change sacrifices write performance for rollback
> performance? Setting write performance aside, does this method have any
> potential problem that could crash Ceph, or produce an incorrect result
> in some corner case?
>
> Thanks
> Xin Xu
>
> 2015-03-17 16:40 GMT+08:00 Ning Yao <zay11022@xxxxxxxxx>:
>> 2015-03-17 15:25 GMT+08:00 徐昕 <xuxinhfut@xxxxxxxxx>:
>>> Hi Alexandre,
>>>
>>> I have tried this out. It can greatly improve the performance of rbd
>>> rollback when the difference between the image and the snapshot is
>>> small.
>>>
>> If the clone does not happen during the rollback process, you can
>> expect it to happen later, when the object is modified. I would agree
>> that if an object has not been modified since snapshot N, it can be
>> treated as a cold object, and is therefore unlikely to be modified in
>> the following period.
>>
>>> But I'm not sure whether this change may cause potential problems.
>>>
>>> Thanks
>>> Xin Xu
>>>
>>> 2015-03-17 14:43 GMT+08:00 Alexandre DERUMIER <aderumier@xxxxxxxxx>:
>>>> Hi,
>>>> I'm not sure it helps with snapshot rollback, but Hammer has a new object_map feature:
>>>>
>>>> https://wiki.ceph.com/Planning/Blueprints/Hammer/librbd%3A_shared_flag,_object_map
>>>>
>>>>
>>>> which helps with resize, flatten, ...
>>>>
>>>>
>>>> ----- Original Message -----
>>>> From: "徐昕" <xuxinhfut@xxxxxxxxx>
>>>> To: "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>
>>>> Sent: Tuesday, March 17, 2015 04:24:28
>>>> Subject: Can I improve the performance of rbd rollback in this way?
>>>>
>>>> Hi,
>>>>
>>>> I am new to Ceph. Recently I needed to improve the performance of
>>>> rbd rollback on XFS in my project. Through experiments, I found that
>>>> rbd rollback works like this:
>>>>
>>>> When an image is rolled back to one of its snapshots (say N), for
>>>> every object that has been allocated:
>>>> case A) if the object has no snap version since snap N, the head
>>>> version is cloned to the latest snap (say N+M);
>>>> case B) if the object has a snap version since snap N, including a
>>>> version at the latest snap (N+M), the oldest snap version since snap N
>>>> (say N+x, 0<=x<=M) is copied over the head version;
>>>> case C) if the object has a snap version since snap N but no version
>>>> at the latest snap (N+M), the head version is first cloned to the
>>>> latest snap N+M, and then the oldest snap version since snap N
>>>> (say N+x, 0<=x<M) is copied over the head version.
>>>>
>>>> I think that in case A we do not need to copy the head version back.
>>>> So, to improve performance, I commented out the following statement
>>>> in ReplicatedPG::_rollback_to in osd/ReplicatedPG.cc:
>>>>
>>>> ......
>>>> else if (rollback_to->obs.oi.soid.snap == CEPH_NOSNAP) {
>>>>   // rolling back to the head; we just need to clone it.
>>>>   // ctx->modify = true;
>>>> }
>>>> ......
>>>>
>>>> Can I improve the performance of rollback like this? If NOT, what is the reason?
>>>>
>>>> Thank you.
>>>>
>>>> Xin Xu
>>>> --
>>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>>>> the body of a message to majordomo@xxxxxxxxxxxxxxx
>>>> More majordomo info at http://vger.kernel.org/majordomo-info.html



