Re: free krbd size in ubuntu12.04 in ceph 0.67.9

2016-05-23 10:31 GMT+08:00 Sharuzzaman Ahmat Raslan <sharuzzaman@xxxxxxxxx>:
> does your service have only one instance?
> is your service running on a VM?

About twenty krbd instances. They are mapped and mounted on three OSD machines.
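For context, each of those images is attached roughly like this; the pool name, image name, and mount point below are made-up placeholders, not taken from this thread:

```shell
# Hypothetical sketch of how one of the ~20 images is attached.
# Pool/image names and mount points are placeholders.
rbd map rbd/app-data-01            # kernel client exposes the image, e.g. as /dev/rbd0
mkfs.ext4 /dev/rbd0                # only when the image is first used
mkdir -p /mnt/app-data-01
mount /dev/rbd0 /mnt/app-data-01
```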


> On May 23, 2016 10:23 AM, "lin zhou" <hnuzhoulin2@xxxxxxxxx> wrote:
>>
>> Christian Balzer, thanks for your reply.
>>
>> You say a kernel upgrade will interrupt service, but AFAIK unmounting
>> the rbd and running fstrim on another machine will interrupt service
>> too.
>>
>> In my environment, all krbd images serve online business; service
>> interruption is not allowed.
>>
>> 2016-05-21 11:58 GMT+08:00 Christian Balzer <chibi@xxxxxxx>:
>> > On Fri, 20 May 2016 17:00:03 +0800 lin zhou wrote:
>> >
>> >> Hi, cephers.
>> >> We use only krbd with Ceph, and it has worked well for nearly two
>> >> years, but now I face a capacity problem.
>> >>
>> >> I have 7 nodes with ten 3 TB OSDs each, running Ceph 0.67.9 on
>> >> Ubuntu 12.04. I know it is too old, but upgrading is beyond my
>> >> control.
>> >>
>> >> We have now used 80% of the capacity, so we started deleting
>> >> historic, unneeded data, but the free space does not increase.
>> >>
>> >> I then found that the total provisioned size of the rbd images is
>> >> much larger than the cluster's capacity, so if we do nothing we will
>> >> eventually hit that limit.
>> >>
>> >> But the actual data used on the user side is only about 40%.
>> >>
>> >> I read Sebastien's blog and some mailing list threads, so I know
>> >> that fstrim with kernel 3.18 can deal with this. Is it risky to
>> >> upgrade the kernel to 3.18 on Ubuntu 12.04?
>> >>
>> >> I tried adding the discard option to the mount command, but it does
>> >> not work.
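A minimal way to confirm this failure, assuming a placeholder device and mount point: on a kernel whose krbd driver lacks discard support, the device does not advertise a discard granularity, so both the discard mount option and a manual trim are expected to be no-ops or errors.

```shell
# Placeholder device/mount point; adjust to your image.
# On kernels without krbd discard support, discard_granularity stays 0.
cat /sys/block/rbd0/queue/discard_granularity
# A manual trim then fails rather than reclaiming space:
fstrim -v /mnt/app-data-01   # "the discard operation is not supported"
```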
>> >>
>> >> So what way do you recommend to free krbd space with Ceph 0.67.9 on
>> >> Ubuntu 12.04?
>> >>
>> >
>> > If you're considering a kernel upgrade (and thus a service
>> > interruption),
>> > another option would be to unmount that image on your old machine, mount
>> > it on a newer machine and/or with librbd and fuse and then run fstrim.
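The offline path described above could look roughly like this; host roles, image names, and mount points are placeholders, and the newer host needs a kernel with krbd discard support (>= 3.18):

```shell
# On the old Ubuntu 12.04 host: stop the service, then release the image.
umount /mnt/app-data-01
rbd unmap /dev/rbd0

# On a host with a newer kernel (>= 3.18 for krbd discard support):
rbd map rbd/app-data-01
mount /dev/rbd0 /mnt/reclaim
fstrim -v /mnt/reclaim       # returns unused blocks to the cluster
umount /mnt/reclaim
rbd unmap /dev/rbd0
```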
>> >
>> > However trim is a pretty costly activity in Ceph, so it may
>> > a) impact your cluster performance and
>> > b) take a while, depending on how much data we're talking about.
>> >
>> > Lastly, while having a sparse storage service like Ceph is very nice, I
>> > always try to have enough actual space available to handle all
>> > commitments.
>> >
>> > Christian
>> > --
>> > Christian Balzer        Network/Systems Engineer
>> > chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
>> > http://www.gol.com/
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


