resizing the OSD

Hi,

Or did you mean that some OSDs are near full while others are under-utilized?
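For example, something along these lines should show it (assuming the default
data paths under /var/lib/ceph/osd; run the df on each OSD host):

  ceph health detail            # lists any "osd.N is near full at XX%" warnings
  ceph osd tree                 # CRUSH weights per OSD
  df -h /var/lib/ceph/osd/*     # actual filesystem usage of each OSD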

On Sat, Sep 6, 2014 at 5:04 PM, Christian Balzer <chibi at gol.com> wrote:

>
> Hello,
>
> On Fri, 05 Sep 2014 15:31:01 -0700 JIten Shah wrote:
>
> > Hello Cephers,
> >
> > We created a Ceph cluster with 100 OSDs, 5 MONs and 1 MDS, and most of
> > it seems to be working fine, but we are seeing some degradation on the
> > OSDs due to lack of space on them.
>
> Please elaborate on that degradation.
>
> > Is there a way to resize the
> > OSDs without bringing the cluster down?
> >
>
> Define both "resize" and "cluster down".
>
> As in, resizing how?
> Are your current OSDs on disks/LVMs that are not fully used and thus could
> be grown?
> What is the size of your current OSDs?
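A quick way to check whether there is room to grow (run on the OSD hosts;
the device/VG names below are just examples):

  ceph osd tree    # current CRUSH weights, roughly size in TB per OSD
  lsblk            # partition layout of the data disks
  vgs              # free extents, if the OSDs sit on LVM

If there is free space in the volume group and the OSDs use XFS, they can
usually be grown in place, e.g.:

  lvextend -L +200G /dev/vg0/osd12
  xfs_growfs /var/lib/ceph/osd/ceph-12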
>
> The normal way of growing a cluster is to add more OSDs.
> Preferably of the same size and same performance disks.
> This will not only simplify things immensely but also make them a lot more
> predictable.
> This of course depends on your use case and usage patterns, but often when
> running out of space you're also running out of other resources like CPU,
> memory or IOPS of the disks involved. So adding more instead of growing
> them is most likely the way forward.
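With ceph-deploy that is roughly the following (hypothetical host/device,
double-check the exact syntax for your version):

  ceph-deploy disk zap osd-node11:sdb
  ceph-deploy osd create osd-node11:sdb
  ceph -w    # watch the backfill as PGs rebalance onto the new OSD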
>
> If you were to replace actual disks with larger ones, take the OSDs out
> one at a time and re-add each one. If you're using ceph-deploy, it will
> use the disk size as the basis for the weight; if you're doing things
> manually, make sure to specify that size/weight accordingly.
> Again, you do want to do this for all disks to keep things uniform.
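For a single OSD the cycle looks roughly like this (osd.12 is just an
example id; wait for the cluster to return to HEALTH_OK between steps):

  ceph osd out osd.12            # let the data drain off it
  # stop the ceph-osd daemon on its host once backfill has finished
  ceph osd crush remove osd.12
  ceph auth del osd.12
  ceph osd rm 12
  # swap in the bigger disk, recreate the OSD as usual, then set the CRUSH
  # weight explicitly (convention: weight ~= size in TB), e.g.:
  ceph osd crush reweight osd.12 4.0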
>

Just want to emphasize this - if your disks already have high utilization
and you add a [much] larger drive that gets auto-weighted at, say, 2-3x the
other disks, that disk will see correspondingly higher utilization and will
most likely max out and bottleneck your cluster. So keep that in mind :).
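If you do end up with one oversized OSD, you can pin its CRUSH weight down
to match the rest instead of accepting the auto weight (example id/weight):

  ceph osd crush reweight osd.12 0.91   # same weight as the existing 1TB disks

('ceph osd reweight osd.12 <0..1>' is the temporary, non-CRUSH override if
you just need to push some PGs off it for a while.)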

Cheers,
Martin


>
> If your cluster (the pools, really) is set to a replica size of at least 2
> (risky!) or 3 (the Firefly default), taking a single OSD out would of
> course never bring the cluster down.
> However taking an OSD out and/or adding a new one will cause data movement
> that might impact your cluster's performance.
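To check the replica size and soften the impact of that data movement,
something like the following helps (pool name 'rbd' is just an example, and
the injectargs values are conservative suggestions, not defaults):

  ceph osd pool get rbd size
  ceph osd set noout       # before planned OSD restarts/maintenance
  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
  ceph osd unset noout     # when finished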
>
> Regards,
>
> Christian
> --
> Christian Balzer        Network/Systems Engineer
> chibi at gol.com           Global OnLine Japan/Fusion Communications
> http://www.gol.com/