Re: question on reusing OSD

On 16/09/2015 01:21, John-Paul Robinson wrote:
> Hi,
>
> I'm working to correct a partitioning error from when our cluster was
> first installed (ceph 0.56.4, ubuntu 12.04).  This left us with 2TB
> partitions for our OSDs, instead of the 2.8TB actually available on
> disk, a 29% space hit.  (The error was due to a gdisk bug that
> mis-computed the end of the disk during the ceph-disk-prepare and placed
> the journal at the 2TB mark instead of the true end of the disk at
> 2.8TB. I've updated gdisk to a newer release that works correctly.)
>
> I'd like to fix this problem by taking my existing 2TB OSDs offline one
> at a time, repartitioning them and then bringing them back into the
> cluster.  Unfortunately I can't just grow the partitions, so the
> repartition will be destructive.

Hmm, why would it need to be? If the journal is at the 2TB mark, you
should be able to do the following (a rough scripted sketch follows the
list):
- stop the OSD,
- flush the journal (ceph-osd -i <osdid> --flush-journal),
- unmount the data filesystem (might be superfluous, but the kernel seems
  to cache the partition layout while a partition is active),
- remove the journal partition,
- extend the data partition,
- place the journal partition at the end of the drive (in fact you
  probably want to write the precomputed partition layout in one go),
- mount the data filesystem and resize it online,
- ceph-osd -i <osdid> --mkjournal (assuming your setup can find the new
  journal partition automatically, without reconfiguration),
- start the OSD.
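
Untested sketch of what such a script could look like. The partition
numbers (1 = data, 2 = journal), the device name, the 10GiB journal
size, an XFS data filesystem, the sysvinit service names and the
default /var/lib/ceph/osd mount point are all assumptions about the
deployment and have to be adapted before running anything like this:

  #!/bin/sh
  # Untested sketch, not a drop-in script. Assumes partition 1 = data
  # (XFS), partition 2 = journal, a sysvinit-managed OSD and the default
  # /var/lib/ceph/osd/ceph-$ID mount point; adjust to your deployment.
  ID=$1          # OSD id, e.g. 12
  DEV=$2         # whole disk, e.g. /dev/sdc
  MNT=/var/lib/ceph/osd/ceph-$ID
  set -e

  service ceph stop osd.$ID        # upstart deployments: stop ceph-osd id=$ID
  ceph-osd -i $ID --flush-journal
  umount $MNT

  # remember where the data partition starts before touching the table
  DATA_START=$(sgdisk -i 1 $DEV | awk '/First sector/ {print $3}')

  # rewrite the layout in one go: move the backup GPT to the real end of
  # the disk, drop both partitions, recreate the data partition from its
  # old start sector up to 10GiB before the end of the disk, and put the
  # journal in the remaining space (keep whatever partition type GUIDs
  # and names your udev rules expect)
  sgdisk -e -d 2 -d 1 \
         -n 1:$DATA_START:-10G -c 1:"ceph data" \
         -n 2:0:0 -c 2:"ceph journal" $DEV
  partprobe $DEV

  mount ${DEV}1 $MNT
  xfs_growfs $MNT                  # grow the data filesystem online
  # if $MNT/journal is a symlink to a partition-specific path or uuid,
  # it may need to be fixed up before recreating the journal
  ceph-osd -i $ID --mkjournal
  service ceph start osd.$ID       # upstart: start ceph-osd id=$ID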

If you script this, you should not have to use noout: the OSD should come
back in a matter of seconds and the impact on the storage network should
be minimal.
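
For reference only: if one OSD does end up staying down longer than
planned, the usual safety net is still available around the maintenance
window (these are just the stock monitor flags, nothing specific to this
procedure):

  ceph osd set noout
  # ... repartition that OSD ...
  ceph osd unset noout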

Note that the start of the disk is where you get the best sequential
reads/writes. Given that most data accesses are random and all journal
accesses are sequential, I put the journal at the start of the disk when
data and journal share the same platters.
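
Purely as an illustration of that layout (the 10GiB journal size and the
device name are made up), writing a journal-first table for a fresh disk
would look something like:

  sgdisk -n 1:0:+10G -c 1:"ceph journal" \
         -n 2:0:0    -c 2:"ceph data"    /dev/sdX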

Best regards,

Lionel
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



