What version of Ceph are you using? I seem to remember an enhancement of
ceph-disk for Hammer that is more aggressive about reusing previous
partitions.
- ----------------
Robert LeBlanc
GPG Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1

On Mon, May 25, 2015 at 4:22 AM, Eneko Lacunza wrote:
> Hi all,
>
> We have a firefly Ceph cluster (using Proxmox VE, but I don't think this is
> relevant). We found that an OSD disk was showing quite a high number of
> errors in SMART, and also quite a high wait time as reported by munin, so we
> decided to replace it.
>
> What I did was down/out the OSD, then remove it (removing its partitions),
> replace the disk, and create a new OSD, which was created with the same ID
> as the removed one (as I was hoping not to change the CRUSH map).
>
> Everything worked as expected, except for one minor non-issue:
> - The original OSD journal was on a separate SSD, which had partitions #1
>   and #2 (the journals of 2 OSDs).
> - The original journal partition (#1) was removed.
> - A new partition was created as #1, but it was assigned space after the
>   last existing partition. So there is now a 5 GB hole at the beginning of
>   the SSD. Proxmox uses ceph-disk prepare for this; I saw in the docs
>   (http://ceph.com/docs/master/man/8/ceph-disk/) that ceph-disk prepare
>   creates a new partition on the journal block device.
>
> What I'm afraid of is that, given enough OSD replacements, Proxmox won't
> find free space for new journals on that SSD, even though there would be
> plenty at the beginning.
>
> Maybe the journal-partition creation could be improved so that it also
> detects free space at the beginning of the disk and between existing
> partitions?
>
> Cheers
> Eneko
>
> --
> Zuzendari Teknikoa / Director Técnico
> Binovo IT Human Project, S.L.
> Telf. 943575997
>       943493611
> Astigarraga bidea 2, planta 6 dcha., ofi.
> 3-2; 20180 Oiartzun (Gipuzkoa)
> www.binovo.es
>
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
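The gap detection Eneko is asking for is simple to sketch: walk the sorted
partition table and record every free range, including the one before the
first partition. A minimal Python sketch (the sector numbers below are made-up
illustration values, not taken from the thread; this is not how ceph-disk
itself is implemented):

```python
def free_gaps(disk_sectors, partitions, first_usable=2048):
    """Return (start, length) of every free gap on a disk.

    partitions: list of (start, end) sector ranges, inclusive.
    The gap before the first partition is included, so a journal
    partition freed at the start of the disk would be found too.
    """
    gaps = []
    cursor = first_usable
    for start, end in sorted(partitions):
        if start > cursor:
            gaps.append((cursor, start - cursor))
        cursor = max(cursor, end + 1)
    if cursor < disk_sectors:
        gaps.append((cursor, disk_sectors - cursor))
    return gaps

# Hypothetical layout: a hole (sectors 2048..10485759) left by the
# removed journal #1, followed by the surviving journal #2.
parts = [(10485760, 20971519)]
print(free_gaps(41943040, parts))
# -> [(2048, 10483712), (20971520, 20971520)]
```

A journal-creation tool could then pick the first gap large enough for the
requested journal size, instead of only appending after the last partition.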