> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Alex Gorbachev
> Sent: 05 December 2016 15:39
> To: Pierre BLONDEAU <pierre.blondeau@xxxxxxxxxx>
> Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
> Subject: Re: Reusing journal partitions when using ceph-deploy/ceph-disk --dmcrypt
>
> Hi Pierre,
>
> On Mon, Dec 5, 2016 at 3:41 AM, Pierre BLONDEAU <pierre.blondeau@xxxxxxxxxx> wrote:
> > On 05/12/2016 at 05:14, Alex Gorbachev wrote:
> >> Referencing
> >> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-July/003293.html
> >>
> >> When using --dmcrypt with ceph-deploy/ceph-disk, the journal device
> >> is not allowed to be an existing partition. You have to specify the
> >> entire block device, on which the tools create a partition equal to
> >> the osd journal size setting.
> >>
> >> However, when an HDD fails and its OSD is deleted and then
> >> replaced with another HDD, I have not been able to find a way to
> >> reuse the earlier journal partition. Ceph-deploy creates a new one,
> >> which can lead to unpleasant situations on the SSD used for journaling.
> >
> > Hello,
> >
> > Remove the old journal partition (e.g. parted /dev/sdc rm 2).
> > Ceph-deploy should reuse the space for the new one.
>
> Unfortunately, it does not look like the space is released or reused.
> Here is the partition table following the above operation:
> ceph-deploy just created a new partition (5) rather than reusing (2).
>
> root@croc2:~# parted /dev/sdb p
> Model: INTEL(R) SSD 910 200GB (scsi)
> Disk /dev/sdb: 200GB
> Sector size (logical/physical): 512B/4096B
> Partition Table: gpt
>
> Number  Start   End     Size    File system  Name          Flags
>  1      1049kB  5370MB  5369MB               ceph journal
>  3      10.7GB  16.1GB  5369MB               ceph journal
>  4      16.1GB  21.5GB  5369MB               ceph journal
>  5      21.5GB  26.8GB  5369MB               ceph journal
>
> Best regards,
> Alex

I can confirm I have just experienced the same thing. No dmcrypt in use, but using ansible (ceph-disk) to deploy OSDs.

1. Removed all OSD things (umount/auth/rm, etc.)
2. Used gdisk to remove the journal partition in the middle of the SSD
3. Ran partprobe
4. Ran ansible

Afterwards I find an extra partition at the end of the device instead of in the empty space. Not a massive problem, as I am unlikely to exhaust the SSD space over the life of this cluster node, but I don't believe this is the intended behaviour.

> >
> > Regards
> >
> >> Is there a way anyone knows of to continue to use a specific
> >> partition as a journal with ceph-deploy?
> >>
> >> Thanks in advance,
> >> Alex
> >>
> >> _______________________________________________
> >> ceph-users mailing list
> >> ceph-users@xxxxxxxxxxxxxx
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> > --
> > ----------------------------------------------
> > Pierre BLONDEAU
> > System & Network Administrator
> > Université de Caen Normandie
> > Laboratoire GREYC, Département d'informatique
> >
> > Tel: 02 31 56 75 42
> > Bureau: Campus 2, Science 3, 406
> > ----------------------------------------------
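Since ceph-disk appears to always allocate the new journal partition at the end of the disk, one possible workaround (a sketch only, not tested against every ceph-disk version) is to recreate the freed partition manually with sgdisk, at its old location and with the GPT type GUID ceph-disk uses for plain (non-dmcrypt) journals, before re-running the deployment. The device, partition number, and sector values below are illustrative, based on the gap left by removing partition 2 in the layout shown above; take the real start sector from `sgdisk -p` on your SSD:

```shell
#!/bin/sh
# Sketch: recreate a removed journal partition in its old gap with sgdisk,
# rather than letting ceph-disk allocate a fresh one at the end of the disk.
# All values here are illustrative assumptions, not taken from the thread's
# actual sector layout.

DEV=${DEV:-/dev/sdb}   # journal SSD (assumption)
PART=2                 # partition number freed by 'parted /dev/sdb rm 2'
START=10487808         # first sector of the old gap (read from 'sgdisk -p')
SECTORS=10485760       # 5 GiB journal, in 512-byte sectors
END=$((START + SECTORS - 1))

# GPT type GUID for a plain Ceph journal partition, so the Ceph tooling
# recognises it (the dmcrypt journal variant uses a different GUID).
TYPECODE=45b0969e-9b03-4f30-b4c6-b4b80ceff106

# Print the command rather than running it, so the disk is untouched
# until the values have been checked against the real partition table.
echo "sgdisk --new=${PART}:${START}:${END} --typecode=${PART}:${TYPECODE} ${DEV}"
```

After running the printed sgdisk command as root, a `partprobe` makes the kernel re-read the table. Whether ceph-deploy/ansible will then adopt the pre-made partition instead of appending a new one is exactly the open question in this thread, so treat this as an experiment, not a fix.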