Hi Pierre,

On Mon, Dec 5, 2016 at 3:41 AM, Pierre BLONDEAU <pierre.blondeau@xxxxxxxxxx> wrote:
> On 05/12/2016 at 05:14, Alex Gorbachev wrote:
>> Referencing
>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-July/003293.html
>>
>> When using --dmcrypt with ceph-deploy/ceph-disk, the journal device is
>> not allowed to be an existing partition. You have to specify the entire
>> block device, on which the tools create a partition equal to the osd
>> journal size setting.
>>
>> However, when an HDD fails and its OSD is deleted and then replaced
>> with another HDD, I have not been able to find a way to reuse the
>> earlier journal partition. Ceph-deploy creates a new one, which can
>> lead to unpleasant situations on the SSD used for journaling.
>
> Hello,
>
> Remove the old journal partition (e.g. parted /dev/sdc rm 2).
> Ceph-deploy should reuse the space for the new one.

Unfortunately, it does not look like the space is released or reused.
Here is the partition table following the above operation: ceph-deploy
just created a new partition (5) rather than reusing the gap left by
the removed partition (2).

root@croc2:~# parted /dev/sdb p
Model: INTEL(R) SSD 910 200GB (scsi)
Disk /dev/sdb: 200GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name          Flags
 1      1049kB  5370MB  5369MB               ceph journal
 3      10.7GB  16.1GB  5369MB               ceph journal
 4      16.1GB  21.5GB  5369MB               ceph journal
 5      21.5GB  26.8GB  5369MB               ceph journal

Best regards,
Alex

>
> Regards
>
>> Is there a way anyone knows of to continue to use a specific partition
>> as a journal with ceph-deploy?
>>
>> Thanks in advance,
>> Alex
>
> --
> ----------------------------------------------
> Pierre BLONDEAU
> Systems & Network Administrator
> Université de Caen Normandie
> Laboratoire GREYC, Département d'informatique
>
> Tel: 02 31 56 75 42
> Office: Campus 2, Science 3, 406
> ----------------------------------------------

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
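
For reference, a minimal sketch of the cleanup-and-verify steps Pierre describes, assuming the journal SSD is /dev/sdb and the stale journal is partition 2 (both values are examples, not taken from a specific host above; adjust them for your layout):

# Sketch only: the device name and partition number are assumptions.
parted /dev/sdb print          # inspect the current journal SSD layout
parted /dev/sdb rm 2           # remove the stale "ceph journal" partition
parted /dev/sdb print free     # confirm the removed range now shows as free space

As reported in the thread, ceph-deploy may still allocate a fresh partition (here it created partition 5) rather than reuse the freed range, so it is worth re-checking the table after preparing the replacement OSD.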