Re: Change Partition Schema on OSD Possible?

> On 17 Jan 2017, at 05:31, Hauke Homburg <hhomburg@xxxxxxxxxxxxxx> wrote:
> 
> On 16.01.2017 at 12:24, Wido den Hollander wrote:
>>> On 14 January 2017 at 14:58, Hauke Homburg <hhomburg@xxxxxxxxxxxxxx> wrote:
>>> 
>>> 
>>> On 14.01.2017 at 12:59, Wido den Hollander wrote:
>>>>> On 14 January 2017 at 11:05, Hauke Homburg <hhomburg@xxxxxxxxxxxxxx> wrote:
>>>>> 
>>>>> 
>>>>> Hello,
>>>>> 
>>>>> In our Ceph cluster, the HDDs behind the OSDs are configured with GPT
>>>>> partitions that use only 50% of the disk for data. Can we change this
>>>>> scheme to get more data storage?
>>>>> 
>>>> How do you mean?
>>>> 
>>>>> Our HDDs are 5TB, so I hope to gain more space by growing the GPT
>>>>> partition from 2TB to 3 or 4 TB.
>>>>> 
>>>> On a 5TB disk only 50% is used for data? What is the other 50% being used for?
>>> I think for the journal. We used ceph-deploy with
>>> data-path:journal-path on the device.
>> Hmm, that's weird. ceph-deploy uses a 5GB partition by default for the journal.
>> 
>> Are you sure about that? Can you post a partition scheme of a disk and a 'df -h' output?
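For reference, the 5GB figure that ceph-deploy/ceph-disk picks comes from the "osd journal size" option, which defaults to 5120 MB on these pre-BlueStore releases. A rough way to check the effective value, assuming an OSD named osd.0 runs on the node in question (adjust the ID to your setup):

    # what a running OSD actually uses, queried over its admin socket
    ceph daemon osd.0 config get osd_journal_size

    # any override in the config file; no match means the 5120 MB default applies
    grep -i "osd journal size" /etc/ceph/ceph.conf
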
> sgdisk -p /dev/sdg
> Disk /dev/sdg: 11721045168 sectors, 5.5 TiB
> Logical sector size: 512 bytes
> Disk identifier (GUID): BFC047BB-75D7-4F18-B8A6-0C538454FA43
> Partition table holds up to 128 entries
> First usable sector is 34, last usable sector is 11721045134
> Partitions will be aligned on 2048-sector boundaries
> Total free space is 2014 sectors (1007.0 KiB)
> 
> Number  Start (sector)    End (sector)  Size       Code  Name
>   1        10487808     11721045134   5.5 TiB     FFFF  ceph data
>   2            2048        10487807   5.0 GiB     FFFF  ceph journal
> 

Looks good. A 5GB journal and the rest for the OSD's data. Nothing wrong there.

Wido
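
For anyone comparing against a similar setup, a minimal sketch for checking how the space is split and used, assuming the OSD data partition is /dev/sdg1 and is mounted under /var/lib/ceph/osd/ (names are placeholders, adapt to your layout):

    # partition layout and sizes for the whole disk
    lsblk /dev/sdg

    # filesystem usage of the mounted OSD data partitions
    df -h /var/lib/ceph/osd/ceph-*

In the sgdisk output above, partition 1 already ends at the last usable sector (11721045134), so there is no unclaimed capacity left on that disk.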

>> 
>> Wido
>> 
>>>>> Can we modify the partitions without reinstalling the server?
>>>>> 
>>>> Sure! Just like changing any other GPT partition. Don't forget to resize XFS afterwards with xfs_growfs.
>>>> 
>>>> However, test this on one OSD/disk first before doing it on all.
>>>> 
>>>> Wido
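
For readers who do have unused space behind the data partition, a rough sketch of the grow procedure, assuming a systemd host, the data partition /dev/sdg1 belonging to osd.0, free space after that partition, and the start sector from the table above (10487808); the device names and IDs are placeholders, so adapt them and test on a single OSD first:

    ceph osd set noout                      # avoid rebalancing while the OSD is down
    systemctl stop ceph-osd@0

    # note the current start sector and partition type GUID before deleting
    sgdisk -i 1 /dev/sdg

    # recreate partition 1 with the same start sector; end = 0 grows it to the
    # end of the free space; reapply the type GUID and name noted above
    # (4fbd7e29-9d25-41b8-afd0-062c0ceff05d is the ceph-disk "ceph data" type)
    sgdisk --delete=1 /dev/sdg
    sgdisk --new=1:10487808:0 --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d \
           --change-name=1:"ceph data" /dev/sdg
    partprobe /dev/sdg                      # make the kernel re-read the table

    # XFS must be mounted to be grown
    mount /dev/sdg1 /mnt && xfs_growfs /mnt && umount /mnt

    systemctl start ceph-osd@0
    ceph osd unset noout
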
>>>> 
>>>>> What's the best way to do this? Boot the node from a rescue CD, change
>>>>> the partition with gparted, and then boot the server again?
>>>>> 
>>>>> Thanks for help
>>>>> 
>>>>> Regards
>>>>> 
>>>>> Hauke
>>>>> 
>>>>> -- 
>>>>> www.w3-creative.de
>>>>> 
>>>>> www.westchat.de
>>>>> 
>>>>> _______________________________________________
>>>>> ceph-users mailing list
>>>>> ceph-users@xxxxxxxxxxxxxx
>>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>> 
>>> -- 
>>> www.w3-creative.de
>>> 
>>> www.westchat.de
>>> 
> 
> 
> -- 
> www.w3-creative.de
> 
> www.westchat.de
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


