Re: Is it ok to add a luminous ceph-disk osd to nautilus still?

I have not had time to convert all the drives to LVM yet, so I would like 
to stick with the partition-based setup until I have time to change 
everything.
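
(For what it is worth, Nautilus' ceph-volume can adopt existing 
ceph-disk partition-based OSDs without converting them to LVM. A minimal 
sketch, assuming the data partition is /dev/sdb1 -- the device name here 
is illustrative:

ceph-volume simple scan /dev/sdb1
ceph-volume simple activate --all

The scan writes a JSON description of the OSD under /etc/ceph/osd/, and 
activate switches startup over to systemd units instead of the old 
ceph-disk udev triggers.)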



-----Original Message-----
Sent: 01 March 2020 18:17
Subject: Re: Is it ok to add a luminous ceph-disk osd to nautilus still?

So use ceph-volume.

The Nautilus release notes explain why.
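
For a fresh BlueStore OSD, the ceph-volume equivalent of the ceph-disk 
command quoted below would be roughly this (a minimal sketch; the device 
name is illustrative):

ceph-volume lvm create --bluestore --data /dev/sdb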

> On Mar 1, 2020, at 9:02 AM, Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
> 
> 
> ceph-disk is not available in the Nautilus release.
> 
> Why scrub first? It is a new disk that does not hold any data yet. 
> Scrubbing verifies PGs, does it not?
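> 
> (If a scrub were wanted anyway, a deep scrub can be forced per OSD; a 
> minimal example, with the OSD id being illustrative:
> 
> ceph osd deep-scrub osd.0
> )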
> 
> I just created a VM on the ceph node where I want to add this OSD, did 
> a passthrough of the disk, and installed a few RPMs with --nodeps to 
> get the ceph-disk command.
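> 
> (Roughly like the following; the exact Luminous package set is an 
> assumption on my part, the --nodeps mechanism is the point:
> 
> rpm -ivh --nodeps ceph-base-12.*.rpm ceph-osd-12.*.rpm
> )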
> 
> 
> 
> -----Original Message-----
> Sent: 01 March 2020 17:47
> Subject: Re: Is it ok to add a luminous ceph-disk osd to nautilus still?
> 
> Ensure that it gets scrubbed at least once by Luminous first. But how 
> and why are you doing this? Why not use the Nautilus binaries?
> 
>> On Mar 1, 2020, at 8:36 AM, Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
>> 
>> 
>> If I create an OSD with Luminous 12.0.3 binaries, can I just add it 
>> to an existing Nautilus cluster?
>> 
>> I sort of did this already, just wondered if there are any drawbacks.
>> 
>> 
>> [@test2 software]# ceph-disk prepare --bluestore --zap-disk /dev/sdb 
>> Creating new GPT entries.
>> GPT data structures destroyed! You may now partition the disk using 
>> fdisk or other utilities.
>> Creating new GPT entries.
>> The operation has completed successfully.
>> Setting name!
>> partNum is 0
>> REALLY setting name!
>> The operation has completed successfully.
>> Setting name!
>> partNum is 1
>> REALLY setting name!
>> The operation has completed successfully.
>> The operation has completed successfully.
>> meta-data=/dev/sdb1              isize=2048   agcount=4, agsize=6400 blks
>>          =                       sectsz=512   attr=2, projid32bit=1
>>          =                       crc=1        finobt=0, sparse=0
>> data     =                       bsize=4096   blocks=25600, imaxpct=25
>>          =                       sunit=0      swidth=0 blks
>> naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
>> log      =internal log           bsize=4096   blocks=864, version=2
>>          =                       sectsz=512   sunit=0 blks, lazy-count=1
>> realtime =none                   extsz=4096   blocks=0, rtextents=0
>> The operation has completed successfully.
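>> 
>> (From here the OSD would normally be brought up with ceph-disk 
>> activate, e.g. ceph-disk activate /dev/sdb1, assuming sdb1 is the 
>> data partition.)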
>> 

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



