Re: Issue Replacing OSD with cephadm: Partition Path Not Accepted

One of my major regrets is that there isn't a "Ceph Lite" for setups
where you want a cluster with "only" a few terabytes and a half-dozen
servers. Ceph excels at really, really big storage and the tuning
parameters reflect that.

I, too, ran into the issue where I couldn't allocate a disk partition
to Ceph. I didn't really want the potential extra overhead of LVM, but
I had a 4TB SSD and only needed about a quarter of it for an OSD, with
the rest reserved for local private use, so a partition seemed like a
good compromise. No such luck: it was either the entire 4TB or LVM, so
LVM it was.

   Tim
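
For anyone in the same spot, the usual workaround is to carve the
partition into a single-LV volume group and hand that LV to cephadm.
A minimal sketch, assuming a hypothetical /dev/sdb2 partition and
made-up VG/LV names (run as root; the host name is a placeholder):

```shell
# Assume /dev/sdb2 is the partition set aside for Ceph
# (hypothetical device, VG, and LV names).
pvcreate /dev/sdb2                            # make the partition an LVM PV
vgcreate ceph-local /dev/sdb2                 # dedicated VG for the OSD
lvcreate -l 100%FREE -n osd-data ceph-local   # one LV spanning the VG

# Hand the LV to cephadm; it runs ceph-volume on it internally
ceph orch daemon add osd myhost:data_devices=ceph-local/osd-data
```

Ceph then stores its metadata in the LV's tags, and the rest of the
disk stays available for other partitions.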


On Wed, 2024-09-04 at 10:47 +0000, Eugen Block wrote:
> Hi,
> 
> apparently, I was wrong about specifying a partition in the path
> option of the spec file. In my quick test it doesn't work either.
> Creating a PV, VG, and LV on that partition makes it work:
> 
> ceph orch daemon add osd soc9-ceph:data_devices=ceph-manual-vg/ceph-osd
> Created osd(s) 3 on host 'soc9-ceph'
> 
> But if you want easy cluster management, especially for OSDs, I'd
> recommend using entire (raw) devices. It really makes (almost)
> everything easier. Just recently a customer was quite pleased with
> the process compared to prior Ceph versions: they removed a failed
> disk and only had to run 'ceph orch osd rm <OSD> --replace', and
> Ceph did the rest (given that the existing service specs cover it).
> They literally said: "wow, this changes our whole impression of
> ceph". They hadn't had many disk replacements in the past 5 years;
> this was the first one since they adopted the cluster with cephadm.
> 
> Fiddling with partitions seems quite unnecessary, especially in
> larger deployments.
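
The "service specs cover it" part could look something like the
drivegroup spec below; it is only a sketch, with a hypothetical
service_id, host pattern, and size filter. Because it claims whole raw
devices by filter rather than by path, a replacement disk matching the
filter is picked up and redeployed automatically after
'ceph orch osd rm <OSD> --replace'.

```yaml
# osd-spec.yaml -- hypothetical drivegroup spec; apply with
#   ceph orch apply -i osd-spec.yaml
service_type: osd
service_id: default_drive_group
placement:
  host_pattern: '*'      # all hosts managed by cephadm
spec:
  data_devices:
    rotational: 1        # claim any spinning disk...
    size: '10T:'         # ...of at least 10 TB (hypothetical filter)
```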
> 
> Regards,
> Eugen
> 
> Quoting Herbert Faleiros <faleiros@xxxxxxxxx>:
> 
> > On 03/09/2024 03:35, Robert Sander wrote:
> > > Hi,
> > 
> > Hello,
> > 
> > > On 9/2/24 20:24, Herbert Faleiros wrote:
> > > 
> > > > /usr/bin/docker: stderr ceph-volume lvm batch: error:
> > > > /dev/sdb1 is a partition, please pass LVs or raw block devices
> > > 
> > > A Ceph OSD nowadays needs a logical volume because it stores
> > > crucial metadata in the LV tags. This helps to activate the OSD.
> > > IMHO you will have to redeploy the OSD to use LVM on the disk.
> > > It does not need to be the whole disk if there is other data on
> > > it. It should be sufficient to make /dev/sdb1 a PV of a new VG
> > > for the LV of the OSD.
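
The LV-tag metadata Robert mentions can be inspected directly, which
helps when checking what ceph-volume has set up on a host. A small
sketch (read-only commands; requires LVM tools and, for the second
form, a cephadm host):

```shell
# Show the tags ceph-volume stores on each OSD LV
lvs -o lv_name,vg_name,lv_tags --noheadings

# Or let ceph-volume summarize its own deployments on this host
cephadm ceph-volume lvm list
```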
> > 
> > thank you for the suggestion. I understand the need for Ceph OSDs
> > to use LVM due to the metadata stored in LV tags. However, I'm
> > facing a challenge with the disk replacement process. Since I've
> > already migrated the OSDs to use ceph-volume, I was hoping that
> > cephadm would handle the creation of the LVM structures
> > automatically. Unfortunately, it doesn't seem to recreate these
> > structures on its own when replacing a disk, and manually creating
> > them isn't ideal because ceph-volume uses its own specific naming
> > conventions.
> > 
> > Do you have any recommendations on how to proceed with cephadm in
> > a way that lets it handle the LVM setup automatically, or perhaps
> > another method that aligns with the conventions used by
> > ceph-volume?
> > 
> > --
> > 
> > Herbert
> > 
> > 
> > > Regards
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
> 