Replacing disk with xfs on it, documentation?

Hello,

I haven't needed to replace a disk in a while, and it seems that I have misplaced my quick little guide on how to do it.

When searching the docs, they now recommend using ceph-volume to create OSDs, and when I do that it creates an LV:

Disk /dev/sde: 4000.2 GB, 4000225165312 bytes, 7812939776 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/ceph--34b5a0a9--f84a--416f--8b74--fb1e05161f80-osd--block--4581caf4--eef0--42e1--b237--c114dfde3d15: 4000.2 GB, 4000220971008 bytes, 7812931584 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
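
For reference, I believe the command I ran was along these lines (exact flags from memory, so treat it as a sketch rather than the verbatim invocation):

ceph-volume lvm create --data /dev/sde    # assumed; this is what produced the LV shown above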

Cool, but all of my other OSDs look like this and appear to just be XFS:

Disk /dev/sdd: 4000.2 GB, 4000225165312 bytes, 7812939776 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: D0195044-10B5-4113-8210-A5CFCB9213A2


#         Start          End    Size  Type            Name
1         2048       206847    100M  Ceph OSD        ceph data
2       206848   7812939742    3.7T  unknown         ceph block

  35075 ?        S<     0:00 [xfs-buf/sdd1]
  35076 ?        S<     0:00 [xfs-data/sdd1]
  35077 ?        S<     0:00 [xfs-conv/sdd1]
  35078 ?        S<     0:00 [xfs-cil/sdd1]
  35079 ?        S<     0:00 [xfs-reclaim/sdd]
  35080 ?        S<     0:00 [xfs-log/sdd1]
  35082 ?        S      0:00 [xfsaild/sdd1]
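
For what it's worth, a quick way I can double-check that the small partition really is the mounted XFS data dir (mount point assumed to be the default path):

mount | grep /var/lib/ceph/osd    # expect something like /dev/sdd1 on /var/lib/ceph/osd/ceph-<id> type xfs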

I can't seem to find instructions anymore for creating an OSD like the original ones.

I'm pretty sure that when this cluster was set up, it was done with the:

ceph-deploy osd prepare
ceph-deploy osd create
ceph-deploy osd activate

commands, but it seems as though the prepare and activate commands have since been removed from ceph-deploy, so I am really confused. =)
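
From memory, the original invocations looked roughly like this (hostname and device here are placeholders, not pulled from our deployment notes):

ceph-deploy osd prepare  osd-host1:/dev/sdd    # old-style prepare against the raw disk
ceph-deploy osd activate osd-host1:/dev/sdd1   # activate the small data partition it created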

Does anyone by chance have the instructions for replacing a failed drive when it meets the above criteria?
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


