Re: cephadm: Move DB/WAL from HDD to SSD

> Oh, was my formatting really that bad as it looks in your response? I apologize for that!

It was, actually; all the commands ran together, one after the other.  It's OK, though; I was able to make it out.  It might have had something to do with how my MUA (Outlook) rendered it.

> Can you show the output of:
> 
> cephadm ceph-volume lvm list 10

Copied and pasted from my terminal:

root@cephnode03:~# cephadm ceph-volume lvm list 10
Inferring fsid 474264fe-b00e-11ee-b586-ac1f6b0ff21a


====== osd.10 ======

  [block]       /dev/ceph-0c438d0c-f25a-41b2-b478-b0f98558f585/osd-block-fb0e0a45-75a0-4400-9b1f-7568f185544c

      block device              /dev/ceph-0c438d0c-f25a-41b2-b478-b0f98558f585/osd-block-fb0e0a45-75a0-4400-9b1f-7568f185544c
      block uuid                iHgcum-NAV1-cLy3-PLa3-D8jp-5qXe-L6Vy8u
      cephx lockbox secret
      cluster fsid              474264fe-b00e-11ee-b586-ac1f6b0ff21a
      cluster name              ceph
      crush device class
      encrypted                 0
      osd fsid                  fb0e0a45-75a0-4400-9b1f-7568f185544c
      osd id                    10
      osdspec affinity          cost_capacity
      type                      block
      vdo                       0
      devices                   /dev/sda
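
For context: if I'm reading this output right, osd.10 has only a [block] device on /dev/sda and no separate [db] volume, so the DB/WAL are still colocated on the HDD.  Assuming I understand the ceph-volume lvm new-db/migrate docs correctly, my rough plan is something like the following (the <ssd-vg>/<db-lv> target is a placeholder for whatever LV I end up carving out on the SSD, so treat this as a sketch, not a tested recipe):

  ceph orch daemon stop osd.10
  cephadm shell --name osd.10
    # inside the OSD's container shell:
    ceph-volume lvm new-db --osd-id 10 --osd-fsid fb0e0a45-75a0-4400-9b1f-7568f185544c --target <ssd-vg>/<db-lv>
    ceph-volume lvm migrate --osd-id 10 --osd-fsid fb0e0a45-75a0-4400-9b1f-7568f185544c --from data --target <ssd-vg>/<db-lv>
    exit
  ceph orch daemon start osd.10

Happy to be corrected if that sequence is off.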
 


