Re: OSDs and tmpfs

I also have these mounts with Bluestore:

/dev/sde1 on /var/lib/ceph/osd/ceph-32 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sdb1 on /var/lib/ceph/osd/ceph-3 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sdc1 on /var/lib/ceph/osd/ceph-6 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sdd1 on /var/lib/ceph/osd/ceph-8 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sdj1 on /var/lib/ceph/osd/ceph-19 type xfs (rw,relatime,attr2,inode64,noquota)

[@c01 ~]# ls -l /var/lib/ceph/osd/ceph-0
total 52
-rw-r--r-- 1 ceph ceph  3 Aug 24  2017 active
lrwxrwxrwx 1 ceph ceph 58 Jun 30  2017 block -> /dev/disk/by-partuuid/63b970b7-2759-4eae-a66e-b84335eba598
-rw-r--r-- 1 ceph ceph 37 Jun 30  2017 block_uuid
-rw-r--r-- 1 ceph ceph  2 Jun 30  2017 bluefs
-rw-r--r-- 1 ceph ceph 37 Jun 30  2017 ceph_fsid
-rw-r--r-- 1 ceph ceph 37 Jun 30  2017 fsid
-rw------- 1 ceph ceph 56 Jun 30  2017 keyring
-rw-r--r-- 1 ceph ceph  8 Jun 30  2017 kv_backend
-rw-r--r-- 1 ceph ceph 21 Jun 30  2017 magic
-rw-r--r-- 1 ceph ceph  4 Jun 30  2017 mkfs_done
-rw-r--r-- 1 ceph ceph  6 Jun 30  2017 ready
-rw-r--r-- 1 ceph ceph  3 Oct 19  2019 require_osd_release
-rw-r--r-- 1 ceph ceph  0 Sep 26  2019 systemd
-rw-r--r-- 1 ceph ceph 10 Jun 30  2017 type
-rw-r--r-- 1 ceph ceph  2 Jun 30  2017 whoami
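If you want to double-check which backend an OSD like this uses, the "type" file in the listing above holds the answer; for example (reusing the ceph-0 path from above, the output is what I would expect for Bluestore rather than something copied from a terminal):

[@c01 ~]# cat /var/lib/ceph/osd/ceph-0/type
bluestore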


-----Original Message-----
To: ceph-users@xxxxxxx
Subject:  Re: OSDs and tmpfs

>     We have a 23 node cluster and normally when we add OSDs they end up mounting like this:
> 
>     /dev/sde1       3.7T  2.0T  1.8T  54% /var/lib/ceph/osd/ceph-15
> 
>     /dev/sdj1       3.7T  2.0T  1.7T  55% /var/lib/ceph/osd/ceph-20
> 
>     /dev/sdd1       3.7T  2.1T  1.6T  58% /var/lib/ceph/osd/ceph-14
> 
>     /dev/sdc1       3.7T  1.8T  1.9T  49% /var/lib/ceph/osd/ceph-13
> 

I'm pretty sure those OSDs have been deployed with the Filestore backend, as the first partition of the device is the data partition and needs to be mounted.
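To illustrate (hypothetical OSD id, and the file list below is just the typical Filestore layout, not taken from this cluster), a Filestore mount point usually contains something like:

# ls /var/lib/ceph/osd/ceph-15
activate.monmap  active  ceph_fsid  current  fsid  journal  journal_uuid
keyring  magic  ready  superblock  type  whoami

with the object data itself living under current/ and the "type" file reading filestore.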

>     However I noticed this morning that the 3 new servers have the OSDs mounted like this:
> 
>     tmpfs            47G   28K   47G   1% /var/lib/ceph/osd/ceph-246
> 
>     tmpfs            47G   28K   47G   1% /var/lib/ceph/osd/ceph-240
> 
>     tmpfs            47G   28K   47G   1% /var/lib/ceph/osd/ceph-248
> 
>     tmpfs            47G   28K   47G   1% /var/lib/ceph/osd/ceph-237
> 

And here, it looks like those OSDs are using the Bluestore backend, because that backend doesn't need to mount a data partition.
What you're seeing is the Bluestore metadata in this tmpfs.
In the mount point you should find some useful information (fsid, keyring, and symlinks to the data block and/or db/wal devices).
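For example, on a ceph-volume deployed Bluestore OSD the tmpfs mount typically contains something like this (a sketch only; the OSD id and LV names below are made up):

# ls /var/lib/ceph/osd/ceph-246
block  ceph_fsid  fsid  keyring  ready  require_osd_release  type  whoami
# readlink /var/lib/ceph/osd/ceph-246/block
/dev/ceph-<vg-uuid>/osd-block-<osd-fsid>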

I don't know if you're using ceph-disk or ceph-volume, but you can find information about this by running either of the commands below (a rough sketch of the ceph-volume output follows the list):
  - ceph-disk list
  - ceph-volume lvm list
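For reference, ceph-volume lvm list groups its output per OSD, roughly like this (field names from memory, values are placeholders rather than real output):

====== osd.246 ======

  [block]       /dev/ceph-<vg-uuid>/osd-block-<osd-fsid>

      type              block
      osd id            246
      osd fsid          <osd-fsid>
      devices           /dev/sdX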
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


