Re: OSDs and tmpfs

I am going to attempt to answer my own question here; someone can correct me if I am wrong.

Looking at a few of the other OSDs that we have replaced over the last year or so, it looks like they are mounted using tmpfs as well. This appears to be simply a result of switching from FileStore to BlueStore, so it is nothing to worry about.
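For anyone who runs into the same thing, here is a rough way to sanity-check that the tmpfs mounts really are BlueStore OSDs (a sketch only; osd.246 is taken from the listing below, and the paths assume the default OSD data directory layout):

```shell
# A BlueStore OSD keeps only a few small metadata files on the tmpfs mount;
# the actual data lives on the block device it symlinks to.

# Should print "bluestore" for a BlueStore OSD (FileStore OSDs print "filestore"):
cat /var/lib/ceph/osd/ceph-246/type

# "block" should be a symlink pointing at the LV/partition that holds the data:
ls -l /var/lib/ceph/osd/ceph-246/block

# On ceph-volume-deployed OSDs you can also list the backing devices per OSD:
ceph-volume lvm list
```

The tmpfs mount size reflects host RAM, not the drive, which is why the 12 TB drives show up as 47G there.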

Thanks,
Shain



On 9/9/20, 11:16 AM, "Shain Miley" <SMiley@xxxxxxx> wrote:

    Hi,
    I recently added 3 new servers to Ceph cluster.  These servers use the H740p mini raid card and I had to install the HWE kernel in Ubuntu 16.04 in order to get the drives recognized.


    We have a 23-node cluster, and normally when we add OSDs they end up mounted like this:

    /dev/sde1       3.7T  2.0T  1.8T  54% /var/lib/ceph/osd/ceph-15

    /dev/sdj1       3.7T  2.0T  1.7T  55% /var/lib/ceph/osd/ceph-20

    /dev/sdd1       3.7T  2.1T  1.6T  58% /var/lib/ceph/osd/ceph-14

    /dev/sdc1       3.7T  1.8T  1.9T  49% /var/lib/ceph/osd/ceph-13



    However, I noticed this morning that the 3 new servers have the OSDs mounted like this:

    tmpfs            47G   28K   47G   1% /var/lib/ceph/osd/ceph-246

    tmpfs            47G   28K   47G   1% /var/lib/ceph/osd/ceph-240

    tmpfs            47G   28K   47G   1% /var/lib/ceph/osd/ceph-248

    tmpfs            47G   28K   47G   1% /var/lib/ceph/osd/ceph-237


    Is this normal for deployments going forward, or did something go wrong? These are 12 TB drives, but they are showing up as 47G here instead.


    We are using Ceph version 12.2.13, and I installed this using ceph-deploy version 2.0.1.



    Thanks in advance,



    Shain

    Shain Miley | Director of Platform and Infrastructure | Digital Media | smiley@xxxxxxx
    _______________________________________________
    ceph-users mailing list -- ceph-users@xxxxxxx
    To unsubscribe send an email to ceph-users-leave@xxxxxxx




