Unexpected data

Hi,

 

I'm not sure whether this is normal, but each time I add a new OSD with ceph-deploy osd create --data /dev/sdg ceph-n1, it adds about 1 GB to my global data usage.

But I just formatted the drive, so it should be at 0, right?

So with the 6 OSDs in my cluster, that already comes to about 6 GiB.

 

[root@ceph-n1 ~]# ceph -s
  cluster:
    id:     1d97aa70-2029-463a-b6fa-20e98f3e21fb
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph-n1
    mgr: ceph-n1(active)
    mds: cephfs-1/1/1 up  {0=ceph-n1=up:active}
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   2 pools, 600 pgs
    objects: 341 objects, 63109 kB
    usage:   6324 MB used, 2782 GB / 2788 GB avail
    pgs:     600 active+clean

So I'm kind of confused...
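
If it helps, I can also pull the per-OSD and per-pool breakdown; something like the following is what I have in mind (just a sketch of the checks, I haven't pasted their output here):

# Per-OSD usage, to confirm whether each freshly created OSD really reports about 1 GB used
ceph osd df

# Pool-level vs. raw usage, to compare the ~63 MB of objects against the 6324 MB of raw used space
ceph df detail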

Thanks for your help.

