OSDs with primary affinity 0 still used as primary for PGs

Hi,

I'm looking into storing the primary copy on SSDs, and replicas on spinners.
One way to achieve this should be the primary affinity setting, as outlined in this post:

https://www.sebastien-han.fr/blog/2015/08/06/ceph-get-the-best-of-your-ssd-with-primary-affinity

So I've deployed a small test cluster and set the affinity to 0 for half the OSDs and to 1 for the rest:

# ceph osd tree
ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF 
-1       0.07751 root default                            
-3       0.01938     host osd001                         
 1   hdd 0.00969         osd.1       up  1.00000 1.00000 
 4   hdd 0.00969         osd.4       up  1.00000       0 
-7       0.01938     host osd002                         
 2   hdd 0.00969         osd.2       up  1.00000 1.00000 
 6   hdd 0.00969         osd.6       up  1.00000       0 
-9       0.01938     host osd003                         
 3   hdd 0.00969         osd.3       up  1.00000 1.00000 
 7   hdd 0.00969         osd.7       up  1.00000       0 
-5       0.01938     host osd004                         
 0   hdd 0.00969         osd.0       up  1.00000 1.00000 
 5   hdd 0.00969         osd.5       up  1.00000       0 
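
For completeness: the affinity was set per OSD with commands along these lines (OSD IDs as in the tree above; values between 0 and 1 are accepted):

# ceph osd primary-affinity osd.4 0
# ceph osd primary-affinity osd.5 0
# ceph osd primary-affinity osd.6 0
# ceph osd primary-affinity osd.7 0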

Then I've created a pool. The summary at the end of "ceph pg dump" looks like this:

sum 0 0 0 0 0 0 0 0 
OSD_STAT USED  AVAIL  TOTAL  HB_PEERS        PG_SUM PRIMARY_PG_SUM 
7        1071M  9067M 10138M [0,1,2,3,4,5,6]    192             26 
6        1072M  9066M 10138M [0,1,2,3,4,5,7]    198             18 
5        1071M  9067M 10138M [0,1,2,3,4,6,7]    192             21 
4        1076M  9062M 10138M [0,1,2,3,5,6,7]    202             15 
3        1072M  9066M 10138M [0,1,2,4,5,6,7]    202            121 
2        1072M  9066M 10138M [0,1,3,4,5,6,7]    195            114 
1        1076M  9062M 10138M [0,2,3,4,5,6,7]    161             95 
0        1071M  9067M 10138M [1,2,3,4,5,6,7]    194            102 
sum      8587M 72524M 81111M                                       

As the numbers show, the OSDs with primary affinity set to zero are indeed acting as primary far less often than the others.
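
The counters can also be cross-checked straight from the pg map, e.g. by tallying the ACTING_PRIMARY column of the brief pg dump (a rough one-liner, assuming ACTING_PRIMARY is the last column of that output):

# ceph pg dump pgs_brief 2>/dev/null | awk 'NR > 1 {n[$NF]++} END {for (o in n) print "osd." o, n[o]}' | sort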

But what I'm wondering about is this:

Why isn't the PRIMARY_PG_SUM column zero for the OSDs whose primary affinity is set to zero?
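
To see which PGs are involved, something like this should list the PGs that still have one of the affinity-0 OSDs as their primary (the line count, minus the header line, should roughly match the PRIMARY_PG_SUM value above):

# ceph pg ls-by-primary osd.4
# ceph pg ls-by-primary osd.4 | wc -l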

# ceph -v
ceph version 12.2.2 (cf0baeeeeba3b47f9427c6c97e2144b094b7e5ba) luminous (stable)

Note that the pool was created only after the primary affinity was set, and no data has been stored in it yet.

Thanks,
Teun

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


