Re: Slow rbd reads (fast writes) with luminous + bluestore

On 13/08/2018 at 16:29, Jason Dillaman wrote:

For such a small benchmark (2 GiB), I wouldn't be surprised if you are just seeing the Filestore-backed OSDs hitting the page cache for reads, whereas the BlueStore-backed OSDs need to actually hit the disk. Are the two clusters similar in terms of the number of HDD-backed OSDs?
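
One way I could rule out cache warmth on the Filestore side is to drop the kernel page cache on every OSD host and re-run the read test; a minimal sketch (the pool/image name is a placeholder for ours):

# on each OSD host: flush dirty pages, then drop the page cache
sync
echo 3 > /proc/sys/vm/drop_caches

# then re-run the read benchmark from the client
rbd bench --io-type read --io-size 4M --io-total 2G rbd/test-image

If the Filestore cluster's reads then fall to the BlueStore numbers, the gap was the page cache rather than the objectstore itself.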


The new cluster has a few more OSDs and better hardware (RAID card with cache, more memory, more CPU) and carries less workload.


Old:

# ceph osd tree
ID WEIGHT   TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 17.28993 root default                                         
-2  3.63998     host hyp-prs-01                                  
 0  1.81999         osd.0            up  1.00000          1.00000
 1  1.81999         osd.1            up  1.00000          1.00000
-3  3.63997     host hyp-prs-02                                  
 3  1.81999         osd.3            up  1.00000          1.00000
 2  1.81998         osd.2            up  1.00000          1.00000
-4  4.54999     host hyp-prs-03                                  
 4  1.81999         osd.4            up  1.00000          1.00000
 5  2.73000         osd.5            up  1.00000          1.00000
-5  5.45999     host hyp-prs-04                                  
 6  2.73000         osd.6            up  1.00000          1.00000
 7  2.73000         osd.7            up  1.00000          1.00000


New:

# ceph osd tree
ID CLASS WEIGHT   TYPE NAME       STATUS REWEIGHT PRI-AFF
-1       43.66919 root default                           
-3       14.55640     host osd-01                        
 0   hdd  3.63910         osd.0       up  1.00000 1.00000
 1   hdd  3.63910         osd.1       up  1.00000 1.00000
 2   hdd  3.63910         osd.2       up  1.00000 1.00000
 3   hdd  3.63910         osd.3       up  1.00000 1.00000
-5       14.55640     host osd-02                        
 4   hdd  3.63910         osd.4       up  1.00000 1.00000
 5   hdd  3.63910         osd.5       up  1.00000 1.00000
 6   hdd  3.63910         osd.6       up  1.00000 1.00000
 7   hdd  3.63910         osd.7       up  1.00000 1.00000
-7       14.55640     host osd-03                        
 8   hdd  3.63910         osd.8       up  1.00000 1.00000
 9   hdd  3.63910         osd.9       up  1.00000 1.00000
10   hdd  3.63910         osd.10      up  1.00000 1.00000
11   hdd  3.63910         osd.11      up  1.00000 1.00000
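
For completeness, a quick way to confirm which objectstore each cluster is actually running (the count-metadata subcommand assumes Luminous; on the older cluster, per-OSD metadata works too):

# count OSDs per objectstore type (Luminous)
ceph osd count-metadata osd_objectstore

# or inspect a single OSD
ceph osd metadata 0 | grep osd_objectstore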


Do you mean that with BlueStore there is no page cache involved?
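
If BlueStore does bypass the page cache, then presumably its own cache size is what matters here; it could be checked via the admin socket (osd.0 is just an example, and the option name assumes Luminous defaults):

# current BlueStore cache size for an HDD-backed OSD (bytes; default 1 GiB)
ceph daemon osd.0 config get bluestore_cache_size_hdd

Raising it in ceph.conf and restarting the OSDs might be worth trying on nodes with spare memory.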
