Testing Performance

Hi,

When I started a test on the Ceph cluster today:


# ceph -w

2013-03-01 08:57:43.990529 mon.0 [INF] pgmap v7601: 6400 pgs: 6400
active+clean;6849 MB data, 997 GB used, 674 TB / 675 TB avail;
5261KB/s wr, 5op/s
2013-03-01 08:57:44.895664 mon.0 [INF] pgmap v7602: 6400 pgs: 6400
active+clean;6859 MB data, 997 GB used, 674 TB / 675 TB avail; 0B/s
rd, 8257KB/s wr, 267op/s
2013-03-01 08:57:47.374271 mon.0 [INF] pgmap v7603: 6400 pgs: 6400
active+clean;6901 MB data, 997 GB used, 674 TB / 675 TB avail; 0B/s
rd, 36514KB/s wr, 2099op/s
2013-03-01 08:57:49.801775 mon.0 [INF] pgmap v7604: 6400 pgs: 6400
active+clean; 6913 MB data, 997 GB used, 674 TB / 675 TB avail;0B/s
rd, 5575KB/s wr, 430op/s
2013-03-01 08:57:51.833305 mon.0 [INF] pgmap v7605: 6400 pgs: 6400
active+clean; 6933 MB data, 998 GB used, 674 TB / 675 TB avail;0B/s
rd, 8741KB/s wr, 172op/s
2013-03-01 08:57:53.170430 mon.0 [INF] pgmap v7606: 6400 pgs: 6400
active+clean; 6943 MB data, 998 GB used, 674 TB / 675 TB avail;
7056KB/s wr, 7op/s
2013-03-01 08:57:54.914054 mon.0 [INF] pgmap v7607: 6400 pgs: 6400
active+clean; 6945 MB data, 998 GB used, 674 TB / 675 TB avail;
1532KB/s wr, 2op/s
2013-03-01 08:57:55.769889 mon.0 [INF] pgmap v7608: 6400 pgs: 6400
active+clean; 6967 MB data, 998 GB used, 674 TB / 675 TB avail;
12922KB/s wr, 12op/s
2013-03-01 08:57:57.393463 mon.0 [INF] pgmap v7609: 6400 pgs: 6400
active+clean; 7001 MB data, 998 GB used, 674 TB / 675 TB avail;
29706KB/s wr, 83op/s
2013-03-01 08:57:58.490561 mon.0 [INF] pgmap v7610: 6400 pgs: 6400
active+clean; 7006 MB data, 998 GB used, 674 TB / 675 TB avail;
3866KB/s wr, 3op/s
2013-03-01 08:57:59.754231 mon.0 [INF] pgmap v7611: 6400 pgs: 6400
active+clean; 7009 MB data, 998 GB used, 674 TB / 675 TB avail;
2607KB/s wr, 2op/s
2013-03-01 08:58:00.826096 mon.0 [INF] pgmap v7612: 6400 pgs: 6400
active+clean; 7023 MB data, 998 GB used, 674 TB / 675 TB avail;
12965KB/s wr, 89op/s

NOTE: data started at roughly 6913 MB.

BUT after a series of write, read, and delete operations:

# ceph -w

2013-03-01 11:52:29.568074 mon.0 [INF] pgmap v13814: 6400 pgs: 6400
active+clean; 91856 MB data, 1248 GB used, 674 TB / 675 TB avail
2013-03-01 11:52:30.762315 mon.0 [INF] pgmap v13815: 6400 pgs: 6400
active+clean; 91856 MB data, 1248 GB used, 674 TB / 675 TB avail
2013-03-01 11:52:31.907384 mon.0 [INF] pgmap v13816: 6400 pgs: 6400
active+clean; 91856 MB data, 1248 GB used, 674 TB / 675 TB avail
2013-03-01 11:52:33.076477 mon.0 [INF] pgmap v13817: 6400 pgs: 6400
active+clean; 91856 MB data, 1248 GB used, 674 TB / 675 TB avail
2013-03-01 11:52:34.258712 mon.0 [INF] pgmap v13818: 6400 pgs: 6400
active+clean; 91856 MB data, 1248 GB used, 674 TB / 675 TB avail
2013-03-01 11:52:36.003697 mon.0 [INF] pgmap v13819: 6400 pgs: 6400
active+clean; 91856 MB data, 1248 GB used, 674 TB / 675 TB avail



Question:
The data has increased to 91856 MB, even though I performed delete
operations after all the write and read operations.
Is there any rados utility command to verify what exactly these 91856
MB of data are? And what command is best to clean up the cluster
without re-initializing the disks every time with mkcephfs?
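For reference, a hedged sketch of standard rados CLI invocations that can break down where the space is going (the pool name below is a placeholder; exact output columns vary by Ceph version):

```shell
# Per-pool usage summary: object counts, KB used, and read/write
# statistics for every pool in the cluster
rados df

# List every object in a given pool (substitute your pool name,
# e.g. "data" or "rbd"); objects left behind by "rados bench"
# typically carry a "benchmark_data" prefix
rados -p <pool-name> ls
```

Comparing the object listing before and after the delete run should show whether the extra megabytes are leftover test objects or something else (e.g. replication overhead counted in "used").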

My intention is to ensure a clean cluster for each test case, since I
have noticed that performance of the Ceph cluster drops after a few
write, read, and delete runs under the same system conditions but with
different load clients.
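One approach that avoids re-running mkcephfs, assuming the test data lives in a dedicated pool, is to drop and recreate that pool between test cases. This is only a sketch: the pool name "testpool" and the PG count are placeholders, and the exact deletion syntax (name repeated twice plus a confirmation flag) applies to recent Ceph versions:

```shell
# Destroy the test pool and all objects in it; much faster than
# re-initializing the disks with mkcephfs
ceph osd pool delete testpool testpool --yes-i-really-really-mean-it

# Recreate an empty pool with the desired number of placement groups
# (128 is an example value; size it for your OSD count)
ceph osd pool create testpool 128
```

Note that space reported as "used" may take a little while to be reclaimed after the pool is deleted, since the OSDs remove the underlying objects asynchronously.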


Regards,
Femi.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

