Hi Ceph users:
I'm using Ceph 12.2.4 on CentOS 7.4 and trying to use CephFS for a MariaDB deployment.
The configuration is the default, but I'm getting very poor performance when creating
tables; with a local file system there is no such issue.
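In case it helps, the setup is just the MariaDB datadir on a kernel-mounted CephFS, roughly like this (the monitor address, paths and key file below are placeholders, not my exact values):
mount -t ceph cmv01sn01:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
mkdir -p /mnt/cephfs/mysql
# in /etc/my.cnf.d/server.cnf:
#   [mysqld]
#   datadir=/mnt/cephfs/mysql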
Here is the SQL script I used:
[root@cmv01cn01]$ cat mysql_test.sql
CREATE TABLE test.t001 (col INT)\g
CREATE TABLE test.t002 (col INT)\g
CREATE TABLE test.t003 (col INT)\g
CREATE TABLE test.t004 (col INT)\g
CREATE TABLE test.t005 (col INT)\g
CREATE TABLE test.t006 (col INT)\g
CREATE TABLE test.t007 (col INT)\g
CREATE TABLE test.t008 (col INT)\g
CREATE TABLE test.t009 (col INT)\g
DROP TABLE test.t001\g
DROP TABLE test.t002\g
DROP TABLE test.t003\g
DROP TABLE test.t004\g
DROP TABLE test.t005\g
DROP TABLE test.t006\g
DROP TABLE test.t007\g
DROP TABLE test.t008\g
DROP TABLE test.t009\g
Here is the result of running it:
[root@cmv01cn01]$ mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 6522
Server version: 10.1.20-MariaDB MariaDB Server
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> source mysql_test.sql
Query OK, 0 rows affected (3.26 sec)
Query OK, 0 rows affected (5.32 sec)
Query OK, 0 rows affected (4.53 sec)
Query OK, 0 rows affected (5.09 sec)
Query OK, 0 rows affected (4.96 sec)
Query OK, 0 rows affected (4.94 sec)
Query OK, 0 rows affected (4.96 sec)
Query OK, 0 rows affected (5.02 sec)
Query OK, 0 rows affected (5.08 sec)
Query OK, 0 rows affected (0.11 sec)
Query OK, 0 rows affected (0.07 sec)
Query OK, 0 rows affected (0.07 sec)
Query OK, 0 rows affected (0.10 sec)
Query OK, 0 rows affected (0.06 sec)
Query OK, 0 rows affected (0.10 sec)
Query OK, 0 rows affected (0.02 sec)
Query OK, 0 rows affected (0.02 sec)
Query OK, 0 rows affected (0.02 sec)
MariaDB [(none)]>
As you can see, the average time to create a table is around 5 s, while dropping a table takes an acceptable amount of time.
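To rule MariaDB itself out, my next test is to time plain fsync'd file creates directly on the CephFS mount, something like this (the mount point and file names are just examples):
cd /mnt/cephfs/mysql        # hypothetical mount point
time for i in $(seq 1 9); do
    dd if=/dev/zero of=test_$i.ibd bs=16k count=1 conv=fsync 2>/dev/null
done
rm -f test_*.ibd
If each synced create also takes several seconds here, the slowness is in CephFS/RADOS rather than in MariaDB.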
I also dumped the in-flight ops on the MDS, as shown below:
[root@cmv01sn01 ceph]# ceph daemon mds.cmv01sn01 dump_ops_in_flight
{
    "ops": [
        {
            "description": "client_request(client.854369:4659 create #0x10000003994/t2_0.frm 2018-03-28 17:24:53.744090 caller_uid=27, caller_gid=27{})",
            "initiated_at": "2018-03-28 17:24:53.744827",
            "age": 4.567939,
            "duration": 4.567955,
            "type_data": {
                "flag_point": "submit entry: journal_and_reply",
                "reqid": "client.854369:4659",
                "op_type": "client_request",
                "client_info": {
                    "client": "client.854369",
                    "tid": 4659
                },
                "events": [
                    {
                        "time": "2018-03-28 17:24:53.744827",
                        "event": "initiated"
                    },
                    {
                        "time": "2018-03-28 17:24:53.745226",
                        "event": "acquired locks"
                    },
                    {
                        "time": "2018-03-28 17:24:53.745364",
                        "event": "early_replied"
                    },
                    {
                        "time": "2018-03-28 17:24:53.745367",
                        "event": "submit entry: journal_and_reply"
                    }
                ]
            }
        },
        {
            "description": "client_request(client.854369:4660 create #0x10000003994/t2_0.ibd 2018-03-28 17:24:53.751090 caller_uid=27, caller_gid=27{})",
            "initiated_at": "2018-03-28 17:24:53.752039",
            "age": 4.560727,
            "duration": 4.560763,
            "type_data": {
                "flag_point": "submit entry: journal_and_reply",
                "reqid": "client.854369:4660",
                "op_type": "client_request",
                "client_info": {
                    "client": "client.854369",
                    "tid": 4660
                },
                "events": [
                    {
                        "time": "2018-03-28 17:24:53.752039",
                        "event": "initiated"
                    },
                    {
                        "time": "2018-03-28 17:24:53.752358",
                        "event": "acquired locks"
                    },
                    {
                        "time": "2018-03-28 17:24:53.752480",
                        "event": "early_replied"
                    },
                    {
                        "time": "2018-03-28 17:24:53.752483",
                        "event": "submit entry: journal_and_reply"
                    }
                ]
            }
        }
    ],
    "num_ops": 2
}
It looks like the requests get stuck at journal_and_reply.
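My understanding is that journal_and_reply means the MDS is waiting for its journal write to commit to the metadata pool, so the next things I plan to check are the MDS journal counters and the OSD commit latency, for example (the metadata pool name may differ on other clusters):
ceph daemon mds.cmv01sn01 perf dump      # mds_log / objecter sections
ceph osd perf                            # per-OSD commit/apply latency
ceph osd pool stats cephfs_metadata      # I/O on the metadata pool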
Has anyone faced this situation? Any thoughts are appreciated.
Thanks,
Steven