Re: Journal / WAL drive size?

Rudi,

First of all, do not deploy an OSD specifying the same separate device for both DB and WAL:

Please read the following to see why:

http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/


That said, you have a fairly large amount of SSD space available, so I recommend using it as block.db:

You can specify a fixed block.db size in ceph.conf using:

[global]
bluestore_block_db_size = 16106127360

The above is a 15 GiB block.db size.
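For reference, that value is simply 15 GiB expressed in bytes. A quick shell check (just the arithmetic, nothing Ceph-specific):

echo $((15 * 1024 * 1024 * 1024))
# prints 16106127360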

Now, when you deploy an OSD with a separate block.db device, the partition will be 15GB.
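For example (a sketch only, device names are placeholders for your setup; on Luminous, ceph-disk accepts a --block.db option):

ceph-disk prepare --bluestore /dev/sda --block.db /dev/sde

ceph-disk should then carve a 15GB block.db partition out of /dev/sde for that OSD.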

The default size is a percentage of the device, I believe, and not always a usable amount.
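You can check the effective value on a running OSD via its admin socket (osd.0 is just an example id):

ceph daemon osd.0 config get bluestore_block_db_size

and compare that with the partition sizes lsblk reports.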

Caspar

Met vriendelijke groet,

Caspar Smit
Systemengineer
SuperNAS
Dorsvlegelstraat 13
1445 PA Purmerend

t: (+31) 299 410 414
e: casparsmit@xxxxxxxxxxx
w: www.supernas.eu

2017-11-23 10:27 GMT+01:00 Rudi Ahlers <rudiahlers@xxxxxxxxx>:
Hi, 

Can someone please explain this to me in layman's terms. How big a WAL drive do I really need?

I have 2x 400GB SSD drives used as WAL / DB drives and 4x 8TB HDDs used as OSDs. When I look at the drive partitions, the DB / WAL partitions are only 576MB & 1GB respectively. This feels a bit small.


root@virt1:~# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0   7.3T  0 disk
├─sda1               8:1    0   100M  0 part /var/lib/ceph/osd/ceph-0
└─sda2               8:2    0   7.3T  0 part
sdb                  8:16   0   7.3T  0 disk
├─sdb1               8:17   0   100M  0 part /var/lib/ceph/osd/ceph-1
└─sdb2               8:18   0   7.3T  0 part
sdc                  8:32   0   7.3T  0 disk
├─sdc1               8:33   0   100M  0 part /var/lib/ceph/osd/ceph-2
└─sdc2               8:34   0   7.3T  0 part
sdd                  8:48   0   7.3T  0 disk
├─sdd1               8:49   0   100M  0 part /var/lib/ceph/osd/ceph-3
└─sdd2               8:50   0   7.3T  0 part
sde                  8:64   0 372.6G  0 disk
├─sde1               8:65   0     1G  0 part
├─sde2               8:66   0   576M  0 part
├─sde3               8:67   0     1G  0 part
└─sde4               8:68   0   576M  0 part
sdf                  8:80   0 372.6G  0 disk
├─sdf1               8:81   0     1G  0 part
├─sdf2               8:82   0   576M  0 part
├─sdf3               8:83   0     1G  0 part
└─sdf4               8:84   0   576M  0 part
sdg                  8:96   0   118G  0 disk
├─sdg1               8:97   0     1M  0 part
├─sdg2               8:98   0   256M  0 part /boot/efi
└─sdg3               8:99   0 117.8G  0 part
  ├─pve-swap       253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       253:1    0  29.3G  0 lvm  /
  ├─pve-data_tmeta 253:2    0    68M  0 lvm
  │ └─pve-data     253:4    0  65.9G  0 lvm
  └─pve-data_tdata 253:3    0  65.9G  0 lvm
    └─pve-data     253:4    0  65.9G  0 lvm




--
Kind Regards
Rudi Ahlers
Website: http://www.rudiahlers.co.za



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
