I believe I found the problem, but I don't know how to fix it.
When I run "ceph df" I see that cephfs_data and cephfs_metadata are at 100% USED.
How can I increase the capacity of the cephfs_data and cephfs_metadata pools?
Sorry, I'm new to Ceph.
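(In case it's relevant: I was also going to check whether the pools have explicit quotas set. As far as I can tell the command for that is "ceph osd pool get-quota", but please correct me if that's not the right check:)

root@pf-us1-dfs1:/etc/ceph# ceph osd pool get-quota cephfs_data
root@pf-us1-dfs1:/etc/ceph# ceph osd pool get-quota cephfs_metadata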
root@pf-us1-dfs1:/etc/ceph# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    73 TiB     34 TiB     39 TiB       53.12
POOLS:
    NAME                    ID     USED        %USED      MAX AVAIL     OBJECTS
    poolcephfs              1      0 B         0          0 B           0
    cephfs_data             2      3.6 TiB     100.00     0 B           169273821
    cephfs_metadata         3      1.0 GiB     100.00     0 B           208981
    .rgw.root               4      1.1 KiB     100.00     0 B           4
    default.rgw.control     5      0 B         0          0 B           8
    default.rgw.meta        6      0 B         0          0 B           0
    default.rgw.log         7      0 B         0          0 B           207
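From what I've read so far, 100.00 %USED together with MAX AVAIL 0 B doesn't seem to mean the pools themselves are too small; it looks like at least one OSD backing them has reached the full ratio, so Ceph reports no space available for any pool that stores data on it. These are the commands I was going to use to check per-OSD usage and the configured ratios (sharing them in case someone can confirm this is the right approach):

root@pf-us1-dfs1:/etc/ceph# ceph osd df tree              # per-OSD utilization, to spot the full OSD(s)
root@pf-us1-dfs1:/etc/ceph# ceph health detail            # lists which OSDs are full / backfillfull
root@pf-us1-dfs1:/etc/ceph# ceph osd dump | grep ratio    # shows full_ratio, backfillfull_ratio, nearfull_ratio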
On Tue, Jan 8, 2019 at 10:30 AM Rodrigo Embeita <rodrigo@xxxxxxxxxxxxxxx> wrote:
Hi guys, I need your help. I'm new to CephFS and we started using it as file storage. Today we are getting "No space left on device", but I'm seeing that we have plenty of space on the filesystem:

Filesystem                                                           Size  Used  Avail  Use%  Mounted on
192.168.51.8,192.168.51.6,192.168.51.118:6789:/pagefreezer/smhosts   73T   39T   35T    54%   /mnt/cephfs

We have 35 TB of space available. I've added 2 additional OSD disks of 7 TB each, but I'm still getting the error "No space left on device" every time I try to add a new file.
After adding the 2 additional OSD disks I can see that the data is being redistributed across the cluster. Please, I need your help.

root@pf-us1-dfs1:/etc/ceph# ceph -s
  cluster:
    id:     609e9313-bdd3-449e-a23f-3db8382e71fb
    health: HEALTH_ERR
            2 backfillfull osd(s)
            1 full osd(s)
            7 pool(s) full
            197313040/508449063 objects misplaced (38.807%)
            Degraded data redundancy: 2/508449063 objects degraded (0.000%), 2 pgs degraded
            Degraded data redundancy (low space): 16 pgs backfill_toofull, 3 pgs recovery_toofull

  services:
    mon: 3 daemons, quorum pf-us1-dfs2,pf-us1-dfs1,pf-us1-dfs3
    mgr: pf-us1-dfs3(active), standbys: pf-us1-dfs2
    mds: pagefs-2/2/2 up {0=pf-us1-dfs3=up:active,1=pf-us1-dfs1=up:active}, 1 up:standby
    osd: 10 osds: 10 up, 10 in; 189 remapped pgs
    rgw: 1 daemon active

  data:
    pools:   7 pools, 416 pgs
    objects: 169.5 M objects, 3.6 TiB
    usage:   39 TiB used, 34 TiB / 73 TiB avail
    pgs:     2/508449063 objects degraded (0.000%)
             197313040/508449063 objects misplaced (38.807%)
             224 active+clean
             168 active+remapped+backfill_wait
             16  active+remapped+backfill_wait+backfill_toofull
             5   active+remapped+backfilling
             2   active+recovery_toofull+degraded
             1   active+recovery_toofull

  io:
    recovery: 1.1 MiB/s, 31 objects/s
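(Adding below the quote what I'm considering as a temporary workaround, based on the Ceph documentation on full OSDs and assuming this cluster is Luminous or newer, which the mgr line in "ceph -s" suggests. The idea would be to raise the full/backfillfull ratios slightly so the backfill onto the 2 new OSDs can finish, then set them back to the defaults once the cluster has rebalanced. The values below are only examples; please tell me if this is a bad idea:)

root@pf-us1-dfs1:/etc/ceph# ceph osd set-backfillfull-ratio 0.92
root@pf-us1-dfs1:/etc/ceph# ceph osd set-full-ratio 0.96
root@pf-us1-dfs1:/etc/ceph# ceph osd reweight-by-utilization    # optionally, nudge data off the most-full OSDs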