Hi All,
We are testing with three OSDs and one image of size 1GB on a pool with replica 2. While testing, data cannot be written beyond 1GB. Is there any option to write to the third OSD?
ceph osd pool get repo pg_num
pg_num: 126
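For completeness, the pool's replication factor can be checked the same way (assuming the pool name repo as above); a replica-2 pool should report size: 2:

ceph osd pool get repo size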
# rbd showmapped
id  pool  image           snap  device
0   rbd   integdownloads  -     /dev/rbd0   -- existing image
2   repo  integrepotest   -     /dev/rbd2   -- newly created
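The provisioned size of each mapped image can be confirmed with rbd info, using the pool/image names shown by showmapped above; for the new image this should show the 1GB size mentioned earlier:

# rbd info rbd/integdownloads
# rbd info repo/integrepotest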
[root@hm2 repository]# df -Th
Filesystem            Type      Size   Used  Avail Use% Mounted on
/dev/sda5             ext4      289G    18G   257G   7% /
devtmpfs              devtmpfs  252G      0   252G   0% /dev
tmpfs                 tmpfs     252G      0   252G   0% /dev/shm
tmpfs                 tmpfs     252G   538M   252G   1% /run
tmpfs                 tmpfs     252G      0   252G   0% /sys/fs/cgroup
/dev/sda2             ext4      488M   212M   241M  47% /boot
/dev/sda4             ext4      1.9T    20G   1.8T   2% /var
/dev/mapper/vg0-zoho  ext4      8.6T   1.7T   6.5T  21% /zoho
/dev/rbd0             ocfs2     977G   101G   877G  11% /zoho/build/downloads
/dev/rbd2             ocfs2    1000M  1000M      0 100% /zoho/build/repository
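To compare this filesystem view with what the cluster itself reports, the overall and per-pool usage can be listed with the standard commands below; ceph df shows the raw OSD capacity, which is independent of the 1GB provisioned for the image:

# ceph df
# rados df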
@:~$ scp -r sample.txt root@integ-hm2:/zoho/build/repository/
root@integ-hm2's password:
sample.txt                                    100% 1024MB   4.5MB/s   03:48
scp: /zoho/build/repository//sample.txt: No space left on device
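If the goal is simply to write more than 1GB to this mount, one option is to grow the RBD image and then grow the ocfs2 filesystem on top of it. A rough sketch only (5120MB is an arbitrary example size, and the tunefs.ocfs2 online-grow option should be verified against your ocfs2-tools version before use):

# rbd resize repo/integrepotest --size 5120
# tunefs.ocfs2 -S /dev/rbd2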
Regards
Prabu
---- On Thu, 13 Aug 2015 19:42:11 +0530 gjprabu <gjprabu@xxxxxxxxxxxx> wrote ----
Dear Team,

We are using two Ceph OSDs with replica 2 and it is working properly. My doubt is this: Pool A has an image of size 10GB, replicated across the two OSDs. What will happen once that size limit is reached? Is there any chance for the data to continue being written to another two OSDs?

Regards
Prabu