Hi,
for test purposes, I have set up two 100 GB OSDs, one holding the data
pool and the other the metadata pool for cephfs.
I am running 14.2.6-1-gffd69200ad-1 with packages from
https://mirror.croit.io/debian-nautilus
I am then running a program that creates a lot of 1 MiB files by calling
fopen()
fwrite()
fclose()
for each of them. Error codes are checked. (A minimal sketch of the loop
follows below.)
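
In rough outline the loop looks like this (the mount point, file count and
fill pattern below are placeholders for illustration, not the exact program):

/* Minimal sketch of the test loop described above; paths, counts and the
 * fill pattern are illustrative, not taken from the real program. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    static char buf[1 << 20];              /* 1 MiB payload per file */
    memset(buf, 0xab, sizeof(buf));        /* arbitrary non-zero pattern */

    for (int i = 0; i < 200000; i++) {     /* ~200 GB worth of 1 MiB files */
        char path[128];
        snprintf(path, sizeof(path), "/mnt/cephfs/test/file-%06d", i);

        FILE *f = fopen(path, "wb");
        if (!f) { perror("fopen"); return 1; }

        if (fwrite(buf, 1, sizeof(buf), f) != sizeof(buf)) {
            perror("fwrite");
            return 1;
        }
        if (fclose(f) != 0) {              /* buffered data is flushed here */
            perror("fclose");
            return 1;
        }
    }
    return 0;
}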
This works successfully for ~100 GB of data, and then, strangely, it keeps
succeeding for hundreds of GB more... ??
All written files have size 1 MiB according to 'ls', and thus should contain
the data written. However, on inspection, the files written after the first
~100 GiB are full of just zeros (hexdump -C).
To further test this, I use the standard tool 'cp' to copy a few
random-content files into the full cephfs filesystem. cp reports no errors,
and after the copy operations the content is visible with hexdump -C.
However, after forcing the data out of the client's cache by reading other,
earlier-created files, hexdump -C shows all-zero content for the files copied
with 'cp'. Data that was there is suddenly gone...?
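
(For completeness: instead of eyeballing hexdump -C output, the same check
can be done with a tiny reader program; the argument handling below is just
a generic sketch.)

/* Reports how many non-zero bytes a file contains; equivalent to scanning
 * hexdump -C output by eye. */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s FILE\n", argv[0]);
        return 2;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 2; }

    unsigned char buf[65536];
    size_t n;
    long long nonzero = 0;

    while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
        for (size_t i = 0; i < n; i++)
            if (buf[i] != 0)
                nonzero++;

    fclose(f);
    printf("%s: %lld non-zero bytes\n", argv[1], nonzero);
    return nonzero ? 0 : 1;
}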
I am new to ceph. Is there an option I have missed to avoid this behaviour?
(I could not find one in
https://docs.ceph.com/docs/master/man/8/mount.ceph/ )
Is this behaviour related to
https://docs.ceph.com/docs/mimic/cephfs/full/
?
(That page states 'sometime after a write call has already returned 0'. But if
write returns 0, then no data has been written, so the user program would not
assume any kind of success.)
Best regards,
Håkan