Hi,
On 1/17/19 7:27 PM, Void Star Nill wrote:
Hi,
We are trying to use Ceph in our products to address some of
our use cases. We think the Ceph block device is a good fit
for us.
One of the use cases is that we have a number of jobs running
in containers that need read-only access to shared data. The
data is written once and consumed multiple times. I have read
through some of the similar discussions and the
recommendations to use CephFS in these situations, but in our
case a block device makes more sense, as it fits well with the
other use cases and restrictions we have.
The following scenario seems to work as expected when we tried
it on a test cluster, but we wanted to get an expert opinion
on whether there would be any issues in production. The usage
scenario is as follows:
- A block device is created with the "--image-shared"
option:
rbd create mypool/foo --size 4G --image-shared
- The image is mapped to a host, formatted with ext4 (or
another filesystem), mounted to a directory in read/write
mode, and data is written to it. Please note that the image
will be mapped in exclusive write mode -- no other read/write
mounts are allowed at this time.
- The volume is unmapped from the host and then mapped on
N other hosts, where it is mounted in read-only mode and
the data is read simultaneously by the N readers (a
consolidated sketch of these steps follows the list).
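
For reference, here is a minimal end-to-end sketch of the
scenario above, assuming the kernel rbd client; the pool,
image, device, and mount-point names are placeholders:

rbd create mypool/foo --size 4G --image-shared

# Writer host: map read/write, format, populate, then release.
rbd map mypool/foo            # returns a device such as /dev/rbd0
mkfs.ext4 /dev/rbd0
mkdir -p /mnt/foo
mount /dev/rbd0 /mnt/foo
cp -a /srv/data/. /mnt/foo/   # hypothetical data source
umount /mnt/foo
rbd unmap /dev/rbd0

# Each of the N reader hosts: map and mount read-only.
rbd map --read-only mypool/foo
mount -o ro /dev/rbd0 /mnt/foo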
There is no read-only ext4. Using the 'ro' mount option is by
no means read-only access to the underlying storage. ext4
maintains a journal, for example, and needs to access and
replay the journal on mount. You _WILL_ run into unexpected
issues.
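
To illustrate the failure mode, a sketch with placeholder
names: even with the device itself mapped read-only, a plain
'ro' mount can fail, because ext4 may need to replay its
journal, and replay requires writing to the device:

rbd map --read-only mypool/foo    # kernel rejects all writes
mount -o ro /dev/rbd0 /mnt/foo    # can fail if the journal is dirty

# ext4's 'noload' option skips journal replay, but if the image
# was not cleanly unmounted the filesystem may be inconsistent:
mount -o ro,noload /dev/rbd0 /mnt/foo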
There are filesystems that are intended for this use case,
such as ocfs2, but they incur extra overhead, since any
parallel access to any kind of data has its cost.
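
As a rough sketch of the ocfs2 alternative (placeholder names,
and it assumes the o2cb cluster stack is already configured via
/etc/ocfs2/cluster.conf on every node):

# Format once, with one slot per node that may mount concurrently.
mkfs.ocfs2 -N 8 -L shared-data /dev/rbd0

# Then each node maps the image read/write and mounts it; ocfs2's
# cluster locking coordinates the concurrent access.
rbd map mypool/foo
mount /dev/rbd0 /mnt/shared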
Regards,
Burkhard