Re: ceph and efficient access of distributed resources

2013/4/16 Mark Kampe <mark.kampe@xxxxxxxxxxx>:
> RADOS is the underlying storage cluster, but the access methods (block,
> object, and file) stripe their data across many RADOS objects, which
> CRUSH very effectively distributes across all of the servers.  A 100MB
> read or write turns into dozens of parallel operations to servers all
> over the cluster.

Let me try to explain.
AFAIK Ceph will split the data into chunks of 4MB each, so a single
12MB file will be stored as 3 different chunks across multiple OSDs
and then replicated multiple times (based on the replica count).

Let's assume a 12MB file and a 3x replica count.
RADOS will store 3x3 = 9 chunk copies for that file, spread over up to 9 OSDs.
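
Just to make the numbers concrete, a rough sketch of the arithmetic
(the 4MB object size and the 3x replica count are assumptions, and
real placement is decided by CRUSH, not by this):

OBJECT_SIZE = 4 * 1024 * 1024   # assumed 4MB striping unit
REPLICAS = 3                    # assumed pool replica count

file_size = 12 * 1024 * 1024    # the 12MB file from the example

num_chunks = (file_size + OBJECT_SIZE - 1) // OBJECT_SIZE   # 3 RADOS objects
total_copies = num_chunks * REPLICAS                        # 9 copies, on up to 9 OSDs

print(num_chunks, total_copies)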

When reading, AFAIK the replicas are not used, so all reads go to the
"master copy" (the primary OSD).
But are these 3 chunks read in parallel from multiple OSDs, or do all
read requests go through a single OSD? In the first case we get 3x
read bandwidth for a file that spans at least 3 chunks; in the latter
case we have a big bottleneck.
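
If it is the first case, a client could issue the three object reads
in parallel, something like the sketch below using the python-rados
bindings (the pool name and object names are made up, and I am not
claiming this is how the RBD/CephFS client actually does it internally):

from concurrent.futures import ThreadPoolExecutor
import rados

OBJECT_SIZE = 4 * 1024 * 1024   # assumed 4MB object size

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('data')                    # assumed pool name

object_names = ['myfile.0', 'myfile.1', 'myfile.2']   # assumed object naming

def read_chunk(name):
    # each read should be served by the primary OSD of that object's PG
    return ioctx.read(name, OBJECT_SIZE, 0)

with ThreadPoolExecutor(max_workers=3) as pool:
    chunks = list(pool.map(read_chunk, object_names))

data = b''.join(chunks)   # reassembled 12MB file
ioctx.close()
cluster.shutdown()

If the three reads really do land on three different primary OSDs, the
three 4MB transfers should overlap rather than serialize.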