Re: Cloud tiering thoughts

On Tue, Oct 16, 2018 at 06:08:10PM -0700, Yehuda Sadeh-Weinraub wrote:
> Here are my current thoughts about tiering, and also specifically
> about cloud tiering.
> 
> 1. Storage-classes
> 
> Previously a placement target would be mapped into a set of rados
> pools (index, data, extra), whereas now placement targets will add
> storage classes (S3 uses these). Object placement will be defined by
> the placement target, and the storage class.
...
> We should probably make it so that when head and tail are being placed
> on different placement targets, the head will not contain any data,
> other than the object’s metadata.

This ties into work I was hoping to have time in Q4 to work on.

Right now, with erasure-coded pools, access to the metadata is very
slow, especially for metadata-write workloads.

At the risk of having more objects, I was wondering about a full split
of:
(metadata)
(head)
(tails)
Each of which might be a different pool, depending on user workload.

Example policy I had in mind:
1. Metadata goes to replicated SSD pool.
2. The head (first N bytes, e.g. 4KB) goes to an EC SSD pool; N
   should be configurable.
3. Tails go to EC spinner pool.

These would be pools/targets selected based on the storage class.

Zero-byte files wind up (metadata)-only.
Tiny files implicitly wind up with just (metadata)(head).  Large files
wind up spread over all 3.
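To make the policy concrete, here is a minimal sketch of that size-based split. This is purely illustrative, not RGW code: the pool names, the HEAD_BYTES constant, and the place() helper are all made up for the example.

```python
# Illustrative sketch of the metadata/head/tail split policy.
# Pool names and helper names are hypothetical, not actual RGW internals.

HEAD_BYTES = 4096  # the configurable head size (N)

POOLS = {
    "metadata": "replicated-ssd",   # policy 1: metadata on replicated SSD
    "head": "ec-ssd",               # policy 2: head on EC SSD
    "tail": "ec-spinner",           # policy 3: tails on EC spinners
}

def place(size: int) -> list[str]:
    """Return the pools an object of `size` bytes would touch."""
    parts = ["metadata"]            # every object has a metadata record
    if size > 0:
        parts.append("head")        # first HEAD_BYTES land in the head pool
    if size > HEAD_BYTES:
        parts.append("tail")        # the remainder spills into the tail pool
    return [POOLS[p] for p in parts]
```

So place(0) touches only the metadata pool, a tiny object touches metadata+head, and anything larger than HEAD_BYTES is spread over all three, matching the zero-byte / tiny / large cases above.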

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
E-Mail   : robbat2@xxxxxxxxxx
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136
