It shouldn't happen, provided the sustained write rate doesn't exceed
the sustained erase capability of the device, I'd guess. A daily fstrim
won't hurt, though.
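For the daily fstrim mentioned above, a minimal sketch (assuming the
filesystem in question is mounted at /var/lib/ceph -- an illustrative
path, not anything Ceph mandates -- and that the device and filesystem
support discard):

```shell
# Trim unused blocks on a mounted filesystem (must run as root;
# the device and filesystem must both support discard/TRIM).
fstrim -v /var/lib/ceph

# One way to run it daily: a root cron entry, e.g. at 03:00.
# 0 3 * * * /sbin/fstrim /var/lib/ceph
```

Note fstrim operates on mounted filesystems only; it does not apply to
a raw journal partition with no filesystem on it.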
Essentially, the mapping between LBAs and physical cells isn't fixed in
SSDs (unlike LBAs and physical sectors on an HDD). Provided the SSD has
a free cell, a write is committed to an available (pre-erased) cell and
the mapping table is updated. In the process, the cell that contained
the now-'overwritten' (from the OS's perspective) data is returned to
the free pool, i.e. erased in the background, ready to accept some
future write.
Under-provisioning maintains this mode of operation, since LBAs that
have never been written (at least since a TRIM operation) have no
physical backing, i.e. those cells remain free for the controller to
use in the background.
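The remapping behaviour described above can be sketched in a few lines.
This is a toy model of a flash translation layer (FTL), not any real
controller's implementation; all names are illustrative:

```python
# Toy model of an SSD flash translation layer (FTL): LBAs map to
# physical cells indirectly, overwrites land in fresh pre-erased cells,
# and TRIM releases physical backing entirely.

class SimpleFTL:
    def __init__(self, num_cells):
        self.free = set(range(num_cells))   # pool of pre-erased cells
        self.mapping = {}                   # LBA -> physical cell

    def write(self, lba):
        if not self.free:
            raise RuntimeError("no pre-erased cells: write must wait for erase")
        cell = self.free.pop()              # commit to an available cell
        old = self.mapping.get(lba)
        self.mapping[lba] = cell            # update the mapping table
        if old is not None:
            self.free.add(old)              # old cell erased in background
        return cell

    def trim(self, lba):
        # TRIM: the OS declares this LBA unused, so it loses its
        # physical backing and the cell rejoins the free pool.
        old = self.mapping.pop(lba, None)
        if old is not None:
            self.free.add(old)

ftl = SimpleFTL(num_cells=4)
ftl.write(0)      # first write: 3 cells left free
ftl.write(0)      # overwrite: new cell used, old one freed, still 3 free
ftl.trim(0)       # backing released: all 4 cells free again
```

Under-provisioning simply means some cells can never be claimed by any
LBA, so the free pool never empties and the write path above never
stalls on an erase.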
Some reasonable background info here:
http://en.wikipedia.org/wiki/Write_amplification
Hope that helps.
On 2013-12-03 15:10, Loic Dachary wrote:
On 03/12/2013 13:38, James Pearce wrote:
Since the journal partitions are generally small, it shouldn't need
to be.
For example, implement it with substantial under-provisioning, either
via HPA or simply by partitioning only part of the device.
Does that mean the problem will happen much later, or that it will
never happen? As far as I understand, it just postpones it, but I've
only just discovered this and may be completely mistaken :-)
Cheers
On 2013-12-03 12:18, Loic Dachary wrote:
Hi Ceph,
When an SSD partition is used to store a journal
https://github.com/ceph/ceph/blob/master/src/os/FileJournal.cc#L90
how is it trimmed ?
http://en.wikipedia.org/wiki/Trim_%28computing%29
Cheers
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com