On 10.03.2014 11:41, Konrad Gutkowski wrote:
> On 10.03.2014 at 07:54, Stefan Priebe - Profihost AG
> <s.priebe@xxxxxxxxxxxx> wrote:
>
>> On 07.03.2014 16:56, Konrad Gutkowski wrote:
>>> Hi,
>>>
>>> If those are journal drives, you could have n+1 SSDs and swap them at
>>> some interval, though that could introduce more problems.
>>> If the data needs to stay synchronized, you could run a degraded RAID 1
>>> and swap disks, but that would introduce unnecessary wear...
>>> Just a thought.
>>
>> No, they're running as OSD disks.
>>
>
> It depends on your operational constraints then. I don't know how quickly
> SSDs can complete GC, but if you can take the whole cluster offline for
> a short while, that should be a simple and clean way to solve this. But I
> guess you already thought about it.

No, not the whole cluster - but maybe OSD by OSD. It takes between 4 and
8 hours.

> My thinking is: shouldn't TRIM suffice? (The internet tells me it should.)

TRIM just marks the blocks as free at the device level; the blocks aren't
actually erased by it. The SSD's GC then collects the marked blocks and
erases them, which is what makes them writable again.

Stefan

>>> On 07.03.2014 at 15:22, Stefan Priebe - Profihost AG
>>> <s.priebe@xxxxxxxxxxxx> wrote:
>>>
>>>> Hello list,
>>>>
>>>> A lot of SSDs do their garbage collection only while the SSD is idle,
>>>> but in a Ceph cluster the SSDs never get idle.
>>>>
>>>> Does anybody have creative ideas on how to solve this?
>>>>
>>>> Greets,
>>>> Stefan
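A minimal sketch of the OSD-by-OSD idea discussed above, assuming a
systemd-managed cluster; the OSD ids, the six-hour idle window, and the
recovery pause are placeholders, not values from the thread:

    #!/usr/bin/env python3
    """Rotate OSDs out of service one at a time so each SSD gets an
    idle window in which its internal garbage collection can run.

    Sketch only: adapt OSD_IDS, the idle window, and the daemon
    start/stop commands to your own deployment.
    """
    import subprocess
    import time

    OSD_IDS = [0, 1, 2]       # hypothetical: the OSDs backed by the affected SSDs
    IDLE_SECONDS = 6 * 3600   # assumption: middle of the 4-8 h GC window above

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.check_call(cmd)

    def main():
        # Keep CRUSH from marking the stopped OSD "out" and rebalancing.
        run("ceph", "osd", "set", "noout")
        try:
            for osd in OSD_IDS:
                run("systemctl", "stop", f"ceph-osd@{osd}")
                time.sleep(IDLE_SECONDS)   # SSD is idle; internal GC can run
                run("systemctl", "start", f"ceph-osd@{osd}")
                # Let the cluster settle before idling the next disk.
                # A stricter version would poll `ceph health` here.
                time.sleep(600)
        finally:
            run("ceph", "osd", "unset", "noout")

    if __name__ == "__main__":
        main()

Setting noout keeps the cluster from rebalancing data away from the
stopped OSD during its idle window; without it, every rotation would
trigger a backfill.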
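And for the TRIM point: on Linux the unused blocks can be handed back to
the drive explicitly with fstrim, but as noted above, that only marks
them; the erase still happens later inside the drive's own GC. A small
sketch, with a hypothetical mount point:

    #!/usr/bin/env python3
    """Issue an explicit TRIM against an OSD's filesystem.

    fstrim only tells the SSD which blocks are unused; the actual erase
    happens later in the drive's garbage collector, which is the
    distinction made in the thread.
    """
    import subprocess

    OSD_MOUNT = "/var/lib/ceph/osd/ceph-0"  # hypothetical mount point

    # -v prints how many bytes were reported as trimmable.
    subprocess.check_call(["fstrim", "-v", OSD_MOUNT])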