Re: All SSD storage and journals

There were also some investigations around F2FS (https://www.kernel.org/doc/Documentation/filesystems/f2fs.txt); the last time I tried to install an OSD dir on f2fs, it failed.
I tried to run the OSD on f2fs, but ceph-osd mkfs got stuck on an xattr test:

fremovexattr(10, "user.test@5848273")   = 0

Maybe someone from the core devs has an update on this?
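
For anyone who wants to poke at this outside of Ceph, below is a minimal C sketch of that kind of xattr probe (my own approximation based on the strace above, not the actual ceph-osd code; the file path and xattr name are just examples). It sets, reads back and removes a user xattr on a file on the f2fs mount; note that f2fs needs CONFIG_F2FS_FS_XATTR and the user_xattr mount option for user xattrs to work at all.

/* xattr_probe.c -- rough approximation of the xattr test ceph-osd mkfs
 * runs on the OSD data dir (not the actual Ceph code).
 * Build: gcc -o xattr_probe xattr_probe.c
 * Run:   ./xattr_probe /path/on/f2fs/testfile
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/xattr.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file-on-target-fs>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_CREAT | O_RDWR, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* same naming pattern as in the strace above */
    const char *name = "user.test@5848273";
    char val[1024] = {0}, out[1024];

    /* set, read back, remove -- if any step hangs or errors out,
     * the filesystem won't work as an OSD data dir */
    if (fsetxattr(fd, name, val, sizeof(val), 0) < 0)
        perror("fsetxattr");
    else if (fgetxattr(fd, name, out, sizeof(out)) < 0)
        perror("fgetxattr");
    else if (fremovexattr(fd, name) < 0)
        perror("fremovexattr");
    else
        printf("xattr set/get/remove OK\n");

    close(fd);
    return 0;
}

If this hangs the same way ceph-osd mkfs did, that would at least confirm the problem is in f2fs's xattr path rather than in Ceph.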

> On 24 Oct 2014, at 07:58, Christian Balzer <chibi@xxxxxxx> wrote:
> 
> 
> Hello,
> 
> as others have reported in the past, and having now tested things here
> myself, there really is no point in putting journals for SSD-backed OSDs
> on other SSDs.
> 
> It is a zero-sum game, because:
> a) using that journal SSD as another OSD with an integrated journal will
> yield the same overall performance, if all the SSDs are the same.
> In addition, its capacity will be made available for actual storage.
> b) if the journal SSD is faster than the OSD SSDs, it tends to be priced
> accordingly. For example, the DC P3700 400GB is about twice as fast (write)
> and twice as expensive as the DC S3700 400GB.
> 
> Things _may_ be different if one looks at IOPS rather than bandwidth
> (though certainly not in the near future, as far as Ceph actually getting
> SSDs busy is concerned), but even there the difference is negligible when
> comparing, for example, the Intel S and P models in write performance.
> Reads are another matter, but nobody cares about those in journals. ^o^
> 
> Obvious things that come to mind in this context would be the ability to
> disable journals entirely (difficult, I know, and not touching BTRFS,
> thank you) and probably the K/V store in the future.
> 
> Regards,
> 
> Christian
> -- 
> Christian Balzer        Network/Systems Engineer                
> chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
> http://www.gol.com/
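
To put rough numbers on Christian's point a): back-of-the-envelope, assuming identical SSDs with write bandwidth B and ignoring IOPS, latency and replication, 4 SSDs as 4 OSDs with co-located journals see every client write hit each device twice (journal + data), so each OSD delivers B/2 and the aggregate is 4 x B/2 = 2B. The same 4 SSDs split into 2 OSDs plus 2 dedicated journal SSDs let each OSD write at the full B, but there are only 2 of them, so the aggregate is again 2B. Same throughput, half the usable capacity.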


Cheers.
–––– 
Sébastien Han 
Cloud Architect 

"Always give 100%. Unless you're giving blood."

Phone: +33 (0)1 49 70 99 72 
Mail: sebastien.han@xxxxxxxxxxxx 
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




