On 2020-03-03T14:08:44, Roman Penyaev <rpenyaev@xxxxxxx> wrote:

Hi Roman,

I'm going to respond just to the new list ...

> Eventually this Pech OSD can be a starting point for something
> different, something which is not RADOS, which is fast, with minimum
> IO ordering requirements and acts as a RAID 1 cluster, e.g. something
> which is described here [2].

The IO ordering reminded me of the master's thesis behind DRBD (which
effectively infers IO ordering requirements).

But, uh, RAID1 is not enough these days. We really need EC. I
understand not doing EC early on, but this reads as if it's not even a
goal?

> Q: Why C, why Linux kernel sources?
> A: I found it more comfortable to hack on Ceph and analyze the
>    protocol implementation and the monitor and OSD client code by
>    reading Linux kernel C code, instead of the legacy OSD C++ code or
>    the Crimson project.

Rewrite it in Rust while you're at it :-D

> I also really like the idea of code unification: the same sources
> can be compiled and used on both sides.

I'm seeing more and more of a trend to move Ceph out of the kernel as
far as possible. I wonder whether that trend will continue.

> Q: What is the architecture?
> A: I do not use threads; I use cooperative scheduling and jump between
>    task contexts using setjmp()/longjmp() calls. This model perfectly
>    fits a UP kernel with preemption disabled, so the reworked
>    scheduling (sched.c), workqueue.c and timer.c code runs the event
>    loop.

setjmp()/longjmp(). Uh. I get the idea of an event-driven state machine
thingy that's single-threaded and all, but, uh? Is that a scalable
model that will make it easy to hack and extend? ;-) A rough sketch of
what that kind of cooperative hand-off looks like is appended below.

I think there are any number of issues that we need to overcome in
Ceph - too much overhead, and the pseudo-random bits don't scale down
all that well - but I am curious to see where this approach leads ;-)

Regards,
    Lars

-- 
SUSE Software Solutions Germany GmbH, MD: Felix Imendörffer, HRB 36809 (AG Nürnberg)
"Architects should open possibilities and not determine everything." (Ueli Zbinden)
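
[Appended sketch of the cooperative hand-off discussed above. This is
not Pech code: Pech reportedly switches between per-task contexts with
setjmp()/longjmp() on per-task stacks inside a UP kernel. Naive
setjmp()/longjmp() switching from ordinary user space is unsafe without
a separate stack per task, so this user-space approximation of the same
idea uses the POSIX ucontext API (getcontext/makecontext/swapcontext)
instead. All names here (task_fn, task_yield, sched_ctx) are made up
for illustration.]

#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

#define STACK_SIZE (64 * 1024)

static ucontext_t sched_ctx;        /* the "event loop" context        */
static ucontext_t task_ctx;         /* one cooperative task context    */
static volatile int task_done;      /* set when the task has finished  */

/* Yield the CPU from the task back to the scheduler. */
static void task_yield(void)
{
	swapcontext(&task_ctx, &sched_ctx);
}

/* A cooperative task: do a bit of work, yield, repeat. */
static void task_fn(void)
{
	int i;

	for (i = 0; i < 3; i++) {
		printf("task:  step %d, yielding\n", i);
		task_yield();
	}
	task_done = 1;
	/* returning switches to uc_link, i.e. back to the scheduler */
}

int main(void)
{
	char *stack = malloc(STACK_SIZE);  /* private stack for the task */

	/* Set up the task context on its own stack. */
	getcontext(&task_ctx);
	task_ctx.uc_stack.ss_sp = stack;
	task_ctx.uc_stack.ss_size = STACK_SIZE;
	task_ctx.uc_link = &sched_ctx;     /* where to go when task_fn returns */
	makecontext(&task_ctx, task_fn, 0);

	/* Minimal "event loop": keep resuming the task until it is done. */
	while (!task_done) {
		printf("sched: resuming task\n");
		swapcontext(&sched_ctx, &task_ctx);
	}

	printf("sched: task finished\n");
	free(stack);
	return 0;
}

The while loop plays the role the reworked sched.c/workqueue.c event
loop would play in the description above: it decides which task to
resume next, and a task gives up the CPU only at explicit yield points,
so with a single CPU and no preemption there is nothing to lock
against.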