Re: Vitastor, a fast Ceph-like block storage for VMs

Vitaliy, you are crazy ;) But really cool work. Why not combine efforts 
with Ceph? Especially with something as important as SDS, with PBs of 
client data stored on it, anyone with a little bit of sense chooses a 
solution from a 'reliable' source. For me it was decisive to learn that 
CERN and NASA were using Ceph on a large scale. I have neither the 
expertise nor the time (like probably 90% of Ceph users) to test it the 
way they have been testing and using it.
 
I often see open-source projects that could benefit from cooperation. 
Some teams totally lack the expertise that others have, and vice versa, 
leaving the community with 10 or 20 'shitty' projects instead of 3 
'good' ones.
I think open-source projects should more often embrace a modular 
development approach, where others can change functionality by replacing 
just a module. If I ever get my idea funded, I would build it like this. 




-----Original Message-----
Cc: dev@xxxxxxx; ceph-users@xxxxxxx
Subject: Re: Vitastor, a fast Ceph-like block storage for VMs

I love how it’s not possible to delete inodes yet. Data loss would be a 
thing of the past!

Jokes aside, interesting project.

Sent from mobile

> On 23 Sep 2020 at 00:45, vitalif@xxxxxxxxxx wrote:
> 
> Hi!
> 
> After almost a year of development in my spare time I present my own 
> software-defined block storage system: Vitastor - https://vitastor.io
> 
> I designed it similar to Ceph in many ways: it also has Pools, PGs, 
> OSDs, different coding schemes, rebalancing and so on. However, it's much 
> simpler and much faster. In a test cluster with SATA SSDs it achieved a 
> Q1T1 latency of 0.14 ms, which is especially good compared to Ceph RBD's 
> 1 ms for writes and 0.57 ms for reads. In an "iops saturation" parallel 
> load benchmark it reached 895k read / 162k write iops, compared to 
> Ceph's 480k / 100k on the same hardware, but the most interesting part 
> was CPU usage: Ceph OSDs were using 40 of the 64 CPU cores on each node, 
> while Vitastor was using only 4.
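
[For readers unfamiliar with the shorthand: Q1T1 means queue depth 1 with a single thread/job, i.e. pure single-request latency. A minimal fio job sketch for that kind of test might look like the following; the exact flags used for the numbers above aren't stated, and the target device path here is a placeholder.]

```ini
# Hypothetical fio job for a Q1T1 latency test: queue depth 1, one job,
# 4 KiB random writes with direct I/O. Replace the filename with the
# actual block device exported by the storage system under test.
[global]
ioengine=libaio
direct=1
bs=4k
rw=randwrite
iodepth=1
numjobs=1
runtime=60
time_based=1

[q1t1-randwrite]
filename=/dev/vdb
```

The "iops saturation" numbers, by contrast, would come from a high-iodepth, multi-job variant of the same kind of job.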
> 
> Of course, it's an early pre-release, which means that, for example, it 
> lacks snapshot support and other useful features. However, the base is 
> finished: it works and runs QEMU VMs. I like the design and I plan to 
> develop it further.
> 
> There are more details in the README file, which is currently served 
> at https://vitastor.io
> 
> Sorry if it was a bit off-topic, I just thought it could be 
> interesting for you :)
> 
> --
> With best regards,
>  Vitaliy Filippov
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
