On Wed, Nov 4, 2015 at 7:00 AM, 池信泽 <xmdxcxz@xxxxxxxxx> wrote:
> hi, all:
>
>       I am focused on the CPU usage of Ceph right now. I find that the
> encode and decode of structs (such as pg_info_t, transaction, and so on)
> consume too much CPU.
>
>       Currently, we encode every member variable one by one, ultimately
> calling encode_raw. When a struct has many members, we end up encoding
> many times. I think we could reduce this in some cases.
>
>       For example, given struct A { int a; int b; int c; };, Ceph encodes
> int a, then int b, and finally int c. But in this case we could call
> bufferlist.append((char *)(&a), sizeof(A)), because there are no padding
> bytes in this struct.
>
>       With this optimization, the CPU usage of object_stat_sum_t encoding
> drops from 0.5% to 0% (I cannot see it at all with perf).
>
>       This is only one case, so I think we could do a similar optimization
> for other structs. We would need to pay attention to padding in each
> struct.

The problem with this approach is that the encoded versions need to be
platform-independent; they are shared over the wire and written to disks
that might get transplanted to different machines. Apart from padding
bytes, we also need to worry about the endianness of the machine, etc.
*And* we often mutate structures across versions in order to add new
abilities, relying on the encode-decode process to deal with any changes
to the system. How could we deal with that if we just dumped the raw
memory?

Now, maybe we could make these changes on some carefully-selected structs,
I'm not sure. But we'd need a way to pick them out, guarantee that we
aren't breaking interoperability, etc.; and it would need to be something
we can maintain as a group going forward. I'm not sure how to satisfy
those constraints without burning a little extra CPU. :/
-Greg
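
[Editor's note: for readers following the thread, below is a minimal
standalone C++ sketch contrasting the two approaches discussed above:
per-field encoding with an explicit byte order versus a raw append of the
whole struct. It does not use Ceph's actual bufferlist/encode machinery;
the struct stats_t and the helpers encode_le32, encode_fields, and
encode_raw_struct are invented for illustration only.]

// Sketch only: not Ceph code. Contrasts portable per-field encoding with
// the proposed raw-memory append, and shows why the latter is host-dependent.
#include <cstdint>
#include <string>
#include <type_traits>

struct stats_t {
  uint32_t num_objects;
  uint32_t num_bytes;
  uint32_t num_reads;
};

// Per-field encoding: byte order is fixed to little-endian by hand, so the
// encoded bytes are identical on any host. This is the CPU-heavier but
// portable style.
static void encode_le32(uint32_t v, std::string& out) {
  char buf[4];
  buf[0] = static_cast<char>(v & 0xff);
  buf[1] = static_cast<char>((v >> 8) & 0xff);
  buf[2] = static_cast<char>((v >> 16) & 0xff);
  buf[3] = static_cast<char>((v >> 24) & 0xff);
  out.append(buf, sizeof(buf));
}

static void encode_fields(const stats_t& s, std::string& out) {
  encode_le32(s.num_objects, out);
  encode_le32(s.num_bytes, out);
  encode_le32(s.num_reads, out);
}

// Raw append, as proposed in the original mail: one copy of the whole
// struct. Faster, but the resulting bytes depend on the host's padding
// rules and endianness, which is the interoperability concern in the reply.
static void encode_raw_struct(const stats_t& s, std::string& out) {
  static_assert(std::is_trivially_copyable<stats_t>::value,
                "raw append only makes sense for trivially copyable types");
  static_assert(sizeof(stats_t) == 3 * sizeof(uint32_t),
                "no padding expected in this struct");
  out.append(reinterpret_cast<const char*>(&s), sizeof(s));
}

int main() {
  stats_t s{1, 2, 3};
  std::string portable, raw;
  encode_fields(s, portable);   // byte order chosen explicitly
  encode_raw_struct(s, raw);    // byte order inherited from the host
  // On a little-endian machine the two buffers happen to match; on a
  // big-endian machine only `portable` would decode correctly elsewhere.
  return portable == raw ? 0 : 1;
}

[The static_asserts are one possible way to "pick out" structs where a raw
append is even a candidate, but they say nothing about endianness or about
future changes to the struct's versioned encoding, which are the remaining
concerns raised in the reply.]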