On Wed, Mar 25, 2020 at 10:07 PM Aurélien Lajoie <orel@xxxxxxxxx> wrote:
>
> On Wed, Mar 25, 2020 at 3:16 PM Peter Cordes <peter@xxxxxxxxx> wrote:
> > If you really are bottlenecking on UUID throughput, see my SIMD answer
> > on https://stackoverflow.com/questions/53823756/how-to-convert-a-binary-integer-number-to-a-hex-string
> > with x86 SSE2 (baseline for x86-64), SSSE3, AVX2 variable-shift, and
> > AVX512VBMI integer -> hex manual vectorization
>
> I will take a look at it, but at a later time; I get your idea.
> I am not familiar with this, nice way to jump on SIMD operations.

I can write that code with _mm_cmpgt_epi8 and related intrinsics from
immintrin.h if libuuid actually wants a patch that adds an #ifdef __SSE2__
version that x86-64 can use all the time instead of the scalar version.
I'm very familiar with x86 SIMD intrinsics, so it would be easy for me to
write the code I'm already imagining.  But it might not be worth the
trouble if it won't get merged because nobody wants to maintain it.

We could also have __SSSE3__, __AVX2__, and __AVX512VBMI__ versions if we
want them, but those would only get enabled for people compiling libuuid
with -march=native on their machines, or something like that.  Or we could
even do runtime CPU detection to set a function pointer to the version
that's best for the current CPU.  SSSE3 helps a lot (a byte shuffle works
as a hex-digit LUT, and lines up data for the '-' gaps), and AVX512VBMI is
fantastic for this on Ice Lake client/server.

Since this is only called internally, we don't need the dynamic-link-time
CPU detection that glibc uses to resolve memset to, for example,
__memset_avx2_unaligned_erms via a custom symbol-resolver function.  We
can measure how much speedup we get from using more than SSE2 and decide
if it's worth the trouble.