Scenarios:
[1] Cache size 1024 + early cache build-up [small change in your cache patch; see the patch below]
[2] Hash 64 approach [my original v2 patch]
[3] Cache size 64
[4] Cache size 128
[5] Cache size 256
[6] Base build
Result (boot to shell, in seconds; avg gain is measured against the base build [6]):
[1] 14.292498 14.370994 14.313537 --> 850ms avg gain
[2] 14.340981 14.395900 14.398149 --> 800ms avg gain
[3] 14.546429 14.488783 14.468694 --> 680ms avg gain
[4] 14.506007 14.497487 14.523062 --> 670ms avg gain
[5] 14.671100 14.643344 14.731853 --> 500ms avg gain
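
For context, all the fixed-size cache scenarios above share the same basic
shape. The sketch below is only my reading of it, not the posted patch
itself: a small direct-mapped, power-of-two table in front of the existing
linear scan, with devtree_lock handling left out for brevity.

/*
 * Illustrative sketch only -- not the actual posted patch.
 * Power-of-two table size so the index is a cheap mask.
 */
#define OF_PHANDLE_CACHE_SZ	64	/* scenario [3] */

static struct device_node *phandle_cache[OF_PHANDLE_CACHE_SZ];

struct device_node *of_find_node_by_phandle(phandle handle)
{
	struct device_node *np;
	unsigned int idx;

	if (!handle)
		return NULL;

	idx = handle & (OF_PHANDLE_CACHE_SZ - 1);

	/* Fast path: a recently looked-up phandle hits the cache. */
	np = phandle_cache[idx];
	if (np && np->phandle == handle)
		return of_node_get(np);

	/* Slow path: walk every node, then remember the result. */
	for_each_of_allnodes(np)
		if (np->phandle == handle)
			break;
	if (np)
		phandle_cache[idx] = np;

	return of_node_get(np);
}

Since dtc typically hands out phandles sequentially, indexing by the low
bits spreads consecutive phandles across different slots, which is how even
a 64-entry table can catch most repeated lookups.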
It's strange that bigger sizes are slower. Based on this data, I'd pick [3].
How many phandles do you have? I thought it was hundreds, so 1024 entries
would be more than enough, and you should see the gain curve toward a
maximum as the cache size approaches the number of phandles.
There are 1063 phandles for my device. In one of the previous mails, I
estimated it at a few hundred, but that was well short of the actual number.
However, 1063 still doesn't explain why [4] and [5] are not better than [3].
I would still be interested in finding a way to dynamically allocate an
array sized close to the total number of phandles, with the mapping
pre-stored, and to free that array once we are done with it; a rough sketch
of what I mean follows below. At present I have no idea how to actually
wire this up, so if you can share any pointers around this, that would help!
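
Roughly, what I have in mind is something like this (a sketch only; helper
names like of_populate_phandle_cache / of_free_phandle_cache are made up,
and locking against concurrent lookups is ignored):

/* Rough sketch; helper names are illustrative, not from any posted patch. */
static struct device_node **phandle_cache;
static u32 phandle_cache_mask;

/* Build the cache once the device tree has been unflattened. */
static void of_populate_phandle_cache(void)
{
	struct device_node *np;
	u32 entries, phandles = 0;

	/* First pass: count the nodes that carry a phandle. */
	for_each_of_allnodes(np)
		if (np->phandle)
			phandles++;
	if (!phandles)
		return;

	/* Power-of-two size keeps the lookup index a cheap mask. */
	entries = roundup_pow_of_two(phandles);
	phandle_cache_mask = entries - 1;

	phandle_cache = kcalloc(entries, sizeof(*phandle_cache), GFP_KERNEL);
	if (!phandle_cache)
		return;

	/* Second pass: pre-store the phandle -> node mapping. */
	for_each_of_allnodes(np)
		if (np->phandle)
			phandle_cache[np->phandle & phandle_cache_mask] = np;
}

/* Tear it down once boot-time lookups are over. */
static void of_free_phandle_cache(void)
{
	kfree(phandle_cache);
	phandle_cache = NULL;
	phandle_cache_mask = 0;
}

What I can't figure out is where these two calls would best be hooked in,
and whether freeing is safe while late consumers may still be doing lookups.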
Thanks,
Chintan Pandya
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project