On Tue, Sep 12, 2017 at 08:04:52PM +0530, Kaartic Sivaraam wrote:

> > On Tue, 2017-09-05 at 15:05 -0700, Stefan Beller wrote:
> >
> > After having a sneak peak at the implementation
> > it is O(1) in runtime for each added element, and the
> > space complexity is O(well).
>
> Incidentally I was reading about "complexity of algorithms" and there
> was the following paragraph in the book,
>
> > Unfortunately, as Knuth observed, big-O notation is often used by careless writers and
> > speakers as if it had the same meaning as big-Theta notation. Keep this in mind when you see
> > big-O notation used. The recent trend has been to use big-Theta notation whenever both upper
> > and lower bounds on the size of a function are needed.
>
> So, if my interpretation is correct the above usage of O(1) and O(well)
> should have been Θ(1) and Θ(well).

But theta-well isn't a pun. :P

It is true that prepending to a linked list is also Θ(1), but I'm not
sure if it's carelessness that causes many programmers to use big-O.
It's that what we care about is worst-case performance. So knowing that
we have a lower bound isn't usually that interesting. What we want to
know is whether it will blow up in our face as "n" gets large.

Plus typing non-ascii characters is annoying. :)

If you want to talk about sloppy analysis, the two most common problems
I see are:

  1. People talk about big-O complexity without discussing constants.
     For reasonable values of "n", the constants often matter (they're
     not wrong about big-O, but they are wrong about what will run fast
     in practice).

  2. Glossing over things like amortized costs. Hash tables aren't
     really O(1) because they eventually fill up and get collisions.
     You have to talk about load factor, resizing, etc.
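To make 2 a bit more concrete, here is a rough toy sketch of where load
factor and resizing come in. The names (table_insert, table_grow, and
friends) are made up for illustration; this is not git's hashmap or any
real implementation:

#include <stdio.h>
#include <stdlib.h>

struct entry {
	const char *key;
	int value;
	struct entry *next;
};

struct table {
	struct entry **slots;
	size_t nr_slots;
	size_t nr_entries;
};

/* FNV-1a, just to have some hash function for the example */
static size_t hash_str(const char *s)
{
	size_t h = 2166136261u;
	while (*s) {
		h ^= (unsigned char)*s++;
		h *= 16777619u;
	}
	return h;
}

static void table_init(struct table *t)
{
	t->nr_slots = 16;
	t->nr_entries = 0;
	t->slots = calloc(t->nr_slots, sizeof(*t->slots));
}

static void table_grow(struct table *t)
{
	struct entry **old_slots = t->slots;
	size_t old_nr = t->nr_slots, i;

	/*
	 * Doubling is what makes the usual "amortized O(1)" claim work:
	 * this rehash loop is O(n), but it runs rarely enough that each
	 * entry moves only O(1) times on average over the table's life.
	 */
	t->nr_slots *= 2;
	t->slots = calloc(t->nr_slots, sizeof(*t->slots));
	for (i = 0; i < old_nr; i++) {
		struct entry *e = old_slots[i];
		while (e) {
			struct entry *next = e->next;
			size_t slot = hash_str(e->key) % t->nr_slots;
			e->next = t->slots[slot];
			t->slots[slot] = e;
			e = next;
		}
	}
	free(old_slots);
}

static void table_insert(struct table *t, const char *key, int value)
{
	struct entry *e = malloc(sizeof(*e));
	size_t slot;

	/*
	 * Resize once the load factor passes 0.75; without this the
	 * chains keep growing and lookups degrade toward O(n).
	 */
	if (t->nr_entries * 4 >= t->nr_slots * 3)
		table_grow(t);

	slot = hash_str(key) % t->nr_slots;
	e->key = key;
	e->value = value;
	/* prepending to the chain is the constant-time list insert from above */
	e->next = t->slots[slot];
	t->slots[slot] = e;
	t->nr_entries++;
}

int main(void)
{
	struct table t;
	int i;

	table_init(&t);
	for (i = 0; i < 100; i++) {
		char *key = malloc(16);
		snprintf(key, 16, "key-%d", i);
		table_insert(&t, key, i);
	}
	printf("%zu entries in %zu slots\n", t.nr_entries, t.nr_slots);
	return 0;
}

Any single insert that hits table_grow() is O(n), so the O(1) is only
amortized, and only expected on top of that, since it assumes the hash
spreads keys around reasonably. That is exactly the fine print that
tends to get dropped.

I'm sure I'm guilty of doing those things sometimes, too.

-Peff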