JS time complexity - Back-story and Arrays
In recalling the heady academic days of yore, I remember being taught all about algorithmic complexity and data structures. Of particular interest was always performance in space and time as data sets got really large. There are a ton of treatises on this, and Wikipedia's article on big O notation is a good place to start.
Some time ago I was a newly minted C++ programmer. I used the STL (this was before it was incorporated into the C++ Standard Library) for storing and processing data. I was a big fan of the STL, finding its iterator and predicate patterns a nerdy pleasure to use. One of the great things about the library was that it provided guarantees about the algorithmic complexity (both average and worst case) of the containers it implemented.
Using the excellent jsperf.com, I tested with 1000, 5000, and 100000 element arrays. If JavaScript arrays behave as expected (amortized constant time for append and random lookup), I would expect to see identical per-operation performance regardless of array size. The results for appending and random access confirmed my expectations.
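As a rough sketch of the shape of those benchmarks (jsperf's harness handles warm-up and statistics properly; the `Date.now()` timing here is only illustrative, and the function names are my own):

```javascript
// Build an array of n elements using push (amortized O(1) per append).
function buildArray(n) {
  const arr = [];
  for (let i = 0; i < n; i++) {
    arr.push(i);
  }
  return arr;
}

// Sum `samples` randomly chosen elements (O(1) per index lookup).
function randomAccessSum(arr, samples) {
  let sum = 0;
  for (let i = 0; i < samples; i++) {
    sum += arr[Math.floor(Math.random() * arr.length)];
  }
  return sum;
}

// The sizes tested above; per-operation cost should stay flat as n grows.
for (const n of [1000, 5000, 100000]) {
  const start = Date.now();
  const arr = buildArray(n);
  randomAccessSum(arr, 1000);
  console.log(`n=${n}: ${Date.now() - start} ms total`);
}
```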
Another takeaway is that push and pop are definitely the way to go, rather than the manual equivalents:

array_var[array_var.length] = value; delete array_var[array_var.length - 1];
Also, it really behooves you to use an Array when you mean one: appending to a plain object used as a pseudo-array was about 99 times slower in my tests.
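A minimal sketch of what that comparison looks like (the exact ratio will vary by engine and element count; the function names here are my own):

```javascript
// Appending to a real Array: the engine maintains length and can
// optimize contiguous integer-indexed storage.
function appendToArray(n) {
  const arr = [];
  for (let i = 0; i < n; i++) arr.push(i);
  return arr;
}

// Using a plain object keyed by numeric index: no length bookkeeping,
// no push, and the engine treats it as a generic property map.
function appendToObject(n) {
  const obj = {};
  for (let i = 0; i < n; i++) obj[i] = i;
  return obj;
}
```

Timing each of these in a jsperf-style harness is what surfaces the large gap between the two.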