Some foundational computer science relating to performance that I found poorly explained by the programming books I read as a child:

In compsci, to reason about performance, we derive a formula for how many loop iterations or bytes of memory a data structure or algorithm uses & keep only its largest-magnitude term: typically O(1), O(log n), O(n), O(n log n), O(n^2), or O(2^n). O(2^n) means we're undesirably brute-forcing a solution, whilst O(1) or O(log n) are ideal.
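
To make those classes concrete, here's a minimal C sketch (the function names are mine, purely illustrative): scanning an array is O(n) because the worst case touches every element, whilst binary search on a sorted array halves the remaining range each step, so it's O(log n).

```c
#include <stddef.h>

/* O(n): in the worst case we visit every element. */
int linear_search(const int *xs, size_t n, int key) {
    for (size_t i = 0; i < n; i++)
        if (xs[i] == key) return (int)i;
    return -1;
}

/* O(log n): each iteration halves the remaining range, so a sorted
   array of a million elements takes roughly 20 steps. */
int binary_search(const int *xs, size_t n, int key) {
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (xs[mid] == key) return (int)mid;
        if (xs[mid] < key) lo = mid + 1;
        else               hi = mid;
    }
    return -1;
}
```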

1/3

We may estimate these values for best-, average-, or worst-case performance & choose based on the situation. If you're only processing small amounts of data, it often isn't worth paying the setup cost of an otherwise more efficient algorithm.
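
For instance (a hedged sketch; the cutoff of 16 is a made-up illustrative number, & sort_ints/insertion_sort are my own names): insertion sort is O(n^2) but has near-zero setup cost, so for tiny inputs it beats asymptotically better sorts, which is why real sort routines commonly fall back to something like it below a small threshold.

```c
#include <stddef.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* O(n^2), but with tiny constant factors & no setup cost. */
static void insertion_sort(int *xs, size_t n) {
    for (size_t i = 1; i < n; i++) {
        int key = xs[i];
        size_t j = i;
        while (j > 0 && xs[j - 1] > key) {
            xs[j] = xs[j - 1];
            j--;
        }
        xs[j] = key;
    }
}

/* Illustrative hybrid: below a small (made-up) cutoff the simple
   quadratic sort wins; above it, hand off to the standard library. */
void sort_ints(int *xs, size_t n) {
    if (n <= 16)
        insertion_sort(xs, n);
    else
        qsort(xs, n, sizeof *xs, cmp_int);
}
```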

If you're not interested in mathematically analyzing your programs, you can still take this as an intuition: Minimize the number of loops in your program. Structure your data to be easily processed. Use your language's standard library, unless it's C.
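
As one hedged illustration of all three intuitions at once (function names mine): checking an array for duplicates with two nested loops is O(n^2), but sorting it first with the standard library's qsort restructures the data so duplicates sit side by side, leaving a single O(n) pass & an O(n log n) total.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Two nested loops: O(n^2) comparisons. */
bool has_duplicate_slow(const int *xs, size_t n) {
    for (size_t i = 0; i < n; i++)
        for (size_t j = i + 1; j < n; j++)
            if (xs[i] == xs[j]) return true;
    return false;
}

/* Sort first (O(n log n)), then one pass: duplicates are now
   adjacent. Note this reorders the caller's array. */
bool has_duplicate_fast(int *xs, size_t n) {
    qsort(xs, n, sizeof *xs, cmp_int);
    for (size_t i = 1; i < n; i++)
        if (xs[i] == xs[i - 1]) return true;
    return false;
}
```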

2/3


When optimizing your programs DON'T try, say, swapping out / for >>, at least if you're writing C, C++, Rust, or JavaScript. These sorts of micro-optimizations (all some people know) are usually already done for you by the compiler, as I've spent over a month now explaining, and so they just serve to make your code less readable.
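
A minimal sketch of what "already done for you" means, assuming any modern optimizing compiler (e.g. gcc or clang at -O1 or above):

```c
/* Write the readable version: the compiler strength-reduces division
   by a constant power of two into a shift on its own. */
unsigned quarter(unsigned x) {
    return x / 4;   /* emitted as x >> 2 */
}

/* Same machine code, harder to read. And for *signed* values the
   shift isn't even equivalent: shifting rounds toward negative
   infinity whilst division rounds toward zero. */
unsigned quarter_obfuscated(unsigned x) {
    return x >> 2;
}
```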

If you have otherwise squeezed all the performance out of your code & need more, take measurements. Know what you're doing.
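
A minimal measurement sketch, assuming POSIX clock_gettime (the work() function is a hypothetical stand-in for the code under test; a real profiler such as perf tells you far more):

```c
#define _POSIX_C_SOURCE 199309L  /* for clock_gettime under strict -std */
#include <stdio.h>
#include <time.h>

/* Hypothetical stand-in for whatever you're measuring. */
static long work(void) {
    long sum = 0;
    for (long i = 0; i < 10 * 1000 * 1000; i++)
        sum += i % 7;
    return sum;
}

int main(void) {
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    long result = work();
    clock_gettime(CLOCK_MONOTONIC, &end);
    double secs = (end.tv_sec - start.tv_sec)
                + (end.tv_nsec - start.tv_nsec) / 1e9;
    /* Print the result so the compiler can't optimize work() away. */
    printf("work() = %ld in %.3f s\n", result, secs);
    return 0;
}
```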

3/3
