• 0 Posts
  • 125 Comments
Joined 1 year ago
Cake day: July 9th, 2023








  • xthexder@l.sw0.com to Memes@lemmy.ml · Mistakes · 12 days ago

    Interesting read.
    I think by all the same arguments, running raw machine code (not even assembly) is not a “low-level language” either by their definition.
    The branch prediction, instruction-level-parallelism, and cache behaviors all happen in hardware at a lower level than the programmer can control.

    All the talk about compiler optimizations seems irrelevant, because you can still just turn them off and output simple machine code.

    I’m not really sure what the point of arguing the distinction is anyway. Any practical argument would be much more specific about typical high-level features like garbage collection.

  • A quadratic function is just one kind of polynomial. Polynomials also aren’t really tied to big-O complexity, where you mostly just care about what the highest exponent is: O(n^2) vs O(n^3).

    For most short programs it’s fairly easy to determine the complexity. Just count how many nested loops you have. If there are no loops, it’s probably O(1), unless you’re calling other functions that hide the complexity.

    If there’s one loop that runs N times, it’s O(n), and if you have a nested loop, it’s likely O(n^2).

    You throw out constant factors and lower-order terms, so your function’s actual runtime might be the polynomial 5n^3 + 2n^2 + 6n + 20, but the big-O notation would simply be O(n^3) in that case.
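    The loop-counting rule above can be sketched in Python; the function names here are just illustrative, and each comment notes which rule applies:

    ```python
    def first_item(items):
        # No loops: O(1), regardless of how long the list is.
        return items[0]

    def total(items):
        # One loop over n items: O(n).
        result = 0
        for x in items:
            result += x
        return result

    def count_pairs(items):
        # A loop nested inside a loop: n * n iterations, so O(n^2).
        pairs = 0
        for a in items:
            for b in items:
                pairs += 1
        return pairs
    ```

    For a 10-element list, `count_pairs` does 100 iterations; double the input to 20 elements and it does 400, which is the quadratic growth the notation is describing.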

    I’m simplifying a little, but that’s the overview. I think a lot of people just memorize that certain algorithms have a certain complexity, like binary search being O(log n) for example.
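    As a concrete example of that last case, here is a standard binary search sketch in Python; it halves the search range on every iteration, which is what makes it O(log n) rather than O(n):

    ```python
    def binary_search(sorted_items, target):
        """Return the index of target in sorted_items, or -1 if absent.

        Each pass discards half of the remaining range, so the loop runs
        at most about log2(n) times: O(log n).
        """
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_items[mid] == target:
                return mid
            elif sorted_items[mid] < target:
                lo = mid + 1  # target can only be in the upper half
            else:
                hi = mid - 1  # target can only be in the lower half
        return -1
    ```

    Note the precondition hiding in the name: the input must already be sorted, or the halving logic breaks down.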