That’s still good! I’m proud of you for working through the parts of the problem that you were capable of.
Just add a delay that pads out the execution time to 10 seconds. O(1) ez.
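For anyone committing to the bit, a joke sketch in Python (the names are made up, and padding the wall clock obviously doesn't change the actual asymptotics):

    import time

    def constant_time(f, budget=10.0):
        """Pad every call of f out to ~budget seconds of wall-clock time.

        Joke implementation: the real complexity is unchanged, and any
        input that already takes longer than the budget still blows
        through it.
        """
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            result = f(*args, **kwargs)
            time.sleep(max(0.0, budget - (time.monotonic() - start)))
            return result
        return wrapper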
O(n!). I like it.
O((2^(n²))!) or bust.
O(BB(n))
“That matches no single computable function” is just an excuse.
There’s O(1), O(n), O(n lg n), O( this code is crap ).
[Cries in matrix multiplication]
Matrix multiplication is O(n) if you do it in parallel /j
Umm, AKCTSHUALLY it gets more like O(n²) in parallel, assuming you’re using a physically achievable memory. There’s just a lot of criss-crossing the entries have to do.
Strassen’s algorithm gets O(n^2.8) in serial by being terrible, and the weird experimental ones get O(n^2.3), but the asymptotic benefits of Coppersmith-Winograd and friends only kick in if you’re a Kardashev III civilisation computing a single big product on a Dyson sphere.
I can’t decide whether this sentence is a joke or not. It has the same tone that triggers my PTSD from my CS degree classes, and I do recognize some of the terms, but it also sounds like it’s just throwing random science terms around, as if you asked an LLM to talk about math.
I love it.
Also, it’s apparently real and correct.
Thank you, I’m glad to share the pain of numerical linear algebra with anyone who will listen.
Yeah, in fact, I somehow calculated under the assumption of n being the number of elements in the matrix, not n² (assuming a square matrix). But I am impressed to know that there are serial algorithms that approach O(n²), thank you for sharing that info.

Yeah, they work by turning the problem into some crazy kind of group theory and attacking it that way. Every once in a while someone shaves the decimal down slightly, just by implementing the deep math in a more efficient way. A new approach will be needed if it is in fact possible to get down to O(n²), though. Strassen’s is a divide-and-conquer algorithm, and each step of the iteration looks like this:
    S[1] = B[1, 2] - B[2, 2]
    S[2] = A[1, 1] + A[1, 2]
    S[3] = A[2, 1] + A[2, 2]
    S[4] = B[2, 1] - B[1, 1]
    S[5] = A[1, 1] + A[2, 2]
    S[6] = B[1, 1] + B[2, 2]
    S[7] = A[1, 2] - A[2, 2]
    S[8] = B[2, 1] + B[2, 2]
    S[9] = A[1, 1] - A[2, 1]
    S[10] = B[1, 1] + B[1, 2]
    P[1] = STRASSEN(A[1, 1], S[1])
    P[2] = STRASSEN(S[2], B[2, 2])
    P[3] = STRASSEN(S[3], B[1, 1])
    P[4] = STRASSEN(A[2, 2], S[4])
    P[5] = STRASSEN(S[5], S[6])
    P[6] = STRASSEN(S[7], S[8])
    P[7] = STRASSEN(S[9], S[10])
    C[1..n / 2][1..n / 2] = P[5] + P[4] - P[2] + P[6]
    C[1..n / 2][n / 2 + 1..n] = P[1] + P[2]
    C[n / 2 + 1..n][1..n / 2] = P[3] + P[4]
    C[n / 2 + 1..n][n / 2 + 1..n] = P[5] + P[1] - P[3] - P[7]
    return C
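If anyone wants to actually run it: a minimal Python/NumPy sketch of the same recursion, assuming square matrices whose side is a power of two. The cutoff fallback to ordinary multiplication is my addition; the bare recursion is slow on small blocks.

    import numpy as np

    def strassen(A, B, cutoff=64):
        """Strassen's divide-and-conquer multiply for square matrices
        with power-of-two side length. Seven recursive products per
        level instead of the naive eight."""
        n = A.shape[0]
        if n <= cutoff:
            return A @ B  # small blocks: plain multiplication wins
        h = n // 2
        A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
        B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
        # The seven products, matching P[1]..P[7] above.
        P1 = strassen(A11, B12 - B22, cutoff)
        P2 = strassen(A11 + A12, B22, cutoff)
        P3 = strassen(A21 + A22, B11, cutoff)
        P4 = strassen(A22, B21 - B11, cutoff)
        P5 = strassen(A11 + A22, B11 + B22, cutoff)
        P6 = strassen(A12 - A22, B21 + B22, cutoff)
        P7 = strassen(A11 - A21, B11 + B12, cutoff)
        # Reassemble the quadrants of C.
        return np.block([[P5 + P4 - P2 + P6, P1 + P2],
                         [P3 + P4, P5 + P1 - P3 - P7]])

Sanity check: strassen(A, B) agrees with A @ B (up to floating-point noise) for a random 256 x 256 pair.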
In my copy of Introduction to Algorithms, it says something like “this is the most bullshit algorithm in the book and it’s not close” underneath. You can make it a bit neater by representing the multiplication operation as a 3-dimensional tensor, but at the end of the day it’s still just a stupid arithmetic trick. One that’s built into your GPU.
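For the curious, here is roughly what that tensor looks like in the 2x2 case: matrix multiplication is a bilinear map, and the fact that this particular (4, 4, 4) tensor has rank 7 instead of the obvious 8 is exactly Strassen’s trick. The construction below is just an illustration, not from the book:

    import numpy as np

    # T[i, j, k] = 1 whenever entry k of C picks up the product of
    # entry i of A and entry j of B (matrices flattened row-major).
    T = np.zeros((4, 4, 4))
    for r in range(2):
        for c in range(2):
            for m in range(2):
                T[2 * r + m, 2 * m + c, 2 * r + c] = 1

    A = np.random.rand(2, 2)
    B = np.random.rand(2, 2)
    # Contracting A and B against the tensor IS matrix multiplication.
    C = np.einsum('i,j,ijk->k', A.ravel(), B.ravel(), T).reshape(2, 2)
    assert np.allclose(C, A @ B)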
O(n)? More like Oh(No)
Great, now I’m hungry for apples.
Why would you want a specific time complexity? Wouldn’t it be better if it’s faster? /s
Likely they want a lower time complexity.
For example, a question can be trivially solved in O(n^2), but there is no known solution below O(n), so they ask for O(n).
Most of the time O(n^2) gets optimized to O(n log n). You’ll get some sort of award if you can figure out a comparison-based sorting function that runs in O(n).
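To be fair, the award is only for comparison sorts, where Omega(n log n) is a proven lower bound. If the keys are bounded non-negative integers, counting sort already manages O(n + k). A minimal sketch (the function name is mine):

    def counting_sort(xs, k):
        """Sort non-negative ints < k in O(n + k): no comparisons,
        just tally each value and replay the tallies in order."""
        counts = [0] * k
        for x in xs:
            counts[x] += 1
        out = []
        for value, count in enumerate(counts):
            out.extend([value] * count)
        return out

    print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6], k=10))
    # [1, 1, 2, 3, 4, 5, 6, 9]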
That’s a huge leap from O(n²) to O(n). In this example it would likely be good to at least specify that it should be strictly better than the best known solution (not sure if there are such cases on LeetCode; I thought they only restrict you to what is known to be solvable).