For the original library we did all the numpy tricks we could think of, but we really needed to do this type of exhaustive search for some of the data.
If someone wants to open a PR with a "fully optimized" numpy code, that would be very cool just for comparison :)
3000 calls to list.append() cost only ~2ms. In a computationally intense program, no one bothers, because usually a single matrix multiplication already costs 500ms or so.
Of course you can preallocate memory for size=3000 and fill it in a loop, but that saves only about 10ms. Too insignificant.
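A rough illustration of the point above (my own sketch, not from the thread) — timing plain append against a preallocated list of size 3000. Exact numbers will vary by machine:

```python
import timeit

N = 3000

def with_append():
    out = []
    for i in range(N):
        out.append(i * 2)
    return out

def with_prealloc():
    out = [0] * N          # preallocate size=3000
    for i in range(N):
        out[i] = i * 2
    return out

# Average per-call time in milliseconds over 100 runs.
append_ms = timeit.timeit(with_append, number=100) / 100 * 1000
prealloc_ms = timeit.timeit(with_prealloc, number=100) / 100 * 1000
print(f"append: {append_ms:.3f} ms, prealloc: {prealloc_ms:.3f} ms")
```

Both produce the same list; the difference is a couple of milliseconds at most, which is noise next to a single large matrix multiply.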
https://github.com/ohadravid/poly-match/blob/main/poly_match...
Expecting Python engineers to be unable to read de facto standard numpy code, while at the same time expecting everyone to be able to read Rust...
Not to mention that the semi-vectorized code is still suboptimal. There are too many for loops, despite the author clearly knowing they can all be vectorized.
For example, the author could instead just write something like:
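To make the idea concrete, here is a hypothetical sketch (not the actual poly_match code) of replacing a per-point Python loop with a single broadcast numpy expression — the names `points`, `center`, and both functions are my own illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.random((1000, 2))     # hypothetical (x, y) coordinates
center = np.array([0.5, 0.5])

# Loop version: one Python-level iteration per point.
def distances_loop(points, center):
    out = np.empty(len(points))
    for i, p in enumerate(points):
        out[i] = np.sqrt((p[0] - center[0]) ** 2 + (p[1] - center[1]) ** 2)
    return out

# Vectorized version: broadcasting handles all points in one C-level pass.
def distances_vectorized(points, center):
    return np.sqrt(((points - center) ** 2).sum(axis=1))

assert np.allclose(distances_loop(points, center),
                   distances_vectorized(points, center))
```

Same result, but the vectorized form drops the Python-level loop entirely.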
Also, in one place there is a line where you can just slap numexpr on top of it to compile it on the fly: https://github.com/pydata/numexpr
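For readers who haven't used it, a minimal hedged sketch of what "slapping numexpr on top" looks like (the arrays and the expression here are my own example, not the line from poly_match) — numexpr compiles the string expression to fused bytecode, avoiding numpy's intermediate temporaries:

```python
import numpy as np
import numexpr as ne

rng = np.random.default_rng(0)
a = rng.random(100_000)
b = rng.random(100_000)

plain = 2.0 * a + 3.0 * b                 # plain numpy: allocates temporaries
fast = ne.evaluate("2.0 * a + 3.0 * b")   # numexpr: one fused, multithreaded pass

assert np.allclose(plain, fast)
```

For a single hot line of elementwise arithmetic this is often the cheapest speedup available, since no other code has to change.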