
Semi-vectorized code:

https://github.com/ohadravid/poly-match/blob/main/poly_match...

Expecting Python engineers to be unable to read de facto standard numpy code, while expecting everyone to be able to read Rust...

Not to mention that the semi-vectorized code is still suboptimal. Too many for loops, despite the author clearly knowing they can all be vectorized.

For example, the author could instead write something like:

    np.argmin(distances[distances <= threshold])
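One caveat with that one-liner (using the same illustrative `distances`/`threshold` names as above): `np.argmin` over the masked array returns an index into the *filtered* array, not the original one. A minimal sketch of mapping it back:

```python
import numpy as np

# Illustrative data, not from the actual library
distances = np.array([5.0, 1.2, 3.4, 0.9, 7.8])
threshold = 4.0

# Indices of entries within the threshold
valid = np.flatnonzero(distances <= threshold)

# argmin over the filtered values gives a position within `valid`;
# indexing back into `valid` recovers the index in the original array
best = valid[np.argmin(distances[valid])]
# best == 3 (the entry with distance 0.9)
```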
Also, in one place there is:

    np.xxx(np.xxx, np.xxx + np.xxx)
You can just slap numexpr on top of it to compile this line on the fly.

https://github.com/pydata/numexpr
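A minimal sketch of that idea (the arrays `a`, `b`, `c` are made up for illustration; `numexpr.evaluate` compiles the expression string into a fused vectorized kernel at runtime):

```python
import numpy as np
import numexpr as ne

# Illustrative inputs
a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)
c = np.random.rand(1_000_000)

# Equivalent to a * (b + c), but evaluated in one fused pass
# instead of numpy allocating a temporary array for b + c
result = ne.evaluate("a * (b + c)")
```

The win comes from avoiding intermediate allocations and from multi-threaded evaluation, which matters most on large arrays.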



Author here:

For the original library we did all the numpy tricks we could think of, but we really needed to do this type of exhaustive search for some of the data.

If someone wants to open a PR with a "fully optimized" numpy code, that would be very cool just for comparison :)


Not super familiar with Python, but isn't that append call within a loop going to cause a lot of allocations?


3000 calls to list.append() cost only about 2 ms. In a computationally intense program, no one bothers, because a single matrix multiplication usually already costs 500 ms or so.

Of course you can preallocate memory for size=3000 and assign by index in a loop, but that saves only about 10 ms. Too insignificant to matter.
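For comparison, the two patterns look like this (illustrative values, not the library's actual data); Python lists use amortized doubling, which is why the difference stays small at n=3000:

```python
import numpy as np

n = 3000

# Pattern 1: growing a Python list; appends are amortized O(1),
# with occasional reallocation as the backing buffer doubles
points = []
for i in range(n):
    points.append((i * 0.5, i * 0.25))

# Pattern 2: preallocated numpy array; one allocation up front,
# then filled by index
arr = np.empty((n, 2))
for i in range(n):
    arr[i, 0] = i * 0.5
    arr[i, 1] = i * 0.25
```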



