NPhysics: 2D and 3D Real-Time Physics Engine for Rust (nphysics.org)
188 points by ArtWomb on June 23, 2020 | 34 comments


Is anyone out there working on continuous physics engines? This, like most any physics library I can find, uses the fix-after-intersecting fixed-time-step model, which makes everything act like a stiff sponge, even if you greatly increase the number of iterations per frame.

Carmageddon 2 (the last example I can think of) instead computed precisely when the next intersection would happen, advanced time exactly that far, resolved the collision using impulses, and repeated until it had consumed a frame's worth of time. In pathological cases this resulted in a frame taking nearly a second, but that was rare, and the physics were much less spongey and more realistic.
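The scheme described above can be sketched in one dimension (names and structure are illustrative, not Carmageddon's actual code): compute the exact time of impact, advance exactly that far, resolve with an impulse, and repeat until the frame time is consumed.

```rust
#[derive(Clone, Copy)]
struct Body { x: f64, v: f64 }

/// Time until `a` and `b` meet, if they are approaching; None otherwise.
fn time_of_impact(a: &Body, b: &Body) -> Option<f64> {
    let rel_v = b.v - a.v;  // closing velocity along the axis
    let gap = b.x - a.x;    // signed distance (b starts to the right of a)
    if rel_v < 0.0 && gap > 0.0 { Some(gap / -rel_v) } else { None }
}

fn step_frame(mut a: Body, mut b: Body, mut dt: f64) -> (Body, Body) {
    while dt > 0.0 {
        match time_of_impact(&a, &b) {
            Some(toi) if toi <= dt => {
                // Advance exactly to the impact, then resolve with an
                // impulse (perfectly elastic, equal masses: swap velocities).
                a.x += a.v * toi;
                b.x += b.v * toi;
                std::mem::swap(&mut a.v, &mut b.v);
                dt -= toi;
            }
            _ => {
                // No impact within the frame: consume the remaining time.
                a.x += a.v * dt;
                b.x += b.v * dt;
                dt = 0.0;
            }
        }
    }
    (a, b)
}
```

The pathological case mentioned above shows up here as the loop running many times within a single frame when objects collide in rapid succession.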


It does CCD: https://nphysics.org/continuous_collision_detection/

Not sure if that is what you had in mind. Also, intersections / collisions aren't the only dynamic in such a simulation.


> It does CCD

Intersection and post-correction still seems to be present (and the sponge/spring behaviour) in demos even when you increase CCD steps and enable sub-stepping. Look at how much the bottom cubes/spheres intersect when the stack lands. (It's hard to tell what the demo called CCD is demonstrating.)

> intersections / collisions aren't the only dynamic in such a simulation.

Fair. (Contact points and friction also seemed more stable/realistic in Carmageddon 2 than in most physics libraries.)


There are two separate things here: interpenetration due to how contacts/constraints are solved, and whether you do continuous collision sub-stepping or not.

For stability and performance reasons, physics engines usually have parameters that soften or add compliance to contacts/constraints. A bit of compliance is almost always better than infinitely stiff collisions/constraints. There are cases where infinitely stiff systems either have no solution, are very expensive to solve, or would produce very extreme impulses (causing things to explode); including some compliance fixes these issues. It is also often required to produce more realistic-looking results, since objects in real life aren't infinitely stiff: they either flex or break.

For performance reasons most physics engines also do not completely solve their constraints. They either use a fixed number of iterations (most common, including the demos here) or solve up to some specified error threshold. This tends to add some additional compliance to complex scenes (stacks/piles of objects for example).
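The fixed-iteration idea can be sketched as a toy sequential-impulse pass over 1D contacts (illustrative only, not any real engine's solver): each sweep nudges pairs toward non-penetrating relative velocity, and capping the number of sweeps is exactly what leaves residual compliance in complex scenes.

```rust
/// Toy sequential-impulse relaxation: each contact (i, j) clamps the
/// relative normal velocity of its two bodies to be non-separating.
/// A fixed iteration count leaves residual error in coupled systems
/// (e.g. stacks), which reads as softness/sponginess.
fn solve_contacts(v: &mut [f64], contacts: &[(usize, usize)], iterations: usize) {
    for _ in 0..iterations {
        for &(i, j) in contacts {
            // Relative velocity along the contact normal.
            let rel = v[j] - v[i];
            if rel < 0.0 {
                // Equal unit masses: split the corrective impulse evenly.
                v[i] += rel * 0.5;
                v[j] -= rel * 0.5;
            }
        }
    }
}
```

With a single isolated contact one iteration suffices; with a stack of contacts sharing bodies, each sweep only propagates corrections one contact further, which is why stacks need many iterations to firm up.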

With the right parameters a good rigid body physics engine should be able to prevent noticeable interpenetration in most situations, though the performance cost may not be worth it. In these demos if you max out Position iterations, velocity iterations, and frequency you should see significantly less interpenetration.

As for continuous collision detection/sub-stepping, this is a very common feature to prevent fast-moving objects from clipping into or tunneling through other objects. However, resting/continuous contact cannot be handled by stepping to the next intersection time, and so has to be handled differently. Also, multiple simultaneous or near-simultaneous collisions can grind things to a halt in degenerate cases (such as multiple stacked objects that are almost, but not quite, in resting contact). This is why physics engines that support continuous collision usually let you set a maximum number of sub-steps.


Yeah, I have. I’ve implemented all of the primitives I could find and then created novel algorithms for primitives I could not find algorithms for. You can find my work here, although I don’t recommend using the library itself: https://github.com/maplant/mgf

Continuous collision detection is great if you only need spheres, capsules, or some combination for moving objects and all of your static objects are triangles. EPA algorithms are far less accurate than exact algorithms; plus, the continuous algorithms are faster and give you the contact normal for free (i.e., the problem of determining which direction the objects need to be pushed is trivial, as the objects aren't overlapping to begin with).

If you only have one moving object that interacts with static objects only, it is possible to prevent overlap as you described.

However, if you have a complex system, you can’t just stop an object before it overlaps. The best thing to do is to move the objects the remaining part of the time step, converting the contact information from the TOI collision to one where the contact points have traveled and the geometries are overlapping.

So really, I would say the primary benefit of continuous collision detection routines is supplemental to a good collision/contact generation routine; they don't really affect the physics simulation in very many meaningful ways, unless your engine uses them in extremely specific situations.


>In pathological cases, this resulted in a frame taking nearly a second

I wonder if anyone uses a hybrid approach: CCD with a constrained time budget, meaning you use CCD until the budget is exhausted, after which the simulation jumps to the next frame time.
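The hybrid could look something like this sketch (the `advance_to_next_toi` and `discrete_step` hooks are hypothetical stand-ins for an engine's sub-stepping and plain stepping routines): run TOI sub-steps until either the frame time or a wall-clock budget runs out, then finish the rest of the frame with one discrete step.

```rust
use std::time::{Duration, Instant};

/// Sketch of a budgeted CCD loop. `advance_to_next_toi` consumes part of the
/// remaining frame time and returns what is left; `discrete_step` is the
/// fallback non-continuous step used once the budget is exhausted.
fn step_with_budget(
    mut remaining: f64,
    budget: Duration,
    mut advance_to_next_toi: impl FnMut(f64) -> f64,
    mut discrete_step: impl FnMut(f64),
) {
    let start = Instant::now();
    while remaining > 0.0 && start.elapsed() < budget {
        remaining = advance_to_next_toi(remaining);
    }
    if remaining > 0.0 {
        // Budget exhausted: give up on CCD for the rest of the frame.
        discrete_step(remaining);
    }
}
```

This trades the pathological near-one-second frames for a bounded cost, at the price of possibly missing impacts in the tail of a busy frame.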


Do you have any links to a description of the Carma 2 physics engine?


Didn't look at rust too much, but I love this already. Will try it out.

Body deformation is mentioned as a feature. Is this equivalent to what other renderers call soft body physics, as opposed to rigid body physics? And can I treat such an object as a rigid object again if I wanted to, and can I have compound objects with soft and rigid parts? I had some problems with that in some renderers.

Here is an example of what I would basically understand under that term:

https://www.babylonjs-playground.com/#480ZBN#2

This is babylon.js with ammo.js (I think it is ammo.js for the physics part).


I think so.

Check out the demos in your browser: https://nphysics.org/all_examples3/

Also check out the fluid simulation engine that is a sister project of nphysics: https://salva.rs


It implements a finite element method solver for continuous solids. This is done by filling the object with tetrahedrons and applying deformation constraints. Varying the constraint strength throughout the object lets you adjust the stiffness. (I imagine nphysics supports this already.)
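A toy stand-in for "varying the constraint strength throughout the object" (not nphysics' actual FEM API, just the idea): a 1D chain of nodes joined by springs whose per-segment stiffness plays the role of the per-tetrahedron deformation constraints.

```rust
/// Hooke's-law forces on a 1D chain of nodes at positions `x`, with one
/// spring per segment. Stiffer segments resist stretch more, so varying
/// `stiffness` along the chain makes parts of the "body" softer or harder.
fn spring_forces(x: &[f64], stiffness: &[f64], rest: f64) -> Vec<f64> {
    let mut f = vec![0.0; x.len()];
    for i in 0..x.len().saturating_sub(1) {
        let stretch = (x[i + 1] - x[i]) - rest; // positive when elongated
        let force = stiffness[i] * stretch;
        f[i] += force;     // pulled toward the neighbour when stretched
        f[i + 1] -= force;
    }
    f
}
```

Pushing every stiffness toward infinity recovers (numerically badly-behaved) rigid behaviour, which is the sense in which a soft body can approximate a rigid one.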


This looks great. I've used most of the popular JS physics libraries (matter, p2, cannon, ammo, box2dJS, etc) and while they're all good they all have a few JS-related drawbacks (mainly around garbage collection killing performance occasionally.) Having something WASM based could address some of that.


I remember releasing a game written in Haxe for the original iPad. It ran quite slowly.

I had to hack up the Box2D port to pool short lived objects that would normally exist on the stack.

Which was kind of frustrating, as it was all compiled back down to C++ again anyway.


Bullet Physics also has a WASM version.


Very cool. I was just reading about constraint solvers and position-based, velocity-/impulse-based, and force-based dynamics over the last few days, and wondered how hard it would be to do a physics engine from scratch.

As far as I can tell, this is single threaded CPU only, right? I wonder if it wouldn't be more appealing to do it parallel on the GPU instead when starting a new physics engine today, especially given how hard it is to change that later on.


There was an article on HN not too long ago whose title was something like "When in parallel, pull, not push". It was about calculating sand pile fractals and how a pulling strategy removes the need for locking. I guess the same principle applies to a physics engine: instead of telling objects that they are being pushed, let objects themselves ask whether they are pushed, pulling the information read-only from their environment. You would need really lightweight processes for situations with many bodies, though.
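The pull pattern needs no locks because each worker writes only its own output slice and reads only an immutable snapshot of the previous state. A minimal sketch with scoped threads and a 1D smoothing rule standing in for force accumulation:

```rust
/// Double-buffered "pull" update: every output element is computed purely
/// from the read-only `prev` snapshot, so the two workers never race.
fn pull_step(prev: &[f64]) -> Vec<f64> {
    let n = prev.len();
    let mut next = vec![0.0; n];
    let mid = n / 2;
    let (lo, hi) = next.split_at_mut(mid); // disjoint output slices
    let work = |start: usize, out: &mut [f64]| {
        for (k, slot) in out.iter_mut().enumerate() {
            let i = start + k;
            let l = if i == 0 { prev[i] } else { prev[i - 1] };
            let r = if i + 1 == n { prev[i] } else { prev[i + 1] };
            *slot = (l + prev[i] + r) / 3.0; // pull from snapshot only
        }
    };
    std::thread::scope(|s| {
        s.spawn(|| work(0, lo));
        s.spawn(|| work(mid, hi));
    });
    next
}
```

The cost is a second buffer and a full pass per step; the benefit is that adding threads needs no synchronization beyond the join at the end of the scope.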


It would be tricky: constraint solving is a single-threaded algorithm, so the best option might be to run the broad-phase, and then break the scene down into islands of disconnected bodies for parallel constraint solving.

The problems are that: 1) There may not be enough islands to make full use of the available threads. 2) For many frames there might be a single island which is particularly expensive to solve, so the benefit from parallelisation is reduced.

The broad-phase itself might benefit significantly from parallelisation though.


While island based parallelism is doable and works reasonably well for many workloads, it's possible to dynamically batch constraints based on their involved bodies such that no constraints in a given batch share the same bodies. All constraints within each batch can be solved in parallel.

Of course, pathological cases can generate a ton of batches. Usually such implementations will use a final 'catchall' batch of some kind after N normal batches. That final batch can be handled sequentially or with a fully parallel jacobi-ish solver (which typically has worse convergence than PGS/SI, but it's usually not a problem in context).
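The batching described above can be sketched as a greedy assignment (illustrative, not any specific engine's implementation): place each constraint into the first batch where neither of its bodies already appears, and spill into a catch-all once the batch limit is reached.

```rust
/// Greedy constraint batching: within a batch, no two constraints share a
/// body, so all constraints in a batch can be solved in parallel. Constraints
/// that don't fit in `max_batches` batches go to a sequential catch-all.
fn batch_constraints(
    constraints: &[(usize, usize)],
    n_bodies: usize,
    max_batches: usize,
) -> (Vec<Vec<(usize, usize)>>, Vec<(usize, usize)>) {
    let mut batches: Vec<Vec<(usize, usize)>> = Vec::new();
    let mut used: Vec<Vec<bool>> = Vec::new(); // used[batch][body]
    let mut catchall = Vec::new();
    'outer: for &c in constraints {
        for b in 0..batches.len() {
            if !used[b][c.0] && !used[b][c.1] {
                used[b][c.0] = true;
                used[b][c.1] = true;
                batches[b].push(c);
                continue 'outer;
            }
        }
        if batches.len() < max_batches {
            let mut u = vec![false; n_bodies];
            u[c.0] = true;
            u[c.1] = true;
            used.push(u);
            batches.push(vec![c]);
        } else {
            catchall.push(c); // handled sequentially or with a jacobi pass
        }
    }
    (batches, catchall)
}
```

A long chain of constraints through one body (the pathological case above) forces one batch per constraint, which is exactly when the catch-all earns its keep.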


> wondered how hard it would be to do a physics engine from scratch

It's extremely easy to get started with the basics, and then there's a long ramp of increasingly difficult features you can implement. So if you're doing it for fun/learning, it's a great project.

One of my first programming projects when I started learning Java in high school was a 2D physics system. Just took the equations from my physics class and implemented them in code and bam, you have particle physics. And then I figured out how to make those particles into circles with collision and added infinite lines for them to bounce off of, without really knowing what I was doing.
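That first step really is just the textbook equations. A sketch (in Rust rather than Java, and purely illustrative): semi-implicit Euler integration of a particle under gravity, with an infinite floor to bounce off.

```rust
#[derive(Clone, Copy)]
struct Particle { x: f64, y: f64, vx: f64, vy: f64 }

/// One integration step: update velocity from gravity first, then position
/// (semi-implicit Euler, which is more stable than the naive ordering).
fn integrate(p: &mut Particle, dt: f64) {
    const G: f64 = -9.81; // gravitational acceleration, m/s^2
    p.vy += G * dt;
    p.x += p.vx * dt;
    p.y += p.vy * dt;
    if p.y < 0.0 {
        // Bounce off an infinite floor at y = 0 with restitution 0.8.
        p.y = 0.0;
        p.vy = -p.vy * 0.8;
    }
}
```

Everything harder (circle-circle collision, contact resolution, constraints) layers on top of this loop, which is what makes it such a gradual learning project.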

Then in an early CS class in college my team for a project did a similar engine but with arbitrary equilateral polygons for colliders. We had someone who already knew the math for doing those intersection checks, so it wasn't too hard.

But yeah: things obviously get hard at a certain point, but it's not one of those projects where you don't see any satisfying results until you've mastered it.


Having both written physics engines and worked with GPUs extensively (including GPU physics simulation), I would be really hesitant to go GPU-first unless there was a very specific use case in mind.

There's no doubt that GPUs have the flops/bandwidth advantage, and even all the overheads of adapting a physics engine to a GPU won't completely eliminate those advantages. If a company needed an absolute-maximum-performance backend simulation on computers they built to spec, it could definitely make sense.

But when you move into the realm of game client simulation, it's harder to justify the use of GPUs. Very, very few gamers have multiple GPUs, and the one GPU they do have is often already quite busy.

Ignoring all the nightmarish issues around driver bugs and vendor compatibility, it comes down to a question of comparative advantage. GPUs are extremely good at graphics. They can also do a pretty darn good job at physics. Or sound path tracing. Or machine learning inference. Or any number of other high-throughput workloads. There's a limited budget, and there's no reason to leave a beefy CPU idle. What task allocation gives you the most bang for the buck? Would your average gamer be okay with worse graphics, just because the GPU is needed to perform physics simulation in order to play the game? The answer will sometimes be yes, but it's something to be careful about.

Plus, especially with the latest surge in CPU performance and competition courtesy of AMD, CPUs can do a pretty darn good job of running physics simulations per dollar.


A silly question: how do I build the examples? I haven't found a build tutorial in the project.

As a Rust newbie, I guess I should run "cargo build" in the example folders. But both output the error

error[E0463]: can't find crate for `std`

What am I missing here?


Stay in the main folder and run: cargo run --example foo


It looks like the "install.sh" script must be executed by an admin to install the tools in some system folders. Setting the bin folders in PATH doesn't work, and the error message is totally useless at explaining why.


Anyone know if there is a Rust equivalent to Intel's Embree[0] library (high performance ray tracing kernels)?

[0] https://www.embree.org/


I think your best bet will be to use a wrapper around it: https://docs.rs/embree-rs


Gosh, it seems every day I see a new shiny thing made with Rust that was developed remarkably quickly, runs quite smoothly, and appears to be magically bug-free.

I'm sooo tempted to dive into Rust head-first. I've only dabbled a bit with it so far. The main thing holding me back at this point is that Rust seems to lack a decent ecosystem for data science and AI/DL/ML (in particular, a set of crates that has proven useful and stable in real-world AI/DL/ML applications).


If you're interested, there's an unofficial working group that's been organized to start doing development in machine learning for Rust at https://github.com/rust-ml . We actually have a meeting scheduled for later today--you can find details on the Zulip chat, even if you just want to pop in to see what people are up to. As others have mentioned, everything's pretty early-stage, but there's a lot of experimentation going on around scientific computing, GPU acceleration, etc. that you might find interesting, and we're definitely open towards community participation at all levels, even if you're a relative newcomer to the language.


http://www.arewelearningyet.com is a good overview. And if there's something missing, trying to create it yourself could be a way to learn the language, even if the result doesn't actually do everything you need.


Thank you for sharing that link. The tagline on that page summarizes my views quite well: "...but the ecosystem isn't very complete yet." It's nice to see that others are thinking the same. I'll take a look.


I truly wonder, what is it about this community that leads to such quality software?

Your reaction to arewelearningyet is similar to how I feel about:

https://areweguiyet.com/

However, at least for GUI I know that Raph Levien is really pushing a focus on GUI at the moment. https://raphlinus.github.io/rust/druid/2019/10/31/rust-2020....

I wish I could contribute more. You really summed up how I feel about Rust: "It seems every day I see a new shiny thing made with Rust that was developed remarkably quickly, runs quite smoothly, and appears to be magically bug-free."

Even the maligned actix web framework in Rust is still leading performance benchmarks: https://www.techempower.com/benchmarks/#section=data-r19&hw=...


Same! I am looking to learn a low-level language, and want to learn Rust because of all the things I have been hearing. However, all the backend stuff for deep learning (in Python) is written in C++, and there's nothing yet for Rust. I would love to avoid C++, as the language seems to have a lot of quirks and bugs that take time to get used to (from what I hear), and these days I like efficient languages.

As far as I know, there aren't any good computer vision libraries for Rust?


Clearly, we are not the only ones feeling this way. I suspect that if PyTorch or TensorFlow were to support Rust as a first-class citizen, they would get a whole new community of users very quickly -- people looking to write production AI/DL/ML code in a fast, practical, memory-safe, widely used language.

The only other "fast, practical, memory-safe, widely used language" that looks like a real alternative to Rust is Swift. But unlike Swift, Rust was designed from the ground up for data parallelism and other forms of concurrency, with "zero-cost abstractions" that allow "fearless concurrency" (e.g., take a look at https://github.com/nikomatsakis/rayon/ to get an idea of what's possible), and for predictable performance (e.g., compiled code doesn't suddenly slow down to collect garbage or synchronize reference counts among CPUs). Concurrency appears to have been an afterthought in Swift's case, explicitly: https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9... . See also https://doc.rust-lang.org/book/ch16-00-concurrency.html and https://blog.spencerkohan.com/impressions-of-rust-as-a-swift...

Data parallelism and other forms of concurrency should become more and more important over time as models grow larger and larger, well beyond GPT-3's 175 billion parameters. This would favor Rust.


What kind of libraries are you looking for? I know there is ongoing work on things similar to numpy/pandas, but that kinda stuff is more complicated in statically typed languages.

That being said, you should definitely give Rust a try. I find it to be an absolute joy to use


Yes, it would be great to know there are mature facilities for manipulating dataframes and arrays, along with mature linear/tensor algebra facilities, ideally with support for offloading computation to GPUs and TPUs. The equivalent of something like scikit-learn, even if it falls short of its breadth and depth. Also, facilities for automatic differentiation, or alternatively, stable and well-supported integration with frameworks like TensorFlow and/or PyTorch (tch looks like a possible option).

PS. The link shared by dabreegster has a fairly detailed list of missing facilities: https://news.ycombinator.com/item?id=23615175


Julia is quite strong in these areas and there is this as a strategy -

https://github.com/Taaitaaiger/jlrs



