Pragmatically, I think the real issue is editor tooling. Polygons are much simpler to subdivide and edit, and are far better suited to real-time interaction, especially for high-complexity scenes. At render time, though, you need a bunch of extra tricks to get the right smoothing - bump/texture mapping, increased subdivision, etc. But some of these other models might actually decrease scene complexity and increase quality if you could convert the model at render time.
> At render time, though, you need to use a bunch of extra tricks to get the right smoothing
Nurbs or any smooth geometry needs to be subdivided too. You can set subdivision levels, maximum polygon size, smoothness constraints, subdivision based on projected pixel size from the camera, or any combination of these. In practice this is not a problem for polygons or nurbs.
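The projected-pixel-size criterion can be sketched with a toy metric (all parameters here are hypothetical; a pinhole camera is assumed and the edge is treated as facing the camera):

```python
import math

def projected_edge_pixels(edge_world_len, dist_to_camera, fov_y_deg, image_height):
    """Approximate on-screen length (pixels) of an edge at a given camera
    distance, assuming a pinhole projection and a camera-facing edge."""
    pixels_per_unit = image_height / (
        2.0 * dist_to_camera * math.tan(math.radians(fov_y_deg) / 2.0))
    return edge_world_len * pixels_per_unit

def subdivision_levels(edge_world_len, dist_to_camera, fov_y_deg=45.0,
                       image_height=2160, max_pixels_per_edge=1.0):
    """How many times to halve the edge so each piece covers at most
    max_pixels_per_edge on screen (each level halves the edge length)."""
    px = projected_edge_pixels(edge_world_len, dist_to_camera,
                               fov_y_deg, image_height)
    if px <= max_pixels_per_edge:
        return 0
    return math.ceil(math.log2(px / max_pixels_per_edge))
```

A 1-unit edge 10 units from a 4K camera needs several levels; push it far away and it needs none. The same screen-space test works whether the patch underneath is a nurbs surface or a polygon.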
> - bump/texture mapping,
This is orthogonal to the geometry type, with the exception that UV coordinates are far easier to deal with on polygons.
> increased subdivision,
There isn't any increased subdivision; both geometry types need to be subdivided. Blue Sky's renderer raytraced nurbs directly, but this isn't generally as good as just tracing subdivided polygons.
Polygonal geometry, even subdivided, is typically not a big part of memory or time in rendering in all but the most pathological cases. Even at 4K, one polygon per pixel works out to about 8 million polygons, which typically pales in comparison to texture data.
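As a back-of-envelope version of that comparison (every size below is an illustrative assumption, not a measurement from any renderer):

```python
# Geometry: one triangle per 4K pixel, stored as an indexed mesh.
pixels_4k = 3840 * 2160                      # ~8.3 M pixels
tris = pixels_4k                             # one triangle per pixel on screen
verts = tris // 2                            # shared mesh: ~0.5 vertices/triangle
bytes_per_vert = 32                          # position + normal + UV, packed
bytes_per_tri_index = 3 * 4                  # 3 uint32 indices per triangle
geo_gb = (verts * bytes_per_vert + tris * bytes_per_tri_index) / 2**30

# Textures: a modest scene's worth of assets, each with a few 4K maps.
assets, maps_per_asset = 50, 4               # assumed scene, not real data
bytes_per_texel = 4                          # 8-bit RGBA, ignoring mip overhead
tex_gb = assets * maps_per_asset * pixels_4k * bytes_per_texel / 2**30

print(f"geometry ~{geo_gb:.2f} GB, textures ~{tex_gb:.2f} GB")
```

With those assumed numbers the visible geometry sits in the hundreds of megabytes while scene-wide textures run to gigabytes; the replies below argue about exactly when this stops holding.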
> Nurbs or any smooth geometry needs to be subdivided too
Not true; lots of smooth surfaces, including NURBS, can be and are ray traced without subdividing.
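One standard way to do this is Newton iteration on the combined ray/surface system. Here is a minimal sketch using a sphere as a stand-in parametric surface; a real NURBS evaluator would supply the point and partial derivatives S, Su, Sv the same way (everything else here is illustrative):

```python
import math

def sphere(u, v, r=1.0):
    """A parametric surface and its partials; stands in for a NURBS evaluator."""
    su, cu, sv, cv = math.sin(u), math.cos(u), math.sin(v), math.cos(v)
    S  = (r*cu*sv, r*su*sv, r*cv)
    Su = (-r*su*sv, r*cu*sv, 0.0)
    Sv = (r*cu*cv, r*su*cv, -r*sv)
    return S, Su, Sv

def solve3(a, b):
    """Solve a 3x3 linear system by Cramer's rule (a: rows, b: rhs)."""
    def det(m):
        return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
              - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
              + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    d = det(a)
    out = []
    for i in range(3):
        m = [row[:] for row in a]
        for r in range(3):
            m[r][i] = b[r]
        out.append(det(m) / d)
    return out

def trace_parametric(origin, direction, surf=sphere, guess=(1.0, 0.1, 1.4),
                     iters=20, tol=1e-10):
    """Newton-iterate F(t,u,v) = O + t*D - S(u,v) = 0: no tessellation at all."""
    t, u, v = guess
    for _ in range(iters):
        S, Su, Sv = surf(u, v)
        F = [origin[i] + t*direction[i] - S[i] for i in range(3)]
        if sum(f*f for f in F) < tol*tol:
            break
        # Jacobian columns: dF/dt = D, dF/du = -Su, dF/dv = -Sv
        J = [[direction[i], -Su[i], -Sv[i]] for i in range(3)]
        dt, du, dv = solve3(J, [-f for f in F])
        t, u, v = t + dt, u + du, v + dv
    return t, u, v
```

A ray from (3,0,0) along (-1,0,0) converges to the front hit at t = 2 in a handful of iterations. Production tracers add robust initial guesses and patch hierarchies on top of this idea, which is part of why it's harder to make fast than tracing tessellated triangles.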
> Polygonal geometry, even subdivided, is typically not a big part of memory or time in rendering in all but the most pathological cases.
I don’t buy this either, speaking from experience using multiple commercial renderers. It is true that textures are larger, but not true that polygonal geometry is not a big part of memory consumption. RenderMan, for example, does adaptive tessellation of displacement-mapped surfaces because it would run out of memory with uniform tessellation.
The balance of geometry vs texture usages is also changing right now with GPU ray tracers, and geometry is taking up a larger portion because it has to be resident for intersection, while textures can be paged.
> I don’t buy this either, speaking from experience using multiple commercial renderers.
It of course depends on exactly what is being rendered, but typically texture maps of assets for high quality cg are done at roughly the expected resolution of the final renders (rounded to a square power of 2). Typical assets will have three or four maps applied to each group of geometry, with higher quality hero assets having more groups.
> RenderMan, for example, does adaptive tessellation of displacement mapped surfaces because they will run out of memory with a uniform displacement.
It is specifically screen-space adaptive dicing, and it has been effective, but it was originally crucial in the days when 8MB of memory cost the same as someone's yearly salary. In PRMan, polygons are actually even less of a memory burden because of this, with micropolygon and texture caches keeping things efficient, even with raytracing.
The real point here, though, is that nurbs don't really have much of an advantage, even in memory, because polygons are already lightweight and can be smoothed. Subdividing polygons is typically not going to be too different from subdividing nurbs, and heavy polygonal meshes are likely to be extremely difficult to replicate with nurbs.
Don't get too caught up in exactly what is technically possible; this is about why nurbs are not an ideal form of geometry and why no one is trying to go back to them. Their disadvantages outweigh their advantages by a huge margin.
> Polygonal geometry, even subdivided, is typically not a big part of memory or time in rendering in all but the most pathological cases. 4k would still mean that one polygon per pixel would be 8 million polygons, which is going to pale in comparison to texture data typically.
Erm, at VFX level at least, that's not really true: once you have hero geometry that needs displacement (not just bump/normal mapping), you effectively have to dice down to micropoly level for everything in the camera frustum. And with path tracing (what everyone's using these days, at least in VFX), geometry caching/paging is too expensive to be used in practice with incoherent rays bouncing everywhere. Disney's Hyperion renderer does do that, but it spends a considerable amount of time sorting ray batches, and it was built to do exactly that.
Image textures, on the other hand, can be paged fairly well, and generally in shade-on-hit pathtracers (all of the commercial ones), this works reasonably well with a fairly limited texture cache size (~8–16 GB). Mipmapped textures are used, so for most non-camera rays that haven't hit tight specular BSDFs, not much texture data is actually needed.
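The footprint-to-mip relationship can be sketched like this (a toy model; real renderers derive the footprint from ray differentials, but the memory effect is the same):

```python
import math

def mip_level(tex_res, footprint_uv):
    """Pick the mip level whose texel size matches the ray footprint.
    footprint_uv: footprint width as a fraction of the full [0,1] UV range."""
    texels_covered = footprint_uv * tex_res
    if texels_covered <= 1.0:
        return 0                      # footprint smaller than a base-level texel
    return min(int(math.log2(tex_res)),
               int(round(math.log2(texels_covered))))

def resident_bytes(tex_res, level, bytes_per_texel=4):
    """Memory needed to hold just that one level of the mip chain."""
    side = tex_res >> level
    return side * side * bytes_per_texel
```

For an assumed 4096² map, a blurry diffuse-bounce ray whose footprint spans 1/32 of the UV range only needs mip level 7: a 32×32 tile of a few kilobytes instead of the 64MB base level. That is why a modest texture cache goes a long way for incoherent rays.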
Once things like hair/fur curves come into the picture, generally geometry takes up even more memory.
The original thread was about nurbs not being used anymore in favor of polygons. Displacement doesn't change the equation between polygons and nurbs, of course, because they both go in as high-level primitives. Hair is its own thing. I know you know this, but I think a lot of people missed the original point.
You keep saying “polygons”. Are you talking about subdivision surfaces? Films don’t model in “polygons”. Subdivision surfaces are a curved surface representation, not “polygons”. Some people still use NURBS too.
Subdivision surfaces end up as curved surfaces when rendered, i.e. an approximation of the limit surface, but the modellers most definitely do model them as polygons in the DCC apps.
Some of the studios still don't even bother with crease weights and still "double-stop" ends with extra vertices/faces to create hard edges.
My original point was that it is possible to render subdivision surfaces without dicing down to micropolygons (i.e. you approximate the limit surface with Gregory patches or something), but only if you don't have displacement: as soon as you need displacement, you pretty much need to dice down to micropolygons, and in that scenario the geometry representation can be extremely expensive in memory with large scenes.
> but the modellers most definitely do model them as polygons in the DCC apps.
Yes, right, I know. I phrased that poorly, so I guess I should give BubRoss a break. The point I’m trying to make is that starting with the idea of polygon modeling and starting with the idea of subdiv modeling are two different things. If we’re talking about subdiv modeling, then it should be called subdiv modeling. Modeling “polygons” doesn’t just automatically produce decent-looking smooth models and good connectivity and UVs; you have to use subdiv tools while you work.
That you can render subdivs without subdividing is related to what I was trying to say, that these surfaces are higher order, have an analytic definition, etc... they’re not just polygons. I guess it’s a good thing that subdivs are so easy to work with that they’re equated with polygons.
I can promise you they do. They are treated as the same thing. Everyone uses polygons knowing they will be smoothed/subdivided/declared as subdivision surfaces. Sharp edges, cusps and bevels are typically made by creating more subdivisions in the actual model instead of using extra subdiv data on the geometry, though Pixar might be the exception.
> Some people still use NURBS too.
I think this is very rare. Maybe Blue Sky never transitioned away.
Yeah, so you’re talking about subdivs, not just any polygons. Yes you create subdiv geometry using polygon modeling tools, but modeling pure polygon models, e.g., for games, is a different activity. Subdivs are easier than NURBS, it’s true, but they do come with their own whole set of connectivity, workflow, texturing, pipeline, etc. Just saying “polygons” is misleading.
> modeling pure polygon models, e.g., for games, is a different activity
It really isn't. I think you want to drive home some distinction, but the vast majority of workflows model straight polygons, and the only difference is that they know they are going to be subdivided and smoothed later by the renderer.
You say that there are all sorts of different issues with subdiv surfaces, but it just isn't true. Modelers and texture artists might look at everything smoothed to make sure there aren't any surprises in the interpolation and distortion in the UVs, but everyone deals with the raw polygons.
Yes, exactly, knowing they’ll be smoothed is a big difference; it leads to different choices. Looking at a smoothed surface during the modeling process is an even bigger difference than looking at a mesh all the way through. Knowing how they’ll be smoothed is important. What about creases? What if you want a creased edge smoothed and it’s part of two separate mesh groups?
You can subdivide polygons without smoothing them, and people still do polygonal modeling without planning for subdivision, and produce models that aren’t intended for smoothing and wouldn’t smooth nicely, so it is important to be clear in your language that you’re talking about a subdivision surface and not just polygons. Why the resistance to just saying subdivision surface, since that’s really what you’re talking about? I agree with a lot of your points if I replace “polygons” with “subdivision surface”.
My issue with what you said far above is the claim that smoothed surface representations require extra tooling, and you claimed that “polygons” don’t have these issues. The problem with that is that a subdivision surface is a curved surface representation, and it does come with extra tooling. Just because it’s easier than NURBS, and just because you get to use a lot of polygon tools, that does not mean a subdiv workflow is the same thing as a polygon workflow. Hey it’s great if the tools are getting so good that people confuse polygons with subdivs. Nonetheless, a pure polygon workflow can mean things that aren’t compatible with smoothing or subdivs.
Creases are done by just making polygons/bevels/line loops close to the edge that needs to be sharpened.
> What if you want a creased edge smoothed and it’s part of two separate mesh groups?
Mesh groups don't have to mean their polygons don't share vertices and this is one of the reasons why - you need to be able to interpolate the attributes of the vertices, like normals.
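The support-loop trick for creases can be illustrated in 2D with Chaikin corner-cutting, a simple subdivision scheme (purely illustrative, not any production subdivision rule): extra vertices placed near a corner keep the smoothed result much closer to the sharp corner than the bare mesh does.

```python
def chaikin(pts, rounds=3):
    """Chaikin corner-cutting: each round replaces every edge (P,Q) with
    points at 1/4 and 3/4 along it; endpoints of the open polyline are kept."""
    for _ in range(rounds):
        new = [pts[0]]
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            new.append((0.75*x0 + 0.25*x1, 0.75*y0 + 0.25*y1))
            new.append((0.25*x0 + 0.75*x1, 0.25*y0 + 0.75*y1))
        new.append(pts[-1])
        pts = new
    return pts

def dist_to_corner(pts, corner=(1.0, 0.0)):
    """Closest approach of the smoothed polyline to the original sharp corner."""
    return min(((x - corner[0])**2 + (y - corner[1])**2) ** 0.5 for x, y in pts)

# A bare right-angle corner vs the same corner with two support vertices
# ("line loops") placed just before and after it.
plain   = chaikin([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)])
support = chaikin([(0.0, 0.0), (0.95, 0.0), (1.0, 0.0), (1.0, 0.05), (1.0, 1.0)])
print(dist_to_corner(plain), dist_to_corner(support))
```

The bare corner gets rounded off by a large margin, while the supported corner stays tight, without any crease attributes on the geometry; this is the 2D analogue of adding edge loops near a hard edge.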
> You can subdivide polygons without smoothing them, and people still do polygonal modeling without planning for subdivision, and produce models that aren’t intended for smoothing and wouldn’t smooth nicely, so it is important to be clear in your language that you’re talking about a subdivision surface and not just polygons.
I'm not concerned with what random people do. Professionals just say polygons in general, and the workflow is all about working with the polygonal mesh directly. If two people are both making polygonal models that will be saved as .obj files, but one will be smoothed at render time, they don't say they are working with different types of geometry. Technically there are actually many different ways to smooth polygons.
> My issue with what you said far above is the claim that smoothed surface representations require extra tooling,
No, I explained that nurbs require extra/different tools. Technically if someone (like pixar) were to use extra attributes like edge crease amounts on subdiv surfaces, some tools would need to address that, but that's not on the same level as the difficulty of working with nurbs.
> Nonetheless, a pure polygon workflow can mean things that aren’t compatible with smoothing or subdivs.
I think when you say things like this you are trying to salvage your confusion, but it really doesn't shake down like this. Any mesh can be smoothed, and unless you have messed-up meshes it works well and no one bats an eye.
To recap: nurbs are a nightmare to work with, everyone works with regular polygons that could be saved as an .obj, and they get smoothed at render time. Everyone calls them polygons because that is what they are working with, and it has been this way literally for decades. Try not to worry too much about it.
Really just pointing out that a subdivision surface is literally another curved surface representation, does have some conceptual differences from polygons, and can be ray traced without subdivision. Calling it polygons was confusing to me, but now that I understand what you mean, call it polygons if you want.
3Delight also traces SDS and nurbs analytically. In offline renderers, geometry data constitutes most of the memory usage; texture RAM usage is kept under a few GB by use of a page-based cache.
Multiple gigabytes of geometry is a lot. That can work out to dozens or even hundreds of polygons per pixel. Even so, the person I was replying to seemed to wonder why everything is converted to polygons, which comes down to a more holistic pragmatism.
Agree in general about geometry memory though: in high-end VFX, displacement is pretty much always used, so you have to dice down to micropoly for hero assets (unless they're far away or out of frustum).