Back in the GeForce 3 era we had "nonlinear polygon edges" from nVidia -- which joined "3DNow(here)!" in the pit of forgotten code because it was too damn hard to code.

Originally posted by concorddawned
For times like these and the technology we have at hand, this really shouldn't be the case. I wish that hardware companies would put their asses into gear and make hardware that can smoothly display high-polycount characters.
Why do we have to suffer with low-quality images...
Sounds good... and sorry to bother everyone, but I have one more question that I asked in another area. I've heard that render times for NURBS and Sub Divs in animations are crazy long (especially at 30 frames per second). Even with render layers, how long would you estimate the render time to be for, say, a 30-second animation (about the length of a cut scene)?
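As a rough back-of-the-envelope check on that question: a 30-second cut scene at 30 fps is 900 frames, and total render time is just frames times per-frame time. A minimal sketch in Python, with a made-up per-frame figure you would have to measure on your own scene and machine:

    # Rough estimate only; SEC_PER_FRAME is a placeholder you must measure yourself.
    DURATION_SEC = 30        # length of the cut scene
    FPS = 30                 # playback rate
    SEC_PER_FRAME = 120      # hypothetical: 2 minutes to render one frame

    frames = DURATION_SEC * FPS                    # 900 frames total
    total_hours = frames * SEC_PER_FRAME / 3600.0  # convert seconds to hours
    print("%d frames, roughly %.1f hours of render time" % (frames, total_hours))

With that made-up 2-minutes-per-frame figure it comes out to about 30 hours, which is why the per-frame number is the thing to pin down first.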
While it is true that you can work in whatever you like as long as the end result is polygons, I can't help but recommend just building it straight from polys to begin with. The reason I say that is because in the game industry you are always given a polygon limit, and if you are converting from NURBS or SubDs or whatever, the resulting polygon count is pseudo-random and definitely not the most efficient, which then means a lot of time cleaning up the geometry (a quick way to check the resulting count is sketched after this post).

Originally posted by phornby
For gaming art, since I can work in NURBS and then convert to polygons at the end, will I lose any details if I fix it up well in Sub Divs and then convert to polygons? May be a stupid question.
, Pat
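To make the polygon-limit point above concrete: after a NURBS or SubD conversion you can compare the triangle count of the result against your budget before spending time on cleanup. A minimal sketch, assuming you are working in Maya (maya.cmds, run from the script editor) with the converted mesh selected; the 5,000-triangle budget is just a placeholder, use whatever limit your project actually gives you:

    import maya.cmds as cmds

    TRI_BUDGET = 5000  # hypothetical per-character limit

    sel = cmds.ls(selection=True, long=True)
    if not sel:
        cmds.warning("Select the converted polygon mesh first.")
    else:
        # Count triangles on the first selected mesh and compare to the budget.
        tris = cmds.polyEvaluate(sel[0], triangle=True)
        print("%s: %d triangles (budget %d)" % (sel[0], tris, TRI_BUDGET))
        if tris > TRI_BUDGET:
            print("Over budget by %d triangles; cleanup or retopo needed." % (tris - TRI_BUDGET))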