What is the Place of Digital Artists in the Future of Media?

BlindTelepath

I've been reading about the technological details of engines like UE3.0 (as far as I can tell, obscene levels of detail and quality) and about games like Will Wright's Spore (dynamic animation), and I've been wondering where the artist fits into production in schemes like these. For UE3, it seems like it'd take an artist an exceedingly long time (esp. in comparison to anything contemporary), and for Spore, artists would appear to be practically outmoded. Speaking as someone interested in a career as a digital artist, specifically in games, I'm wondering how the role of the artist will change/is changing with these technologies. Preferably informed responses, as I suspect someone like TDE will have.
 
I find it's actually easier to produce high quality work these days with the new tools at hand. It's just a matter of experience and know-how. For example, a digital artist working with the U3 engine can create a professional quality map for a character, then let certain programmed elements enhance their work with very little effort: displacement mapping and such. The hard work of initial designs and concepts will still remain, along with a lot of hand-drawn detail in digital art, but as the technology improves, more advanced enhancing effects will come around and post-completion work will surely become more highly automated.
 
BlindTelepath said:
I've been reading about the technological details of engines like UE3.0 (as far as I can tell, obscene levels of detail and quality) and about games like Will Wright's Spore (dynamic animation), and I've been wondering where the artist fits into production in schemes like these. For UE3, it seems like it'd take an artist an exceedingly long time (esp. in comparison to anything contemporary), and for Spore, artists would appear to be practically outmoded. Speaking as someone interested in a career as a digital artist, specifically in games, I'm wondering how the role of the artist will change/is changing with these technologies. Preferably informed responses, as I suspect someone like TDE will have.

Hmm, well, apart from the changeover to the new methods, using tools that are still in their infancy, and generally getting used to it all, the actual process is insanely simple compared to before, especially taking into account the increased quality that's now possible.

Before now, if you wanted to make a game character, you'd have a really limited amount of polygons, and it would involve a lot of trial and error getting the textures to look just right on the model in the majority of locations, since you'd have to paint in everything, folds, shadows, and so on.. and often in a limited palette, with really low resolutions. Yes, back then you could take a high detail model and bake the surfaces down to the low poly version, but the realtime quality was so low it didn't warrant the extra work; you wouldn't really have been able to tell the difference back then between a high poly baked surface and a hand-painted surface.

These days you build your model and add as much detail as you want, being creative instead of limited. With things like ZBrush you can very easily, even on a very low end machine, play around with 4 million polygons or more with little slowdown, and just go for your life. You can be 100% creative now, instead of always keeping one eye on your limitations.

In fact, IMO the hardest part these days is unwrapping the low poly models.. The rest is a piece of cake; so long as the low poly UV is right, normal maps and everything else are just a few clicks away. Even the coding side of things is getting easier. UE3 is introducing tons of tools that almost do away with the coder.. not completely, obviously, but it does put a great deal more control into the hands of the artist, and with fewer limitations.. It's like being 4 again and waking up on Christmas morning to find a huge bag of toys to play with, where the only limit is your imagination.

Everything is definitely more enjoyable now. Concept artists can have more fun with their designs, and in many cases expect them to turn out almost exactly the same, instead of doing these beautiful designs only to have them turned into horrible blocky looking objects that may or may not have any similarity to the original design.

Yeah, the production cycle will take longer than before, and while Sprafa complained in PM when I said this last time, I still stand by it: it will weed out the time wasters and those not putting any real effort into games. This applies to mods and commercial games. There's less room to screw about, and you can't get away with as much as you could before. So people will have to put more work into something for it to have a chance, or fall by the wayside very early on. That'll benefit everyone: better games, better quality, better jobs, more money, more fun.

So things are much better for the artist now, and life gets easier for the coder. SoundFX guys have all these fancy hardware effects they can play with, and all these formats they can use, keeping perfect quality audio in tiny manageable files, when before you'd have to plump for MIDI files or CD audio just to get away with anything longer than a second. Mappers don't have the horrible limits they had before; now they can go crazy and start making those cool levels they've always tried to make, without losing half of it because they'd run out of space or couldn't put more than a few brushes or entities into the map.

'Tis a great time to be making games, especially if you're used to modeling high poly stuff ;););)

Dunno if that's any help.. I'm doing 27 different things at once here and missing Steptoe and Son :p lol
 
BlindTelepath said:
Looks as though I have my work cut out for me then. Thanks for the feedback! :)
hehe, it's not as hard as it looks.. it's fun :) and really, anyone who can't model high poly from here on in will struggle to get a job, period.
 
The Dark Elf said:
hehe, it's not as hard as it looks.. it's fun :) and really, anyone who can't model high poly from here on in will struggle to get a job, period.
That's both encouraging and discouraging in one post. Nice one :dozey: :)
 
BlindTelepath said:
That's both encouraging and discouraging in one post. Nice one :dozey: :)

Not really

OK, breaking the process down.. I'm sure you've modeled in subd mode, right? Well, a good percentage of the polygons in the high poly stuff are generated by subdivision; you yourself never touch those, you just move the control points and the surface interpolates between them, giving a smooth organic result.
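To put rough numbers on that, here's a minimal Python sketch (just an illustration, not from any real toolchain), assuming Catmull-Clark-style subdivision of an all-quad control cage, where each level splits every quad into four:

# Rough illustration: a modest hand-modeled control cage versus the polygons
# the subdivision generates for you. Assumes an all-quad cage under
# Catmull-Clark-style subdivision (each level splits a quad into four).

def subdivided_quads(cage_quads, levels):
    """Approximate quad count after `levels` of subdivision."""
    return cage_quads * 4 ** levels

cage = 2500  # quads you actually touch by hand (made-up figure)
for level in range(5):
    print(f"level {level}: ~{subdivided_quads(cage, level):,} quads")
# By level 4 that's already ~640,000 quads, all interpolated from the same
# cage you modeled by hand.

The point being: the millions of polygons are generated for you, while the part you actually model stays very manageable.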

Edges are always rounded when subdivided, so microbevel every sharp edge (in normal mode, so you can control its sharpness in subd mode; it's harder to make things super sharp in subd, which is good). Again this is par for the course with high poly stuff, since a perfectly sharp edge (example below) doesn't look as good as even a tiny 0.05mm microbevel does when it comes to how the light catches it. Microbeveling is also handy for sharpening the edges of subd surfaces.. yeah, you can use inbuilt tools for the job, but considering the model might have to pass through various applications, chances are those methods won't work in another app.. e.g. edge sharpness in XSI isn't noticed in other apps, and subdivision weights in Lightwave aren't recognised in other apps. But a polygon is a polygon in any app, so that works in all of them.

nobevel.jpg
No microbevel (sharp unrealistic edge)

microbevel.jpg
Microbevel (slightly rounded, picks up highlights nicely)

large_bevel.jpg
Larger bevel (more pronounced and again, more realistic than perfectly sharp edges)

The last two images are in subdivision mode; the first isn't.. if the first were subdivided it would look like this:

subdnormal.jpg

which isn't what you'd normally want :) Like I say, you can use special app-specific methods to sharpen edges in subd modes, but they don't transfer well, and you don't want to finish your model only to find all your sharpened edges are now completely round.. It's a bitch, a real bitch, to microbevel _after_ you've done the model.. always do it while you're modeling or you're in for a world of hurt.


The high poly modeling is the easy part (seriously). The nasty part is the UV mapping.. normal maps are not as forgiving as regular maps. Before, you could UV map something and for the most part it would look fine anyway, as long as you took care of stretching and stuff.. But normal maps, or in this case tangent-space normal maps (example below), work across the surface when they're calculated. So when you create your UV map, at every point where the UV is discontinuous, the normals will screw up.

tut2.jpg
Object-space

normalmap.jpg
Tangent-space

The annoying part with normal maps is that the kind that works perfectly regardless of how you UV mapped it, object-space (the rainbow coloured ones), only works on non-deformable objects.. and in those cases you generally don't have iffy shapes to UV map anyway. Tangent-space, meanwhile, needs the UVs perfect, and often on objects where perfect UVs are next to impossible to make. That really is the hardest part.
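If it helps make the tangent-space vs object-space difference concrete, here's a small Python/numpy sketch of roughly what an engine does with each texel. The vectors and values are made up for illustration and aren't taken from any particular engine:

import numpy as np

def decode(texel_rgb):
    """Map an RGB value in [0,1] back to a unit vector in [-1,1]."""
    n = np.asarray(texel_rgb, dtype=float) * 2.0 - 1.0
    return n / np.linalg.norm(n)

def shade_tangent_space(texel_rgb, tangent, bitangent, normal):
    """Tangent-space map: the stored vector is relative to the UV-derived
    frame at that point, so it has to be rotated by the TBN basis. A UV
    seam changes that basis, which is why discontinuous UVs show up as
    lighting seams."""
    tbn = np.column_stack([tangent, bitangent, normal])  # 3x3 basis
    return tbn @ decode(texel_rgb)

def shade_object_space(texel_rgb):
    """Object-space map: the stored vector already is the final normal
    (for a rigid object), no tangent frame needed, hence the forgiving UVs."""
    return decode(texel_rgb)

# A texel pointing straight out of the surface, with an identity tangent frame:
t, b, n = np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([0, 0, 1.0])
print(shade_tangent_space([0.5, 0.5, 1.0], t, b, n))  # ~[0, 0, 1]
print(shade_object_space([0.5, 0.5, 1.0]))            # same here, since the frame is identity

So the tangent-space result is only ever as good as the per-vertex tangent frame, which comes straight from the UV layout; the object-space one only uses the UVs to find the right texel.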

As for how you make the models, go with what you prefer.

Method 1) Use sculpted models, scanned in as objects and height maps and then normally converted to NURBS (easier when it comes to UVs, and NURBS give better results than subd surfaces do when using displacement maps to get the high detail normal maps).

Method 2) Model the low poly object, then UV map it, then build on that, remembering not to deviate too much from the original shape (silhouette) and keeping the UV mapped original somewhere safe (you don't need UVs on the high poly one, unless you really want to use them for some areas). Then project the high detail onto the low poly UVs.

Method 3) Build the high poly object, then build the low poly one to match. Similar to method 2, just the other way around. Since the high poly version could change a lot, this can save time in many cases.

Method 4) ZBrush: build a low poly version, UV map it, take it into ZBrush, add all the detailing, then drop the subd to its lowest level and generate the normal map on that.. The advantage here is that ZBrush can happily handle 5 million+ polygons on most machines, and the high poly version never leaves the application.. a good thing, because a lot of apps will choke at that level (hence the first method, using displacement maps on NURBS surfaces, can ease the pressure on the application).

Method 5) Not so much a different method.. but most of the time you don't _have_ to do the entire model in one go. Splitting it up and doing just a section at a time will effectively let you have any amount of detail without killing your PC, or yourself :) You can then put all the UVs back together in one or two UV maps, put all the various images together in Photoshop, normalize the normal map (see the sketch below), check they merge together correctly and you're done.. The disadvantage is you never get to see the full high poly model in one go.
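For that "normalize the normal map" step, here's a rough sketch of doing the same thing outside Photoshop, in Python with Pillow and numpy. The file names are just placeholders:

import numpy as np
from PIL import Image

def renormalize_normal_map(src_path, dst_path):
    """Re-unitize every pixel of a tangent-space normal map, e.g. after
    merging sections, so blended seams don't end up with squashed normals."""
    img = np.asarray(Image.open(src_path).convert("RGB"), dtype=np.float32)
    vec = img / 255.0 * 2.0 - 1.0                  # RGB -> vectors in [-1, 1]
    length = np.linalg.norm(vec, axis=-1, keepdims=True)
    vec = vec / np.maximum(length, 1e-6)           # back to unit length
    out = ((vec + 1.0) * 0.5 * 255.0).round().astype(np.uint8)
    Image.fromarray(out).save(dst_path)

# renormalize_normal_map("merged_normal.png", "merged_normal_fixed.png")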

----

As for textures, it again depends on the type of model and how you're comfortable doing it.

Personally I'll switch between painting on the model in ZBrush and creating maps in Photoshop.. or a mix of both, depending on what I'm doing. In ZBrush, with a decent UV map setup and a high enough resolution image, you can do quite impressive maps right in there on the model itself, cleaning up in Photoshop later.. or paint base maps in Photoshop and then go to ZBrush and back again. It's entirely up to you.

One thing you do miss out on by getting the maps that way, though, is that you can't bake the underlying ambient lighting (which, until realtime engines allow for full-time radiosity, you're going to want at least a little bit of, just to help out). So another method is to take your high poly objects into another 3D app and project the global illumination lighting from them.. then use that as a layer to darken the texture map UVs just enough to bring out the details, grooves and such, where it would naturally be darker but no realtime engine would be capable of doing it (especially with normal maps, which can't properly see GI).

Again, you can do this with parts of the model or the whole thing.
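And a rough sketch of that darkening step, if you ever wanted to do the multiply outside Photoshop; again Python with Pillow and numpy, with placeholder file names and an arbitrary blend strength:

import numpy as np
from PIL import Image

def bake_gi_into_diffuse(diffuse_path, gi_path, out_path, strength=0.6):
    """Multiply the diffuse map by a baked GI/AO pass so grooves and recesses
    read darker, since the realtime engine won't compute that itself."""
    diffuse_img = Image.open(diffuse_path).convert("RGB")
    gi_img = Image.open(gi_path).convert("L").resize(diffuse_img.size)
    diff = np.asarray(diffuse_img, dtype=np.float32) / 255.0
    gi = np.asarray(gi_img, dtype=np.float32)[..., None] / 255.0
    blended = diff * (1.0 - strength + strength * gi)   # partial multiply blend
    out = (np.clip(blended, 0.0, 1.0) * 255.0).round().astype(np.uint8)
    Image.fromarray(out).save(out_path)

# bake_gi_into_diffuse("torso_diffuse.png", "torso_gi_bake.png", "torso_final.png")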


Edit: While I said object-space can't work right with deformable objects (that's what the general thought seems to be from everyone), http://www.3dkingdoms.com/tutorial.htm claims it can work just fine with them. But I think you might be safer sticking to tangent-space.. even if it is annoying to use and IMO doesn't look half as nice as object-space results do. But meh, maybe someone will write an engine someday that uses object-space properly, and we can all stop ****ing about with carefully made UV maps and concentrate more on making high quality objects instead. I can dream :)
 
TDE, I tried to reply to ya but it's 4000 characters long ;_;
 
At the risk of making a bad impression and infuriating someone who just spent what would appear to be at least 30 minutes on a post for my benefit, most of that went over my head. I was posting this in part to alleviate my own concerns about digital art's long-term career viability before throwing myself into learning and loving it. I've done some, and loved it, but I haven't done as much as I'd need to follow all the most technical stuff in that post. But thanks TDE! :)
 
BlindTelepath said:
At the risk of making a bad impression and infuriating someone who just spent what would appear to be at least 30 minutes on a post for my benefit, most of that went over my head. I was posting this in part to alleviate my own concerns about digital art's long-term career viability before throwing myself into learning and loving it. I've done some, and loved it, but I haven't done as much as I'd need to follow all the most technical stuff in that post. But thanks TDE! :)

heh, no worries, and no, it didn't offend me. It's bound to be useful to someone.. I hope :)
 
The Dark Elf said:
heh, no worries, and no, it didn't offend me. It's bound to be useful to someone.. I hope :)
I bookmarked it, to come back to when I have a hope of picking up all the details (though, in fairness, I got the broad picture, just not the details). :)
 