Tuesday, February 26, 2008
Media software follow up
Greg posted a link in the comments to a similar article he wrote a year ago. Great minds think alike!
Also, I noticed that I didn't list any 3D software. The reason for that is that I haven't kept up much on that realm lately. I used to LOOOVE Infini-D and Strata Studio Pro back in the day, but many of the current packages are far more complicated than I've had time to really learn.
One that I will point out, since I've just started using it again, is Poser. Poser allows you to manipulate figures (mostly human, but there are add-on packs) into lifelike configurations, complete with fabrics, skins, and realistic joint limits.
Another old favorite that has apparently been left to die is Bryce. Bryce used to be the best landscape generator, and it was the source of many arguments with my friends about whether digital imaging counted as art, given that something like Bryce could generate a landscape better than almost any artist could paint. The original driving force behind Bryce was Kai Krause, a legendary Photoshop guru who wanted to make tools that were more 'experiential' than technical. Bryce is fun to play with. Actually, lots of fun to play with. It was somewhat marred by the transitions after version 4 or 5, which tried to make it more technical. It still retains some of that fun aspect, though not as much.
3DSMax is the gold standard for 3D at all levels (don't tell the people on the Maya team, which is now owned by the same company. Shhhh.). It is basically the Photoshop of the 3D world, but I have yet to really work much with it.
A second reason I didn't mention 3D is that there are two main camps, or maybe it's better described as a line in the sand, between 'real time' and pre-rendered content. Thanks to all the pwnage of modern 3D games, attention has been split between what can be done quickly and what can be done accurately. Many practical pipelines start with a game engine, like the Quake, Unreal, or CryENGINE engines. The engine handles how things are rendered, antialiased, and filtered; categorizes objects (terrain vs. player); and covers physics and many other aspects of the visual and audio experience. A 3D package then generates the content that goes into the 3D world, using modelers like 3D Studio Max's Character Studio or 'simpler' programs like Rhinoceros. Finally, Photoshop is used to create the textures and bump maps that are skinned onto those objects.
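As a quick aside on that last step: a bump map is really just a grayscale height image that the renderer converts into surface normals to fake lighting detail. Here's a minimal sketch in Python of that conversion; the function name, the finite-difference scheme, and the tiny hand-made height map are all illustrative, not taken from any particular engine or tool.

```python
import math

def height_to_normals(height, scale=1.0):
    """Turn a 2D grid of heights in [0, 1] into unit surface normals.

    Uses central differences to estimate the slope at each pixel, then
    normalizes (-dh/dx, -dh/dy, 1), the normal of the surface z = h(x, y).
    """
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # Central differences, clamped at the image borders.
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * 0.5 * scale
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * 0.5 * scale
            length = math.sqrt(dx * dx + dy * dy + 1.0)
            row.append((-dx / length, -dy / length, 1.0 / length))
        normals.append(row)
    return normals

# A flat height map gives straight-up normals; a slope tilts them away
# from the rising direction, which is what makes painted bumps catch light.
flat = [[0.5] * 4 for _ in range(4)]
ramp = [[x / 3.0 for x in range(4)] for _ in range(4)]
print(height_to_normals(flat)[0][0])
print(height_to_normals(ramp)[1][1])
```

That is the whole trick: paint brightness in Photoshop, and the renderer pretends the surface has that shape without adding a single polygon.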
(Side note - if you are one of those people who thinks, "Blegh! Video games!", read the Wikipedia entry on the CryENGINE. The thing is a feat of engineering and imagination, and just pondering how you might go from something like a simple wall to a fully destructible object is interesting in itself. Reading through the evolution of the Quake engine is incredibly interesting, too. The Quake and Doom engines are GPLed, meaning you can download the source code and mess with them all you want. Do a quick Google for "Quake engine tutorials" or "Quake mod development" and you'll find hundreds of sites dedicated to teaching you how to go from a blank 3D space to a playable game. Like I said before, video games will contribute more to BCIs than most people expect. They are an example of extreme computing optimization, environment interaction, and controlled environment generation (not to mention extensible and modular development), and the ways these issues were resolved in games will work their way into the base of next-generation BCIs.)
Confused? Nah. It isn't that bad, but it's exhausting enough that unless you really have a great idea and a ton of patience (plus $$$), 'playing' with 3D software is just masochistic.