SIGGRAPH 2003


Barry Ruff Trek Report - July 27 - Aug 1, San Diego, CA

Overview

The industry is alive and kicking. Despite a trimming down in attendance and exhibitors, the annual Mecca of CG and animation was abuzz with activity and healthy competitive business. The atmosphere harkened back to the pre-dotcom era: there was a refreshing lack of venture capitalists, Gucci-encrusted sales dweebs and overly tan Hollywood types with sycophantic entourages in tow. (But the conference will be held in LA the next two years, so things will probably revert.) The exhibit floor was significantly smaller. Notable absentees: SGI, Sun, Apple (well, they did have a small Shake training area, more below) and Side Effects. There was a strong new booth presence of universities and training companies. Nearly a dozen schools had booths, some with nearly as much square footage as the major vendors. I don't have attendance figures, but it's unlikely there were more than 40,000, declining steadily from a high of ~100,000 about 6 years ago. But it is still The show for CG, and the players put on a good show indeed.

This is the first time SIGGRAPH has been held in San Diego. It was a near perfect venue: plenty of hotels within easy commute/walking range, a stellar new conference center and pleasant SoCal weather. Until the new Boston convention center opens, this will do quite nicely. West coast attendance is always better; the LA draw and a large contingent of Pacific Rim devotees make the ACM very happy. There was some interesting belt tightening too: the conference ran a full day Sunday and closed out Thursday so everyone could travel on Friday. For most folks this kicked in the Saturday-overnight airfare savings and generally cut a day out of most people's stay. Panels were cut, there was no career/job center, one of the receptions was dropped and the other was held on site at the convention center. Signs of the times.

I'll try to break things down into a chronological brain dump, but from Wednesday on things were a blur, so towards the end I'll just give the Reader's Digest version of events...

Sunday

The masses don't descend upon SIGGRAPH till ~Tuesday. Only about 25% of the attendees actually frequent the technical sessions. (the rest just sell stuff ;) ) Sunday and Monday are mostly full and half day courses on specific topics. And so begins the PowerPoint onslaught. I did two half day stints on Sunday. In the morning: Real-Time Shading. After a nice overview of the advances in programmable hardware, representatives from the major vendors presented their current take on the technology, where it's headed and how you should get there. 3DLabs (now a Creative company) is the big proponent of OpenGL 2.0, and the OpenGL Shading Language has now been adopted as ARB extensions for OpenGL 1.5. All the vendors will be supporting these extensions. Last SIGGRAPH everyone realized hardware shading was coming; this year everyone agrees it's here. GL SLang is a nice common denominator for that technology to build on. NVidia continues to pitch Cg (which is essentially equivalent to Microsoft's Direct3D HLSL, the High Level Shading Language). Cg, however, is cross-platform, and NVidia is putting the best resources in the world towards evolving the next generation of cinematic computing. ATI presented a nice HLSL implementation of Uberlight, a popular RenderMan lighting rig. It does barn doors, gobos, cookies and volumetric lighting. It's also an image projector and has nice parameters for spot rolloff. Very sexy real-time lighting effects. The various manufacturers took gratuitous barbs at one another's approaches, but basically the 3 shading languages are the same and GL SLang will be a common denominator for minimal shader functionality.
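To make the ARB-extension route concrete, here's a minimal sketch of how an application feeds a GLSL fragment shader through the new GL_ARB_shader_objects interface. (This assumes the extension entry points have already been fetched from the driver via wglGetProcAddress/glXGetProcAddress; the shader itself is a trivial tint, just for illustration.)

    /* Minimal GLSL-via-ARB-extensions sketch. Assumes the ...ARB function
       pointers have already been obtained from the driver. */
    const char *fragSrc =
        "uniform vec4 tint;                  \n"
        "void main() {                       \n"
        "    gl_FragColor = gl_Color * tint; \n"
        "}                                   \n";

    GLhandleARB shader = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
    glShaderSourceARB(shader, 1, &fragSrc, NULL);
    glCompileShaderARB(shader);              /* compiled by the driver */

    GLhandleARB program = glCreateProgramObjectARB();
    glAttachObjectARB(program, shader);
    glLinkProgramARB(program);

    glUseProgramObjectARB(program);          /* replaces the fixed pipeline */
    GLint tint = glGetUniformLocationARB(program, "tint");
    glUniform4fARB(tint, 1.0f, 0.8f, 0.8f, 1.0f);

The nice part is that the same handful of calls works on any vendor's hardware that exposes the extensions, which is exactly the common-denominator argument made in the course.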

In the afternoon I took in a class titled Beyond Blobbies. It examined some of the recent work in implicit surfaces. I've always liked this representation, and the recent upsurge in the use of subdivision surface modeling led me here to see what other techniques folks were exploring. A variety of speakers (primarily academic) went into isosurfaces, level sets and deriving curvature from volume data sets. For volume viz there are a lot of practical tools here, but implicit surfaces remain somewhat analytic and hard to combine with a standard modeling pipeline. The underlying representation is, however, deceptively simple and the results look cool. I think there could be some interesting use of these tools to create particle fields and perhaps point clouds representing geometric models.
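Just to show how deceptively simple the representation is, here's a toy blobby field: each "atom" contributes a smooth falloff, and the surface lives wherever the summed field crosses a chosen iso-value. (The Gaussian kernel and iso-value here are illustrative choices, not anything specific from the course.)

    #include <cmath>
    #include <vector>

    struct Blob { float x, y, z, radius, strength; };

    // Sum each blob's smooth falloff at point (px, py, pz).
    float field(const std::vector<Blob>& blobs, float px, float py, float pz)
    {
        float f = 0.0f;
        for (size_t i = 0; i < blobs.size(); ++i) {
            const Blob& b = blobs[i];
            float dx = px - b.x, dy = py - b.y, dz = pz - b.z;
            float r2 = (dx*dx + dy*dy + dz*dz) / (b.radius * b.radius);
            f += b.strength * std::exp(-r2);    // Gaussian-style falloff
        }
        return f;
    }

    // A point is "inside" when the field exceeds the iso-value; a
    // polygonizer (e.g. marching cubes) hunts for that crossing.
    bool inside(const std::vector<Blob>& blobs, float x, float y, float z)
    {
        const float ISO = 0.5f;
        return field(blobs, x, y, z) > ISO;
    }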

In the evening, a new event was held for the second year: the Fast Forward Paper Preview. This is fun: each paper presenter is given ~60 seconds and 3 slides to describe the paper they'll present later in the week. With ~100 papers being presented this can get pretty hectic. Those going over the allotted time are heckled off stage. So, it's a light introduction to the upcoming week, but it gives a decent overview and a lot of perspective on which sessions to attend.

Monday

SIGGRAPH is a balancing act. There are always 4-5 things you want to see going on at the same time. It's a huge optimization problem where you try to absorb as much as possible while missing as little as possible. Luckily, this year the conference was recording the main rooms and selling DVDs of those events and their slides. Mine will arrive sometime in Sept/Oct, but ordering it allowed me to free up time to attend the more vertical sessions. So, we'll have to wait till October for the juicy making-of-Finding-Nemo details. Early Monday AM is the keynote speech. It's usually given by some industry luminary like Craig Barrett or Bill Joy; this year, however, the keynote was given by none other than Anthony Lasenby. Who the heck is he? Exactly. Turns out he's a cosmologist. You thought writing graphics code was tough? This guy was out there. His talk was titled "Modeling the Cosmos: The Shape of the Universe". He presented a (somewhat painless, not overly mathy) Geometric Algebra for describing cosmological evolution. His algebra allows for way cool transformations in many dimensions and holds up under relativistic time scales. It was like quaternions on acid.

I split my morning between two courses. The first, 3D Models from Photos and Video, was a nice overview of a maturing set of convergent technologies. The goal is to generate textured geometry from multiple images with little or no camera information. A blend of computer vision, image processing, photogrammetry and 3D modeling is used to get some very nice results: features are identified and tracked, corresponding features are associated, a camera model is derived, structure and motion are recovered and a dense model of the scene is created. There are a variety of applications for these techniques. An object or scene can be generated by analyzing hand-held video. Synthetic objects can be added into a scene and stay locked in place as the camera moves. These methods are still hard to pull off, but growing compute power is starting to even the playing field. Look for this technology in many different guises, including clean backplates, stitched panoramas, 3D compositing, image-based modeling, geometry from video, camera path extraction and motion stabilization.
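The pipeline the course described maps pretty directly onto code. Here's a skeleton of the stages; the helper functions (trackFeatures, estimateCameras and friends) are hypothetical placeholders for what in practice are substantial vision algorithms like KLT tracking, RANSAC fundamental-matrix estimation and bundle adjustment.

    #include <vector>

    struct Image;         // a captured video frame
    struct Track;         // one feature followed across frames
    struct Camera;        // recovered pose + intrinsics per frame
    struct PointCloud;    // the reconstructed scene

    // Hypothetical stage functions, one per step the course outlined.
    std::vector<Track>  trackFeatures(const std::vector<Image>& frames);
    std::vector<Camera> estimateCameras(const std::vector<Track>& tracks);
    PointCloud          triangulate(const std::vector<Track>& tracks,
                                    const std::vector<Camera>& cams);
    PointCloud          densify(const PointCloud& sparse,
                                const std::vector<Image>& frames,
                                const std::vector<Camera>& cams);

    PointCloud reconstruct(const std::vector<Image>& frames)
    {
        // 1. Find features and follow them frame to frame.
        std::vector<Track> tracks = trackFeatures(frames);
        // 2. Solve for camera motion + intrinsics from the 2D tracks alone.
        std::vector<Camera> cams = estimateCameras(tracks);
        // 3. Intersect rays from multiple views to get sparse 3D structure.
        PointCloud sparse = triangulate(tracks, cams);
        // 4. Grow a dense, textured model using the recovered cameras.
        return densify(sparse, frames, cams);
    }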

Next session: Performance OpenGL - Platform Independent Techniques. Presented by a crew of SGI engineers, this was an unbiased look at OGL performance tuning. It was also an excellent walk down the fixed-function pipeline of yesterday's hardware and a clear presentation of how those once hardwired components are being replaced and subsumed by programmable GPUs. The real takeaway here was the incredible flexibility of the graphics pipeline. The presenters gave some great tips on profiling, debugging and tuning for raw performance. By digging a bit deeper into how OpenGL is actually implemented, it becomes quite obvious how to make some substantial speed improvements by adjusting application architecture. Taking advantage of the modularity of the pipeline, and understanding what data is flowing where and how much of it, helps dramatically to trim away wasted resources and make the best use of the power that's available.
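One classic diagnostic in this spirit: figure out which pipeline stage you're bound by before optimizing anything. A rough sketch (the render() callback, timer and 0.5 threshold are my assumptions; glFinish is there so you measure the GPU finishing work, not just command submission):

    #include <GL/gl.h>

    // Hypothetical: your existing frame-drawing routine and a wall-clock timer.
    void   render();
    double secondsNow();

    double timeFrame()
    {
        double t0 = secondsNow();
        render();
        glFinish();                 // wait for the GPU to actually finish
        return secondsNow() - t0;
    }

    // If shrinking the viewport (fewer pixels, same geometry) makes frames
    // much faster, you're fill-rate bound: cut overdraw and texture cost.
    // If it changes little, you're geometry or CPU bound: batch draw calls,
    // use vertex arrays instead of immediate mode, reduce state changes.
    bool isFillBound(int winW, int winH)
    {
        glViewport(0, 0, winW, winH);
        double full = timeFrame();

        glViewport(0, 0, winW / 4, winH / 4);   // ~1/16th the pixels
        double small = timeFrame();

        return small < 0.5 * full;              // crude threshold, illustrative
    }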

After a healthy $8 lunch consisting of two Mrs. Fields cookies and a Coke (I would have had to take out a loan to afford a cheeseburger)...

Afternoon session: More Than RGB - Spectral Color Reproduction. Face it, video people don't appreciate color. They pretend they care, but in the end they compress, clamp, clip and throw away most of any color information they had, and then display their end result on an uncalibrated, unstable, low dynamic range, interlaced, analog, curved, phosphor-coated vacuum tube. There are, however, a number of fields that meet the challenges of reproducing quality imagery. Notably, the world of high-end print and film. This course covered how spectral color representations can be integrated into standard workflows and examined the benefits and pitfalls of such models. This stuff tickles me. Ten years from now, we're going to look back and be bemused that we tried to create images with only 24 bits of color information. (It's already kind of silly, isn't it?) These guys went over the limitations of non-spectral color representations, examined how to characterize display devices and their gamuts, explored mapping between devices and harped on some of the issues of tone reproduction, metamerism, dynamic range and perception. Great stuff; can't get here soon enough.
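For the unfamiliar, the core move in going spectral is to carry a sampled power distribution per pixel and only collapse it to a device triple at the very end, by integrating against the CIE standard-observer color matching functions. A minimal sketch (the 36-sample layout is an assumption; the matching-function tables themselves are supplied elsewhere, not reproduced here):

    // Collapse a sampled spectrum to CIE XYZ at the end of the pipeline.
    // N wavelength samples, e.g. 380-730nm at 10nm steps; cieX/Y/Z hold
    // the color matching functions resampled to the same wavelengths.
    const int N = 36;
    extern const float cieX[N], cieY[N], cieZ[N];

    void spectrumToXYZ(const float spectrum[N], float xyz[3])
    {
        xyz[0] = xyz[1] = xyz[2] = 0.0f;
        for (int i = 0; i < N; ++i) {       // X = sum S(l) * xbar(l) * dl, etc.
            xyz[0] += spectrum[i] * cieX[i];
            xyz[1] += spectrum[i] * cieY[i];
            xyz[2] += spectrum[i] * cieZ[i];
        }
        const float dLambda = 10.0f;        // nm between samples
        xyz[0] *= dLambda; xyz[1] *= dLambda; xyz[2] *= dLambda;
        // A device-specific 3x3 matrix then maps XYZ into that display's RGB.
    }

Keeping the spectrum around is what lets you handle metamerism and device gamut mapping honestly, which a 24-bit RGB triple simply can't.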

Evening: HDRI and Image-Based Lighting. High Dynamic Range Imagery has been a popular topic for the past ~5 years. This session was organized by the name most associated with the technology, Paul Debevec. Well, HDRI has come along very nicely. Affordable digital cameras and impressive leaps in compute power have brought the practicality of HDRI into the daily pipelines of many practitioners. In fact, I'd go so far as to say that HDRI is now firmly entrenched as a necessary component of high quality rendering and effects generation.

Allow me to give a quick overview... Imaging sensors (film/video) today capture a rather small range of image intensity information. The lightness values are then stored in a format which often loses most of the radiometric information the sensor gathered. Shadow detail goes black, highlights saturate, and in between we have a very limited range of values to represent everything from starlight to sunlight. Automatic gain control, gamma correction and 16-bit linear color do little to help. Cinematographers and photographers work within the limitations of the medium; they are masters of how to eke out the tonal range they need using the tools at hand. Well, HDRI is a wonderful and necessary tool. High Dynamic Range Imagery captures images with the full range from darkest dark to lightest light and stores the results with enough precision that operations can be performed across that entire range. With a calibrated camera, this allows us to represent an image using its actual radiometric values. Not only are we working in a physically meaningful representation, but it becomes simple and accurate to merge physics-based rendering techniques with image data in this format. This leads to stunningly realistic composites and renders.
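The standard capture recipe is the Debevec-Malik style merge: shoot a bracketed series of exposures and, per pixel, average the linearized values weighted toward the mid-tones, dividing out each frame's shutter time to recover relative radiance. A sketch (the tent-shaped weighting function is a simplified stand-in; linearizing the camera response happens upstream):

    #include <cmath>
    #include <vector>

    // One LDR exposure: linearized pixel values in [0,1] plus shutter time.
    struct Exposure { std::vector<float> pixels; float shutterSeconds; };

    // Trust mid-tones; distrust values near black (noisy) or white (clipped).
    static float weight(float v) { return 1.0f - std::fabs(2.0f * v - 1.0f); }

    std::vector<float> assembleHDR(const std::vector<Exposure>& brackets)
    {
        size_t n = brackets[0].pixels.size();
        std::vector<float> radiance(n, 0.0f), wsum(n, 0.0f);
        for (size_t e = 0; e < brackets.size(); ++e) {
            for (size_t i = 0; i < n; ++i) {
                float v = brackets[e].pixels[i];
                float w = weight(v);
                // Dividing by shutter time puts every exposure on one
                // relative-radiance scale; the weighted mean merges them.
                radiance[i] += w * (v / brackets[e].shutterSeconds);
                wsum[i]     += w;
            }
        }
        for (size_t i = 0; i < n; ++i)
            if (wsum[i] > 0.0f) radiance[i] /= wsum[i];
        return radiance;   // store in a float format, e.g. OpenEXR
    }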

Where HDRI has especially taken off is in the area of image-based lighting. A light probe image is an HDRI in which the entire lighting of an environment from a given point is captured by imaging a mirrored sphere at varying exposures. The resulting light probe is similar to a spherical environment map, but with high dynamic range imagery. This data can be used directly as the lighting information for a scene: place objects inside the light probe and they can be rendered inexpensively with the original lighting. The results are stunning, and this session went deep into some of the latest techniques. There was a nice talk on how to take the sun out of a light probe image and replace it with your own key source. The same discussion had some content on replacing scene geometry with HDRI planes. Next, a guy from ILM discussed compositing issues, showing examples from T3 and the Hulk. Just how bright is a nuclear explosion? Then some folks who used HDRI on X-Men 2 did the behind-the-scenes on how the mostly synthetic Mystique was lit and how her transformations were textured. I don't mean to make HDRI sound like a Hollywood effect. It is a very straightforward technique available through any decent rendering system. In fact, there are a number of demos on recent GPUs that make use of light probe data in real-time. ILM released a new open HDRI image format called OpenEXR, and a number of compositing apps have announced support for it. Second only to programmable hardware, I would say HDRI was the most impactful technology at SIGGRAPH 2003. It was everywhere, and I'll mention it a few times later.
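If you want to sample a probe yourself, the lookup is just a direction-to-image mapping. A sketch of the angular-map convention used by Debevec's published probes (orientation and flipping conventions vary by tool, and bilinear filtering is omitted):

    #include <cmath>

    // Angular-map light probe lookup: world direction in, pixel out.
    // Straight ahead (dz = 1) maps to the image center; directly behind
    // (dz = -1) maps to the rim of the disc.
    void probeLookup(float dx, float dy, float dz,   // unit direction
                     int size,                       // square image width
                     int& px, int& py)
    {
        const float PI = 3.14159265f;
        float d = std::sqrt(dx * dx + dy * dy);
        float r = (d > 0.0f) ? (1.0f / PI) * std::acos(dz) / d : 0.0f;
        float u = dx * r, v = dy * r;                // in [-1, 1]
        px = (int)((u * 0.5f + 0.5f) * (size - 1));
        py = (int)((v * 0.5f + 0.5f) * (size - 1));
    }
    // Summing probe radiance over directions (cosine-weighted against the
    // surface normal) gives cheap image-based diffuse lighting.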

Late evening: Alias Universal User Group meeting. Typical schmoozefest: overly loud music, a turn-away crowd at the door, very dark trendy Gaslamp district club. Maya 5 is definitely cool. Alias|Wavefront recently dropped the Wavefront moniker and is now simply Alias again. (They also got a boring new logo.) New support for Cg shaders.

Tuesday

Now the crowds start to flow. The exhibition floor opened Tuesday morning, but before hitting the show floor I opted for one last half day course. So I spent the morning doing NURBS. Yes, meat and potatoes are good, and you can never know enough about basis functions. Taught by Dave Rogers from the Naval Academy, it was a concise overview of Bezier curves, B-splines, NURBS, patches, surfaces and your typical curvy stuff. Yes, a fun-filled morning of control points, continuity and weighting functions. Nothing new here; I just wanted to beef up on higher order surfaces. It's inevitable that we'll see better hardware support for patches and surfaces. Rogers had some interesting historical perspectives on the evolution of curve representations (he comes from a naval architecture / CAD background). He had some very biting commentary on the continued use of Bezier curves and how unintuitive their UIs are, then went on to examples of the simplicity and controllability of B-splines, and rounded things out with a serious excursion into the depths of NURBS and surfacing issues.
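For reference, the whole machine rests on the Cox-de Boor recursion for the B-spline basis functions. Here's a direct (deliberately unoptimized) transcription; NURBS differ only in dividing through by the weighted sum at the end:

    #include <vector>

    // Cox-de Boor recursion: the degree-p basis blends the two
    // degree-(p-1) bases. knots is the non-decreasing knot vector,
    // i indexes the basis function, u is the parameter. Real evaluators
    // (de Boor's algorithm) reuse intermediate terms instead of recursing.
    float basis(const std::vector<float>& knots, int i, int p, float u)
    {
        if (p == 0)
            return (knots[i] <= u && u < knots[i + 1]) ? 1.0f : 0.0f;
        float left = 0.0f, right = 0.0f;
        float dl = knots[i + p] - knots[i];
        float dr = knots[i + p + 1] - knots[i + 1];
        if (dl > 0.0f)
            left  = (u - knots[i]) / dl * basis(knots, i, p - 1, u);
        if (dr > 0.0f)
            right = (knots[i + p + 1] - u) / dr * basis(knots, i + 1, p - 1, u);
        return left + right;
    }

    // A NURBS curve point is sum(w[i]*P[i]*N(i,p)) / sum(w[i]*N(i,p));
    // with all weights equal it collapses to an ordinary B-spline.

The controllability Rogers praised falls out of this: each basis function is nonzero over only a few knot spans, so moving one control point tugs on only a local piece of the curve.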

On to the show floor. Time to survey the beast. The goal: make a single pass of the entire floor without getting bogged down by literature, demos or booth babes, scouting out any surprise exhibitors/products and prioritizing for in-depth examination. Success! By 6PM I had the whole joint cased. This would have been impossible just a few years ago, when it took days to get through the floor. Fewer booths, smaller booths, minimal staffing. Well, it did make it easier to get a handle on where everybody was. It also meant more time for papers and technical sketches.

Alright, I don't really know how to condense dozens of conversations into a readable synopsis, so I'm just going to plunge in with a ton of vendor info mixed with a tad of analysis...

Wondertouch, makers of ParticleIllusion, just released v3.0. Good stuff, eye candy, runs fast, all OpenGL. They licensed their v2 engine to Discreet for Combustion. 3DLabs' new Wildcat has big texture memory (256M) but is expensivo; theirs are the first drivers to support the OpenGL Shading Language. 2d3 had a small booth; BouJou is one of the best camera trackers in the industry, and they're bundling SteadyMove with Adobe Premiere Pro. Speaking of Adobe... they were showing AE6 and Premiere Pro. They were also pitching their laughable 3D web solution Atmosphere. Utter tripe! No Macs in the booth. The AE demos were crowded. Discreet had a modest booth, announced 3ds max 6 (with mental ray built in) and made a big announcement of Burn on Linux. Burn is their background distributed render tool for Flame/Flint/Inferno.

Linux is beginning to be the OS of choice for back room compute power. Tons of companies were showing server and distributed rendering on Linux clusters, and a bunch were racking up dual 64-bit AMD Opterons. NVidia had a Linux cluster with a QuadroFX 3000G in each node, driving a 3x3 wall of monitors with real-time rendering using Cg and OpenGL. NVidia and ATI were everywhere; HP, Intel and Boxx were all showing off the QuadroFX 3000 and the new Fire cards.

Eyeon Software is looking good. Digital Fusion is now bundled with LightWave 3D, and they are working closely with SpeedSix, the original developers of the 5D Monster plugs. D2, a software division inside Digital Domain, has released Nuke, a node-based compositing solution; they were showing in the Photron booth because they bundle Primatte. Kaydara is nearly giving away MotionBuilder. Softimage XSI is looking buff; Avid has no real clue how to leverage this and continues to sell Symphony against DS. Side Effects' Houdini has a new version out, and it looks Really sharp. This has always been a behind-the-scenes power app, but now it has enough UI to step up and fit into the workflow. They have also integrated mental ray as an internal renderer. XYZ RGB, a new Canadian scanner company, seems to have bought up the rights to the Arius3D scanner. That gives them the highest resolution scanning device in the industry; incredible detail. Pixar had a G5 in their booth rendering frames from Finding Nemo. There was also a small Apple training area with a G5, primarily showing off Shake. Charles River Media has a new book on OpenGL extensions that looks like a good reference. daVinci has some incredible image restoration software; they continue to be the quintessential color grading solution and now offer a software version of some of their tools. NxN Software has an interesting asset management tool; they also make version control software, are highly cross-platform and are CodeWarrior friendly. The Web3D Consortium released the latest version of X3D, which is actually a pretty decent format; it greatly extends VRML97 and is defined using XML.

Did I mention there was a hiring frenzy going on in LA? A ton of studios were recruiting heavily on the floor. All the shops are doing full length CG animated features and they're dying for bodies. Rhythm and Hues, Digital Domain, ESC, Pixar, ILM, PDI/Dreamworks, Blue Sky Studios and Sony Imageworks all had booths and were frothing for talent. There was also a serious game development/recruiting presence on the floor, including Electronic Arts and Midway.

Having completed my floor mission I shuffled off for the last papers of the day. Caught a few presentations on shadows, animation perception and haptics.
Then, back to the swanky DoubleTree to read tomorrow's papers. (you didn't think I was on vacation, did ya? ;) )

Wednesday

In the morning I made it over to the Emerging Technologies exhibit hall. This is always a trendy, fun place, chock full of strange gizmos, crazy devices and usually some exciting new stuff. This year lived up to expectations. There was a 10' diameter globe, actually a static sphere with imagery projected onto its surface from the inside. Using some widgets you could spin the globe (the image spins) and shift time forward and backward, making the continents shift positions and the icecaps recede. There was a pretty cool vision system watching the entrance to the gallery: as people came in, it would direct a small spotlight onto a person and adjust it to track their motion as they walked into the room. Once it lost them, it would jump the spotlight to the next person who entered. Some Japanese researchers had a food simulator. I watched as a volunteer put the robotic, latex-clad instrument into his mouth; the device has force feedback to emulate the texture of food and sprays a variety of liquids to emulate tastes. Yuck! Ok, so after dawdling around with some virtual characters and examining the other art and ephemera, I came upon the coolest thing. No, not the walk-through fog screen or the Body Brush 3D painting system, but an honest to goodness High Dynamic Range display! They basically ripped the backlights out of some nice LCD panels and used a light-valve projector to pump grayscale (really bright) luminance data through the color panels. Nice! Made a conventional LCD look muddy and dim.
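The trick behind that display is dual modulation: the final luminance is the product of what the projector emits and what the LCD transmits, so two modest-contrast layers multiply into a huge dynamic range. A conceptual sketch of splitting an HDR value between the layers (the square-root split is my illustration, not necessarily what the researchers did; real systems also blur the back layer and compensate in the front panel):

    #include <cmath>

    // Dual-modulation display: luminance = projector * LCD transmittance.
    // If each layer alone manages ~500:1 contrast, the product approaches
    // 250000:1.
    void splitHDR(float target,       // desired relative luminance, >= 0
                  float& backlight,   // projector drive level, 0..1
                  float& lcd)         // front panel transmittance, 0..1
    {
        float root = std::sqrt(target);          // send half the "work"
        backlight = (root > 1.0f) ? 1.0f : root; // to each layer
        // Front panel corrects for what the back layer actually delivers.
        lcd = (backlight > 0.0f) ? target / backlight : 0.0f;
        if (lcd > 1.0f) lcd = 1.0f;   // out-of-range values stay clipped
    }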

Back to the floor; time to focus. Sat in on demos and did product scouting: Maya, XSI, Houdini, Digital Fusion, Shake. Interesting new compositor: Tornado, from a few guys called Digital Phenomena. Good looking 2D, nice particles, well done UI. Trolled the hardware, scoping the latest card features: NVidia, ATI, 3DLabs. Scoured the Adobe booth; the AE and Premiere Pro demos were somewhat status quo but solid, and there was no sign of PS8 or a new Illustrator 11. Cool interactive modeling app called InDex from a company named Digital Artforms. It had a very unusual two-handed, 6-DOF input scheme: you hold a controller in each hand and move and twist them, which makes for incredibly intuitive gestural input, very fast and very easy. They also had some stunning real-time union/intersection operations. Vicon was showing great wireless motion capture. Virtools has game development libraries and APIs; interesting, very visual development tools, nice stuff if you ever want to add dynamics. There were a number of companies showing software for controlling many autonomous characters: crowds, herds, flocks. Most used state machines to cycle through and between behaviors, but allowed for massive interaction and full repeatability. Highly programmable down to the individual level, these simulations can be baked out to standard DCC animation scenes and then edited and rendered in your package of choice (a minimal sketch of the state-machine idea follows). Interoperability seemed to be in vogue this year. After years of gouging one another and ignoring customer pleas for compatibility, the vendors, now forced to sell their lookalike wares at 20% of what they used to, have begun to open up a bit. In fact, most are in a desperate struggle to do anything to make their packages more attractive, so now they're even willing to stoop to compatibility, data sharing, open standards and partnering.
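Here's that state-machine sketch; the states, stimuli and transition thresholds are all invented for illustration. Determinism is what makes the bake-out repeatable: with a seeded random generator, the same crowd plays back identically every run.

    #include <cstdlib>
    #include <vector>

    // One autonomous crowd agent driven by a tiny behavior state machine.
    enum State { WANDER, FLEE, REGROUP };

    struct Agent {
        State state;
        float x, y;     // position
        float fear;     // stimulus accumulated from nearby events
    };

    void step(Agent& a)
    {
        switch (a.state) {
        case WANDER:
            a.x += (std::rand() % 3 - 1) * 0.1f;   // amble around
            a.y += (std::rand() % 3 - 1) * 0.1f;
            if (a.fear > 0.8f) a.state = FLEE;     // stimulus flips state
            break;
        case FLEE:
            a.y += 1.0f;                           // run (toy: +y is "away")
            a.fear *= 0.95f;                       // calm down over time
            if (a.fear < 0.2f) a.state = REGROUP;
            break;
        case REGROUP:
            a.state = WANDER;                      // rejoin the herd
            break;
        }
    }

    void simulate(std::vector<Agent>& crowd, int frames)
    {
        std::srand(42);                            // fixed seed => repeatable
        for (int f = 0; f < frames; ++f)
            for (size_t i = 0; i < crowd.size(); ++i)
                step(crowd[i]);                    // bake positions per frame
    }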

Some of the most fascinating technical sketches were presented midday: a series titled The Matrix Revealed. This was an in-depth examination of the techniques used on Reloaded. The pipeline developed for this film and the final Matrix Revolutions is, to say the least, stunning. Much of the technology demonstrated never made it to the screen in a form that could be fully appreciated, but the use of such overwhelmingly realistic techniques definitely showed through in the final results. The technical directors at ESC sent fabric samples out for a full sampling of the reflectance properties of the clothing worn by the principal characters, Neo and Agent Smith. Cloth models were developed for the specific fabrics which produce physically accurate self-shadowing, have proper tensile and elastic properties and, when rendered, are virtually indistinguishable from the original fabrics. Facial animation capture and rendering: Gollum, Aki, Yoda and Dobby were very impressive, but they were nonetheless characters. The facial work done by ESC is the first that truly crosses the line into the realm of fully realized, indistinguishable from reality, digital performances. The skin, hair and lighting on this project were impeccable. If you want details, let me know. Suffice to say, the tools and pipeline developed at ESC are the most advanced to date and they have made watershed leaps in production image synthesis. This, for me, was the best stuff shown at SIGGRAPH.

I don't think I mentioned it, but there was a lot of sub-surface scattering going on. A gaggle of papers, tech sketches and behind-the-scenes production notes described or hinted at their use of SSS techniques. Made for some very nice skin, soft marble and general translucency. Last year's research, today's tools. Look for these features coming to a renderer near you. (And no more faking it; most implementations are taking a physically driven approach. Light makes right, and physics rules!)
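Most of these implementations trace back to the dipole diffusion approximation of Jensen et al. (SIGGRAPH 2001). As a reference point, here is that diffuse reflectance profile transcribed directly, per color channel; the material is described by its reduced scattering and absorption coefficients:

    #include <cmath>

    // Dipole diffusion approximation: diffuse reflectance R(r) at distance
    // r from where light enters, for one color channel.
    // sigmaS = reduced scattering coefficient, sigmaA = absorption, eta = IOR.
    float dipoleRd(float r, float sigmaS, float sigmaA, float eta)
    {
        const float PI = 3.14159265f;
        float sigmaT  = sigmaS + sigmaA;          // reduced extinction
        float alpha   = sigmaS / sigmaT;          // reduced albedo
        float sigmaTr = std::sqrt(3.0f * sigmaA * sigmaT);  // transport coeff.

        // Internal diffuse reflection from the index-of-refraction mismatch.
        float fdr = -1.440f / (eta * eta) + 0.710f / eta
                    + 0.668f + 0.0636f * eta;
        float A   = (1.0f + fdr) / (1.0f - fdr);

        float zr = 1.0f / sigmaT;                 // real source depth
        float zv = zr * (1.0f + 4.0f / 3.0f * A); // mirrored virtual source

        float dr = std::sqrt(r * r + zr * zr);    // distance to real source
        float dv = std::sqrt(r * r + zv * zv);    // distance to virtual source

        return alpha / (4.0f * PI) *
            (zr * (1.0f + sigmaTr * dr) * std::exp(-sigmaTr * dr) / (dr*dr*dr)
           + zv * (1.0f + sigmaTr * dv) * std::exp(-sigmaTr * dv) / (dv*dv*dv));
    }
    // Integrating R(r) against incoming irradiance over the surface around
    // a point gives the soft, translucent look the production talks showed.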

Late afternoon: hands-on Advanced Cg Programming. I had to sign up well in advance for this Cg lab course. Picked up some good info and techniques: the lowdown on a nice shadowing method that will do real-time self-shadowing on any geometry, an interesting overview of the CgFX file format and some cool debugging strategies. The course was taught on new Toshiba laptops with GeForce FX Go video cards, the most fantastic 3D power on a laptop I've seen; very impressive. Sony recently announced a Vaio with this card. Many folks currently compile shaders on the fly, which is fine for development; there's no big speed hit. But there are some cross-platform and storage advantages to pre-compiling the shaders, and there are additionally some multipass methods to take advantage of if you adopt CgFX. Glad I caught this session.
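For flavor, the Cg runtime calls for the two paths look roughly like this (the file names and entry-point name are made up; cgc is NVidia's offline compiler). Loading CG_OBJECT skips the runtime compile, which is the pre-compilation win mentioned above:

    #include <Cg/cg.h>
    #include <Cg/cgGL.h>

    CGcontext ctx  = cgCreateContext();
    CGprofile prof = CG_PROFILE_ARBFP1;     // ARB fragment program target

    // Path 1: compile from source at startup -- fine during development.
    CGprogram devProg = cgCreateProgramFromFile(
        ctx, CG_SOURCE, "skin.cg", prof, "mainFrag", NULL);

    // Path 2: load the output of an offline cgc compile -- no runtime cost.
    //   cgc -profile arbfp1 -entry mainFrag -o skin.asm skin.cg
    CGprogram shipProg = cgCreateProgramFromFile(
        ctx, CG_OBJECT, "skin.asm", prof, "mainFrag", NULL);

    cgGLLoadProgram(shipProg);              // hand the program to the driver
    cgGLEnableProfile(prof);
    cgGLBindProgram(shipProg);              // subsequent draws use the shader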

I made a last dash over to the floor and then headed off to the Electronic Theatre. The ET is one of the high points of the SIGGRAPH experience: a 2-hour compilation of juried work from the past year representing the most innovative CG animation, effects, visualization and art, presented in large screen splendor. This year the showing was at the Civic Theatre, a very opulent venue in the center of downtown SD. It was a nice blend of Hollywood magic, comic shorts, artsy banter and superb storytelling. Awarded best short was "Eternal Gaze" by Sam Chen, a story about artist Alberto Giacometti. Two and a half years in the making, it is a very striking personal work; look for it as a Best Animated Short Oscar nominee. Another first this year: a cinematic piece created in real-time, NVidia's Dawn demo, played beautifully in her big screen debut. There were some awesome microscopic-scale visualizations, one of protein folding and another of a human egg from a Bjork video titled "Nature is Ancient". There was a cinematic from Blizzard's Warcraft III and some awesome film visuals from T3, Hulk, LOTR, Matrix and X2. A couple of other brilliant story pieces: Tim Tom and Chainsmoker. We'll be presenting the Electronic Theatre in December here in Boston, so plan on catching all this great animation.

Thursday

Last day. Information overload starting to set in. Breakfast meeting with Paul Lipsky, professor at NYIT. Paul is starting up a very interesting program on "Convergent Media Design". They have a fibre infrastructure which resembles the bandwidth and connectivity that will be available to homes and offices in the future. Their intention is to work closely with corporate sponsors to drive directed research in future network connectivity; as part of that project, Paul will be assembling a studio for content creation/media distribution and developing applications to highlight the new abilities afforded by such an infrastructure. He is currently in a fund-raising phase and is seeking support for the new program.

I made my last trek through the show floor, grasping for last minute details and picking up some of the heavy stuff I didn't want to carry around all week. A lot of attendees had already left. The floor was much easier to circumnavigate, and the booth staff had the 1000-yard stare as they counted down till closing. Caught a few more demos, got an in-depth peek at Houdini, talked to some of the Maya folks and pressed one of the Adobe guys about their future Mac support. Mesmer has a new training DVD on mental ray, and there were a few good books released on MEL and OpenGL.

Off to sketches... Asset management is a huge topic. Caught a couple presentations from Mainframe Entertainment and Dreamworks on how they manage their pipelines. Then Digital Domain spoke about how they convert animation resources between packages. Next, ILM gave the lowdown on how they tore up the Hulk's clothing. They had a very cool pre-shredding technique: they basically ripped the clothes apart ahead of time, put them back together and held the pieces together tightly, then slowly loosened the seams so the clothing would tear apart predictably, in a controlled and repeatable way. This was a strong theme across many talks during the week: physical simulation and dynamics are becoming intimately integrated into the common workflow. Even at the game level, rag doll physics and real-time collision detection are pervasive.
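A toy sketch of that pre-shredding idea (my reading of the talk, not ILM's code): pre-cut the garment into pieces, pin the pieces back together with seam constraints, then weaken each constraint over time so tears open where and when you want:

    #include <vector>

    // Pre-shredded cloth: pieces are pinned together by seam constraints
    // whose strength is dialed down over time. When solver stress exceeds
    // the current strength, the seam lets go -- a controllable,
    // repeatable tear.
    struct Seam {
        int   pieceA, pieceB;   // indices of the cloth pieces it joins
        float strength;         // current hold, 1.0 = fully stitched
        bool  broken;
    };

    void updateSeams(std::vector<Seam>& seams,
                     const std::vector<float>& stress, // from the cloth solver
                     float weakenRate)                 // art-directed per shot
    {
        for (size_t i = 0; i < seams.size(); ++i) {
            if (seams[i].broken) continue;
            seams[i].strength -= weakenRate;           // loosen over time
            if (stress[i] > seams[i].strength)
                seams[i].broken = true;                // tear opens here
        }
    }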

For the last sessions of the day I attended a series of talks on faces. Henrik Wann Jensen, who recently started up a graphics program at UCSD, gave a great overview of applying subsurface scattering to face rendering. He did a beautiful image for a National Geographic story. Great lighting. Then a paper on wrinkle generation; the wrinkles reacted quite naturally to expression, but were a bit too symmetric for me. Lastly, a couple talks on segmenting regions of the face for separate animation. One of these was automatic and had quite nice results.

Wrapup

That wraps up my SIGGRAPH 2003 rant. If you have any specific questions, just give me a shout. See you next year in LA!

Barry Ruff is Chief Scientist at BorisFX in Boston and has been doing 3D computer graphics and animation for a while now. This was his 16th SIGGRAPH and any opinions expressed are most certainly solely his own. Barry is currently vice chair of New England SIGGRAPH and you can reach out to him at: barry@ruff.net