Post date: Mar 27, 2011 3:04:12 AM
Adding a square to the Painting With Pixels installation at GDC 2011.
Game Developers Conference 2011 was an intense several days of lectures, portfolio reviews, and classic postmortems. Faculty and students who attended encountered a wealth of technical knowledge, industry insight, and networking opportunities.
The following are excerpts from my notes that I found to be the most relevant.
Iwata: Specialization is leading to people on a given project not being able to understand the big picture of the game they are working on.
Nice to hear someone saying this. Maybe it will take hold. On some of the games I worked on, I felt I was one of the few who saw the big picture of the game and the IP. It is so important for everyone to understand how their part fits into the greater whole. Otherwise they create things in a vacuum that ultimately don't make sense in the world of that game.
On getting your game noticed:
Game must capture the audience's attention immediately.
Game must be simple to understand and describe.
If these two things are accomplished, the game will sell itself.
Nintendo is afraid of Apple and the smartphone.
This is the impression I got from some comments he made during his speech. Mobile gaming is going to explode over the next few years. Processor speeds and GPUs are getting to the point where it's possible to make stunning 3D graphics. The price of entry for developers is practically nothing, and most importantly they get to keep most of the money they earn. Fully featured game engines like the Unreal Development Kit now run on phones, and Epic just lowered the royalties they expect. This is a great time to be an independent game developer.
Revealed some minor details about the Nintendo 3DS
New stereoscopic Mario game
Netflix streaming
New Zelda Trailer (heavily featuring the ability to slice through objects with your sword)
Ability to shoot stereoscopic video.
This led me to check out the 3DS on the show floor later. To be honest, I was very impressed by the stereoscopic gameplay and motion-tracking features of the demo I played. Nintendo continues to lead innovation in gaming hardware.
Nintendo 3DS, Pilotwings demo at GDC 2011
What to look for in new hires: Communication, problem solving, must be artists first, programmers second.
Top skill = Hire people who have the ability to learn.
Generalization is preferred to specialization, plus you won't get stuck doing the same thing all the time.
Presentation of a "product" is key to artists adopting a new tool or new way to do something.
Specific and relevant documentation works better than broad overviews.
Very important to use the tools you create from beginning to end. Otherwise you won't truly understand whether they work the way they need to.
Disguise your tools so they look familiar: "Hide the dog pill in the hot dog."
Epic called in June and asked them to create an iOS game using Unreal in five months(!). They had only three weeks to get a demo ready for the Apple presentation with Steve Jobs.
There are 2.2 billion cell phones and 500 million smartphones in use in the world.
(That's what I wrote down, but according to the chart below, there are currently over 5 billion mobile subscribers and 394 million smartphone and tablet users).
This beats the PlayStation 2, the best-selling game console, at 120 million.
(According to this list, it is at 150 million now).
Plenty of RAM on the iPad and iPhone is the key to making visuals that surpass the PS2.
Smartphone hardware capability will surpass the Xbox 360 within 3 years.
The four pillars of mobile gameplay:
Ability to play game with one finger.
Short session core gameplay (2 minutes).
Original, device specific design.
Easy to grasp, difficult to master - skill-based gameplay.
Characters:
4,000 Polys.
Low verts, huge textures.
Low bone count.
2 weights per vert.
1 draw call.
Tried to use alpha channel of diffuse to create specular, but still too much memory, so they used luminance instead.
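As a rough illustration of that trick (my own sketch, not the team's shader code), deriving a specular mask from the luminance of the diffuse color instead of storing it in a separate channel could look something like this; the luma weights and scale factor are assumptions for illustration:

```python
# Sketch: derive a specular mask from the diffuse color's luminance
# instead of spending a texture channel (or alpha) on it.
# Rec. 601 luma weights; the scale factor is an arbitrary tuning value.

def luminance(rgb):
    """Perceptual luminance of a linear RGB color with components in 0..1."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def specular_intensity(diffuse_rgb, scale=1.0):
    """Use the diffuse luminance as the per-pixel specular mask."""
    return min(1.0, luminance(diffuse_rgb) * scale)

# Example: a brownish leather texel yields a modest specular response.
# specular_intensity((0.45, 0.30, 0.20))  # ~0.33
```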
Environments:
Only 50 objects (or draw calls). They combined meshes together (see the sketch below).
Custom-painted flat cards fill the distance. No post-processing. They built the castle set, took a screen grab, and cut it apart in Photoshop to make several flat parallaxing layers.
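Here is a minimal Maya Python sketch of that mesh-combining step; the command usage is standard Maya, but the object names and the one-function workflow are my own assumptions, not details from the talk:

```python
# Sketch: merge several static environment meshes into one object so the
# engine can render them with a single draw call. Object names are hypothetical.
import maya.cmds as cmds

def combine_static_meshes(mesh_names, combined_name="env_cluster_01"):
    """Merge a list of static meshes into one transform, then bake the merge."""
    combined, _ = cmds.polyUnite(*mesh_names, constructionHistory=True,
                                 name=combined_name)
    cmds.delete(combined, constructionHistory=True)  # remove the polyUnite node
    return combined

# Example usage (hypothetical scene objects):
# combine_static_meshes(["wall_a", "wall_b", "tower_base"], "castle_block_01")
```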
Baked Lighting:
No dynamic shadows, but big lightmaps created crisper shadows: 5 or 6 maps at 1024 and several smaller ones.
Added another channel to the lightmaps so they could paint custom texture details into them(!).
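Purely to illustrate the channel-packing idea (the file names, and the choice of the alpha channel as the spare channel, are my assumptions rather than details from the talk), a painted detail map could be packed into a baked lightmap like this:

```python
# Sketch: pack a hand-painted grayscale detail map into the spare (alpha)
# channel of a baked lightmap. File names are hypothetical.
from PIL import Image

lightmap = Image.open("castle_lightmap_01.png").convert("RGB")
detail = Image.open("painted_detail_01.png").convert("L")
detail = detail.resize(lightmap.size)

r, g, b = lightmap.split()
packed = Image.merge("RGBA", (r, g, b, detail))  # baked light in RGB, paint in A
packed.save("castle_lightmap_01_packed.png")
```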
Other Optimizations:
Pre-compiled shaders.
Created flip-books of animated textures and particles (see the sketch below).
Added back in precomputed visibility sets.
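As a sketch of the flip-book idea (frame file names and the 4x4 grid size are my own assumptions), a sequence of frames can be packed into one texture sheet that a material or particle system then steps through:

```python
# Sketch: pack an animation frame sequence into a single flip-book texture
# (sprite sheet). Frame paths and the grid size are hypothetical.
from PIL import Image

def build_flipbook(frame_paths, columns=4, rows=4, frame_size=(128, 128)):
    sheet = Image.new("RGBA", (columns * frame_size[0], rows * frame_size[1]))
    for i, path in enumerate(frame_paths[: columns * rows]):
        frame = Image.open(path).convert("RGBA").resize(frame_size)
        x = (i % columns) * frame_size[0]
        y = (i // columns) * frame_size[1]
        sheet.paste(frame, (x, y))
    return sheet

# Example usage (hypothetical files):
# frames = ["smoke_%02d.png" % n for n in range(16)]
# build_flipbook(frames).save("smoke_flipbook_4x4.png")
```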
The team switched from Max to Maya for Gears 3. Having no strong pipeline created a challenge.
There are only 6 animators on Gears 3.
Goals for Developing the Facial Rig:
Easy to animate = most important.
Good workflow with outsourcers.
Transferable rig and data.
Decided to distribute via an executable. Used Smart Install Maker to consolidate rig, data, icons, and scripts. Used Dropbox to distribute data.
Gears 2 used separate meshes for cinematic and game characters. Gears 3 is using the same mesh for both.
The Rig:
Poly cage of morph targets.
Uses the Facial Action Coding System, taken from the biomedical field: 32 action units that relate to the muscles of the face and can be combined to create any expression (a minimal sketch of the idea follows below).
"Oh wow!" moment: They could easily transfer the rig from one character to another using one morph target.
Cinematic Mesh has 42 bones, game mesh has 16.
Pucker needed a "corrective morph." They used ZBrush to create custom normal maps for specific expressions and blended between them during animations.
In the future they plan to use a "Universal Head," which removes having to weight skin for each character and means it takes only 30 minutes to weight a new character.
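To make the FACS-style approach concrete, here is a minimal Maya Python sketch of action units as blend shape targets; the mesh and target names are hypothetical, and this is my own illustration, not Epic's rig:

```python
# Sketch: each FACS-style action unit is a blend shape target on the face
# mesh; an expression is a weighted combination of action units.
# Mesh/target names are hypothetical; this is not Epic's actual rig.
import maya.cmds as cmds

def build_face_blendshape(base_mesh, action_unit_meshes):
    """Create one blendShape node with a target per action-unit mesh."""
    return cmds.blendShape(*(action_unit_meshes + [base_mesh]),
                           name="face_actionUnits")[0]

def set_expression(blendshape_node, weights):
    """weights: e.g. {"AU12_lipCornerPuller": 0.8, "AU26_jawDrop": 0.3}"""
    for target, value in weights.items():
        cmds.setAttr(blendshape_node + "." + target, value)

# Example usage (hypothetical scene objects):
# node = build_face_blendshape("face_game",
#                              ["AU12_lipCornerPuller", "AU26_jawDrop"])
# set_expression(node, {"AU12_lipCornerPuller": 0.8, "AU26_jawDrop": 0.3})
```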
Slides for this presentation can be found here
I went to a few Classic Game Postmortems. These were some of the groundbreaking games that influenced me to enter the industry and create my own games.
Some highlights:
Pac-Man, Toru Iwatani
Was created to attract female players. Arcades were the playground for boys (dirty and smelly).
Very important to keep your game simple to understand, and do everything possible to make the player feel powerful.
Another World - Eric Chahi (Ubisoft)
Polygon Dogma - He was obsessed with polygons. Inspired by Dragon's Lair, but he wanted to create the same look entirely with polygons.
Created the game in chronological order without knowing exactly what he was going to do next.
Close-up of the "Friend" character was shown only once. That was enough for the player to fill in the details from their imagination.
Foreground "Cinematics" occur during gameplay. Clever way to keep the player playing, while showing more things happening around him.
Doom - Tom Hall, John Romero
All the weapons were toys that they digitized with a video camera.
Inspired by D&D and the Aliens movies.
While they were working on Doom, 20th Century Fox asked them to do an Aliens game. They decided they didn't want to give up creative control.
First game to use BSP (binary space partitioning) trees.
Links
Slides and videos for several presentations can be found in the free section of the GDC Vault.
GDC 2012 will be March 5-9, which will line up perfectly with Spring Break next year.
At GDC, I had three goals: get meaningful professional feedback on student work, attend as many curriculum relevant sessions as possible, and percolate ideas for an awesome booth for Ringling College at the show next year.
GDC 2011 was a breakout year for Ringling College’s Game Art and Design students. We had a significant number of students (~30) who attended the conference and this was the first time instructors brought samples of students' 3D work for professional review. All the professional colleagues that I shared student work with expressed interest in the program and were impressed by the work, despite it being presented in printed form. In upcoming years, the students' work would be best shown through an electronic viewing device, like an iPad.
Professionals seemed less concerned with poly limits, so that is a gamble in the program that paid off. We have not focused much on optimizing 3D content because the technology is a moving target and the optimization needs are different per game project.
Professionals encouraged us to push for higher-poly work with even greater attention to detail. These details will have to come from tuning students' eyes to carefully observe the world around them and through focused exercises on capturing surface definitions in great detail and with realistic lighting effects. Next year, we will have an assignment in the Fall Junior 3D class that better caters to this need. The assignment will be based on the Dutch Masters' still lifes, e.g. vanitas and memento mori paintings. There will also be a renewed requirement for at least one HERO asset in each of the Realistic and Fantastic locations assignments to push students to have higher-poly assets in their portfolios. This adjustment is justified by the art tests that students and graduating seniors have been given by game studios. As more students encounter art tests for employment, and as I learn more about the goals of these skill tests, I will willingly make more adjustments to the class assignments in Junior 3D to better prepare the students to match employers' needs.
Lastly, many of the professionals, about half, had a strong and positive reaction to work that had a whimsical and fantastic look: samples that featured hand-painted textures instead of photo source. This point will be emphasized more strongly in the fall Fantastic environment assignment, and students will be allowed to do hand-painted textures on subsequent assignments in Junior Spring 3D.
As with last year, I had the opportunity to take part in over twelve informative sessions and network with dozens of old and new contacts. It was a very busy event for networking, starting in the morning and ending late at night. I hope we put our program positively in people's minds and that they will visit us next year at the school's booth. I think we will have a compelling story to tell and great samples to share.
My two favorite sessions were Matt Luhn's talk on character arcs and NVIDIA's detailed review of Epic's Samaritan demo and its use of NVIDIA's solution for real-time cloth physics.
Pixar Masterclass: Matthew Luhn - Story Images and Character Arcs
For 2011, Matthew did a snippet of the Pixar Masterclass on the subject of character arcs and a little bit on invocation, creating a vivid starting point for a story.
Matthew hit on the basics of inner and outer conflict and how inner conflict should be the crux of your character’s story.
Outer Conflict
Got to give an important speech
A giant man-eating great white shark
Inner Conflict
Character stutters, doesn't feel up to the job
Fear of the ocean, but he has to save his town
He also mentioned you can't have good heroes without villains who pull out the best in a hero. He briefly touched on Robert McKee's work on story development and emphasized the film structure of exposition, inciting incident, progressive complications, crisis, climax, and resolution.
He summed up most movies as either having a happy ending or being a tragedy, but said that how the character changes at the end is what helps illustrate the theme. Matt suggested that most characters' job in film is to help us cope with the world around us, and through their stories the characters typically learn to care or learn to have courage, two traits most well-adjusted moviegoers strive for in their own lives while looking to the movies for escape. Sometimes there are characters like James Bond or Indiana Jones who don't learn anything and whom the spectator may live vicariously through, which is okay, because the spectator is looking to be entertained ... again, to escape.
So most character arcs in film will look like this:
Matt stated that a well-defined character makes developing stories easy, because you know how the character will react to any situation. He stated that a fear or deeply rooted passion should drive your character's choices in the situations that occur in the story. I took away that the story should give rise to opportunities that make these choices difficult for the character but compelling for the viewer.
Matt outlined a simple way to describe a well-defined character. He said you should answer these questions for the character:
What is the character's fear, a fear the character has had from childhood to today?
What are the strengths of the character (be careful not to confuse these with talents)?
Weaknesses: What do people say about the character behind their back?
What is the character's dark side, the worst thing they would do, their lowest low?
What is the character's greatest flaw, the trait that gets them in trouble?
What are the traits the character most admires in others (again, not talents)?
Using these questions, he outlined Woody from the Toy Story movies.
Fear: Being replaced and abandoned
Strengths: Leader, smart and caring (not talents)
Weaknesses: Bossy, arrogant
Darkside: Lie, hurt and steal
Traits that get him in trouble: worrier
Traits that he admires: Confidence and being admired by kids … like Buzz.
Here is Marlin from Finding Nemo:
Fear: Losing loved ones
Strength: loving dad, protective
Weakness: pessimistic, over protective
Darkside: Resentful and Hateful
Flaw: Worrier, overprotective
Admired: Optimistic, carefree ... like Dory
To me these exercises are a little simpler than the character profile used in CA, but they do get the mind working, looking for good contrasts to the character and for situations that really push the character's buttons.
The last part of Matt's lecture was on invocation ... how to get an original idea, or story image. Not what I expected when I saw "story image" in the session write-up, but I get it. It's an image you conjure up in your mind to help develop stories. Matt gave an example using a car.
He suggested making a list of five cars, toys, or shoes ... the first ones that come to mind.
Then think about where you are with that car. Are you in or out of the car?
If in, which seat? What time of day? Why are you there? How old are you? What are you doing? Who else is there?
This exercise is something he used to help conjure up characters that were unique (never been done before) for Toy Story 3. I tried something like this with a student in preproduction and got a completely unexpected answer, which I think can help break ideas away from the trite, seen-it-a-million-times list.
NVIDIA APEX Clothing
“APEX Clothing lets artists quickly generate characters with dynamic clothing to create an ultrarealistic interactive gaming experience.”
This technology advancement was part of Epic's love letter to the Xbox and PlayStation manufacturers, indicating the level of visual fidelity their future game consoles should support. Fortunately for our students, they will have access to the technology to create game art that features this ultrarealistic cloth solution as soon as we get the Maya plugin rolled out. Supposedly, the December release of the Unreal Development Kit supports the data generated by the APEX Clothing tool. I am looking forward to investigating this advancement in the near future, and I have been working with IT to get access to the plugin on a test station. It will take some time to get familiar with the technology, but I hope we can create cloth simulation as convincing as Epic's in the near future.
Attending NVIDIA's talk on APEX gave me confidence we can pull this off in the classroom, and upon initial investigation the new tech seems very well documented. I believe it is the pressing demands on instructors' time that will impede our students from including this cloth solution, plus many other physics simulation solutions, in our upcoming projects. Other APEX framework advancements included improvements to real-time destruction and higher particle counts, including particle effects that can react to turbulence caused by virtual physical objects, like a person walking through mist. NVIDIA is looking to get APEX supported on current consoles and will also provide mobile support with their TEGRA line of mobile processors.
More details on the APEX Clothing tool are available here:
http://developer.nvidia.com/object/apex_clothing.html
The Technology Behind the DirectX 11 Unreal Engine "Samaritan" Demo
This was a very busy session, filled mostly with engineering and graphics programmers. It reminded me how much I miss working with programmers. The presentation was very technical but was delivered in very simple terms, with lots of documentation for future reference.
To me, the lecture highlighted my feelings of uncertainty about whether the Unreal engine can compete against next-generation engines, despite it being the most widely used development engine of this console cycle. The uncertainty is in regards to forward rendering versus deferred rendering systems. Each approach has its pros and cons, with some notably graphically pleasing games utilizing deferred rendering methods: Uncharted 2, CryEngine 3, and Killzone 2. http://www.guerrilla-games.com/publications/dr_kz2_rsx_dev07.pdf
Further highlighting this concern is a recent interview with Kevin "Decline" Cline about the upcoming release from Irrational Games. This studio has continued to heavily modify Unreal to fit its needs, supported by hiring five of the brilliant senior programmers from my last game team; one of them heads up their central tech team. "To meet the aesthetic goals of our art team, our rendering gurus had to write a whole new renderer for BioShock Infinite based on Deferred Lighting (a technique used in ), and on top of that they've developed a proprietary per-pixel dynamic relighting scheme that allows characters and dynamic objects to receive global illumination" http://gamer.blorge.com/2010/11/21/bioshock-infinite-development-is-ps3-focused-and-uses-uncharted-2-tech/
These types of changes don't come cheap and will not likely be readily available to student users. That said, it was very encouraging to see the Samaritan demo include several G-buffer techniques to achieve the final image quality. Hopefully these advancements will keep the engine competitive, which in turn keeps our students' real-time render work competitive with the industry. Other advancements that will be of interest to our artists are billboard reflections, improved depth of field (including bokeh effects), and screen-space subsurface scattering (SSSSS, SR5). These DirectX 11 features may require a graphics card upgrade in the labs. http://www.geforce.com/#/Hardware/GPUs/geforce-gtx-590
The coolest thing about NVIDIA's presentation was the inclusion of references to other papers, which to my mind brings credibility to their work, while other presenters at GDC, like the guys who gave Brink's character talk, presented content as if they developed the concepts from scratch with no attribution to previous games or development efforts. Grrrrr.
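For my own future reference, here is a toy sketch of the deferred shading idea discussed above: a geometry pass writes per-pixel surface attributes into a G-buffer, and a separate lighting pass then shades every pixel once per light. This is purely my own illustration (NumPy, a tiny 4x4 "screen", simple Lambert lighting), not code from the talk or the engine:

```python
# Toy deferred shading sketch: G-buffer of per-pixel surface data, then a
# lighting pass that loops over lights per pixel. In a forward renderer each
# object would instead be shaded against the lights as it is rasterized.
# All names and values here are illustrative.
import numpy as np

H, W = 4, 4  # tiny "screen"

# Geometry pass output: albedo (RGB), world-space normal, world position.
gbuffer = {
    "albedo": np.full((H, W, 3), 0.8),
    "normal": np.tile(np.array([0.0, 1.0, 0.0]), (H, W, 1)),
    "position": np.zeros((H, W, 3)),
}

lights = [
    {"pos": np.array([0.0, 2.0, 0.0]), "color": np.array([1.0, 0.9, 0.8])},
    {"pos": np.array([2.0, 1.0, 2.0]), "color": np.array([0.2, 0.3, 0.8])},
]

def lighting_pass(gbuf, lights):
    """Accumulate simple Lambert lighting for every pixel and every light."""
    out = np.zeros((H, W, 3))
    for light in lights:
        to_light = light["pos"] - gbuf["position"]              # (H, W, 3)
        to_light /= np.linalg.norm(to_light, axis=-1, keepdims=True)
        ndotl = np.clip(np.sum(gbuf["normal"] * to_light, axis=-1), 0.0, 1.0)
        out += gbuf["albedo"] * light["color"] * ndotl[..., None]
    return np.clip(out, 0.0, 1.0)

final_image = lighting_pass(gbuffer, lights)
```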