
Kaiser Permanente partnered with Vectorform to build a new tool for use in autism assessment, helping make the experience fun for kids and easier for clinicians. Structured around speech, occupational, and physical therapy mini-games, the experience combines the Microsoft Kinect system with therapist input to help track performance and improvement. As a key part of the experience we created a 3D animated character to act as guide, coach, and friend: Marty the Monkey.
Backgrounds, icons, and the original 2D character were all illustrated by James Anderson. Because a large number of character animations was required, we decided it would be more efficient to animate the character in 3D: changes to the character could be made more easily after primary animation was complete, and specific animations could be reused with different camera angles when needed.
Based on a front/side/back character sheet, everything was modeled in Lightwave using box and sketch modeling techniques and Catmull-Clark subdivision surfaces.
Facial features that work in 2D do not always translate well to 3D, especially smile shapes on a round object. It took a number of revisions and modifications, but I worked hard to retain as much of the spirit of the character as possible. The final model was kept as low-poly as possible, using edge sharpness to control details while keeping everything optimised for fast animation and smooth curves. Subdivision surfaces allow for very flexible geometry resolution, and for rendering the divisions were simply increased until discernible polygon edges were no longer visible.
Image maps were avoided; instead, modeled shapes were used to divide surfaces, with separate shaders applied to each area. This bypassed the entire UV mapping process, and resulted in edge sharpness free of raster limitations. It also became important during the style development process, as skin, eyes, and fur were easily shaded with different node setups. All surfaces for the character were built in the surface node editor using grayscale values and scalar math, with specific color palettes applied at the very end of the network.
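To make that approach concrete, here is a minimal Python sketch of the idea: all shading math produces a 0–1 grayscale scalar, and colour only enters at the final step. The function and palette values are illustrative stand-ins, not the actual node network.

```python
# Sketch of the grayscale-first approach: shading math produces a
# 0..1 scalar, and a per-area colour palette is applied only at the
# very end. Palette values here are invented for illustration.

def apply_palette(shading: float, palette: dict) -> tuple:
    """Map a grayscale shading value to a colour for one surface area."""
    shadow, light = palette["shadow"], palette["light"]
    # Linear mix from the shadow colour to the light colour.
    return tuple(s + (l - s) * shading for s, l in zip(shadow, light))

# Each modeled area (skin, fur, eyes) gets its own palette, so style
# changes only mean swapping palettes, never reworking the shading math.
skin = {"shadow": (0.55, 0.35, 0.30), "light": (0.95, 0.75, 0.65)}
print(apply_palette(0.8, skin))  # a warm, mostly lit skin tone
```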
Lightwave’s subsurface scattering shader solves several issues common in designing cartoon surfaces. On the technical side, SSS interpolates even the grainiest dome lights and soft shadows before further nodes are applied, allowing for sharp cel edges (via logic, gradient, or other stepped nodes) with no extra oversampling, something that’s entirely impossible using a Lambert diffuse shader. On the artistic side, SSS groups and blends areas much like an artist simplifies and combines broader shapes while overlooking unnecessary geometry details.
I chose Phong specular shading for the eyes for its bigger, slightly offset, and more stylised look, compared to the smaller, harsher, and more accurate Blinn shader (I fully admit, sometimes the Phong specular shader is fun just because it’s so classic, such as when working on the Tron: Legacy project).
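For anyone unfamiliar with the difference, it comes down to which vector drives the highlight. Here is a quick Python sketch of the standard textbook forms (not Lightwave’s internal implementation, and the exponent is just a placeholder):

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _normalize(v):
    n = math.sqrt(_dot(v, v))
    return tuple(c / n for c in v)

def phong_spec(N, L, V, exponent=40.0):
    # Phong: reflect the light direction about the normal, then compare
    # with the view direction. The (R . V) lobe produces the bigger,
    # slightly offset highlight.
    R = tuple(2.0 * _dot(N, L) * n - l for n, l in zip(N, L))
    return max(0.0, _dot(_normalize(R), V)) ** exponent

def blinn_spec(N, L, V, exponent=40.0):
    # Blinn: use the half vector between light and view directions,
    # producing the smaller, tighter, more accurate highlight.
    H = _normalize(tuple(l + v for l, v in zip(L, V)))
    return max(0.0, _dot(N, H)) ** exponent
```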
Though a gradient node would allow for multi-step cel shading and more granular control over luminosity ranges, I opted for two simple scalar nodes and a mixer. The smooth step node is one I use constantly for controlling value ranges, and the eased curve at either end of the range results in richer contrast. Mixing it with a simple logic node set to return 0.0 or 1.0 based on a threshold value, I created sharp cel shading for the nose and furry areas, while retaining a small amount of smooth shading for better depth and to lessen the harshness of the lighting.
Of note, the skin uses no such combination, relying only on the smooth step node for a softer, flatter look. I used combinations of this smooth and sharp cel shading throughout the project, depending on the feel or style needed for each material. These values were then remapped using multiplication and addition nodes, so that I could add further details in the shadow areas.
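As a rough Python sketch of that scalar math (the thresholds and mix weights here are invented for illustration, not the production values):

```python
def smooth_step(edge0: float, edge1: float, x: float) -> float:
    # Hermite interpolation between edge0 and edge1; the eased curve at
    # either end of the range is what produces the richer contrast.
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def logic_step(x: float, threshold: float) -> float:
    # The logic node: return 0.0 or 1.0 based on a threshold value.
    return 1.0 if x >= threshold else 0.0

def fur_shading(diffuse: float) -> float:
    # Sharp cel edge mixed with a little smooth shading for depth.
    mixed = 0.8 * logic_step(diffuse, 0.5) + 0.2 * smooth_step(0.2, 0.8, diffuse)
    # Remap with multiplication and addition so shadows never reach
    # pure black, leaving room for further detail in the shadow areas.
    return mixed * 0.85 + 0.15

def skin_shading(diffuse: float) -> float:
    # The skin skips the logic node entirely for a softer, flatter look.
    return smooth_step(0.2, 0.8, diffuse) * 0.85 + 0.15
```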
The eye surface uses only a logic node and the Phong shader to create a bright, flat highlight. Because a dome light would still result in unresolvable noise, I implemented a split lighting scheme in Layout: a soft dome light enabled only for diffuse (affecting only the SSS nodes), and a distant light enabled only for specular (affecting only the Phong node).
Using weight maps, when possible, can be much simpler than fussing with UV mapping, and significantly faster and smoother than trying to calculate ambient occlusion at render time. In this case it also allowed for art direction, applying shading to specific areas I wanted to darken, while leaving other areas bright, regardless of physical accuracy. This weight map was then combined with the diffuse shading by simple multiplication.
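The combination itself is the simplest part of the network; continuing the sketch above:

```python
def occluded_shading(diffuse: float, weight: float) -> float:
    # `weight` is the painted weight map value: 1.0 leaves an area fully
    # lit, while lower values darken hand-picked crevices regardless of
    # physical accuracy. Combined with diffuse by simple multiplication.
    return diffuse * weight
```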
To better match the original illustrations, I needed to improve the definition between areas of the character that were separated by depth but not by screen space, especially around the face and ears. Essentially, I wanted to outline only those areas that needed extra separation, without the result reading as a continuous stroke. Given Lightwave’s poor native edge rendering and its inability to push edges behind geometry, the built-in tools weren’t going to be an option.
Rendering an edge effect from the depth map in post, either via Denis Pontonnier’s free image filter nodes or custom effects in your compositing application of choice, has the advantage of screen-space thickness controls. Unfortunately, that doesn’t help within the shader pipeline unless you render out pieces separately and combine them in post, which for a shader setup like this would be excruciatingly overcomplicated. There’s also the perennial issue of depth map aliasing, which requires rendering at much higher resolutions or significant antialiasing work in post.
One possible solution would be an additional light positioned in front of the camera, using RGB channels to create separate diffuse light sets at render time (for example, a 100% red SSS node paired with a 100% red dome light, and a 100% green Lambert node paired with a 100% green point light). While combining these in the node editor isn’t a problem, the Layout setup is a pain to deal with, and I just wasn’t happy with the results; I wanted something a little more automated, built strictly within the shader itself.
I finally settled on creating a custom edge projection setup using the RayCast node and a bit of math. The trick here is that it’s essentially hard-coded for this character and this scene (see below for a more universal solution). By taking the Ray Source from a Spot Info node, I grabbed the location of the current viewpoint, scaled it down, and shifted it up slightly. This provided a direction for the rays that would converge well in front of the camera; any pieces of geometry visible from the camera that were much further behind other pieces of geometry would return a measured hit instead of empty air. Combined with a logic node to filter out -1 (no hit at all) and a smooth step node to limit and normalise the distance traveled (and fading out results larger than I was looking for), I was able to render black lines around geometry edges.
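Here is a rough Python sketch of the logic behind that setup; `raycast()` stands in for Lightwave’s RayCast node, and the constants are tuned-by-eye guesses rather than the values actually used:

```python
def smooth_step(edge0, edge1, x):
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def edge_shade(point, camera_pos, raycast, max_dist=0.05):
    """Return 1.0 in open areas, falling towards 0.0 near occluding edges.

    `raycast(origin, direction)` stands in for the RayCast node: it
    returns the distance to the first hit, or -1.0 for no hit at all.
    """
    # Scale the camera position down and shift it up slightly, so rays
    # from every shaded point converge well in front of the camera.
    target = tuple(c * 0.5 for c in camera_pos)
    target = (target[0], target[1] + 0.1, target[2])
    direction = tuple(t - p for t, p in zip(target, point))
    dist = raycast(point, direction)
    if dist < 0.0:
        return 1.0  # logic node: filter out -1 (no hit at all)
    # Smooth step: normalise the distance travelled, fading out hits
    # further away than the outline width we are after.
    return smooth_step(0.0, max_dist, dist)
```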
For a more universally useful solution, a combination of radial offsets is preferable. Prior to Lightwave 11.6, an array like this was simply too unwieldy, but the Compound node (introduced at Siggraph 2013, with a prerelease available to all registered users) makes this much simpler, compiling large node networks into a single group. You can download a sample collection of nodes at the end of this article.
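A hedged sketch of that more general approach, probing a ring of offset rays and keeping the nearest hit (the sample count and radii are invented; the actual Compound setup is in the download below):

```python
import math

def radial_edge_shade(point, to_camera, raycast, samples=8,
                      radius=0.02, max_dist=0.05):
    """Probe several radially offset rays around the view direction and
    keep the nearest hit, so outlines work from any viewpoint.
    `raycast` again stands in for the RayCast node (-1.0 = no hit)."""
    nearest = -1.0
    for i in range(samples):
        angle = 2.0 * math.pi * i / samples
        # Offsets arranged in a circle; a full setup would build a
        # proper basis perpendicular to the view direction.
        offset = (radius * math.cos(angle), radius * math.sin(angle), 0.0)
        direction = tuple(c + o for c, o in zip(to_camera, offset))
        dist = raycast(point, direction)
        if dist >= 0.0 and (nearest < 0.0 or dist < nearest):
            nearest = dist
    if nearest < 0.0:
        return 1.0  # nothing hit in any direction: no outline here
    t = min(max(nearest / max_dist, 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)  # same smooth-step falloff as before
```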
Using the grayscale values from each shader network as inputs to mixers, I selected and mixed colors from the original illustrations to create the desired highlights and shadows. This sort of absolute control over shadow tinting and hue shifts can produce rendered output that feels far more illustrative than typical 3D.
The eyes were feeling a little dead, so at the suggestion of a coworker I mixed additional white around the eyes using another weight map.
The character was rigged with automated bounce built into extremities (such as the ears and modeled tufts of hair), and a dual skeleton system that allowed me to use a combination of motion capture data and keyframed positions. This allowed the character to have a very specific start and end position, making the pre-rendered animation assets mix and match seamlessly and consistently within the game experience.
Motion capture was recorded, tracked, and targeted using iPi Soft’s motion capture solution, and lip-sync was animated using Papagayo (importing the data files using Mike Green’s LScript).
Render queues were managed using a simple Finder service in OS X, with all shots tracked in a spreadsheet detailing progress from voice-over recording (also completed in-house at Vectorform) through final image sequence delivery.
Because everything was rendered straight out of Lightwave without any need for compositing, image sequences were simply converted to the desired size and format for the development team, who then implemented the animations in the final experience.
The feedback from Kaiser Permanente has been absolutely fantastic, and the experience is already being used in clinical settings with great results. For more information, you can read about the full project on the Vectorform blog.
Download: ProjectedEdgeOutlines (compatible only with Lightwave 11.6 and newer)