Imitating the Canvas Engine (0): Overview

I think I'm finally at the point where I can share what I've been working on over the past few months, in between figuring out how to convert models from MikuMikuDance to XNA and writing these posts.

Yeah, that title is a pretty bold claim, so let me just say that what I'm doing is a really basic imitation of a graphical effect I saw in Valkyria Chronicles. Despite what I thought of the gameplay, the Canvas engine genuinely interested me, so I decided to try to figure out how they did it. There could be anywhere from a little to a lot more to the Canvas engine than what I'm going into here, but at the very least this might give some people out there a general idea of (how I think) it's done.

For those of you who don't know Valkyria Chronicles... well, you could look it up, but suffice it to say that the game's graphics engine was designed to make everything look like scenes drawn and colored by hand in a sketchbook or on canvas. Playing through the game, I came to the conclusion that half the effect came from the models and textures being used, and the other half from postprocessing done on the scene itself.

What I observed from the game:

  • The scene is rendered on top of a screen-filling canvas texture.
  • The edges of the screen don't contain any color from the scene, taking the color of bare, unpainted canvas.
  • The base canvas texture seeps through the colored portion, making it look like the scene was drawn on a paper medium.
  • Model edge detection and edge rendering are applied to the scene, though unlike the scene color, these edges are drawn beyond the colored section of the screen and out to the screen edges.
  • Model edges also have some texture to them to make it feel as though they were penciled on a rough sheet of paper, and aren't just solid colors rendered onto the scene.
  • Rather than lowering the light intensity on shadowed pixels, the game overlays a shadow texture on the scene where shadows fall.
  • In Book Mode, shadows are offset a bit so that they hang off the edge of the model.
  • Distant objects are out of focus. Strangely, their edges are not, so you can see some distant models bleed past their edges.
  • Grass always rotates to face the camera, regardless of angle.

I didn't really get into the last one there - my scene is sparse, and I definitely don't have a full forested scene primed and ready to go.

But for the rest, it looks like there are a bunch of different postprocessing techniques being used.

  • Edge detection
  • Shadow mapping
  • Blurs, blurs, blurs

Of these, edge detection can be simple or hard depending on which technique you use. I'll still go over what I did, as simple as it was, but it's really a matter of where in the imitation Canvas engine you apply the effect. Shadow mapping was the most complex part, first because I was dealing with directional lights, and second because I wanted to be able to change the direction of the light dynamically. The blurs... well, blurring is easy. So easy that I was second-guessing myself about whether it could really be coded that simply.
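Just to show how little there is to a basic blur, here's a minimal sketch of a horizontal Gaussian blur pass as an XNA-style HLSL effect. The texture name, the TexelSize parameter, and the weights are all placeholders I made up for illustration; run a second pass with vertical offsets and you have the full separable blur.

    // One horizontal pass of a separable Gaussian blur.
    // A second pass using float2(0, i * TexelSize.y) offsets finishes the job.
    texture SceneTexture;
    sampler SceneSampler = sampler_state
    {
        Texture = <SceneTexture>;
        MinFilter = Linear;
        MagFilter = Linear;
        AddressU = Clamp;
        AddressV = Clamp;
    };

    float2 TexelSize;   // 1.0 / screen resolution, set from the application
    static const float Weights[5] = { 0.0545, 0.2442, 0.4026, 0.2442, 0.0545 };

    float4 HorizontalBlurPS(float2 texCoord : TEXCOORD0) : COLOR0
    {
        float4 color = 0;

        // Weighted sum of five texels centered on the current pixel.
        // The compiler unrolls this loop for the ps_2_0 target.
        for (int i = -2; i <= 2; i++)
        {
            float2 offset = float2(i * TexelSize.x, 0);
            color += tex2D(SceneSampler, texCoord + offset) * Weights[i + 2];
        }

        return color;
    }

    technique HorizontalBlur
    {
        pass P0
        {
            PixelShader = compile ps_2_0 HorizontalBlurPS();
        }
    }

Most of the other blurs in my breakdown below are variations on this, just with different sample offsets and weights.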

So now that I'm finally writing this, here's how I broke down the canvas renderer.

  • Render scene, using cel shading to convey volume but not shadows.
  • Create a normal-depth map (scene normals and depth in one texture).
  • Apply depth blur to the rendered scene.
  • Apply canvas textures to the rendered scene (sketched after this list).
  • Draw edges onto the rendered scene.
  • Create shadow map.
  • Translate shadow map to screen space, adding Lambertian lighting values to it.
  • Apply a directional blur, then a Gaussian blur, to the screen-space shadow map.
  • Blend the shadow texture onto the scene (also sketched below).
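To make the canvas texture step a little more concrete, here's a rough sketch of the sort of pixel shader that could cover it, handling both the paper grain seeping through the color and the bare, unpainted border around the screen from the observations above. Every name and number in it is an assumption on my part, not anything taken from the actual game.

    // Hypothetical canvas-overlay pass: blend the rendered scene with a
    // paper texture and fade out to bare canvas near the screen edges.
    texture SceneTexture;
    texture CanvasTexture;

    sampler SceneSampler  = sampler_state { Texture = <SceneTexture>;  AddressU = Clamp; AddressV = Clamp; };
    sampler CanvasSampler = sampler_state { Texture = <CanvasTexture>; AddressU = Wrap;  AddressV = Wrap;  };

    float CanvasStrength = 0.25;   // how much paper grain shows through the color
    float EdgeFade = 0.15;         // width of the unpainted border, in texture coordinates

    float4 CanvasOverlayPS(float2 texCoord : TEXCOORD0) : COLOR0
    {
        float4 scene  = tex2D(SceneSampler, texCoord);
        float4 canvas = tex2D(CanvasSampler, texCoord);

        // Let a fraction of the paper grain seep through the painted color.
        float4 painted = lerp(scene, scene * canvas, CanvasStrength);

        // Distance to the nearest screen edge controls where the paint stops.
        float2 edgeDist = min(texCoord, 1 - texCoord);
        float border = smoothstep(0, EdgeFade, min(edgeDist.x, edgeDist.y));

        // Bare canvas at the border, painted scene in the middle.
        return lerp(canvas, painted, border);
    }

    technique CanvasOverlay
    {
        pass P0
        {
            PixelShader = compile ps_2_0 CanvasOverlayPS();
        }
    }

A straight multiply darkens the whole scene, which is why the CanvasStrength lerp only lets a fraction of the grain through.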

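And here's an equally rough sketch of the final shadow step: once the screen-space shadow map has been blurred, it just acts as a mask for laying a textured shadow over the scene instead of darkening pixels directly. Again, the texture names and the blend itself are made-up placeholders, not the game's actual method.

    // Hypothetical shadow-blend pass: a blurred screen-space shadow mask
    // decides where a hand-drawn shadow texture gets multiplied into the scene.
    texture SceneTexture;
    texture ShadowMaskTexture;    // screen-space shadow factor, 1 = fully lit
    texture ShadowPaperTexture;   // the "penciled" shadow pattern

    sampler SceneSampler = sampler_state { Texture = <SceneTexture>; };
    sampler MaskSampler  = sampler_state { Texture = <ShadowMaskTexture>; };
    sampler PaperSampler = sampler_state { Texture = <ShadowPaperTexture>; };

    float ShadowDarkness = 0.5;   // how strongly the shadow texture darkens the scene

    float4 ShadowBlendPS(float2 texCoord : TEXCOORD0) : COLOR0
    {
        float4 scene = tex2D(SceneSampler, texCoord);
        float  lit   = tex2D(MaskSampler, texCoord).r;    // 1 lit, 0 in shadow
        float4 paper = tex2D(PaperSampler, texCoord);

        // Darken shadowed pixels with the shadow texture; leave lit pixels alone.
        float4 shadowed = scene * lerp(float4(1, 1, 1, 1), paper, ShadowDarkness);
        return lerp(shadowed, scene, lit);
    }

    technique ShadowBlend
    {
        pass P0
        {
            PixelShader = compile ps_2_0 ShadowBlendPS();
        }
    }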
I had some help along the way; a bunch of very kind people took some of their time to post tutorials on basic rendering techniques. In particular, the XNA tutorial on nonphotorealistic rendering was really great, and probably the most useful one I came across. Shadow mapping I mostly pieced together from the XNA tutorial and the Wikipedia article, and I occasionally turned to GPU Gems for a few of the minor blur effects.

I tried converting a bunch of MikuMikuDance models to FBX format, but a lot of them didn't convert that well. Some had little vertex glitches, a few had too many bones, and a lot had texturing problems. I did find a few that I liked, though.

Yowane Haku: S.H.S. Haku, by Hatsuki (http://type997.blog136.fc2.com/)
Akita Neru: D. Neru, by Rummy (http://rummy.at.webry.info/)
Miku Hachune: Hachune (distribution discontinued), by Enamel (http://blog.goo.ne.jp/hachi800)

Next Post:
Imitating the Canvas Engine (1): Basic Shading Effects