by ferris
So this writeup is a bit late, but better late than never, no? :)
This week the Logicoma crew finished our latest prod, Elysian, and with it, won the 64k compo at TRSAC! Needless to say, this is a huge milestone for our group, our tech, and me personally as well; I think without a doubt this is the best intro(/demo) I've ever done, both stylistically and technically (at least if you include the tooling).
So, in this post, I'd like to shed some light on how some of the different bits and bobs work. Since some of the intended audience for this post is not the usual everyweeks-readers and may lack some context I usually assume a reader has, I'll likely go into more detail on some things I've gone over before, or perhaps I won't. Either way, it should be a fun read for those of you curious about how this thing was made. :)
Note: This is going to be pretty scatterbrained. I've been working on bits of this project for months, I've lost track of what happened when, etc., and I'm also writing this a while after the prod was finished. I'd love to do a more proper thing where I go through each bit in detail for someone completely new to this, but I don't think that's realistic. The best I can do is to splat a bunch of my brains out now and leave it a mess, and hope the audience finds it charming or something :) Anyways, feel free to contact me on reddit or twitter or email or something if you're really curious. I'm always happy to chat about intros and tech. :)
We'll start where this particular toolset started, sometime back in March. Well, actually, if you include all the tooling, it goes back to ~spring 2012 or so. So maybe that's not the best way to organize this :)
Let's perhaps walk through the major parts of the intro instead: the audio, the size coding, the visual tooling, the visual content itself, and the sync/direction work.
I want to talk about audio first, as it's the oldest piece of tech in this prod by far. It's also content-wise the first thing we had done. H0ffman (a fantastic musician who's worked with this synth countless times before) did the track, and was smart enough to do so ~3 weeks before the party. After a few messages back/forth to decide on a genre, most of the soundtrack was done in the following 3 days or something (nuts!), and with that, we had the mood set for the intro quite early on. Of course, the track was also quite big, but we'll get to that shortly :)
Before I talk more about the audio content, I have to talk a bit about the tech. The audio for this intro is generated by a [fairly standard, really] softsynth called WaveSabre, featuring subtractive synthesis, FM synthesis, a few distortion units, a 3-band EQ/filter, a delay, a reverb, a compressor, and a sampler. Most of the synth bits are fairly run-of-the-mill, but we have a pretty nice way of orchestrating them that I've explained many times, including in this extensive talk that you should check out if you haven't before. Especially fun is the part where I say C# is the best programming language in the world. Oh, silly, young me :) . Apart from that, some high-level details about the synth: it's written in C++; most of the work was done between late 2012 and early 2014; it's a collection of audio modules wrapped in VST plugins that you orchestrate with Ableton Live or Reaper, and we parse the resulting project file to get the routing/patch info that we stuff into the exe; and we (h0ffman and I and a few others) have done lots of executable music entries with it for various parties (for example Reverence and SabreWulf, and the first track ever produced with it, i maed a synth), but it never really made it into any decent intros, just a few small ones I produced here and there (the fast-made varia part 1 and part 2). Everything else is covered in that talk, and we didn't make any changes to the synth for this intro, so I best not spend too much more time talking about that bit of tech here.
Specifically for this intro, this song/style of music relies very heavily on the sampler unit. The sampler uses a GSM codec built into Windows to compress the samples, which in this case consist of the kick, clap, a few clicks/pops, and a couple of vocal sounds. Even with that, the synth + music data in the final executable takes up over half of the entire intro size, at least 30+kb (I don't have exact numbers, as we never tested that bit in isolation). This is, in a word, hugenormous, and we knew very early on it was quite big. But because we hadn't really tested our visual tooling yet, we made the call to just go with it and not worry about its size unless we absolutely had to (spoiler alert: the final intro size is 56kb; we didn't have to)!
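To give a rough idea of what "built into Windows" means here: the GSM 6.10 codec is reachable through the ACM (msacm32) APIs, and you mostly just describe the format you want and let Windows do the encoding/decoding. The sketch below is purely illustrative Rust (the synth itself is C++, and its actual decoder setup differs in the details):

```rust
// Purely illustrative: roughly the format description you hand to the
// Windows ACM (msacm32) to reach the built-in GSM 6.10 codec.
#[repr(C)]
struct WaveFormatEx {
    format_tag: u16,        // WAVE_FORMAT_GSM610 = 0x0031
    channels: u16,          // mono samples in our case
    samples_per_sec: u32,
    avg_bytes_per_sec: u32, // approximate; 65 bytes cover 320 samples
    block_align: u16,       // 65-byte GSM blocks
    bits_per_sample: u16,   // 0 for compressed formats
    cb_size: u16,           // 2 extra bytes follow for GSM
}

#[repr(C)]
struct Gsm610WaveFormat {
    wfx: WaveFormatEx,
    samples_per_block: u16, // 320 samples decoded per 65-byte block
}

fn gsm_format(sample_rate: u32) -> Gsm610WaveFormat {
    Gsm610WaveFormat {
        wfx: WaveFormatEx {
            format_tag: 0x0031,
            channels: 1,
            samples_per_sec: sample_rate,
            avg_bytes_per_sec: sample_rate * 65 / 320,
            block_align: 65,
            bits_per_sample: 0,
            cb_size: 2,
        },
        samples_per_block: 320,
    }
}
```

The nice part for a 64k is that the decoder itself costs zero bytes in the executable; only the compressed sample data has to be shipped.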
I can't really say more about the music itself as I didn't write it, but before I move on I must say hats off to H0ff for his fantastic (and extremely fast) work. This guy has tons of experience making all kinds of tunes, and it really shows! He's also a wizard with quirky 64k softsynths, but I'm sure you're aware of that by now ;)
Since this was the first size-optimized demo done in Rust (to my knowledge), I'd like to say a few words on how we got everything small. Essentially, the trick is to not use the standard library, which really isn't that limiting, as Rust's core lib has a lot of what we need for memory/pointer stuff, and we can fill in the rest with some basic platform-specific bindings and intrinsics. One thing that obviously helps here is that even though our tooling is cross-platform, the intro itself only needs to target Windows. So, we only bind the OpenGL/win32 API bits we need (using custom bindings, as the official packages pull in std again, which we can't have), and use a few intrinsics for some of the floating-point stuff. As long as that stuff is really tight and we know at all times what we're linking against, the rest is basically just using the MSVC ABI; that way, what little we do use of the runtime lib (mostly from the synth, which is compiled with MSVC) can come from dynamically linking against the old VC6 runtime, which has been system-standard since basically forever. Apart from that, we use kkrunchy for compression, which is very good at compressing, but bad for virus-scanner false positives. I did actually spend a couple of weeks over the last few months researching our own packer, which I plan to continue, but we just didn't have enough time to finish both that and the visual tooling to a point where they were both usable, so we had to prioritize. Anyways, fun stuff. :)
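For the curious, here's roughly what a no_std Windows skeleton looks like. This is a from-memory sketch, not the intro's actual source; the names, bindings and linker details are all illustrative:

```rust
// A rough, modern-Rust sketch of the no_std approach; everything here is
// illustrative only.
#![no_std]
#![no_main]

use core::panic::PanicInfo;

// Bind only the win32/GL entry points we actually call; the official
// binding crates would drag std back in.
#[link(name = "kernel32")]
extern "system" {
    fn ExitProcess(code: u32);
}

#[link(name = "opengl32")]
extern "system" {
    fn glClear(mask: u32);
}

// With no_std there's no default panic machinery, so supply the bare
// minimum (paired with panic = "abort" in the build profile).
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    unsafe { ExitProcess(1) };
    loop {}
}

// With the CRT gone, the linker gets pointed at this via /ENTRY.
#[no_mangle]
pub extern "system" fn entry_point() -> ! {
    unsafe {
        // ... create a window + GL context, run the intro, then bail ...
        glClear(0x0000_4000); // GL_COLOR_BUFFER_BIT
        ExitProcess(0);
    }
    loop {}
}
```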
There are really two sides to the visuals: there's the tooling, which I've been working on (and writing about) for weeks, and there are the visuals specific to this intro. Let's start with some words on the tooling:
For building scenes and wrangling the GPU, I've been working on a visual tool (mostly in Rust, but there's a thin UI layer in C++/Qt on top of that) that allows us to iterate very quickly. This has gone through a few iterations by now, and is a bit different from the tool we had 3 months ago for Almost Infinite. Basically, we realized that a huge bottleneck with the previous tool was that changing CPU code required very slow recompiles, which was especially painful when we wanted to change very trivial things like buffer formats etc. Compare that to changing GPU code, which is as simple as watching a few files for changes and reloading them dynamically. So, we went with a drastic rethink/reimplementation, and what we arrived at looks something like this:
On the right, you can see emoon/TBL's rocket editor, which we use for time control/sync. That's basically an off-the-shelf editor for laying out very simple animation curves on a timeline, and it's the heart of the sequencing in this intro. On the left you can see our tool. At the top of the window is a viewport, which is a direct rendering of what the prod looks like at a given time. Beneath that is an operator graph, which is sort of like a visual imperative programming language. Execution starts at the "master pos" operator (marked with a red rectangle) and flows downwards from there. There are many different kinds of operators: some represent assets (such as the "rt" operators, which create render targets), some are control flow (such as cl/sb, which call and define a subroutine respectively, if to conditionally execute the stack below, rp to repeat it, etc), and others are imperative commands (clear screen, draw quad, draw cube), shader pair bindings, and so on. The UI supports basic navigation of the graph, as well as facilities to create/destroy/select/move operators. When an operator is selected, the lower portion of the screen reveals some text boxes where a number of expressions can be entered (not shown in this screenshot) which, depending on the operator and the expression, can either evaluate to floating-point numbers at runtime, or refer to assets or buffer formats or anything, really.

These operators are compiled on-the-fly while editing into a bytecode format, which is then also compiled on-the-fly to x86 in-process in the tool. All of this is connected to a very simple engine that abstracts certain OpenGL features (shaders, textures, render targets, ...). This gives very rapid turnaround times when changing CPU code (changes are effectively instant; I haven't even profiled how fast compilation is because I can't feel it), and it also allows for some nice compile-time checking of references, buffer formats, scope issues, etc. On export, the bytecode, the bytecode->x86 compiler, and the engine are all shipped in the intro, and the bytecode is compiled to x86 on startup (the assumption is that bytecode compresses really well, which it appears to, but this intro doesn't actually use as much of it as we'd like, so we'll have to test more). I've gone over most of the details of how this works in previous posts, so I'll refer to those for more info.
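To make the operator/bytecode idea a bit more concrete, here's a tiny, purely hypothetical sketch of what such an op set and its execution could look like. None of these names come from the actual tool, and the real thing compiles the ops to x86 rather than interpreting them:

```rust
// Hypothetical op set; the real tool's opcodes and expression VM are richer.
enum Op {
    Clear { r: f32, g: f32, b: f32 },
    BindShader { vs: u32, ps: u32 },
    DrawQuad,
    DrawCube,
    Repeat { count: u32, body: Vec<Op> },
    Call { sub: usize },
}

struct Engine; // stand-in for the thin OpenGL abstraction

impl Engine {
    fn clear(&mut self, _r: f32, _g: f32, _b: f32) {}
    fn bind_shader(&mut self, _vs: u32, _ps: u32) {}
    fn draw_quad(&mut self) {}
    fn draw_cube(&mut self) {}
}

// A straight interpreter for the ops; the tool (and the intro) instead
// compile this down to x86 so there's no interpretation cost per frame.
fn exec(ops: &[Op], subs: &[Vec<Op>], engine: &mut Engine) {
    for op in ops {
        match op {
            Op::Clear { r, g, b } => engine.clear(*r, *g, *b),
            Op::BindShader { vs, ps } => engine.bind_shader(*vs, *ps),
            Op::DrawQuad => engine.draw_quad(),
            Op::DrawCube => engine.draw_cube(),
            Op::Repeat { count, body } => {
                for _ in 0..*count {
                    exec(body, subs, engine);
                }
            }
            Op::Call { sub } => exec(&subs[*sub], subs, engine),
        }
    }
}
```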
At some point I'll likely do a stream where I go over the separate bits of what's there and show how it all fits together; it's quite difficult to write it all out, but it'd be very easy to show, so stay tuned for that :)
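One small piece that is easy to show right here, though, is the GPU-side hot reloading mentioned above: conceptually it's just polling file modification times and recompiling whatever changed. Here's a sketch of the idea (not the tool's actual code):

```rust
// Sketch only: poll shader files' modification times and report changes;
// the caller recompiles those shaders and swaps them in on the next frame.
use std::collections::HashMap;
use std::fs;
use std::path::PathBuf;
use std::time::SystemTime;

struct ShaderWatcher {
    mtimes: HashMap<PathBuf, SystemTime>,
}

impl ShaderWatcher {
    fn new() -> ShaderWatcher {
        ShaderWatcher { mtimes: HashMap::new() }
    }

    // Returns the paths that changed since the last call (everything counts
    // as changed the first time around, which conveniently loads everything).
    fn changed_files(&mut self, paths: &[PathBuf]) -> Vec<PathBuf> {
        let mut changed = Vec::new();
        for path in paths {
            if let Ok(meta) = fs::metadata(path) {
                if let Ok(mtime) = meta.modified() {
                    if self.mtimes.get(path) != Some(&mtime) {
                        self.mtimes.insert(path.clone(), mtime);
                        changed.push(path.clone());
                    }
                }
            }
        }
        changed
    }
}
```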
So, now I want to write some stuff about the actual visual content of the intro. Most of this was done in the last week or so before the party, with a few bits (such as the modelling for the ball scene) done a bit before that, since we needed a test scene for the rendering pipeline etc. The final operator graph looks like this:
Since it's the first time we've really used it for anything more than simple tests, the graph is quite a lot bigger than in the other shots, but it was a good test to show that "hey, this thing actually scales!" and that things can stay organized. Basically, the right half is the rendering pipeline, and the left half is the intro itself.
The rendering pipeline started out as more or less a direct port of the tech we had in Almost Infinite. I wrote a lot about some of the post FX (which is really most of the tech there) in that post, so I'll leave that info there (aside from some stuff I'll mention in a moment). But essentially, we're doing deferred rendering where we can select between a bunch of different scenes (for different geometry) and shadings (for different lighting environments). This is kind of a hacky way to get around the fact that drawing light volumes takes a bit of thought; since we only used a few lights anyway (for speed), it was easier for me to throw together a few different lighting environments as full-screen single-shader passes, so that's what we ran with.
For lighting, we used simple point lights and GGX specular mixed with some Blinn diffuse and some hacky metallic/shininess material stuff we found in various places. I'm being intentionally vague as it's kind of a mess; because we were using point lights, it was pretty hard to work with as well, and it really limited which kinds of materials would actually still look good. But because we were short on time (and I was admittedly having more fun playing with materials than tweaking the lighting pipeline), we stuck with it, just to see how it would go. :)
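For reference, the GGX part is just the usual textbook distribution term, something like the function below; our actual shading mix is hand-tweaked and messier than this, so treat it as an illustration only:

```rust
// Textbook GGX/Trowbridge-Reitz normal distribution term, with the common
// alpha = roughness^2 remapping. Not the intro's exact shader code.
fn ggx_distribution(n_dot_h: f32, roughness: f32) -> f32 {
    let a = roughness * roughness;
    let a2 = a * a;
    let d = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0;
    a2 / (std::f32::consts::PI * d * d)
}
```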
Apart from that, there's not really much to it. There are no shadows, no AO, nothing fancy like that; just well-placed lights on tweaked materials, additive colors, framing objects, and a lot of intentional motion and focus pulls to tie it all together. In a way, I'd much rather a piece's visual style be consistently simple than go back and forth, and given the timeframe we had and our lack of experience with the new tool, I think keeping things simple was the right choice for sure.
Ah, one more thing. I did tweak the DOF since the last prod; we now process the far field in full-res, not half-res. This makes the bokeh shapes MUCH clearer and reduces a lot of flickering artifacts at the cost of some performance, but this intro is pretty fast as it is (for what it does at least) so we certainly had some wiggle room there.
Essentially, each scene is really just a collection of simple primitives splatted around with different transforms/shaders. For example, the ball scene is a bunch of flattened cubes wrapped around a sphere shape with offsets determined in the vertex shader, the tunnel scene is a bunch of cubes with patterns drawn in the pixel shader, etc. This way of working was pretty effective speed-wise, and was at least quite fun for me as a coder to mess around with. It also meant that I was always the one driving when wobble and I were doing visuals, but at least we were both able to "direct" how stuff would go, which was helpful. Various textures/time offsets/etc were applied using some shader-based fbm with various inputs (usually an index based on some looped primitive from the CPU side, but also often based on world pos/original vertex pos, etc), and a lot of values were exported to the sync tracker so we had direct control over them over time.
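The fbm bit is nothing exotic either; conceptually it's just summed octaves of a cheap hash-based noise. The real thing lives in GLSL with its own hash, so the 1D Rust sketch below is just an illustration of the idea:

```rust
// Classic shader-style hash: fract(sin(n) * large_constant).
fn hash(n: f32) -> f32 {
    (n.sin() * 43758.5453).fract()
}

// 1D value noise: hash the neighboring integer lattice points and blend.
fn value_noise(x: f32) -> f32 {
    let i = x.floor();
    let f = x.fract();
    let u = f * f * (3.0 - 2.0 * f); // smoothstep fade
    hash(i) * (1.0 - u) + hash(i + 1.0) * u
}

// fbm: sum octaves at doubling frequency and halving amplitude.
fn fbm(mut x: f32, octaves: u32) -> f32 {
    let mut sum = 0.0;
    let mut amp = 0.5;
    for _ in 0..octaves {
        sum += amp * value_noise(x);
        x *= 2.0;
        amp *= 0.5;
    }
    sum
}
```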
To match the "clicky" feel of the music and the sync that we wanted to achieve, I did some work overlaying basic 2D shapes on top of the 3D content. This was pretty simple distance field code in the shaders for basic primitives (circle, ring, rectangle), and I show/hide them at different intervals/periods, all in one of the pixel shaders. That same shader also performs "masking", where the image is flipped horizontally/vertically in certain regions defined by simple shapes; this was basically the only reason our simple "lazers" scene worked at all, and it also provided a lot of fun sync opportunities. The last overlay we did was our logo, which, luckily, can also be made from these simple 2D primitives. We also mask that with some 3D fbm where we use (pixel x, pixel y, time) as the input, so we can make the mask "morph" over time. Noise-heads will instantly recognize the pattern, as we didn't do much to hide it (beyond some pow), but I really liked how it looked "raw", so we stuck with it :)
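The overlay shapes boil down to a handful of standard 2D distance functions, roughly like these (written in Rust here for illustration; in the intro they live in the pixel shader, and negative distance means "inside the shape"):

```rust
// Standard 2D signed distance functions for the overlay primitives.
fn sd_circle(px: f32, py: f32, radius: f32) -> f32 {
    (px * px + py * py).sqrt() - radius
}

fn sd_ring(px: f32, py: f32, radius: f32, thickness: f32) -> f32 {
    sd_circle(px, py, radius).abs() - thickness
}

fn sd_rect(px: f32, py: f32, half_w: f32, half_h: f32) -> f32 {
    let dx = px.abs() - half_w;
    let dy = py.abs() - half_h;
    let ox = dx.max(0.0);
    let oy = dy.max(0.0);
    (ox * ox + oy * oy).sqrt() + dx.max(dy).min(0.0)
}
```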
The last things we do to the visual signal are tonemapping from HDR to LDR (standard fare) and some final artistic adjustments. In particular, we bring up the bottom of the image, so that instead of being black, it's a very dark grey. This helps nail the "filmic" look we were after, and is quite easy to do (basically color = color * 0.99 + 0.01, more or less). The last thing we do is apply a slight exponential curve to the green and blue color channels. This is a stupidly simple "grade" that can hardly be called a grade, really, but it looks quite nice in our case and matches the warm music pretty well. Effectively that's just color.gb = pow(color.gb, 1.12) or something. But as simple as it is (just like the rest of the intro), it's the blend of all of these simple things on top of one another that makes the whole thing better than the sum of its parts. There are lots of small details like this sprinkled throughout the prod that bring it together, even if the scenes etc are rather simple. :)
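Written out, that whole "grade" is basically the following (the constants are literally the ones above, eyeballed until it looked right; shown here in Rust per channel, whereas the real thing is a couple of lines in the final pixel shader):

```rust
// Lift the blacks slightly, then push green/blue down a touch to warm up
// the image. Per-channel reference for the two tweaks described above.
fn grade(r: f32, g: f32, b: f32) -> (f32, f32, f32) {
    // Dark grey instead of pure black at the bottom of the range.
    let (r, g, b) = (r * 0.99 + 0.01, g * 0.99 + 0.01, b * 0.99 + 0.01);
    // Slight exponential curve on green and blue only.
    (r, g.powf(1.12), b.powf(1.12))
}
```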
Rocket. Rocket rocket rocket. Rocket is really good for this. Use rocket. :)
Most of this work was done at the party. We use rocket. Rocket is good for this. Use rocket. We had the scenes more or less ready by the time we left for the party, we knew which scenes went with which parts of the music, and we had also laid out some test camera angles and sequences outlining the kinds of motion we wanted for the different scenes. But basically, this is just hammering out a bunch of values in a tracker for a few hours with 3 guys huddled around a small laptop. Since the tracker is bpm-aligned with the music, this is pretty easy; the rest is exposing lots of parameters to play with and generally thinking about what kinds of motion we wanted. From the beginning, we wanted very sharp, deliberate, small movements that fit the "clickiness" of the music (I keep coming back to that term, but it really drove a lot of the prod artistically). Rocket is especially good for this: its curves are fairly weak, but that kind of motion relies more on hard cuts than on specific, smooth curves anyway. So, we won out a lot there just by leveraging what our tool was good at and using it to add lots of small, detailed changes in motion.
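For those who haven't used rocket: a track is just a sparse list of keyed values on a bpm-aligned row grid, and at runtime you look up the surrounding keys and interpolate. Something like the sketch below (the actual client code is a bit richer, and these names are made up):

```rust
// Hypothetical rocket-style track evaluation: find the keys around a
// (possibly fractional) row and interpolate according to the key's type.
#[derive(Clone, Copy)]
enum Interp {
    Step,
    Linear,
    Smooth,
}

#[derive(Clone, Copy)]
struct Key {
    row: u32,
    value: f32,
    interp: Interp,
}

fn track_value(keys: &[Key], row: f64) -> f32 {
    // Find the last key at or before `row` (keys are sorted by row).
    let idx = match keys.iter().rposition(|k| (k.row as f64) <= row) {
        Some(i) => i,
        None => return keys.first().map(|k| k.value).unwrap_or(0.0),
    };
    if idx + 1 >= keys.len() {
        return keys[idx].value;
    }
    let (a, b) = (keys[idx], keys[idx + 1]);
    let t = ((row - a.row as f64) / ((b.row - a.row) as f64)) as f32;
    let t = match a.interp {
        Interp::Step => 0.0,
        Interp::Linear => t,
        Interp::Smooth => t * t * (3.0 - 2.0 * t),
    };
    a.value + (b.value - a.value) * t
}
```

Hard cuts are just Step keys, which is why rocket's limited curve types weren't a problem for the kind of motion we wanted.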
This is one of the parts of the intro I'm happiest with, because we were able to spend a lot more time on it than we usually get to, and we achieved the feel we wanted from the start without many compromises. In the end there are a couple of motions I think might be a bit too rough, but I think they give the intro character and don't take away too much.
So this was pretty scatterbrained, as I had anticipated, but hey, a messy writeup is better than no writeup. At least we get some kind of post-mortem about all of this. All in all, I'm very pleased with what we were able to accomplish artistically. I want to do demos that look like motion graphics, and I think we've achieved a characteristic look/feel here that resonates heavily with that and with how I would like Logicoma productions to look/feel in the future. Technically I think we've still got a ways to go, but the tech is moving forward at a good pace, and at this point the vast majority of the technical problems we have to solve are behind us, so now we should be able to specialize and push this tech to see how far it can go, which is very exciting. I have some ideas for future scenes etc, but you'll just have to wait to see those later :)
And here's a bunch of random WIP images saved at various stages of production. I didn't really categorize these or anything; just saved them to my desktop periodically while working on the prod :)