by Sunjammer
So last week's goal was to investigate CEGUI as a possible general-purpose GUI framework for my game, and the conclusion there is a little nuanced. I've been trying to use more libraries instead of constantly getting stuck in a rut writing everything myself, but I think with CEGUI I arrived at a clearer understanding of what it is I actually want here.
CEGUI is simply too much. I spent half a day wrangling CMake to get a working build, and once I had one, the sheer number of dependencies and concerns made the whole endeavor of writing Haxe bindings, or even a game-specific C lib wrapping CEGUI, so daunting that it felt like embarking on a new project entirely.
Game GUI is an interesting thing because, as a rule, both its scope and its capabilities are limited by the requirements of the game engine. For instance, restricting yourself to a 2D canvas is artificial: there is no clear boundary between floating elements that respect the depth of a 3D scene, dropdown menus and floating tooltips, and for the most part this is not a creative limitation you actually want to concern yourself with. You want to make an image that presents information, that is all.
General-purpose GUI engines tend to be the thing you build the entire application in, perhaps extended with a custom renderer here and there. They are less a rendering context and more an application context.
The more I looked at CEGUI and its examples, the more I realized it simply solves too much, in too biased a fashion, and as a result would actually impose limitations on my work. Instead of exciting me, the CEGUI example UIs served as a series of indications that my intent was not aligned with what the library tries to do.
On a less cerebral note, I'm also not going to introduce 5+ megs of dependencies and associated licenses to my game when several of the dependencies solve issues I already have solutions for.
Baggers mentioned NanoGUI to me on several occasions, and after reading its documentation I found myself increasingly drawn to NanoVG, a very small, very pretty immediate-mode vector graphics rendering library with OpenGL 3 support and font rendering via stb_truetype. I already use stb_image, so this felt extremely well aligned with what I was already doing. I spent about half a day messing with it and building some rudimentary Haxe bindings before discovering the job had already been done for me. Note to self: Google the things.
The lovely thing about NanoVG is that it's a fantastic neighbor. It took me less than 20 minutes to replace all my debug vector drawing routines with NanoVG calls, with text rendering falling out as an afterthought. This was extremely encouraging: the key requirements I have of a GUI renderer are vector art with color and image fills plus robust-enough text rendering, and NanoVG delivers exactly this.
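To show what that neighborliness looks like, here's roughly the shape of one of those replaced debug routines. This is a sketch that assumes Haxe bindings mirroring NanoVG's C API one-to-one; the NVG class and NVGContext type are stand-ins for whatever the actual bindings expose.

```haxe
// Sketch of a debug overlay drawn through NanoVG, via externs that mirror
// the C API. One frame: begin, emit paths, end.
function drawDebugOverlay(vg:NVGContext, width:Float, height:Float, pixelRatio:Float):Void {
    NVG.nvgBeginFrame(vg, width, height, pixelRatio);

    // A filled rect where a hand-rolled debug quad routine used to be.
    NVG.nvgBeginPath(vg);
    NVG.nvgRect(vg, 16, 16, 220, 64);
    NVG.nvgFillColor(vg, NVG.nvgRGBA(0, 0, 0, 160));
    NVG.nvgFill(vg);

    // Text "as an afterthought": stb_truetype does the heavy lifting.
    // Assumes a font was registered once at startup via nvgCreateFont.
    NVG.nvgFontFace(vg, "sans");
    NVG.nvgFontSize(vg, 14);
    NVG.nvgFillColor(vg, NVG.nvgRGBA(255, 255, 255, 255));
    NVG.nvgText(vg, 24, 44, "frame: 16.6ms", null);

    NVG.nvgEndFrame(vg);
}
```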
For actual UI concerns, I'm already propagating keyboard and mouse buffers up a graph, so doing the same for a UI tree is trivial. Years ago I threw together a GUI framework for use with Haxe/OpenFL/Flash. It is dogshit and poorly maintained, and my work over the past years has taught me a lot about GUI that I didn't know back then, but I already have a good basis for getting fundamental components up and playing ball. All Flash vestiges have to be torn out and formal update/consume/render steps need to be added, but the current prognosis is that I should have HXComp rendering through NanoVG in my engine sometime this week.
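For the curious, the component contract I have in mind looks roughly like this. It's a sketch, not HXComp's actual API; UIComponent, InputEvent and NVGContext are placeholder names.

```haxe
// Placeholder component base with the formal update/consume/render split.
class UIComponent {
    public var children:Array<UIComponent> = [];

    public function new() {}

    // Fixed-step update for layout and animation state.
    public function update(dt:Float):Void {
        for (c in children) c.update(dt);
    }

    // Events propagate through the tree exactly like my keyboard/mouse
    // buffers already do elsewhere. Returning true consumes the event and
    // stops it from ever reaching the input mapper.
    public function consume(e:InputEvent):Bool {
        for (c in children) if (c.consume(e)) return true;
        return false;
    }

    // Rendering goes through NanoVG; the context is simply passed down.
    public function render(vg:NVGContext):Void {
        for (c in children) c.render(vg);
    }
}
```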
Now before you stop me, let me stop myself: why reimplement much of what NanoGUI already does? Because I would have to reimplement much of NanoGUI in Haxe anyway, and I would much prefer to write this part of the code in Haxe, since it is likely to require more rendering and GL state poking than NanoGUI easily exposes.
So for this week, the GUI adventure continues! I also want to take a GUI break at some point and mess about with vehicle physics some more. I keep procrastinating on this: physics math fucks with my head and I keep finding reasons to do other things. I think what I need is the world's dumbest, most naive implementation, just to have a placeholder until I can dedicate a week to it. For now, though, there definitely needs to be more game and less engine in my days.
Some quick insight into how I hook GUI into the game event loop. Every fixed game update, roughly the following things take place (abbreviated):

1. Platform events are pumped.
2. Raw input events (keys, mouse, controller) are collected into buffers.
3. The common input mapper translates raw events into semantic input events.
4. The game state updates, consuming the list of semantic inputs.
"Input events" in my engine are semantic bindings. For instance, WalkLeft(amount)
can be a controller axis, an arrow key, a dpad button, a mouse delta etc. When these things occur, a WalkLeft is emitted.
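As a concrete (and entirely illustrative) sketch of that mapping step, with made-up RawEvent cases and key codes standing in for my actual tables:

```haxe
// Semantic input events: the only thing the game state ever sees.
enum InputEvent {
    WalkLeft(amount:Float);
    Jump;
}

// Raw device events as collected in step 2.
enum RawEvent {
    Key(code:Int, down:Bool);
    Axis(id:Int, value:Float);
}

// Step 3: the common input mapper folds many physical sources into one binding.
class InputMapper {
    public function new() {}

    public function map(raw:Array<RawEvent>):Array<InputEvent> {
        var out:Array<InputEvent> = [];
        for (e in raw) switch (e) {
            // Left arrow key and a leftward stick deflection both mean WalkLeft.
            case Key(37, true): out.push(WalkLeft(1.0));
            case Axis(0, v) if (v < 0): out.push(WalkLeft(-v));
            case _:
        }
        return out;
    }
}
```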
The new game update injects a GUI update step between 2 and 3, allowing the GUI to consume events like mouse clicks and keydowns with natural, implicit priority before the common input mapper kicks in. The GUI thus functions as a secondary input mapper, adding its own class of semantic bindings to the list of inputs passed to the game state.
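Spelled out as code, reusing the types from the sketch above, the injected step sits roughly here. The fields platform, gui, mapper and gameState, and the collectSemanticEvents method, are illustrative names rather than my engine's actual API:

```haxe
class Loop {
    static var platform:Platform;   // collects raw device events (step 2)
    static var gui:UIComponent;     // root of the UI tree
    static var mapper:InputMapper;  // the common input mapper from above
    static var gameState:GameState;

    static function fixedUpdate(dt:Float):Void {
        // Step 2: raw events for this tick.
        var raw:Array<RawEvent> = platform.collectEvents();

        // Injected GUI step: the tree gets first refusal on clicks and keydowns.
        raw = raw.filter(function(e) return !gui.consume(e));

        // Step 3: the common mapper runs on whatever survived, and the GUI
        // contributes its own class of semantic bindings to the same list.
        var inputs = mapper.map(raw).concat(gui.collectSemanticEvents());

        // Step 4: the game state consumes one flat list of semantic inputs.
        gameState.update(dt, inputs);
    }
}
```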
I'll leave the actual GUI render step up to the game state to decide.
Note that in this engine, all graphics and resources go through static managers. The only rendering the engine itself does is bootstrapping, GPU resource management and flipping the buffers; everything else is left to the game states, giving each one nearly complete control.
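A final sketch of that split, again with illustrative names (Gpu standing in for the engine's static GPU resource manager):

```haxe
// Illustrative split: the engine core bootstraps, owns GPU resources and
// flips buffers; everything else belongs to the current game state.
interface GameState {
    function update(dt:Float):Void;
    function render():Void;  // GUI rendering happens in here, if the state wants it
}

class Engine {
    static var state:GameState;

    static function frame(dt:Float):Void {
        Gpu.beginFrame();  // engine: GPU bookkeeping only
        state.update(dt);  // the fixed update described above
        state.render();    // the state decides everything that gets drawn
        Gpu.flip();        // engine: present the frame
    }
}
```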