Endgame Features: Procedurally Generated and Real-Time Streamed Geometry and Textures


In response to this video, I am posting my work-in-progress proposal for a game-changing feature set that Armory3D could be a good candidate for implementing.


With the introduction of Nanite in Epic's Unreal Engine 5, the previous limits on maximum model geometry were largely removed, or at least made far less meaningful. And while this means that less powerful hardware can access much higher levels of geometric detail, it also opens the gates to flooding storage devices with massive volumes of data dedicated to supplying those upper levels of detail.

Procedural methods have a low initial storage footprint and an arbitrarily high maximum detail range. However, they can also bloat both memory and storage, since the generated content has to be stored and managed somewhere (not as a rule, but as a potential).
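To make that trade-off concrete, here is a minimal sketch in plain Python (none of this is Armory API; `generate_rock` is a made-up stand-in for any deterministic generator): the asset that ships is just a seed and a detail parameter, while the expanded result is orders of magnitude larger.

```python
import math
import random

def generate_rock(seed: int, subdivisions: int) -> list[tuple[float, float, float]]:
    """Expand a tiny recipe (seed + detail level) into actual vertex data."""
    rng = random.Random(seed)
    verts = []
    for i in range(subdivisions):
        angle = 2.0 * math.pi * i / subdivisions
        radius = 1.0 + 0.2 * rng.uniform(-1.0, 1.0)  # noisy silhouette
        verts.append((radius * math.cos(angle), radius * math.sin(angle), 0.0))
    return verts

recipe = (42, 100_000)         # what ships on disk: two integers
mesh = generate_rock(*recipe)  # what lives in memory: megabytes of vertices
print(f"expanded size at float32 precision: {len(mesh) * 3 * 4 / 1e6:.1f} MB")
```

The bloat risk is precisely that expanded result: if every generated asset is kept around indefinitely, the storage savings evaporate.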

Combining the two would allow storage size to be smaller (massive understatement) than with traditional, pre-Nanite methods, while still providing a flexible upper limit on detail; a true best-of-both-worlds scenario.

This idea also makes uniquely good sense for Armory because of its use of Blender: Blender has growing development effort going into geometry nodes, and it already has a texture node set alongside the material nodes.

Instead of the usual hard constraints, the concern will likely center on something resembling "bandwidth" as the bottleneck. As such, it may pay to look to other areas of software development for inspiration, such as audio signal processing and real-time digital synthesizers, or real-time node-based compositing (as was proposed for Blender years ago, and just recently added!).

Since some situations (combinations of hardware, game, and specific in-game context) may benefit much more from traditional techniques, there should be built-in options to pre-generate and store the results of some or all generated assets. The "some" is there to give users control over how much storage these resources occupy, by baking only the assets that benefit most and leaving the rest procedural. This scenario should be made clear to players before installing; a store like Steam could even list a range for the total game size (e.g. 150 MB-95 GB).
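As a sketch of how that choice could be surfaced, here is a hypothetical per-asset baking manifest (the names and sizes are invented for illustration): each entry either ships the tiny procedure or its large baked result, and the min/max totals give exactly the kind of size range a store page could display.

```python
# Hypothetical baking manifest: bake=True ships the baked result,
# bake=False ships only the procedural recipe and generates later.
BAKE_MANIFEST = {
    "terrain":            {"bake": True,  "baked_mb": 1200.0,  "recipe_kb": 4},
    "building_interiors": {"bake": False, "baked_mb": 45000.0, "recipe_kb": 32},
    "hero_character":     {"bake": True,  "baked_mb": 300.0,   "recipe_kb": 16},
}

def install_size_mb(manifest) -> float:
    """Actual install size for the user's current bake choices."""
    return sum(e["baked_mb"] if e["bake"] else e["recipe_kb"] / 1024.0
               for e in manifest.values())

def store_page_range_mb(manifest) -> tuple[float, float]:
    """Advertised minimum (all procedural) and maximum (all baked)."""
    return (sum(e["recipe_kb"] / 1024.0 for e in manifest.values()),
            sum(e["baked_mb"] for e in manifest.values()))

lo, hi = store_page_range_mb(BAKE_MANIFEST)
print(f"store listing: {lo:.1f} MB - {hi / 1024:.1f} GB")
print(f"this install:  {install_size_mb(BAKE_MANIFEST):.1f} MB")
```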

It may be necessary to do some form of "procedure caching," somewhat like shader caching, to make use of higher detail levels without walking the whole generation sequence linearly each time. And, like shader caches, these could be built at launch or on an "as-needed" basis.
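A minimal sketch of what I mean, assuming deterministic generators (everything here is hypothetical; nothing is an existing Armory or engine API): results are keyed by a hash of the procedure's identity, its parameters, and the detail level, and persisted so a later session can jump straight to a given level.

```python
import hashlib
import pickle
from pathlib import Path

CACHE_DIR = Path("procedure_cache")
CACHE_DIR.mkdir(exist_ok=True)

def cached_evaluate(procedure, params: dict, detail_level: int):
    """Return a cached result if present; otherwise generate and persist it."""
    key = hashlib.sha256(
        repr((procedure.__name__, sorted(params.items()), detail_level)).encode()
    ).hexdigest()
    path = CACHE_DIR / f"{key}.bin"
    if path.exists():  # warm cache: jump straight to this detail level
        return pickle.loads(path.read_bytes())
    result = procedure(detail_level=detail_level, **params)  # cold: generate once
    path.write_bytes(pickle.dumps(result))
    return result
```

Like shader caches, this could be pre-warmed for the expected detail levels at launch, or filled lazily the first time each level is requested.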

This technology could be applied to create vastly more detailed worlds, such as generating semi-convincing interiors for every single building in a city (from houses and shacks to skyscrapers) and massive cave and tunnel networks. (This is the part the video above showed already in use.)

One of the best stress tests would be fully destructible assets/environments. Perhaps destruction could be handled by calculating and displaying the "fundamentals" first, then accurately calculating and displaying the "harmonics," or higher details, later, when resources allow and when closer inspection is taking place.
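To illustrate the fundamentals/harmonics framing, here is a sketch using octave (fBm-style) noise; `value_noise` is a toy stand-in for any band-limited basis such as Perlin or simplex, and the distance-to-octaves mapping is an arbitrary choice of mine, not an established formula.

```python
import math

def value_noise(x: float) -> float:
    """Cheap deterministic stand-in for a real noise function."""
    n = math.sin(x * 12.9898) * 43758.5453
    return 2.0 * (n - math.floor(n)) - 1.0

def displaced_height(x: float, octaves: int) -> float:
    """Sum of octaves: octave 0 is the 'fundamental', the rest are 'harmonics'."""
    total, amplitude, frequency = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amplitude * value_noise(x * frequency)
        amplitude *= 0.5   # each harmonic contributes half as much
        frequency *= 2.0   # ...at twice the spatial frequency
    return total

def octaves_for_distance(distance: float, max_octaves: int = 10) -> int:
    """Closer viewers get more harmonics; distant ones keep the fundamentals."""
    return max(1, min(max_octaves, int(10.0 - math.log2(max(distance, 1.0)))))

for d in (512.0, 32.0, 1.0):
    n = octaves_for_distance(d)
    print(f"distance {d:>6}: {n:>2} octaves, h = {displaced_height(0.37, n):+.4f}")
```

The coarse result is available immediately after one octave, and each added octave only refines it, which is what would let a destruction event show its broad shape first and gain fine fracture detail afterwards.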

As with the above, this would need to overcome Nanite's (current) limitation of supporting only static meshes. That way, it could be used for anything and everything, including characters. This is already doable for standard procedurally generated 3D content, but it would also need to happen at the dynamic sampling end (where Nanite does its magic).

The second aspect, one without a mainstream implementation that I'm aware of, is the same process applied to textures. This would enable "infinite" levels of detail when getting close to or zooming in on a texture, such as a worn-out sign or the threads of clothing.
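A toy sketch of what makes that possible: if the texture is a function of UV coordinates rather than a fixed grid of texels, "zooming in" just means sampling a narrower UV window at the same pixel count, and new detail appears instead of blur. The pattern below is invented purely for illustration.

```python
import math

def worn_sign_texture(u: float, v: float) -> float:
    """A toy grayscale 'wear' pattern defined analytically at every scale."""
    stripes = 0.5 + 0.5 * math.sin(40.0 * u)               # coarse paint stripes
    grain = 0.5 + 0.5 * math.sin(2300.0 * u + 1700.0 * v)  # fine surface grain
    return 0.7 * stripes + 0.3 * grain

def sample_window(center: float, width: float, pixels: int = 8) -> list[float]:
    """Rasterize one row of a UV window; width shrinks as the camera zooms in."""
    return [worn_sign_texture(center + width * (i / pixels - 0.5), 0.5)
            for i in range(pixels)]

print("far away :", [f"{t:.2f}" for t in sample_window(0.5, 1.0)])
print("zoomed in:", [f"{t:.2f}" for t in sample_window(0.5, 0.001)])
```

In a real renderer the octave count per sample would be clamped by the pixel footprint (much like mipmapping) so the fine grain is only evaluated when it is actually visible.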

A concept I'll call Artist-Directed Procedural Texturing (the name is up for refinement) can be applied to improve control and creativity when creating textures. It is more a technique than a technology; a workflow. The idea is to give artists input over both the process and the end result in more artistic and intuitive ways, for example by using low-resolution bitmaps to draw general detail patterns or flows that would be hard to describe with math formulas from scratch, or with what's currently available (e.g. nodes and settings). (This can make it easier to use procedural generation for stylized content and to implement organic designs.)
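A minimal sketch of that workflow, assuming nothing beyond plain Python: a hand-painted low-resolution guide map (the 4x4 grid stands in for a bitmap an artist would paint) steers where a high-frequency procedural pattern appears, so the artistic intent never has to be expressed as a formula.

```python
import math

GUIDE = [  # artist-painted "wear amount": 0 = pristine, 1 = heavily worn
    [0.0, 0.0, 0.2, 0.8],
    [0.0, 0.1, 0.5, 1.0],
    [0.0, 0.0, 0.3, 0.9],
    [0.0, 0.0, 0.1, 0.4],
]

def sample_guide(u: float, v: float) -> float:
    """Bilinear sample of the low-res guide so the direction stays smooth."""
    x, y = u * 3, v * 3
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, 3), min(y0 + 1, 3)
    fx, fy = x - x0, y - y0
    top = GUIDE[y0][x0] * (1 - fx) + GUIDE[y0][x1] * fx
    bot = GUIDE[y1][x0] * (1 - fx) + GUIDE[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def directed_detail(u: float, v: float) -> float:
    """High-frequency procedural scratches, masked by the artist's guide."""
    scratches = 0.5 + 0.5 * math.sin(400.0 * u + 310.0 * v)
    return sample_guide(u, v) * scratches

print(f"pristine corner: {directed_detail(0.05, 0.05):.2f}")
print(f"worn area:       {directed_detail(0.95, 0.30):.2f}")
```

The guide map stays tiny (it only needs enough resolution to express intent), while the procedural layer supplies the unbounded fine detail.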

Combine this with next-gen media codecs (JPEG XL, Opus, etc.) and real-time music playback via SoundFonts or trackers, and you have the potential to create games with more and bigger content than ever that are also much, much smaller in total file size than the current average. The savings also scale: the more content you have, the more space you proportionately save. For the same to hold for the amount of detail, though, the usual optimization efforts will still be required.

I have wanted, for much of my teen years, to be the one to introduce some groundbreaking improvement in video games or movies, to improve realism and leave everyone, including those in the industry, in awe. I was never able to do that, and we've been squeezing against the ceiling for quite a while now, so I had resigned myself to carrying this regret and caring as little as I could… But if I could make my mark as the person who sparked this innovation, I will have realized my dream, albeit in a much more realistic way than I often envisioned.

First, I know the pool of (even potential) contributors to Armory is already very small. And although I don't want to make any assumptions about anyone's level of talent, I understand that "ain't nobody got time for that." So I am making this thread not as a request, but as a place to share this idea, gather feedback, and post anything that might help move it toward feasibility (like whitepapers and anyone else's research into related aspects).

Second, I already think Armory could be a "killer app" kind of engine once Grease Pencil support is added back, since HTML5 export would let it be the resurrection and true successor of Flash (unlike Adobe Animate, which nobody ever mentions or cares about) and could help revive the internet of the 2000s (oh, Newgrounds, how I love(d) thee).
But with games regularly taking many tens of gigabytes, like DOOM Eternal taking up a freaking 80 GB all on its own (round up and that's a tenth (⅒) of a terabyte; ten copies of DOOM and your drive is full… where's your ~~God~~ storage now?!), it's clear something needs to be done. I've been interested in procedurally generated content ever since I saw a behind-the-scenes talk about Spore and discovered the notorious "demoscene" (ah, werkkzeug, I'm so glad to find out, just now while looking up how to spell you, that you were finally released as open source). So these ideas have been forming in my mind for a good many years now, and they seemed a natural solution to the file-size problem.

Also, let it be known that I was writing these ideas down long before these Unreal features were released and, probably (since I haven't followed Epic's demo videos), before they were even announced.

I hope people find this idea inspiring and will help steer it toward being more realistic. If it gets implemented first in or by Armory, that would be fantastic. Otherwise, well, people will benefit regardless.

(I edited this to remove the ranting elements and make it sound more professional, so that any readers can feel free to focus more on the actual meat of the idea and content than my feelings about Epic/Unreal and my personal life)

A great example of the procedural end of this technology is a classic made with the Werkkzeug3 engine called .kkrieger (download link w/ screenshots).
Here is a playthrough video in case you're not on Windows, or WINE doesn't play well with it:

And here is a behind-the-scenes look courtesy of the Nostalgia Nerd:

It's not just impressive that they fit so much content into 96 KB; it also looks very high quality, especially for the time. The sound and gameplay are obviously lacking a fair bit, but it was intended as an interactive tech demo more than anything.

Edit: After watching the Nostalgia Nerd’s video and reading some of the comments, I have a few more points to add.

In the video, he claims the game engine does what I was proposing and generates the textures and their details in real time, which I don't think is true. I have yet to read any official documentation, so I'm just guessing, but it seems that all assets are generated up front during an extended initial "loading" stage and are then stored in RAM for the duration of gameplay.

One of the top comments is a point-by-point mass reply by one of the devs to the YouTube comment section, in which he discusses, among many things, the role of procedural texturing in asset creation for games. He makes the point that, while tools like Substance Designer use this effectively, it's much easier for artists to go outside, take a picture of a rock (his example), and run it through Photoshop than to spend hours tediously tweaking settings and values just to make one texture or material.

It sounds like Armor Lab is intended to handle the material-generation part by using ML algorithms to derive the details from the image/photo itself. Given how "recreational" implementations of ML research, like Stable Diffusion, are developing, it wouldn't be unreasonable to expect a Blender add-on in the not-too-distant future that could "reverse engineer" a node setup from an image. That setup could then be reworked for quality or efficiency, but it might provide a starting point.

I think an independent Blender add-on would make more sense than yet another thing for the contributors to the Armor(y) suite to develop and maintain, partly because it could benefit some Blender users' workflows. Maybe not many, since you'd already have the image file on hand; it would definitely be more helpful for games, where storage matters more than it does for a project being rendered out. But the merits of procedural texturing might make it a bigger part of those workflows anyway.

Also, regarding speed, the dev does mention that the current implementation of procedurally generated texturing is very slow compared to just streaming data from existing assets. I believe the process could be approached differently to make it viable for both textures and models, but it seems likely that some measure of overhead would remain and limit it compared to traditional approaches. That's another reason the proposal includes the option to simply pre-generate assets (it would still save on initial download size, and on subsequent updates).