In response to this video, I am posting my work-in-progress proposal for a game-changing feature set that Armory3D could be a good candidate for utilizing.
With the introduction of “Nanite” in Epic’s Unreal Engine 5, the previous limits placed on maximum model geometry were largely removed, or at least made less meaningful. And while this means that less powerful hardware can access much higher levels of geometric detail, it also opens the gates to flooding storage devices with massive volumes of data dedicated to supplying those upper levels of detail.
Procedural methods have a low initial storage footprint and an arbitrarily high maximum detail range. However, they can also bloat both memory and storage, since the generated content has to be stored and managed somewhere (not as a rule, but as a potential).
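As a toy illustration of that tradeoff (not Armory or Nanite code; all names here are hypothetical), the “asset” on disk can be nothing more than a seed and a size, while the expanded content occupies memory until it is discarded:

```python
import random

def generate_terrain_chunk(seed: int, size: int) -> list:
    """Deterministically expand a few bytes (seed + size) into a
    size x size heightmap. The stored 'asset' is just the inputs."""
    rng = random.Random(seed)
    return [[rng.random() for _ in range(size)] for _ in range(size)]

# Storage footprint: the inputs are a handful of bytes...
chunk = generate_terrain_chunk(seed=42, size=256)
# ...but the generated content sits in RAM (or a cache on disk)
# until released, which is the potential "bloat" described above.
print(len(chunk) * len(chunk[0]))  # 65536 samples held in memory
```

Because the expansion is deterministic, the same seed always reproduces the same chunk, which is what lets the stored footprint stay tiny.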
Combining the two would allow storage size to be far smaller (massive understatement) than with traditional (pre-Nanite) methods, while still providing a flexible upper limit of detail; a true best-of-both-worlds scenario.
This idea also makes a unique kind of sense for Armory because of its use of Blender: Blender has growing development effort being put into geometry nodes, and it already has an existing texture node set alongside the material nodes.
Instead of the usual hard constraints, the concern will likely focus on something resembling “bandwidth” as the bottleneck. As such, it may pay to look into other areas of software development for inspiration, such as audio signal processing and real-time digital synthesizers, or real-time node-based compositing (as was proposed for Blender years ago, and just recently added!).
Since some situations (combinations of hardware, game, and specific in-game context) may benefit much more from traditional techniques, there should be built-in options to pre-generate and store the results of some or all generated assets. The “some” gives users control over how much storage these resources occupy: they can pick the assets that would benefit most and leave the rest. This scenario should be made clear to players before installing; for example, Steam, or another store, could provide a range for the total game size (e.g. 150 MB to 95 GB).
It may be necessary to do some form of “Procedure Caching,” in a way somewhat similar to shader caching, to be able to use details without walking down the generation sequence linearly. And, like shader caches, these could be created when the game first launches or on an “as needed” basis.
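A minimal sketch of what such a procedure cache might look like, by analogy with how shader caches key compiled output on source and compile flags (the cache location and function names here are invented for illustration):

```python
import hashlib
import pickle
from pathlib import Path

CACHE_DIR = Path("proc_cache")  # hypothetical on-disk cache location
CACHE_DIR.mkdir(exist_ok=True)

def cache_key(proc_name: str, params: dict) -> str:
    # Stable key derived from the procedure's name and parameters,
    # analogous to hashing shader source + compile options.
    blob = repr((proc_name, sorted(params.items()))).encode()
    return hashlib.sha256(blob).hexdigest()

def cached_generate(proc_name, params, generator):
    """Return a cached result if present; otherwise run the
    procedure and store its output ('as needed' caching)."""
    path = CACHE_DIR / cache_key(proc_name, params)
    if path.exists():
        return pickle.loads(path.read_bytes())
    result = generator(**params)
    path.write_bytes(pickle.dumps(result))
    return result

# Lazy use: the first call generates, later calls with the same
# procedure + parameters skip straight to the stored result.
rock = cached_generate("rock_mesh", {"seed": 7, "detail": 3},
                       lambda seed, detail: [seed] * detail)
```

Pre-populating the same cache at launch time would correspond to the eager variant, while the lookup-or-generate path above covers the “as needed” variant.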
This technology can be applied to create vastly more detailed worlds, such as generating semi-convincing interiors for every single building in a city (from houses and shacks to skyscrapers) and massive cave and tunnel networks. (This part is what the video above already showed in use.)
One of the best stress tests would be fully destructible assets/environments. Perhaps destruction can be handled by calculating and displaying the “fundamentals” first, then accurately calculating and displaying the “harmonics,” or higher details, later, when resources allow and closer inspection takes place.
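One way to read the fundamentals/harmonics metaphor literally is fractal (octave) summation, where octave 1 is the coarse shape and higher octaves add fine fracture detail. A sketch under that assumption (the distance policy is invented):

```python
import math

def surface_height(x: float, octaves: int) -> float:
    """Fractal sum: octave 1 is the 'fundamental' (coarse debris
    shape); higher octaves are the 'harmonics' (fine detail)."""
    total, amplitude, frequency = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amplitude * math.sin(frequency * x)
        amplitude *= 0.5   # each harmonic contributes less energy
        frequency *= 2.0   # ...at double the spatial frequency
    return total

def octaves_for_distance(distance: float) -> int:
    # Hypothetical policy: show only the fundamental far away,
    # add harmonics as the camera closes in on the debris.
    return max(1, 8 - int(distance // 10))

x = 1.3
coarse = surface_height(x, octaves_for_distance(100.0))  # far away
fine = surface_height(x, octaves_for_distance(0.0))      # up close
```

The appeal of this structure for destruction is that the coarse result never has to be thrown away: the harmonics are additive refinements layered on top of what was already computed.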
Like the above, this would need to overcome the (current) limitation of Nanite to static meshes, so that it can be used for anything and everything, including characters. This is already doable for standard procedurally generated 3D content, but it would also need to happen at the dynamic sampling end (where Nanite does its magic).
The second aspect, one that doesn’t have a mainstream implementation (that I’m aware of), is the same process, but with textures. This will enable “infinite” levels of detail when getting close to or zooming in on a texture, such as a worn-out sign or the threads of clothing.
A concept, Artist-Directed Procedurally Generated Texturing (name is up for refinement), can be applied to improve control and creativity when creating textures. It refers more to a technique than a technology; a workflow. The idea is to give artists input over both the process and the end result in more artistic and intuitive ways, such as using low-resolution bitmaps to draw general detail patterns or flows that would be hard to describe with math formulas from scratch, or with what’s currently available (e.g. nodes and settings), among other methods. (This can make it easier to use procedural generation for stylized content and organic designs.)
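A minimal sketch of the guide-map idea, assuming a hand-painted low-resolution bitmap that steers where procedural detail appears (the guide values and detail function are made up for illustration):

```python
import math

# Hypothetical low-res "guide" the artist painted: 4x4 values
# steering where detail (wear, thread density, etc.) should appear.
guide = [
    [0.0, 0.1, 0.2, 0.1],
    [0.1, 0.5, 0.8, 0.3],
    [0.2, 0.8, 1.0, 0.4],
    [0.1, 0.3, 0.4, 0.2],
]

def sample_guide(u: float, v: float) -> float:
    """Nearest-neighbor sample of the artist's low-res map in UV space."""
    n = len(guide)
    i = min(int(v * n), n - 1)
    j = min(int(u * n), n - 1)
    return guide[i][j]

def texel(u: float, v: float) -> float:
    # Procedural high-frequency detail, modulated by the painted
    # guide: the artist directs *where*, the math supplies *how fine*.
    detail = 0.5 + 0.5 * math.sin(200 * u) * math.sin(200 * v)
    return sample_guide(u, v) * detail

# Evaluated at any UV coordinate, at any zoom level --
# no stored high-res bitmap required.
value = texel(0.62, 0.55)
```

Because `texel` is a function of continuous UV coordinates rather than a fixed pixel grid, zooming in just means sampling it more densely, which is where the “infinite” detail of the previous paragraph comes from.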
Combine this with next-gen media codecs (JPEG XL, Opus, etc.) and real-time music playback via soundfonts or trackers, and you have the potential to create games with more and bigger content than ever that are also much, much smaller in total file size than the current average. Storage costs will also scale with greatly diminishing growth: the more content you have, the more space you proportionately save. For this to hold for the amount of detail as well, the usual optimization efforts will still be required.
I have wanted, for much of my teen years, to be the one to introduce some groundbreaking improvement in video games, or movies, to improve realism and leave everyone, including those in the industry, in awe. I was never able to do that, and we’ve been squeezing against the ceiling for quite a while now, so I had resigned myself to carrying this regret and to caring as little as I could… But if I could make my mark as the guy who sparked the inception of this new innovation, I will have realized my dream, albeit in a much more realistic way than I often envisioned.
First, I know the pool of (even potential) contributors is already very small for Armory. And, although I don’t want to make any assumptions about the level of talent, I understand that “ain’t nobody got time for that.” So, I am making this thread not as a request, but as a place to share this idea, gather feedback, and post anything that might help move it toward feasibility (like whitepapers and anyone else’s research into related aspects).
Second, I already think Armory can be a “killer app” kind of engine when Grease Pencil support is added back, since HTML5 exporting will allow it to be the resurrection and true successor to Flash (unlike “Adobe Animate,” which nobody ever mentions or cares about) and could help revive the internet of the 2000s (oh, Newgrounds, how I love(d) thee).
But with games regularly taking many tens of gigabytes, like DOOM Eternal taking up a freaking 80 all on its own (round up and you have a tenth (⅒) of a terabyte; ten copies of DOOM and your drive is full… where’s your ~~God~~ storage now?!), it’s clear something needs to be done. I’ve been interested in procedurally generated content since I saw a behind-the-scenes talk about Spore and discovered the notorious “demoscene” (ah, werkkzeug, I’m so glad to find out, just now while looking up how to spell you, that you were finally released as open source). So these ideas have been forming in my mind for a good many years now, and they seemed a natural solution to the file-size problem.
Also, let it be known that I was writing these down long before these Unreal features were released and, probably (since I haven’t followed Epic’s demo videos), before they were even announced.
I hope people find this idea inspiring and will help steer it toward being more realistic. If it gets implemented first in or by Armory, that would be fantastic. Otherwise, well, people will benefit regardless.
(I edited this to remove the ranting elements and make it sound more professional, so that any readers can feel free to focus more on the actual meat of the idea and content than my feelings about Epic/Unreal and my personal life)