The Virtualized Rendering Revolution (Episode 1)

Why game engines are rewriting the rules of rendering?

 

Preamble:
This blog post was born from one of those geek-fueled conversations that often spark at SKYREAL during lunch breaks.
Benjamin Ray, SKYREAL’s CTO, decided to take things further: he dove into the topic, traced its history, and connected the dots, then brought it all together in a presentation to share and synthesize these reflections with the team.

(And yes: Ray is his real name, which feels particularly appropriate; raylly appropriate, hu-hu-hu.)

In this article, we are sharing some key insights from that internal presentation by Benjamin. It’s an exploration of why the incredible advances in video game technology are essential to driving progress in other industries, whether it’s through digital transformation or the transition to Industry 4.0.

Virtualized rendering, and more specifically the disruptive role of game engines, isn’t just a technical topic for us. It’s deeply rooted in our culture: many of us are gamers (some, proudly hardcore*), and our solutions are built on Unreal Engine, a platform we’ve explored deeply: as passionate gamers who’ve experienced its power, and as developers who’ve stretched its capabilities to meet industrial needs.
It’s a subject that genuinely excites us, fuels our debates, and actively shapes our vision and roadmap; and honestly, we just love that 🙂

This is just the beginning of a series of articles about virtualized rendering and the disruptive impact of game engines.
Our goal is to make things understandable and approachable: to shed light on complex concepts without oversimplifying. We’ll start here with a bit of historical context before diving into more technical aspects.

From Raster to Ray‑Tracing: 30 Years of Rendering Evolution

 

From a few thousand triangles on a PlayStation 1 to trillion‑triangle cityscapes in UE5, what really changed under the hood?

Benjamin Ray

Three decades ago, real‑time 3D graphics felt like magic.
Today, they are.
We’re now seeing ray-traced reflections running smoothly on a €500 GPU, architectural walkthroughs with the visual quality of Hollywood films, and VR environments capable of streaming terabytes of data without choking.
This article unpacks the technical leaps from the first hardware rasterizers in the mid‑1990s all the way to the pixel‑centred pipelines of 2025 and explains why each milestone mattered to gamers, artists, engineers, and many others.
It also shows how SKYREAL builds on that immense computing power to design solutions that truly serve its users.


Glossary (skip if you’re a pixel aficionado)

  • Rasterization: converting 3D triangles into 2D pixels.
  • Shader: a tiny program that runs on the GPU to transform vertices or shade pixels.
  • API (Application Programming Interface): the contract between your engine and the graphics driver (think Direct3D, OpenGL, Vulkan).
  • RT Cores: dedicated hardware blocks that accelerate ray–scene intersection tests.
  • Virtualisation (textures, meshes, shadows): treating large resources as paged datasets so only the visible bits stay in memory.

1990‑1999: The Hardware Raster Boom

If the 1980s gave us vector‑lined flight sims, the 1990s slammed the gas pedal on textured 3D. Two inventions converged:

  1. Fixed‑function graphics cards (3dfx Voodoo, S3 ViRGE, SGI RealityEngine). Graphics hardware companies moved key operations such as texture mapping and Z-buffering off the CPU and into dedicated chips that could process many pixels at the same time.
  2. Early consumer APIs. OpenGL 1.1 and Direct3D 3 (1996) standardized how software talked to graphics chips.
    Developers no longer had to write different code for each hardware brand; they could target one standard interface for all of them.
    OpenGL quickly became the standard for industrial applications, especially CAD software, while DirectX was more commonly used for video games.

The result? Games like Quake could run at 30 FPS on a Pentium II, and Gran Turismo could display around 300,000 textured polygons on the PlayStation’s custom rasterizer. Artists still had to hand-build simpler versions of each 3D model (Levels of Detail, or LoDs), and lighting wasn’t computed in real time: it was painted directly onto the models (vertex colours) or baked into images (lightmaps).
But two key ideas from that era are still used today: checking which surface is closest to the camera (the Z-buffer depth test) and building all 3D graphics out of triangles (the triangle primitive).
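
To make the Z-buffer idea concrete, here is a minimal C++ sketch (our own illustration, not any real driver’s code) of the per-pixel depth test those fixed-function chips performed in hardware:

```cpp
#include <cstdint>
#include <limits>
#include <vector>

// Minimal illustration of the Z-buffered pixel write that 1990s fixed-function
// rasterizers performed in hardware for every covered pixel.
struct Framebuffer {
    int width, height;
    std::vector<uint32_t> color;  // one packed RGBA value per pixel
    std::vector<float>    depth;  // one depth value per pixel: the Z-buffer

    Framebuffer(int w, int h)
        : width(w), height(h),
          color(w * h, 0),
          depth(w * h, std::numeric_limits<float>::infinity()) {}

    // Write the pixel only if the new fragment is closer than what is stored.
    void writePixel(int x, int y, float z, uint32_t rgba) {
        const int idx = y * width + x;
        if (z < depth[idx]) {   // the Z-buffer depth test
            depth[idx] = z;     // remember the new closest surface...
            color[idx] = rgba;  // ...and its colour
        }
    }
};
```

Because the test runs per pixel, triangles can be submitted in any order and visibility still resolves correctly; that simplicity is a big part of why the Z-buffer and the triangle primitive outlived every other idea of the decade.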

2000‑2009: Programmable Shaders & “The Xbox Moment”

In 2001, NVIDIA’s GeForce 3 graphics card introduced “shaders”, small custom programs (vertex and pixel shaders) that let artists control how 3D graphics looked. At first, they were written in low-level code (assembly), and later in special shader languages (HLSL/GLSL). This changed everything.

Thanks to shaders, artists could (see the short sketch after this list):

  • Make lighting much more detailed, pixel by pixel (instead of per vertex), which removed the old “banded” look (Gouraud shading).
  • Combine many types of surface information, like texture, shine (gloss), height, and color (albedo), into one material.
  • Fake complex effects like depth (parallax mapping), mirror-like reflections (environment cubemaps), and glass-like bending of light (real-time refraction).
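
To illustrate the per-pixel lighting point, here is a sketch of what a basic pixel shader computes, transcribed into plain C++ for readability (standard Lambert diffuse math, not any specific engine’s shader):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(const Vec3& v) {
    const float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// What a simple pixel shader computes: Lambert diffuse lighting, evaluated for
// every single pixel using the surface normal interpolated at that pixel.
Vec3 shadePixel(Vec3 normal, Vec3 lightDir, Vec3 albedo, Vec3 lightColor) {
    const float nDotL = std::max(0.0f, dot(normalize(normal), normalize(lightDir)));
    return { albedo.x * lightColor.x * nDotL,
             albedo.y * lightColor.y * nDotL,
             albedo.z * lightColor.z * nDotL };
}
```

Gouraud shading evaluated this kind of math only at the three vertices of a triangle and interpolated the result across it; running it once per covered pixel instead is the whole difference, and programmable hardware simply made that affordable.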

At the same time, the original Xbox console used a derivative of that same chip together with DirectX 8, so console developers had the same tools as PC developers.
This was a big deal.
It pushed shader tooling forward, spawned artist-friendly interfaces (GUIs), and nudged asset creation toward what would become PBR (Physically Based Rendering).

Meanwhile, ATI (now AMD) took another big step. Instead of separate hardware units for vertex and pixel shaders, they built one flexible pool of unified shader units that could handle either job, depending on what the frame needed.

But this progress had a downside. Engines that used lots of materials and objects found the CPU struggling to keep up with the stream of drawing instructions (draw calls): the driver spent too much time validating API state, and everything slowed down. The solution? Give developers more direct control over the hardware.
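
To picture the bottleneck, here is a rough C++ sketch; the Material, Mesh, and gpu:: functions are invented placeholders, not a real API, but the shape of the problem is the same:

```cpp
#include <vector>

// Invented placeholder types standing in for real engine/API objects.
struct Material { int id; };
struct Mesh     { Material material; /* vertex buffers, index buffers, ... */ };

namespace gpu {
    // Each of these crosses into the graphics driver, which validates state.
    // That per-call CPU cost is what piled up into the draw-call bottleneck.
    void bindMaterial(const Material&) {}
    void drawMesh(const Mesh&) {}
}

// Naive submission: one state change and one draw call per object.
// With tens of thousands of objects, the CPU spends most of the frame
// talking to the driver instead of running game logic.
void submitNaive(const std::vector<Mesh>& scene) {
    for (const Mesh& m : scene) {
        gpu::bindMaterial(m.material);  // often redundant with the previous object
        gpu::drawMesh(m);
    }
}
```

Sorting by material, instancing, and ultimately the explicit APIs of the next decade all attack this same per-call CPU overhead.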

2010‑2015: Parallelism & Compute

1. Multi‑core Everywhere

Consoles moved to 8-core processors, and PCs gained hyper-threading. With more CPU threads available, game engines like Frostbite, id Tech 6, and Unreal Engine 4 began preparing GPU work (command buffers) on several cores in parallel, cutting the CPU time spent on each frame.
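
A hedged sketch of that idea in C++ (CommandBuffer and SceneChunk are invented stand-ins; real engines record into D3D12 command lists or Vulkan command buffers):

```cpp
#include <functional>
#include <thread>
#include <vector>

// Invented placeholder: stands in for a D3D12 command list / Vulkan command buffer.
struct CommandBuffer { std::vector<int> encodedDraws; };

struct SceneChunk { std::vector<int> objectIds; };

// Record GPU commands for one chunk of the scene. Recording only touches
// CPU-side memory, so different chunks can be recorded on different cores.
static void recordChunk(const SceneChunk& chunk, CommandBuffer& out) {
    for (int id : chunk.objectIds)
        out.encodedDraws.push_back(id);  // stand-in for encoding one draw command
}

// One worker thread per scene chunk; the main thread later submits the buffers
// to the GPU in a fixed order, so the final frame stays deterministic.
std::vector<CommandBuffer> recordFrame(const std::vector<SceneChunk>& chunks) {
    std::vector<CommandBuffer> buffers(chunks.size());
    std::vector<std::thread> workers;
    for (size_t i = 0; i < chunks.size(); ++i)
        workers.emplace_back(recordChunk, std::cref(chunks[i]), std::ref(buffers[i]));
    for (auto& t : workers) t.join();
    return buffers;
}
```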

2. Compute Shaders and General-Purpose GPU (GPGPU)

Direct3D 11 (2009) and OpenCL let developers run “general” code on GPUs. This allowed effects like realistic particles, better lighting, and ambient occlusion to be handled more efficiently. It also enabled tasks like blur passes, image analysis, and AI to run alongside regular rendering.
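
To give a feel for the programming model, here is that kind of work expressed in plain C++ (a naive 3×3 box blur; on the GPU it would be written in HLSL/GLSL, and each iteration of the outer loops would become one of thousands of parallel threads):

```cpp
#include <algorithm>
#include <vector>

// Compute-shader mental model: every output pixel is an independent work item.
// On the GPU, each (x, y) iteration would be one thread, dispatched in groups
// across the whole image in parallel.
std::vector<float> boxBlur3x3(const std::vector<float>& img, int width, int height) {
    std::vector<float> out(img.size());
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {           // "one GPU thread" starts here
            float sum = 0.0f;
            int count = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    const int nx = std::clamp(x + dx, 0, width - 1);
                    const int ny = std::clamp(y + dy, 0, height - 1);
                    sum += img[ny * width + nx];
                    ++count;
                }
            out[y * width + x] = sum / count;       // no dependency on other pixels
        }
    }
    return out;
}
```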

3. Upscaling and Tile-Based Rendering (Driven by Mobile Growth)

To improve visual quality without overloading the hardware, consoles like the PS4 Pro used techniques like checkerboard rendering and early Temporal Anti-Aliasing (TAA) to fill in missing pixels and simulate higher resolution at lower cost. At the same time, smartphones (where power and memory are limited) popularized tile-based deferred rendering, a method that processes small sections (“tiles”) of the screen to reduce bandwidth usage. The rise of mobile gaming and 3D apps pushed GPU makers to focus on doing more work in parallel and minimizing data movement, a trend that influenced the whole industry throughout the decade.
This push was also strongly driven by the TV industry, especially as 4K displays were emerging.
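
A rough sketch of the tile-based idea in C++ (the tile size and the omitted shading step are placeholders for illustration): the screen is processed in small squares, each one finished entirely in fast on-chip memory and written back to main memory only once.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

constexpr int kTileSize = 32;  // mobile GPUs typically use small tiles (16-32 pixels)

// Process the screen tile by tile: each tile is shaded entirely in a small
// local buffer (standing in for on-chip tile memory) and copied to the
// framebuffer in external memory exactly once. Never re-reading or re-writing
// partial results in external memory is where the bandwidth saving comes from.
void renderTiled(std::vector<uint32_t>& framebuffer, int width, int height) {
    for (int ty = 0; ty < height; ty += kTileSize) {
        for (int tx = 0; tx < width; tx += kTileSize) {
            uint32_t tile[kTileSize * kTileSize] = {};  // stand-in for on-chip memory

            const int tw = std::min(kTileSize, width - tx);
            const int th = std::min(kTileSize, height - ty);

            // ...rasterize and shade every triangle overlapping this tile here,
            //    touching only `tile`...

            for (int y = 0; y < th; ++y)  // single write-back pass per tile
                std::copy_n(&tile[y * kTileSize], tw,
                            &framebuffer[(ty + y) * width + tx]);
        }
    }
}
```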

2015‑2020: Low-Level APIs, Hardware Ray Tracing & Asset Virtualisation

Low-Level APIs: DX12, Vulkan, & Metal

AMD created a leaner API (Mantle) to remove performance bottlenecks. Microsoft built on those ideas for DirectX 12, and Vulkan and Apple’s Metal followed. These APIs gave developers more control over the GPU, reduced wasted processing time, and allowed games to draw many more objects per frame without slowing down.
This also enabled a deep redesign of the graphics pipeline, leading to significant optimization opportunities.

The birth of RTX: NVIDIA and AMD pioneered hardware Ray Tracing

Starting with NVIDIA’s RTX 20 series (2018) and AMD’s RDNA2 (2020), graphics cards got special hardware called RT cores. These help the GPU handle ray tracing (like reflections and shadows) much faster by speeding up how it checks where rays hit objects. This made it possible to have more realistic lighting effects in real time, not just fake ones. Ray tracing is still demanding, but now it’s easier to control how much it costs in performance.
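
The “checks where rays hit objects” part boils down to math like the classic Möller–Trumbore ray/triangle test below (a plain C++ sketch of the textbook algorithm; the RT hardware also walks a bounding-volume hierarchy so that only a handful of triangles per ray ever reach this test):

```cpp
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Möller–Trumbore ray/triangle intersection: returns the hit distance t, if any.
// RT cores exist to run tests like this (plus the BVH traversal that feeds them)
// billions of times per second in fixed-function hardware.
std::optional<float> intersect(Vec3 origin, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2) {
    const Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    const Vec3 p  = cross(dir, e2);
    const float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return std::nullopt;      // ray parallel to triangle
    const float invDet = 1.0f / det;
    const Vec3 tvec = sub(origin, v0);
    const float u = dot(tvec, p) * invDet;
    if (u < 0.0f || u > 1.0f) return std::nullopt;        // misses the triangle
    const Vec3 q = cross(tvec, e1);
    const float v = dot(dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return std::nullopt;
    const float t = dot(e2, q) * invDet;
    return (t > 0.0f) ? std::optional<float>(t) : std::nullopt;
}
```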

At the same time, new upscaling technologies such as DLSS (NVIDIA), FSR (AMD), and XeSS (Intel), several of them AI-driven, allowed games to render images at a lower resolution (say 1440p) and then upscale them to look like 4K, cutting the pixel-shading work by more than half while keeping image quality high.
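
The savings are simple pixel arithmetic: 1440p is 2560 × 1440 ≈ 3.7 million pixels, while 4K is 3840 × 2160 ≈ 8.3 million, so rendering internally at 1440p shades only about 44% of the pixels, a saving of roughly 56%, before the upscaler reconstructs the missing detail.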

2020‑Today: The UE5 Paradigm: Virtualising Everything

Epic didn’t just sprinkle RT on top; they virtualised the content pipeline and completely changed how content is handled behind the scenes:

  • Virtual Texture Streaming only streams the texture tiles actually sampled this frame.
  • Nanite Mesh Virtualization lets the engine load small chunks of detailed 3D models when needed, automatically hiding level-of-detail (LoD) transitions.
  • Virtual Shadow Maps work like textures, loading only the parts of shadows that are currently in view.

All of this works with Lumen, a lighting system that mixes screen-based tricks with real ray tracing for more realistic results. Thanks to these tools, even super-detailed scenes with billions of polygons (like those made with Quixel Megascans) can run at 60 FPS on a mid-range graphics card. Instead of worrying about the number of triangles, performance now depends more on how many pixels are on screen.
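
The pattern shared by all three “virtual” systems can be sketched like this (plain C++ with invented PageId and loadPageFromDisk names; UE5’s real implementation is vastly more sophisticated): each frame reports which pages, whether texture tiles, mesh clusters, or shadow-map tiles, were actually touched, and only those stay resident in memory.

```cpp
#include <cstdint>
#include <iterator>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Invented identifiers for illustration: a "page" could be a texture tile,
// a Nanite cluster, or a shadow-map tile; the virtualization pattern is the same.
using PageId = uint64_t;
struct PageData { std::vector<uint8_t> bytes; };

// Placeholder for the real I/O path (disk, NVMe, network stream...).
PageData loadPageFromDisk(PageId) { return {}; }

class PageStreamer {
public:
    // Called once per frame with the pages the renderer actually sampled
    // (a real engine gathers this from a GPU feedback buffer).
    void update(const std::unordered_set<PageId>& neededThisFrame) {
        // Load pages that just became visible.
        for (PageId id : neededThisFrame)
            if (resident_.find(id) == resident_.end())
                resident_[id] = loadPageFromDisk(id);

        // Evict pages no longer referenced, so memory tracks what is on screen,
        // not the total size of the asset library.
        for (auto it = resident_.begin(); it != resident_.end(); )
            it = neededThisFrame.count(it->first) ? std::next(it) : resident_.erase(it);
    }

private:
    std::unordered_map<PageId, PageData> resident_;
};
```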

This is a fundamental shift:
triangle limits are no longer the concern; performance now scales with the pixels on screen, not with the number of assets in the scene.

Why Each Era Mattered (And Still Does)

Era | Killer feature | Artist impact | Engineering challenge
90s | Fixed‑function raster | Could finally texture everything | Micro‑optimise triangle order to beat tiny caches
00s | Programmable shaders | Custom materials & per‑pixel lighting | Tooling for non‑coder artists; shader permutations explode
10s | Low‑level APIs + compute | Massive material variety, particle sims | Multithreaded render graph orchestration
20s | RT cores + virtualisation | Film‑quality assets & lighting in editor | I/O bandwidth, asynchronous streaming, denoising

 

The Road Ahead: Pixel‑Centric Everything

When triangles, textures, and shadow-map resolution are all virtualized, the main factor driving performance becomes screen resolution. This means:

  • Performance scales roughly with the number of pixels shaded: 1080p has about a quarter of the pixels of 4K, so its cost drops in proportion, with no sudden cliffs (a quick sanity check follows this list).
  • Art teams can author each asset once: no more hand-made levels of detail (LoDs), baked lightmaps, or proxy stand-in objects.
  • Storage and network speeds (like NVMe and 5G-edge streaming) become the main limiting factors.
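
Under that pixel-proportional model, the relative cost of a resolution is just its pixel count, which is easy to sanity-check with a few lines of C++:

```cpp
#include <cstdio>

// Relative shading cost under a purely pixel-proportional model,
// expressed as a fraction of native 4K (3840 x 2160).
int main() {
    const double pixels4k = 3840.0 * 2160.0;
    const struct { const char* name; double width, height; } modes[] = {
        { "1080p", 1920, 1080 },
        { "1440p", 2560, 1440 },
        { "4K",    3840, 2160 },
    };
    for (const auto& m : modes)
        std::printf("%-6s -> %4.0f%% of the 4K shading work\n",
                    m.name, 100.0 * (m.width * m.height) / pixels4k);
    return 0;  // prints roughly 25%, 44%, and 100%
}
```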

With the introduction of DLSS 4, AI-enhanced upscaling can help mitigate performance hits at higher resolutions, enabling even more detailed, pixel-perfect experiences without the typical performance trade-off.
In the future, expect more advanced techniques like mesh-shader pipelines, better tools for managing detail levels, and AI-driven ways to create materials without needing to manually make textures.

TL;DR (quick overview of the last 30 years)

Rendering didn’t improve in a straight line; it swung back and forth between more flexibility (programmable shaders) and more efficiency (dedicated ray-tracing hardware, low-level APIs). In the 2020s the two are converging: flexible tools backed by hardware acceleration for ray tracing and material creation, plus virtualized assets. This gives creators a nearly perfect “what you see is what you get” workflow, and SKYREAL users a “what you import is what you see” experience.
Welcome to the pixel‑centric era, where triangle count disappears from user experience and imagination becomes the new bottleneck.

 

* Coucou pixel warriors @Tyriaax @Alejandro @Elouan @Glendelay @m003t @JVax