Optimisation

Optimisation, in the context of real-time 3D experiences, is the process of preparing a 3D model and its associated assets so that they perform efficiently in a real-time environment. It involves removing or reducing unnecessary data that would otherwise slow rendering, increase loading times, or cause the experience to run below the standard required for a smooth, high-quality buyer presentation. Optimisation is the first step in the real-time production process, not a finishing touch applied at the end. A well-optimised experience loads quickly, navigates fluidly, and runs without interruption. An unoptimised one does not, regardless of how good it looks in a still image.

What is optimisation in real-time 3D?

Optimisation is the technical process of reducing the computational complexity of a 3D model and its assets, without degrading the visual quality that is actually visible to the viewer, so that the model performs well in a real-time rendering environment.

A 3D model contains geometry, textures, materials, lighting data, and other assets. All of this must be processed continuously by the rendering engine as the viewer moves through the environment. Unnecessary data, such as polygon detail that is never visible from the viewing distance, or texture files larger than the display can resolve, consumes processing resources without contributing to what the buyer sees.

Optimisation identifies and removes or reduces that unnecessary data, freeing processing capacity for the elements that are visible and impactful. A well-optimised model can look visually identical to an unoptimised one in practice, because the removed data was never contributing to the visible result. The difference shows in performance, not appearance.

Why is optimisation necessary for real-time 3D but not for static renders?

A static render engine has unlimited time to process a scene. A single frame can take minutes or hours to produce. There is no performance constraint, and the model can be as data-heavy as required.

Real-time 3D operates under an entirely different constraint. The rendering engine must produce a new frame many times per second, typically sixty or more, for the experience to feel smooth. It has a fraction of a second to process the entire scene for each frame. Every unnecessary piece of data in the model adds to the processing load for every frame, for every second, for the entire session.
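The per-frame budget described above can be made concrete with simple arithmetic. The sketch below is illustrative only; the 60 fps figure comes from the text, and the function name is hypothetical:

```python
# Per-frame time budget for a real-time renderer.
# At 60 fps the engine has 1000 / 60 ≈ 16.7 ms to process the entire
# scene: geometry, materials, lighting, and post-processing combined.

def frame_budget_ms(target_fps: float) -> float:
    """Milliseconds available to render one frame at the target rate."""
    return 1000.0 / target_fps

print(f"60 fps -> {frame_budget_ms(60):.2f} ms per frame")
print(f"30 fps -> {frame_budget_ms(30):.2f} ms per frame")
```

Every unnecessary polygon or oversized texture spends part of that fixed budget on every single frame, which is why data that costs nothing in an offline render becomes expensive in real time.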

When the scene exceeds the engine's processing budget, performance drops. The experience stutters. Frame rates fall. Navigation loses its fluidity. The sense of moving through a real space, which is the foundation of spatial presence, is interrupted.

Architects and visualisation studios typically deliver models prepared for static rendering. These models contain far more geometric detail and larger texture files than a real-time environment requires. Critically, the textures that come with a static render model are rarely suitable for real-time use at all. In a real-time production workflow, textures are created and applied directly within the game engine, to the correct specification for that environment. The textures from a static pipeline are therefore among the first things to be removed when optimisation begins.

Optimisation is consequently the opening step of the real-time production process: before any game engine work begins, the incoming model is assessed and stripped down to a clean, efficient foundation. Importing an unoptimised model directly into a game engine such as Unreal Engine and building on top of it without this step produces poor performance and unnecessary complexity throughout the rest of production.

What does optimisation involve in practice?

The process begins with a thorough assessment of the incoming model. Textures delivered with the model are removed, since engine-native materials will be created and applied directly in the game engine at the correct resolution and specification for real-time rendering. Starting with the engine's own material system, rather than inheriting assets from a static pipeline, produces cleaner, more consistent, and better-performing results.

Polygon reduction follows. The geometric complexity of objects is reduced, particularly for elements that are small, distant, or rarely viewed at close range. A door handle modelled in high detail for a static close-up render does not require its full geometric complexity in a navigable real-time environment. Removing that invisible detail reduces the processing load without any visible effect on the experience.
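The door-handle example can be quantified. The sketch below is a hypothetical helper, not part of any engine: it estimates how many screen pixels a geometric feature covers at a given viewing distance using a simple pinhole-camera projection. Detail that projects to less than about one pixel cannot contribute to the visible result and is a candidate for removal:

```python
import math

def projected_pixels(feature_size_m: float, distance_m: float,
                     screen_height_px: int = 1080,
                     vertical_fov_deg: float = 60.0) -> float:
    """Approximate on-screen size of a feature, in pixels."""
    angular_size = feature_size_m / distance_m      # small-angle approximation
    fov_rad = math.radians(vertical_fov_deg)
    return angular_size * screen_height_px / (2 * math.tan(fov_rad / 2))

# A 2 mm bevel on a door handle, viewed from 3 m away on a 1080p display:
px = projected_pixels(0.002, 3.0)
print(f"{px:.2f} px")  # well under one pixel: safe to simplify
```

All of the numbers here (display resolution, field of view, distances) are assumptions chosen for illustration; in practice the judgement is made against the real deployment hardware and typical viewing positions.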

Texture optimisation, applied to any textures that are retained or referenced, addresses file size and memory usage. Texture files are sized to the appropriate resolution for the display and viewing distance. Reducing them to the correct specification significantly reduces loading times and memory pressure.
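The memory impact of texture sizing is easy to see with back-of-envelope figures. This sketch assumes uncompressed RGBA textures with no mipmaps, purely for illustration; real engines use compressed formats, but the scaling relationship holds:

```python
# Memory cost of a square RGBA texture (uncompressed, no mipmaps).
# Halving the resolution quarters the memory; dropping from 4K to 1K
# cuts it by a factor of sixteen.

def texture_mb(resolution_px: int, bytes_per_texel: int = 4) -> float:
    """Approximate memory footprint of a square texture, in megabytes."""
    return resolution_px * resolution_px * bytes_per_texel / (1024 ** 2)

print(f"4096 x 4096: {texture_mb(4096):.0f} MB")
print(f"1024 x 1024: {texture_mb(1024):.0f} MB")
```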

Material consolidation simplifies how surfaces are calculated in the engine. Reflections, shadows, and surface properties are set up in a way that is efficient for real-time computation rather than optimised for a single offline render.

Occlusion culling configures the engine to avoid rendering objects not currently visible to the viewer. Asset instancing sets up repeated elements, such as trees, furniture, and fixtures, to render as a single object replicated many times rather than as separate objects each consuming its own allocation.
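The saving from instancing can be sketched numerically. The figures below are hypothetical, chosen only to show the shape of the trade: with instancing, the geometry is stored once and only a small per-instance transform is duplicated:

```python
# Illustrative comparison: 200 identical trees stored as separate
# meshes versus one mesh instanced 200 times.

MESH_MB = 5.0          # hypothetical size of one tree mesh in memory
TRANSFORM_KB = 0.064   # one 4x4 float transform per instance (64 bytes)
COUNT = 200

separate_mb = COUNT * MESH_MB                      # every copy stored in full
instanced_mb = MESH_MB + COUNT * TRANSFORM_KB / 1024  # geometry stored once

print(f"separate:  {separate_mb:.1f} MB")
print(f"instanced: {instanced_mb:.2f} MB")
```

The scene looks identical either way; only the memory footprint and per-frame workload change, which is the defining property of optimisation described later in this article.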

Together, these steps produce an experience that is significantly smaller in file size, faster to load, and more responsive during navigation than its unoptimised source would have allowed.

What are the consequences of poor optimisation in a property sales experience?

Loading time is the first visible consequence. An unoptimised experience may take an uncomfortable amount of time to load before the buyer can begin exploring. In a sales gallery context, this creates dead time at the start of a presentation. In an online deployment context, many buyers will abandon the experience before it is ready.

Stuttering and frame rate drops follow from a scene that exceeds the engine's processing budget. Navigation loses its smoothness. The environment jerks rather than flows. The buyer becomes aware of the technology rather than the space, which is precisely what a well-designed immersive experience should prevent.

The premium perception problem is the most commercially damaging consequence. A stuttering, slow, or unresponsive experience communicates poor quality to the buyer, regardless of the visual ambition of the content. In a premium sales context, this is the opposite of the impression the developer is trying to create. The quality of the experience is a statement about the quality of the development. A clunky experience makes a clunky statement.

For pixel streaming deployments, an unoptimised scene increases the processing load on the server, which affects streaming quality and increases operational costs.

What is the difference between optimisation and visual quality reduction?

This distinction matters for a non-technical audience, and it is often misunderstood.

Visual quality reduction means removing or simplifying data that is visible: lowering the resolution of textures that are clearly seen, removing geometry that affects the appearance of objects, or simplifying materials in ways the viewer notices. This changes how the experience looks.

Optimisation means removing data that does not affect the visual result as experienced by the viewer. The polygons removed are below the threshold of visibility at the relevant viewing distance. The textures from the static pipeline are removed not because they look wrong but because they belong to a different production system entirely, and engine-native materials will produce better results. The assets consolidated are identical in appearance whether instanced or separate.

A well-executed optimisation process is invisible. The experience looks the same but performs significantly better. The skill lies in identifying the boundary between data that contributes to the visible result and data that does not. This requires both technical knowledge and visual judgement.

What should developers consider regarding optimisation when commissioning an immersive experience?

Include optimisation explicitly in the production brief, and confirm that it is treated as the first step in the production process rather than a refinement applied later. A studio that optimises at the end of production is working against the grain. A studio that optimises at the start builds everything that follows on a clean, efficient foundation.

Define performance targets clearly: the required frame rate, acceptable loading time, and the hardware the experience will run on. These targets give the production team a standard to optimise toward and a basis for testing and sign-off.
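A brief might capture those targets in a form the production team can test against directly. The sketch below is a minimal illustration; the specific numbers and field names are placeholders, not recommendations from this article:

```python
# Hypothetical performance targets recorded in a brief, with a simple
# pass/fail check for sign-off testing on the deployment hardware.
targets = {
    "min_fps": 60,           # required frame rate
    "max_load_seconds": 10,  # acceptable loading time
    "hardware": "sales gallery workstation",  # where it must hold
}

def meets_targets(measured_fps: float, load_seconds: float) -> bool:
    """True if a measured run satisfies the agreed targets."""
    return (measured_fps >= targets["min_fps"]
            and load_seconds <= targets["max_load_seconds"])

print(meets_targets(62, 8))   # passes both targets
print(meets_targets(45, 8))   # fails the frame-rate target
```

The value of writing targets down this way is that acceptance stops being a matter of impression and becomes a measurement taken on the actual deployment machine.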

Understand the source model. If architectural drawings or models are being provided by a third-party design team, confirm whether they have been prepared for real-time use or for static rendering. Unoptimised source models require assessment and preparation before any game engine work can begin. That step should be scoped and budgeted from the outset rather than discovered mid-production.

Test on deployment hardware before the sales centre opens or the event begins. Performance that is acceptable on a high-end production workstation may be inadequate on the actual hardware configuration of the sales gallery. Optimisation should be validated in the real deployment context, not only in the production environment.

Optimisation should always be followed by a quality control pass on the deployment hardware. The goal of optimisation is invisible improvement: the scene should look the same as the unoptimised version while running at the required frame rate. If the optimisation process has introduced visible quality degradation, it has not been completed correctly and should be reviewed before the experience reaches buyers.

Optimisation is not a visible feature of a finished experience. Its absence, however, is immediately felt.

Find out how Virtuelle builds and optimises real-time 3D environments to perform at their best, on any hardware and in any deployment context.