Building Realistic Training Environments in a Virtual World
- Christ Mongelli

Architecture is a passion of mine. I've been at MXTReality for 10 years now, and we've created a lot of virtual experiences, yet making the virtual world mirror the real world still fascinates me. When a client says "wow - it's so realistic," it brings a smile to my face.
Realism is more than visual fidelity; it's a critical component of effective training and an immersive experience.
Read on to learn how we create digital realities that are optimized for VR and multi-screen driving simulators.
Recreating Real Locations for Driver Training
To ensure drivers feel immediately familiar with the environment, we begin with real-world reference data.
We use Google Maps as a spatial and layout reference to establish:
Accurate building placement
Correct scale and proportions
Road layouts, lanes, and intersections
Rather than directly projecting street imagery, which often includes trees, vehicles, shadows, and other obstructions, we use this data as a foundation for clean, production-ready geometry.
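To make the idea of turning map reference data into scene layout concrete, here is a minimal sketch that converts latitude/longitude pairs into local X/Y offsets in meters using an equirectangular approximation, which is accurate enough over a site-sized training yard. The coordinates and helper name are hypothetical illustrations, not our production tooling.
```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def latlon_to_local_meters(lat, lon, origin_lat, origin_lon):
    """Convert a lat/lon reference point to flat X/Y offsets (meters)
    from a chosen scene origin, using an equirectangular approximation."""
    lat_rad = math.radians(origin_lat)
    x = math.radians(lon - origin_lon) * EARTH_RADIUS_M * math.cos(lat_rad)
    y = math.radians(lat - origin_lat) * EARTH_RADIUS_M
    return x, y

# Example: place a building corner relative to the yard's origin
origin = (47.6062, -122.3321)   # hypothetical site origin
corner = (47.6071, -122.3308)   # hypothetical building corner
print(latlon_to_local_meters(*corner, *origin))
```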

Whitebox Modeling for Scale and Accuracy
Once reference is established, we move into whitebox modeling.
This early stage allows us to:
Validate real-world dimensions
Confirm turning radii and lane transitions
Test visibility, spacing, and driver sightlines
By locking in scale and flow early, we ensure the simulator behaves like the real training yard before visual detail is added.
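As one example of the kind of check a whitebox pass supports, the sketch below estimates a bus's minimum turning radius from wheelbase and steering angle using a standard bicycle-model approximation, then compares it against a turn radius measured from the whitebox geometry. The vehicle numbers and the modeled radius are illustrative assumptions, not project specifications.
```python
import math

def min_turning_radius(wheelbase_m, max_steer_deg):
    """Bicycle-model estimate of the turning radius at full steering lock."""
    return wheelbase_m / math.tan(math.radians(max_steer_deg))

# Hypothetical 40-foot transit bus: ~7.6 m wheelbase, ~45 degree max steer
required = min_turning_radius(7.6, 45.0)

# Whitebox check: can the bus complete the modeled turn lane?
modeled_turn_radius_m = 12.5  # measured from the whitebox geometry (assumed)
if required <= modeled_turn_radius_m:
    print(f"OK: requires {required:.1f} m, whitebox allows {modeled_turn_radius_m} m")
else:
    print(f"Too tight: requires {required:.1f} m, whitebox allows {modeled_turn_radius_m} m")
```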

Optimized 3D Modeling for VR Performance
Because Drivers of Tomorrow (DOT) runs in virtual reality and multi-display simulator setups, performance is critical.
Our modeling approach focuses on:
Low poly counts
Reusable geometry
Reduced draw calls
Many warehouse buildings are never viewed at close range. Instead of over-modeling them, we rely on efficient geometry and texture detail that reads convincingly at realistic viewing distances.
This ensures smooth performance while maintaining immersion.
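As an example of how such budgets can be enforced in a pipeline, the sketch below audits a list of asset records against per-asset triangle and material-slot limits; fewer material slots on a mesh generally means fewer draw calls. The asset records and thresholds are hypothetical stand-ins for whatever an exporter or DCC script would actually report.
```python
# Hypothetical per-asset budget check; triangle and material counts
# would normally come from an exporter or DCC scripting API.
TRI_BUDGET = 15_000      # assumed triangle cap for background buildings
MATERIAL_BUDGET = 2      # assumed material-slot cap (fewer slots -> fewer draw calls)

assets = [
    {"name": "warehouse_a", "triangles": 9_800,  "materials": 1},
    {"name": "warehouse_b", "triangles": 22_400, "materials": 3},
    {"name": "fuel_island", "triangles": 4_100,  "materials": 2},
]

for asset in assets:
    issues = []
    if asset["triangles"] > TRI_BUDGET:
        issues.append(f"{asset['triangles']} tris > {TRI_BUDGET}")
    if asset["materials"] > MATERIAL_BUDGET:
        issues.append(f"{asset['materials']} material slots > {MATERIAL_BUDGET}")
    status = "OK" if not issues else "OVER BUDGET: " + "; ".join(issues)
    print(f"{asset['name']}: {status}")
```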

Efficient Texturing with Trim Sheets and Texture Atlases
After modeling, assets are unwrapped and textured using trim sheets and shared texture atlases.
We use:
Substance Painter
Substance Sampler
AI-assisted tools for decals and surface cleanup
This approach allows us to avoid messy street photo projections while generating:
Base color (diffuse) maps
Normal maps
Roughness and metallic maps
Ambient occlusion
The result is fewer materials, cleaner assets, and better runtime performance.
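One common way to get fewer materials and better runtime performance from these maps is channel packing: storing ambient occlusion, roughness, and metallic in the R, G, and B channels of a single texture so the shader samples one image instead of three. The sketch below does this with Pillow; the file names are placeholders, and it reflects the general technique rather than our exact export setup.
```python
from PIL import Image  # Pillow

def pack_orm(ao_path, roughness_path, metallic_path, out_path, size=(2048, 2048)):
    """Pack AO, roughness, and metallic grayscale maps into one RGB texture
    (the common 'ORM' layout), so a material samples one map instead of three."""
    ao = Image.open(ao_path).convert("L").resize(size)
    roughness = Image.open(roughness_path).convert("L").resize(size)
    metallic = Image.open(metallic_path).convert("L").resize(size)
    Image.merge("RGB", (ao, roughness, metallic)).save(out_path)

# Placeholder file names for illustration only
pack_orm("warehouse_ao.png", "warehouse_roughness.png",
         "warehouse_metallic.png", "warehouse_orm.png")
```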

Final Assembly in Unreal Engine 5
All assets are brought into Unreal Engine 5, where we:
Fine-tune lighting for outdoor realism
Adjust material response for distance viewing
Test performance in VR and simulator configurations
This stage brings everything together into a cohesive, real-world training experience.
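For the performance-testing step, a useful back-of-the-envelope number is the frame-time budget: a 90 Hz VR headset leaves roughly 11.1 ms per frame. The sketch below checks a list of captured frame times against that budget and reports how often it is exceeded; the frame-time values are made up, and in practice they would come from the engine's profiling tools.
```python
TARGET_HZ = 90.0
FRAME_BUDGET_MS = 1000.0 / TARGET_HZ   # ~11.1 ms per frame at 90 Hz

# Hypothetical frame times (ms) captured during a simulator test drive
frame_times_ms = [9.8, 10.4, 11.9, 10.1, 13.2, 10.7, 10.9, 12.4, 10.2, 10.6]

over_budget = [t for t in frame_times_ms if t > FRAME_BUDGET_MS]
worst = max(frame_times_ms)
print(f"Budget: {FRAME_BUDGET_MS:.1f} ms per frame at {TARGET_HZ:.0f} Hz")
print(f"{len(over_budget)}/{len(frame_times_ms)} frames over budget, worst {worst:.1f} ms")
```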

Why Environmental Accuracy Matters
For bus operators, familiarity builds confidence.
By recreating real training facilities, Drivers of Tomorrow helps drivers focus on:
Safe maneuvering
Proper lane usage
Complex intersections and transitions
This reduces cognitive load and improves real-world transfer of skills.

About MXTReality
MXTReality specializes in high-fidelity simulation environments for training and education. Our Drivers of Tomorrow (DOT) platform helps transit agencies prepare operators through immersive, performance-optimized virtual environments built with Unreal Engine 5.

Authored by the MXTReality 3D Graphics & Simulation Team
Drivers of Tomorrow (DOT) — Virtual Training Systems

