Speakers
- Manuel Rodrigo Cabello Malagon (Principal AI Research Engineer at Plain Concepts)
- Javier Carnero Iglesias (Senior Research Manager at Plain Concepts)
Abstract
This tutorial demonstrates how to create, train, and analyze photorealistic 3D environments
using Radiance Field techniques, with a particular focus on Gaussian Splatting. Attendees will be
guided through the full workflow, from data acquisition and preprocessing, through model
training and optimization, to semantic analysis and AI-driven scene understanding.
A key part of the tutorial will focus on how AI models can extract insights from trained radiance
fields, enabling the detection, localization, and interpretation of objects within reconstructed
environments. Participants will learn how to combine 3D scene representation with computer
vision and deep learning techniques to build intelligent spatial systems capable of understanding
the content and structure of a scene.
Using Evergine, a high-performance 3D engine developed by Plain Concepts, we will showcase
a live interactive demo where a reconstructed environment is rendered and semantically
interpreted in real time, allowing AI models to “see” and reason about the 3D world.
The session blends theory, practical guidance, and live demonstrations, bridging the gap
between AI-based 3D reconstruction and intelligent scene analysis. It is ideal for researchers,
engineers, and practitioners working in computer vision, graphics, and AI for spatial computing.
Target Audience
The tutorial is designed for:
- Researchers and professionals in AI, computer vision, 3D graphics, and mixed reality.
- Graduate students and developers exploring neural rendering, scene reconstruction, or AI-based visualization.
Expected prior knowledge:
- Basic understanding of computer vision and deep learning concepts.
- Familiarity with Python and deep learning frameworks (e.g., PyTorch).
- No prior experience with Gaussian Splatting or Radiance Fields required.
Outline and Description of the Tutorial
1. Introduction to Radiance Fields
- Overview of NeRFs and Gaussian Splatting techniques.
- Comparison of approaches: accuracy, efficiency, and scalability.
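To make the comparison concrete: both NeRFs and Gaussian Splatting form a pixel by alpha-compositing contributions along a ray, but Gaussian Splatting evaluates explicit depth-sorted splats instead of querying a neural network. The 1D toy below sketches only that compositing rule; the function names and the `(alpha, color)` tuple format are ours for illustration, not from any library.

```python
import math

def gaussian_alpha(opacity, dist, sigma):
    """Opacity contribution of one splat: its base opacity scaled by the
    Gaussian falloff of the pixel's distance from the splat centre."""
    return opacity * math.exp(-0.5 * (dist / sigma) ** 2)

def composite(splats):
    """Front-to-back alpha compositing over depth-sorted splats.

    splats: list of (alpha, color) tuples ordered nearest-first.
    Implements C = sum_i c_i * a_i * prod_{j<i} (1 - a_j).
    """
    color, transmittance = 0.0, 1.0
    for alpha, c in splats:
        color += transmittance * alpha * c   # weight by remaining light
        transmittance *= 1.0 - alpha         # nearer splats occlude later ones
    return color
```

Because this sum is cheap per splat and trivially parallel, sorted splats rasterize in real time, whereas a NeRF must run a network per ray sample; this is the efficiency gap the comparison refers to.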
2. Data Acquisition and Preprocessing
- Capturing data: best practices.
- Data cleaning, calibration, and normalization.
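One common cleaning step, sketched below, is discarding blurry frames before camera calibration, since motion blur degrades feature matching and pose estimation. The variance-of-Laplacian blur metric is a standard heuristic; the function name and the nested-list image format are illustrative choices, not a fixed API.

```python
def laplacian_variance(img):
    """Variance of the 4-neighbour discrete Laplacian over interior pixels.

    img: greyscale image as a list of rows of floats.
    Low variance means few sharp edges, i.e. a likely blurry frame.
    """
    h, w = len(img), len(img[0])
    lap = [4.0 * img[y][x]
           - img[y - 1][x] - img[y + 1][x]
           - img[y][x - 1] - img[y][x + 1]
           for y in range(1, h - 1) for x in range(1, w - 1)]
    mean = sum(lap) / len(lap)
    return sum((v - mean) ** 2 for v in lap) / len(lap)

def select_sharp(frames, min_var):
    """Keep only frames whose sharpness score passes a tuned threshold."""
    return [f for f in frames if laplacian_variance(f) >= min_var]
```

In practice the threshold is tuned per capture device; the same idea is implemented with convolution in OpenCV/NumPy for real image sizes.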
3. Model Training and Optimization
- Hyperparameter tuning.
- Optimization techniques for speed and visual quality.
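The training loop can be illustrated in miniature: Gaussian Splatting optimizes splat parameters (positions, scales, opacities, colors) by gradient descent on an image-reconstruction loss, and the learning rates are among the hyperparameters being tuned. The toy below fits just the mean of a single 1D Gaussian to a target profile; it is a didactic sketch under that simplification, not the actual training code.

```python
import math

def fit_gaussian_mean(target_mu=2.0, lr=0.5, steps=500):
    """Gradient descent on mean squared error between a 1D Gaussian
    f(x; mu) = exp(-(x - mu)^2 / 2) and a shifted target Gaussian.

    lr plays the same role as the per-parameter learning rates tuned
    when training a full Gaussian Splatting model.
    """
    xs = [-2.0 + 0.1 * i for i in range(81)]                  # sample grid
    target = [math.exp(-0.5 * (x - target_mu) ** 2) for x in xs]
    mu = 0.0                                                  # poor initial guess
    for _ in range(steps):
        grad = 0.0
        for x, t in zip(xs, target):
            f = math.exp(-0.5 * (x - mu) ** 2)
            # d/dmu of (f - t)^2, using df/dmu = f * (x - mu)
            grad += 2.0 * (f - t) * f * (x - mu)
        mu -= lr * grad / len(xs)
    return mu
```

Too small an `lr` and the fit stalls within the step budget; too large and it oscillates — the same trade-off, multiplied across millions of splats, that hyperparameter tuning manages in practice.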
4. Extracting Semantic and Spatial Information
- Integrating object detection and segmentation models.
- Building semantic layers for AI-driven spatial understanding.
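A simple baseline for such a semantic layer is to run 2D segmentation on the training views and lift the labels into 3D: each reconstructed point or splat takes the majority label among the pixels it projects to across views. The sketch below assumes the projection step has already produced that point-to-labels mapping; `lift_labels` and its input format are hypothetical names for illustration.

```python
from collections import Counter

def lift_labels(point_to_pixel_labels):
    """Majority-vote label lifting from 2D segmentations to 3D points.

    point_to_pixel_labels: dict mapping a point/splat id to the list of
    2D class labels observed at its projections across training views.
    Returns a dict mapping each id to its most frequent class.
    """
    return {pid: Counter(labels).most_common(1)[0][0]
            for pid, labels in point_to_pixel_labels.items()}
```

The resulting per-splat labels are what lets a renderer highlight, query, or count objects directly in the reconstructed scene; more sophisticated approaches embed per-splat feature vectors instead of hard labels.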
5. Real-Time Visualization with Evergine
- Rendering Gaussian Splatting scenes in Evergine.
- Live interaction with semantically interpreted environments.
6. Wrap-up and Discussion
- Future research directions.
- Resources and community tools.
Reading List
Introductory Material:
- Mildenhall et al., NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis, ECCV 2020.
- Kerbl et al., 3D Gaussian Splatting for Real-Time Radiance Field Rendering, SIGGRAPH 2023.
Recommended Reading before the tutorial:
- Introduction to Gaussian Splatting: "3D Gaussian Splatting" (Plain Concepts blog).
- Object detection: DINOv3
- 3D Gaussian Splatting as Markov Chain Monte Carlo, arXiv:2404.09591.
- Introduction to nerfstudio: https://docs.nerf.studio/
Further reading:
- Official Evergine documentation.
- Introduction to 3D rendering methods: "Digital Twins: Precision of 3D Gaussian Splatting" (Plain Concepts blog).
Vertical
Cutting-edge AI Research
Timeline
2 hours


