Los Angeles, California, United States

Categories
- Market research
- Sales strategy
- Mobile app development
- Software development
- Artificial intelligence
Recent projects

Dynamic Scene Capture and Rendering for AR/VR
Project Overview: This research and development internship aims to explore and prototype real-time 3D scene reconstruction and rendering pipelines based on the latest advancements in Gaussian Splatting (GS) techniques. Interns will investigate and integrate modern methods such as MOsTR3R (for dynamic scene modeling), 4D-TAM (for dynamic temporal fusion), and others (e.g., Neuralangelo, 3DGS, Gaudi, Splatting Transformers). They will capture both static and dynamic world geometry (using RGB-D or video input) and render it using multi-view, photo-realistic, and time-coherent Gaussian Splatting representations, ultimately generating interactive 3D experiences.
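As a conceptual illustration of what a splatting renderer does, the sketch below (not part of the project materials) projects depth-sorted isotropic 3D Gaussians through a pinhole camera and alpha-composites them front to back. Real pipelines such as 3DGS add anisotropic covariance projection, spherical-harmonic color, and tile-based GPU rasterization; everything here is a simplified assumption for demonstration.

```python
# Minimal sketch of the core idea behind Gaussian Splatting rendering:
# project depth-sorted 3D Gaussians into the image plane and alpha-composite
# them front to back. Isotropic splats and plain NumPy only; illustrative, not
# a faithful 3DGS implementation.
import numpy as np

def render_gaussians(means, colors, opacities, radii, K, img_hw=(120, 160)):
    """Render isotropic 3D Gaussians seen by a pinhole camera at the origin."""
    h, w = img_hw
    image = np.zeros((h, w, 3))
    transmittance = np.ones((h, w))          # remaining transparency per pixel

    order = np.argsort(means[:, 2])          # front-to-back by depth
    ys, xs = np.mgrid[0:h, 0:w]

    for i in order:
        x, y, z = means[i]
        if z <= 0:
            continue                          # behind the camera
        # Pinhole projection of the Gaussian center.
        u = K[0, 0] * x / z + K[0, 2]
        v = K[1, 1] * y / z + K[1, 2]
        sigma = radii[i] * K[0, 0] / z        # screen-space spread (pixels)

        # Per-pixel Gaussian falloff, scaled by the splat's opacity.
        alpha = opacities[i] * np.exp(-((xs - u) ** 2 + (ys - v) ** 2) / (2 * sigma ** 2))
        alpha = np.clip(alpha, 0.0, 0.99)

        image += (transmittance * alpha)[..., None] * colors[i]
        transmittance *= (1.0 - alpha)        # standard front-to-back compositing

    return image

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 50
    means = np.column_stack([rng.uniform(-1, 1, n), rng.uniform(-1, 1, n), rng.uniform(2, 5, n)])
    colors = rng.uniform(0, 1, (n, 3))
    opacities = rng.uniform(0.3, 0.9, n)
    radii = rng.uniform(0.05, 0.2, n)
    K = np.array([[100.0, 0, 80], [0, 100.0, 60], [0, 0, 1]])
    print(render_gaussians(means, colors, opacities, radii, K).shape)  # (120, 160, 3)
```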

Simulation and Rendering of Structure Fires and Wildfires Using AI and Novel Rendering Techniques
This project tasks a team of student interns with developing a dual-mode fire simulation and visualization system: one mode focused on structure fires inside buildings, and the other on wildfires across natural and semi-urban landscapes. The goal is to combine emerging rendering technologies and physically-informed simulation models to create visually compelling, data-driven, and computationally efficient representations of fire behavior in different environments.

For the structure fire module, students will simulate how fire and smoke propagate through a building using architectural geometry and material properties as key inputs. Fire behavior will be influenced by the location and type of fuels encountered (e.g., drywall, wood flooring, fabric furniture), airflow between rooms, and barriers like closed doors. Smoke and flame spread will be animated using efficient volumetric or particle methods, and enhanced with modern techniques such as Gaussian splatting or neural texture synthesis to achieve realistic effects suitable for mobile or AR deployment.

The wildfire module will focus on modeling fire progression across large-scale outdoor terrain. Students will incorporate available environmental data, such as terrain elevation, vegetation types, satellite fire perimeter observations, and weather forecasts (e.g., wind, humidity), to simulate wildfire behavior over time. The team will integrate propagation models (either rule-based or data-driven) and visualize the output in a way that clearly communicates risk zones, direction of spread, and burn intensity. Rendering will be optimized to handle large areas while maintaining immersive quality, potentially leveraging AI models for dynamic smoke and fire visualization at scale.

Throughout the project, students will learn to combine physics-informed modeling, real-time graphics techniques, and AI-driven rendering to prototype tools that could support decision-making or situational awareness in firefighting, training, or public safety AR applications. They will work collaboratively to build, test, and document modular components, potentially using Unity or similar engines, and investigate performance trade-offs between visual realism and computational efficiency.
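As a rough illustration of the kind of rule-based propagation model the wildfire module could start from, the sketch below runs a simple cellular-automaton spread rule on a fuel grid with a wind bias. The grid size, probabilities, and wind handling are placeholder assumptions for demonstration, not project specifications, and the rendering layer is out of scope here.

```python
# Illustrative cellular-automaton wildfire spread on a terrain grid, where
# ignition probability depends on fuel load and a simple wind bias.
import numpy as np

UNBURNED, BURNING, BURNED = 0, 1, 2

def step(state, fuel, wind=(0, 1), base_prob=0.35, wind_gain=0.25, rng=None):
    """Advance the fire one time step on a 2D grid."""
    rng = rng or np.random.default_rng()
    new_state = state.copy()
    h, w = state.shape
    for y in range(h):
        for x in range(w):
            if state[y, x] != UNBURNED or fuel[y, x] <= 0:
                continue
            # A burning neighbour may ignite this cell.
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and state[ny, nx] == BURNING:
                    # Spread is more likely downwind and on heavier fuel.
                    downwind = (-dy, -dx) == tuple(wind)
                    p = (base_prob + wind_gain * downwind) * fuel[y, x]
                    if rng.random() < p:
                        new_state[y, x] = BURNING
                        break
    new_state[state == BURNING] = BURNED      # cells burn for one step, then are spent
    return new_state

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    fuel = rng.uniform(0.4, 1.0, (40, 40))    # stand-in for vegetation density
    state = np.zeros_like(fuel, dtype=int)
    state[20, 20] = BURNING                   # single ignition point
    for _ in range(30):
        state = step(state, fuel, wind=(0, 1), rng=rng)
    print("burned cells:", int((state == BURNED).sum()))
```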

Project CORE (Capture–Operate–Render–Execute): A Low-Level Distributed Architecture for Immersive and Robotic Intelligence
The project aims to develop a foundational architecture for immersive and robotic systems, focusing on distributed real-time data capture, low-latency decentralized computation, and dynamic multimodal output generation. The architecture will support collaborative AR/VR environments, human-robot teaming, and ambient computing networks. Key features include ultra-low latency, low-power design, and a hardware-agnostic input layer that accepts data from various sources such as sensors, wearables, and edge cameras. The modular compute layer will perform logic, physics, and agent-based reasoning on distributed nodes, while the flexible output layer will generate spatial data, 3D geometry, synthesized video, or action commands. The system will also incorporate AI-enhanced configuration using LLMs and orchestrator agents to dynamically assign tasks and balance loads. Use of existing frameworks such as the Robot Operating System (ROS) will be explored and considered.
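A minimal single-process sketch of the Capture-Operate-Render-Execute flow is shown below: three decoupled stages exchange messages over queues as stand-ins for distributed nodes, which in practice might be ROS nodes or networked services. The message fields and the trivial threshold "decision" are illustrative assumptions only, not part of the actual architecture.

```python
# Capture -> compute -> output pipeline with queue-decoupled stages, standing
# in for the distributed input, compute, and output layers described above.
import queue, threading, time, random

capture_q, output_q = queue.Queue(), queue.Queue()

def capture_node(n_samples=5):
    """Hardware-agnostic input layer: wraps any source in a common message format."""
    for i in range(n_samples):
        capture_q.put({"seq": i, "t": time.time(), "value": random.random()})
        time.sleep(0.01)
    capture_q.put(None)                       # end-of-stream marker

def compute_node(threshold=0.5):
    """Modular compute layer: turns raw captures into decisions / derived data."""
    while (msg := capture_q.get()) is not None:
        decision = "act" if msg["value"] > threshold else "idle"
        output_q.put({"seq": msg["seq"], "decision": decision,
                      "latency_ms": (time.time() - msg["t"]) * 1000})
    output_q.put(None)

def output_node():
    """Flexible output layer: prints here, but could emit geometry, video, or commands."""
    while (msg := output_q.get()) is not None:
        print(f"seq={msg['seq']} decision={msg['decision']} "
              f"latency={msg['latency_ms']:.2f} ms")

if __name__ == "__main__":
    threads = [threading.Thread(target=f) for f in (capture_node, compute_node, output_node)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```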

Bliss+ Sensor Module Development
The Bliss+ Sensor Module Development project is designed to create an innovative, cost-effective sensor module that attaches to smartphones, enabling real-time health data collection. The goal is to enhance users' wellbeing by providing insights into their physical and emotional health without the need for expensive wearables. By integrating accessible sensor technology, the module will measure key health metrics such as heart rate, heart rate variability, blood oxygen level, and skin temperature. The project also explores the potential inclusion of electrodermal activity measurement as an advanced feature. Participants will evaluate and integrate open-source libraries or SDKs for processing this data on mobile devices. The project aims to deliver a functional prototype and a simple mobile app that visualizes the collected data, providing a proof of concept for future development.
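To illustrate the kind of on-device processing the module and companion app would need, the sketch below computes mean heart rate and RMSSD (a common heart-rate-variability metric) from beat-to-beat (RR) intervals. The interval values are made up; the real pipeline would derive them from the module's sensor front end or from an evaluated SDK.

```python
# Heart rate and RMSSD from RR intervals (milliseconds), using NumPy only.
import numpy as np

def heart_rate_bpm(rr_intervals_ms):
    """Mean heart rate from RR intervals in milliseconds."""
    return 60_000.0 / np.mean(rr_intervals_ms)

def rmssd_ms(rr_intervals_ms):
    """RMSSD: root mean square of successive RR-interval differences."""
    diffs = np.diff(rr_intervals_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

if __name__ == "__main__":
    # Hypothetical RR intervals (ms) for a resting user.
    rr = np.array([812, 830, 845, 821, 800, 835, 842, 818], dtype=float)
    print(f"heart rate: {heart_rate_bpm(rr):.1f} bpm")
    print(f"RMSSD:      {rmssd_ms(rr):.1f} ms")
```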