Week 12: Initial Testing on Mesh Generation

This week, our main focus was on mesh generation and understanding how to turn point-based or radiance field outputs into clean, usable 3D assets. We experimented with multiple mesh extraction methods and spent time tuning parameters like density, smoothing, and decimation to see how they affect both visual quality and downstream usability. Instead of treating meshing as a “black box” step at the end of the pipeline, we dug into how surface reconstruction actually works and what kinds of inputs (noise level, point distribution, normal estimates) lead to stable outputs. This deeper understanding is already helping us recognize why some scenes reconstruct well while others collapse into artifacts or holes.
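To make this concrete, here is a minimal sketch of the kind of reconstruction experiment we ran, using Open3D’s Poisson surface reconstruction as one representative extraction method. The file paths, search radius, octree depth, and decimation target below are illustrative assumptions, not our final settings.

```python
import numpy as np
import open3d as o3d

# Load a fused point cloud exported from the reconstruction stage
# ("points.ply" is a placeholder path, not our actual export).
pcd = o3d.io.read_point_cloud("points.ply")

# Normal estimates strongly affect reconstruction stability: noisy or
# inconsistently oriented normals are a common cause of collapsed surfaces.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30)
)
pcd.orient_normals_consistent_tangent_plane(30)  # k = 30 neighbors

# Poisson reconstruction; `depth` trades fine detail against noise sensitivity.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)

# Trim low-density vertices: these are the sparsely supported regions that
# tend to become the artifacts and holes we saw in poorly covered scenes.
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.05))

# Smoothing and decimation, the two cleanup knobs we swept in our tests.
mesh = mesh.filter_smooth_taubin(number_of_iterations=10)
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=100_000)

o3d.io.write_triangle_mesh("mesh_poisson.ply", mesh)
```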

On the integration side, we started wiring these mesh outputs back into our VOXEL workflow so that meshes are not generated in isolation but produced as a repeatable stage in the pipeline. We compared meshes from different methods on the same scene, checked how well they preserve fine detail, and considered how they will behave once textured and visualized in tools like Unity. As we refine this stage, we can clearly see the full pipeline coming together: capture → segmentation → reconstruction → mesh → (soon) texturing and interactive viewing. With texturing and interactive viewing as the two remaining steps, we believe an initial end-to-end pipeline is within reach by the end of the semester, and this week’s mesh work was a critical bridge between our earlier 3DGS experiments and a product-ready asset workflow.
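As a rough illustration of how we compared meshes from different methods on the same scene, the snippet below samples points on two candidate meshes and computes nearest-neighbor distances between them, a simple Chamfer-style proxy for how much their surfaces diverge. The file names and sample count are hypothetical stand-ins for our actual outputs.

```python
import numpy as np
import open3d as o3d

# Hypothetical outputs of two extraction methods on the same scene.
mesh_a = o3d.io.read_triangle_mesh("scene_poisson.ply")
mesh_b = o3d.io.read_triangle_mesh("scene_method_b.ply")

# Sample each surface uniformly so the comparison is resolution-agnostic.
pcd_a = mesh_a.sample_points_uniformly(number_of_points=200_000)
pcd_b = mesh_b.sample_points_uniformly(number_of_points=200_000)

# Nearest-neighbor distances in both directions; summing the means gives
# a symmetric, Chamfer-style score (lower means the surfaces agree more).
d_ab = np.asarray(pcd_a.compute_point_cloud_distance(pcd_b))
d_ba = np.asarray(pcd_b.compute_point_cloud_distance(pcd_a))

print(f"mean A->B: {d_ab.mean():.4f}, mean B->A: {d_ba.mean():.4f}")
print(f"Chamfer-style distance: {d_ab.mean() + d_ba.mean():.4f}")
```

A score like this does not replace visual inspection, but it gave us a quick way to rank methods on the same scene before spending time on texturing.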
