
SealD-NeRF: Interactive Pixel-Level Editing for Dynamic Scenes by Neural Radiance Fields

Published on May 28, 2024

ABSTRACT

The widespread adoption of implicit neural representations, especially Neural Radiance Fields (NeRF) as detailed by [1], highlights a growing need for editing capabilities in implicit 3D models, essential for tasks like scene post-processing and 3D content creation. Despite previous efforts in NeRF editing, challenges remain due to limited editing flexibility and quality. The key issue is developing a neural representation that supports local edits with real-time updates. Current NeRF editing methods, whether offering pixel-level adjustments or detailed geometry and color modifications, are mostly limited to static scenes. This paper introduces SealD-NeRF, an extension of Seal-3D for pixel-level editing in dynamic settings, specifically targeting the D-NeRF network [2]. It allows for consistent edits across sequences by mapping editing actions to a specific time frame, freezing the deformation network responsible for dynamic scene representation, and using a teacher-student approach to integrate changes. The code and the supplementary video link are available at https://github.com/ZhentaoHuang/SealD-NeRF.
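To make the high-level recipe in the abstract concrete, the sketch below illustrates a teacher-student distillation step of the kind described: a D-NeRF-style model with a deformation branch that is kept frozen while the student's canonical radiance branch is fitted to an edited teacher at a chosen time frame. This is a minimal illustration under assumed module and function names (`DNeRF`, `distill_step`, etc.), not the authors' actual implementation, which is available at the repository linked above.

```python
# Hypothetical sketch of frozen-deformation, teacher-student distillation.
# All names and architectural details here are illustrative assumptions.
import torch
import torch.nn as nn

class DNeRF(nn.Module):
    """Simplified D-NeRF: a deformation net maps (x, t) to an offset into
    canonical space; a canonical net predicts (rgb, sigma) there."""
    def __init__(self, hidden=64):
        super().__init__()
        self.deform = nn.Sequential(nn.Linear(4, hidden), nn.ReLU(), nn.Linear(hidden, 3))
        self.canonical = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, 4))

    def forward(self, x, t):
        dx = self.deform(torch.cat([x, t], dim=-1))   # per-point offset at time t
        out = self.canonical(x + dx)                  # query canonical space
        rgb, sigma = torch.sigmoid(out[..., :3]), torch.relu(out[..., 3:])
        return rgb, sigma

teacher = DNeRF()                      # pretrained model plus the local edit (frozen)
student = DNeRF()
student.load_state_dict(teacher.state_dict())

# Freeze the deformation branch so scene dynamics stay consistent across the
# sequence; only the student's canonical branch absorbs the pixel-level edit.
for p in student.deform.parameters():
    p.requires_grad_(False)
teacher.requires_grad_(False)

opt = torch.optim.Adam([p for p in student.parameters() if p.requires_grad], lr=1e-3)

def distill_step(x, t):
    """One distillation step on sampled points x at the edited time frame t."""
    with torch.no_grad():
        rgb_T, sigma_T = teacher(x, t)                # edited supervision signal
    rgb_S, sigma_S = student(x, t)
    loss = (rgb_S - rgb_T).pow(2).mean() + (sigma_S - sigma_T).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Example usage: random samples inside the edit region at one time frame.
x = torch.rand(1024, 3)
t = torch.full((1024, 1), 0.5)
print(distill_step(x, t))
```

Because only the canonical branch is optimized while the deformation network is frozen, an edit supervised at a single time frame propagates consistently to the rest of the sequence through the shared deformation field.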

Month: May

Year: 2024

Venue: 21st Conference on Robots and Vision

URL: https://crv.pubpub.org/pub/yx042z6o

