Andreea Dogaru

Publications



Generalizable 3D Scene Reconstruction via Divide and Conquer from a Single View

Andreea Dogaru, Mert Özer, Bernhard Egger

International Conference on 3D Vision - 3DV 2025

Project Page

Abstract

Single-view 3D reconstruction is currently approached from two dominant perspectives: reconstruction of scenes with limited diversity using 3D data supervision, or reconstruction of diverse singular objects using large image priors.

However, real-world scenarios are far more complex and exceed the capabilities of these methods. We therefore propose a hybrid method following a divide-and-conquer strategy. We first process the scene holistically, extracting depth and semantic information, and then leverage a single-shot object-level method for the detailed reconstruction of individual components.

By following a compositional processing approach, the overall framework achieves full reconstruction of complex 3D scenes from a single image. We purposely design our pipeline to be highly modular by carefully integrating specific procedures for each processing step, without requiring an end-to-end training of the whole system.

This enables the pipeline to naturally improve as future methods can replace the individual modules. We demonstrate the reconstruction performance of our approach on both synthetic and real-world scenes, comparing favorably against prior works.
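As a schematic illustration of such a modular divide-and-conquer design (the function names and signatures below are hypothetical placeholders, not the paper's actual API), each stage can be an injected callable so that individual modules are swappable without retraining the whole system:

```python
# Schematic sketch of a modular divide-and-conquer single-view pipeline.
# All names here are illustrative assumptions, not the paper's implementation.

def reconstruct_scene(image, estimate_depth, segment_objects,
                      reconstruct_object, compose):
    """Holistic analysis first, then per-object reconstruction, then composition.

    Each stage is passed in as a callable, so any module can be replaced
    independently by a future, better method.
    """
    depth = estimate_depth(image)           # divide: scene-level depth cues
    masks = segment_objects(image)          # divide: semantic decomposition
    parts = [reconstruct_object(image, m, depth) for m in masks]  # conquer
    return compose(parts, depth)            # assemble the full 3D scene
```

Because the stages only communicate through their inputs and outputs, upgrading, say, the depth estimator requires no changes to the object-level reconstructor.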

Paper

Code


RANRAC: Robust Neural Scene Representations via Random Ray Consensus

Benno Buschmann, Andreea Dogaru, Elmar Eisemann, Michael Weinmann, Bernhard Egger

European Conference on Computer Vision - ECCV 2024

Project Page

Abstract

Learning-based scene representations such as neural radiance fields or light field networks, which rely on fitting a scene model to image observations, commonly encounter challenges in the presence of inconsistencies within the images caused by occlusions, inaccurately estimated camera parameters, or effects like lens flare.

To address this challenge, we introduce RANdom RAy Consensus (RANRAC), an efficient approach to eliminate the effect of inconsistent data, taking inspiration from classical RANSAC-based outlier detection for model fitting. In contrast to the down-weighting of the effect of outliers based on robust loss formulations, our approach reliably detects and excludes inconsistent perspectives, resulting in clean images without floating artifacts.

For this purpose, we formulate a fuzzy adaption of the RANSAC paradigm, enabling its application to large-scale models. We interpret the minimal number of samples required to determine the model parameters as a tunable hyperparameter, investigate the generation of hypotheses with data-driven models, and analyse the validation of hypotheses in noisy environments.
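To make the underlying paradigm concrete, here is a minimal sketch of classical RANSAC on a toy 2D line-fitting problem. This illustrates only the generic hypothesize-and-verify loop that RANRAC adapts, not the paper's fuzzy, ray-based formulation; all names and thresholds are illustrative:

```python
import numpy as np

def ransac_line(points, n_iters=200, threshold=0.1, seed=0):
    """Classical RANSAC: repeatedly fit a model to a minimal random sample
    and keep the hypothesis supported by the largest consensus set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Minimal sample: two points define a line hypothesis.
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue  # degenerate sample, skip
        # Perpendicular distance of every point to the hypothesised line.
        normal = np.array([-d[1], d[0]]) / norm
        dist = np.abs((points - p) @ normal)
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Synthetic data: 50 points on y = 2x plus three gross outliers.
xs = np.linspace(0, 1, 50)
clean = np.stack([xs, 2 * xs], axis=1)
outliers = np.array([[0.1, 5.0], [0.5, -3.0], [0.9, 7.0]])
data = np.concatenate([clean, outliers])
mask = ransac_line(data)
print(mask.sum())  # consensus set excludes the outliers
```

RANRAC replaces the hard inlier test with a fuzzy one and the toy line model with a learned scene representation, but the hypothesize-sample-validate structure is the same.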

We demonstrate the compatibility and potential of our solution for both photo-realistic robust multi-view reconstruction from real-world images based on neural radiance fields and for single-shot reconstruction based on light field networks. In particular, the results indicate significant improvements compared to state-of-the-art robust methods for novel-view synthesis on both synthetic and captured scenes with various inconsistencies, including occlusions, noisy camera pose estimates, and unfocused perspectives.

The results further indicate significant improvements for single-shot reconstruction from occluded images.

Paper


ArCSEM: Artistic Colorization of SEM Images via Gaussian Splatting

Takuma Nishimura, Andreea Dogaru, Martin Oeggerli, Bernhard Egger

AI for Visual Arts Workshop - ECCVW 2024

Project Page

Abstract

Scanning Electron Microscopes (SEMs) are widely renowned for their ability to analyze the surface structures of microscopic objects, offering the capability to capture highly detailed, yet only grayscale, images.

To create more expressive and realistic illustrations, these images are typically manually colorized by an artist with the support of image editing software. This task becomes highly laborious when multiple images of a scanned object require colorization. We propose facilitating this process by using the underlying 3D structure of the microscopic scene to propagate the color information to all the captured images, from as little as one colorized view.

We explore several scene representation techniques and achieve high-quality colorized novel view synthesis of a SEM scene. In contrast to prior work, there is no manual intervention or labelling involved in obtaining the 3D representation. This enables an artist to color a single or few views of a sequence and automatically obtain a fully colored scene or video.

Paper

Poster


Neural Haircut: Prior-Guided Strand-Based Hair Reconstruction

Vanessa Sklyarova, Jenya Chelishev, Andreea Dogaru, Igor Medvedev, Egor Zakharov, Victor Lempitsky

International Conference on Computer Vision - ICCV 2023

Project Page

Abstract

We propose an approach that can accurately reconstruct hair geometry at a strand level from a monocular video or multi-view images captured in uncontrolled lighting conditions.

Our method has two stages, with the first stage performing joint reconstruction of coarse hair and bust shapes and hair orientation using implicit volumetric representations. The second stage then estimates a strand-level hair reconstruction by reconciling, in a single optimization process, the coarse volumetric constraints with hair strand and hairstyle priors learned from synthetic data.

To further increase the reconstruction fidelity, we incorporate image-based losses into the fitting process using a new differentiable renderer. The combined system, called Neural Haircut, achieves high realism and personalization of the reconstructed hairstyles.

Paper

Code


Sphere-Guided Training of Neural Implicit Surfaces

Andreea Dogaru, Andrei-Timotei Ardelean, Savva Ignatyev, Egor Zakharov, Evgeny Burnaev

Conference on Computer Vision and Pattern Recognition - CVPR 2023

Project Page

Abstract

In recent years, neural distance functions trained via volumetric ray marching have been widely adopted for multi-view 3D reconstruction.

These methods, however, apply the ray marching procedure to the entire scene volume, leading to reduced sampling efficiency and, as a result, lower reconstruction quality in areas of high-frequency detail. In this work, we address this problem via joint training of the implicit function and our new coarse sphere-based surface reconstruction.

We use the coarse representation to efficiently exclude the empty volume of the scene from the volumetric ray marching procedure without additional forward passes of the neural surface network, which leads to increased fidelity of the reconstructions compared to the base systems. We evaluate our approach by incorporating it into the training procedures of several implicit surface modeling methods and observe uniform improvements across both synthetic and real-world datasets.
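The core idea of skipping empty space with a coarse sphere cover can be sketched as follows. This is an illustrative toy (uniform candidate samples filtered against a union of spheres), under assumed names and a simplified sampling scheme, not the paper's actual training procedure:

```python
import numpy as np

def sphere_guided_samples(origin, direction, centers, radii,
                          n_samples=64, t_near=0.0, t_far=4.0):
    """Keep only ray-marching samples that fall inside a union of coarse
    bounding spheres, so the implicit network is never queried in empty space.

    Illustrative sketch with assumed names, not the paper's implementation.
    """
    t = np.linspace(t_near, t_far, n_samples)              # uniform candidates
    pts = origin[None, :] + t[:, None] * direction[None, :]
    # Distance from each candidate point to each sphere centre.
    d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=-1)
    inside = (d <= radii[None, :]).any(axis=1)             # inside any sphere?
    return t[inside], pts[inside]

# A single ray through one coarse sphere at the origin.
origin = np.array([0.0, 0.0, -2.0])
direction = np.array([0.0, 0.0, 1.0])
centers = np.array([[0.0, 0.0, 0.0]])
radii = np.array([0.5])
t_kept, pts_kept = sphere_guided_samples(origin, direction, centers, radii)
print(len(t_kept))  # only the samples intersecting the sphere survive
```

All surviving samples lie within the sphere's extent along the ray, so the expensive surface network would only be evaluated there, which is the source of the efficiency gain described above.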

Paper

Code