🔹 1. Denoising with Confidence Estimation (Failed)
The confidence may be learned inherently during noise estimation.
- Train the noise estimator to also output a confidence map.
- Use this to weight how much noise to subtract:
  denoised = bev_feat - confidence * noise
- Helps avoid over-subtraction in uncertain regions.
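A minimal PyTorch sketch of the idea above: a shared head predicts both the noise map and a one-channel confidence map, and the confidence gates the subtraction. The module name and channel counts are illustrative, not the actual implementation.

```python
import torch
import torch.nn as nn

class NoiseEstimatorWithConfidence(nn.Module):
    """Hypothetical head predicting noise + a per-pixel confidence map."""

    def __init__(self, channels=64):
        super().__init__()
        # one conv head; first `channels` outputs are noise, last one is confidence
        self.head = nn.Conv2d(channels, channels + 1, kernel_size=3, padding=1)

    def forward(self, bev_feat):
        out = self.head(bev_feat)
        noise = out[:, :-1]                      # estimated noise map
        confidence = torch.sigmoid(out[:, -1:])  # in [0, 1], broadcast over channels
        denoised = bev_feat - confidence * noise # soft, confidence-weighted subtraction
        return denoised, confidence

feat = torch.randn(2, 64, 32, 32)
denoised, conf = NoiseEstimatorWithConfidence()(feat)
```

Where confidence is near zero the feature passes through almost untouched, which is exactly the behavior wanted in uncertain regions.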
🔹 2. Feature Consistency Loss: Already being applied
- Add a loss between clean BEV features (from the autoencoder) and denoised features (after noise subtraction).
- Encourages your model to recover clean structure.
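The consistency loss described above can be as simple as an L2 distance between the two feature maps (a sketch; L1 or cosine similarity are common alternatives):

```python
import torch
import torch.nn.functional as F

def feature_consistency_loss(denoised_feat, clean_feat):
    # L2 distance between the denoised feature and the autoencoder's clean feature
    return F.mse_loss(denoised_feat, clean_feat)

clean = torch.randn(2, 64, 32, 32)
denoised = clean + 0.1 * torch.randn_like(clean)  # stand-in for the denoiser output
loss = feature_consistency_loss(denoised, clean)
```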
🔹 3. Multi-scale Noise Estimation
Not applicable due to the BEV feature-map resolution issue.
- Use a multi-scale U-Net to estimate noise at different resolutions.
- Subtract noise progressively at each scale.
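Even though it was not applicable here, the coarse-to-fine subtraction can be sketched as below (hypothetical module; each scale has its own small noise head rather than a full U-Net, to keep the example short):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleDenoiser(nn.Module):
    """Sketch: estimate noise at coarse resolutions first, subtract progressively."""

    def __init__(self, channels=64, scales=(4, 2, 1)):
        super().__init__()
        self.scales = scales  # downsampling factors, coarse to fine
        self.heads = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in scales
        )

    def forward(self, bev_feat):
        h, w = bev_feat.shape[-2:]
        x = bev_feat
        for s, head in zip(self.scales, self.heads):
            # estimate noise at 1/s resolution, upsample back, subtract
            down = F.interpolate(x, size=(h // s, w // s), mode="bilinear", align_corners=False)
            noise = F.interpolate(head(down), size=(h, w), mode="bilinear", align_corners=False)
            x = x - noise
        return x

out = MultiScaleDenoiser()(torch.randn(2, 64, 32, 32))
```

The resolution constraint mentioned above would bite in the `h // s, w // s` step when the BEV feature map is too small to downsample meaningfully.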
🔹 4. Adversarial Training
- Use a discriminator to distinguish clean vs. noisy BEV features.
- Train the noise estimator + VT encoder to fool the discriminator.
- Similar to denoising GANs.
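A minimal version of this adversarial setup, assuming 64-channel BEV features and using a patch discriminator with a least-squares GAN loss (both are assumptions, not the actual design):

```python
import torch
import torch.nn as nn

# Patch discriminator over BEV features: outputs a per-patch real/fake score
disc = nn.Sequential(
    nn.Conv2d(64, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1),
)

def d_loss(clean_feat, denoised_feat):
    # discriminator: push clean toward 1, denoised (detached) toward 0
    real = disc(clean_feat)
    fake = disc(denoised_feat.detach())
    return ((real - 1) ** 2).mean() + (fake ** 2).mean()

def g_loss(denoised_feat):
    # noise estimator + VT encoder: try to make denoised features look clean
    return ((disc(denoised_feat) - 1) ** 2).mean()

clean = torch.randn(2, 64, 32, 32)
denoised = clean + 0.1 * torch.randn_like(clean)
ld, lg = d_loss(clean, denoised), g_loss(denoised)
```

In practice the two losses are optimized in alternation, as in standard denoising GANs.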
🔹 5. Joint Training with Reconstruction Head: This is the same as the decoder training part
- Add an auxiliary task to reconstruct the input image or GT BEV map from the denoised BEV feature.
- Forces the model to retain meaningful signal while reducing noise.
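For the GT-BEV-map variant of this auxiliary task, the head can be a light decoder on top of the denoised feature. The sketch below assumes a single-channel occupancy-style BEV target in [0, 1]; the head shape and loss are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Auxiliary head: denoised BEV feature -> BEV occupancy logits
recon_head = nn.Conv2d(64, 1, kernel_size=1)

def reconstruction_loss(denoised_feat, gt_bev):
    pred = recon_head(denoised_feat)
    return F.binary_cross_entropy_with_logits(pred, gt_bev)

feat = torch.randn(2, 64, 32, 32)   # stand-in for the denoised BEV feature
gt = torch.rand(2, 1, 32, 32)       # stand-in for a GT BEV map
loss = reconstruction_loss(feat, gt)
```

This term is added to the main objective, so the denoiser cannot reduce noise by simply erasing the signal the reconstruction needs.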