End2endImaging


End2endImaging is an open-source framework for end-to-end differentiable computational imaging. It models the full camera pipeline — differentiable optics, sensor simulation, and neural image processing — as a single computation graph in PyTorch, enabling optics-algorithm co-design through gradient-based optimization.

*End2endImaging pipeline: differentiable optics, sensor simulation, and neural network for end-to-end computational imaging.*

Docs · Community · Citation

End-to-End Differentiable Pipeline

The core of End2endImaging is the Camera class, which composes a lens model and a sensor into a fully differentiable imaging pipeline. Gradients flow from downstream task losses (reconstruction, detection, classification) back through the neural network, sensor noise model, ISP, and into the optical design parameters — enabling hardware-software co-optimization.
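The gradient flow described above can be sketched with toy stand-ins (the real `Camera`, lens, and sensor APIs differ; every class here is illustrative):

```python
# Minimal sketch of end-to-end differentiability: a task loss backpropagates
# through a reconstruction net and sensor noise into the "optics" parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyLens(nn.Module):
    """Stand-in for a differentiable lens: a learnable 5x5 blur kernel (PSF)."""
    def __init__(self):
        super().__init__()
        self.psf = nn.Parameter(torch.rand(1, 1, 5, 5))
    def forward(self, scene):
        k = self.psf / self.psf.sum()          # energy-conserving PSF
        return F.conv2d(scene, k, padding=2)

class ToySensor(nn.Module):
    """Stand-in for sensor simulation: additive Gaussian read noise."""
    def forward(self, irradiance):
        return irradiance + 0.01 * torch.randn_like(irradiance)

class ToyCamera(nn.Module):
    """Composes lens + sensor + network into one computation graph."""
    def __init__(self):
        super().__init__()
        self.lens, self.sensor = ToyLens(), ToySensor()
        self.net = nn.Conv2d(1, 1, 3, padding=1)  # tiny "reconstruction net"
    def forward(self, scene):
        return self.net(self.sensor(self.lens(scene)))

cam = ToyCamera()
scene = torch.rand(1, 1, 32, 32)
loss = F.mse_loss(cam(scene), scene)           # downstream task loss
loss.backward()
assert cam.lens.psf.grad is not None           # gradients reach the optics
```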

Differentiable Optics (DeepLens)

The deeplens/ module provides differentiable lens models for optical simulation and design:

  • GeoLens — Multi-element refractive lens via differentiable ray tracing. Supports Zemax/Code V/JSON I/O, automated lens design, Seidel aberration analysis, and tolerancing.
  • HybridLens — Refractive lens + diffractive optical element (DOE). Coherent ray tracing + Angular Spectrum Method propagation.
  • DiffractiveLens — Pure wave-optics lens using diffractive surfaces and scalar diffraction.
  • PSFNetLens — Neural surrogate wrapping a GeoLens with an MLP for fast PSF prediction.
  • ParaxialLens — Thin-lens model for depth-of-field and bokeh simulation.
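As a small worked example of what a paraxial model computes, the thin-lens circle of confusion governing defocus and bokeh is differentiable with respect to scene depth (a textbook formula; the `ParaxialLens` implementation details may differ):

```python
# Thin-lens circle-of-confusion diameter, differentiable w.r.t. depth.
import torch

def circle_of_confusion(depth, focus_dist, focal_length, aperture_d):
    """c = A * f * |d - s| / (d * (s - f)), all quantities in meters."""
    return aperture_d * focal_length * (depth - focus_dist).abs() / (
        depth * (focus_dist - focal_length))

depth = torch.tensor([1.0, 2.0, 5.0], requires_grad=True)  # scene depths (m)
coc = circle_of_confusion(depth,
                          focus_dist=torch.tensor(2.0),    # focused at 2 m
                          focal_length=torch.tensor(0.05), # 50 mm lens
                          aperture_d=torch.tensor(0.025))  # f/2 aperture
coc.sum().backward()   # defocus blur is differentiable w.r.t. depth
```

At the focus distance (2 m) the blur diameter is zero, and it grows for nearer and farther points, which is exactly the signal a depth-of-field/bokeh simulator needs.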

Sensor & ISP Simulation

Physically based sensor simulation with a Bayer color filter array (CFA), read- and shot-noise models, and a composable ISP pipeline (black level, white balance, demosaicing, color correction, gamma, tone mapping). Each stage is a differentiable torch.nn.Module.
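Because each stage is an `nn.Module`, an ISP composes like any other torch pipeline. A minimal illustrative stage (not the framework's actual class) might look like:

```python
# One differentiable ISP stage: gamma correction with a learnable exponent.
import torch
import torch.nn as nn

class Gamma(nn.Module):
    """Illustrative ISP stage: x -> x^(1/gamma), gamma learnable."""
    def __init__(self, gamma=2.2):
        super().__init__()
        self.gamma = nn.Parameter(torch.tensor(float(gamma)))
    def forward(self, x):
        return x.clamp(min=1e-8) ** (1.0 / self.gamma)

isp = nn.Sequential(Gamma())          # stages chain like any torch modules
raw = torch.rand(1, 3, 8, 8, requires_grad=True)
out = isp(raw)
out.mean().backward()                 # gradients flow through the ISP
```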

Neural Networks

Built-in reconstruction networks (NAFNet, UNet, Restormer) for restoring clean images from degraded sensor captures, plus PSF surrogate networks (MLP, SIREN) for fast PSF prediction during training.

Additional features (available upon inquiry):

  • Kernel Acceleration. >10x speedup and >90% GPU memory reduction with custom GPU kernels (NVIDIA & AMD).
  • Distributed Optimization. Distributed simulation for billions of rays and high-resolution (>100k) diffractive computations.

Applications

1. End-to-End Optics-Algorithm Co-Design

Jointly optimize lens optics and neural reconstruction networks using final image quality (or classification/detection/segmentation) as the training objective. Gradients flow end-to-end from task loss into optical design parameters.
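A co-design training step often uses separate learning rates for optical and network parameters, since optics typically tolerate only small updates. A sketch under that assumption (all names illustrative):

```python
# Joint optimization of "optics" and a reconstruction network with
# per-parameter-group learning rates.
import torch
import torch.nn as nn
import torch.nn.functional as F

lens_params = nn.Parameter(torch.rand(1, 1, 5, 5))  # stand-in optical params (PSF)
net = nn.Conv2d(1, 1, 3, padding=1)                 # stand-in reconstruction net

optimizer = torch.optim.Adam([
    {"params": [lens_params],    "lr": 1e-4},  # optics: smaller steps
    {"params": net.parameters(), "lr": 1e-3},  # network: larger steps
])

for _ in range(3):                               # a few illustrative steps
    scene = torch.rand(4, 1, 16, 16)
    blurred = F.conv2d(scene, lens_params / lens_params.sum(), padding=2)
    restored = net(blurred)
    loss = F.mse_loss(restored, scene)           # one task loss drives both
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```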

paper

*End-to-end optics-algorithm co-design: jointly optimizing lens and neural network.*

2. Automated Lens Design

Fully automated lens design from scratch using curriculum learning and differentiable optimization. Design camera lenses, cellphone lenses, and AR/VR optics with gradient descent. Try it with AutoLens!
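The core idea of curriculum lens design is to start from an easy specification and gradually tighten it during optimization. A minimal schedule of that kind (the actual curriculum in the paper is more involved) could be:

```python
# Linearly grow a design specification (e.g. fraction of the full aperture
# or field of view) from an easy value to the final target over training.
def curriculum(step, total_steps, start=0.2, end=1.0):
    """Return the specification fraction to use at this optimization step."""
    t = min(step / total_steps, 1.0)   # clamp so late steps stay at the target
    return start + t * (end - start)

# Early steps optimize an easy, nearly-paraxial design; later steps face
# the full aperture/field, where aberrations are hardest to correct.
for step in (0, 50, 100):
    frac = curriculum(step, total_steps=100)
```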

paper quickstart

*Automated lens design optimization with differentiable ray tracing.*

*Curriculum learning for ab initio lens design.*

3. Hybrid Refractive-Diffractive Lens Design

Design hybrid refractive-diffractive lenses (DOEs, metalenses) with a differentiable ray-wave model combining geometric ray tracing and wave optics propagation.
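The wave-optics half of such a ray-wave model is typically an Angular Spectrum Method step. A generic, differentiable ASM propagator (standard scalar diffraction, not the framework's exact routine) can be written in a few lines of torch:

```python
# Angular Spectrum Method: propagate a complex field by distance z.
import torch

def angular_spectrum(field, wavelength, z, dx):
    """field: (N, N) complex tensor sampled at pitch dx (meters)."""
    n = field.shape[-1]
    fx = torch.fft.fftfreq(n, d=dx)                    # spatial frequencies
    FX, FY = torch.meshgrid(fx, fx, indexing="ij")
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * torch.pi / wavelength * torch.sqrt(arg.clamp(min=0))
    H = torch.exp(1j * z * kz) * (arg > 0)             # drop evanescent waves
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

u0 = torch.ones(64, 64, dtype=torch.complex64)         # unit plane wave
u1 = angular_spectrum(u0, wavelength=633e-9, z=1e-3, dx=5e-6)
```

A plane wave only picks up a global phase under propagation, so the amplitude of `u1` stays (numerically) one everywhere, which is a quick sanity check for any ASM implementation.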

report

*Hybrid refractive-diffractive lens design with differentiable ray-wave simulation.*

4. Implicit Lens Representation

Neural surrogate network for fast aberration-aware image simulation, enabling real-time PSF prediction for computational photography applications.
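The surrogate idea amounts to replacing per-query ray tracing with one network forward pass. A minimal sketch, assuming a hypothetical conditioning vector (field position, depth, focus) and a softmax output to keep the PSF non-negative and normalized:

```python
# Illustrative PSF surrogate: map lens conditions to a normalized PSF.
import torch
import torch.nn as nn

class PSFSurrogate(nn.Module):
    def __init__(self, psf_size=11):
        super().__init__()
        self.psf_size = psf_size
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, psf_size * psf_size),
        )
    def forward(self, cond):
        logits = self.mlp(cond)
        psf = torch.softmax(logits, dim=-1)   # non-negative, sums to 1
        return psf.view(-1, self.psf_size, self.psf_size)

model = PSFSurrogate()
cond = torch.tensor([[0.5, 2.0, 1.8]])        # (field, depth, focus) query
psf = model(cond)                             # one forward pass per PSF
```

In practice such a surrogate is trained against PSFs from the differentiable ray tracer, then queried at full speed during end-to-end training.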

paper link

*Neural implicit lens representation for fast PSF prediction.*

Installation

Clone this repo:

git clone https://github.com/vccimaging/End2endImaging
cd End2endImaging

Create a conda environment:

conda create -n end2end_env python=3.12
conda activate end2end_env

# Linux and Mac
pip install torch torchvision
# Windows
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu128

pip install -r requirements.txt

or

conda env create -f environment.yml -n end2end_env

Run the demo code:

python 0_hello_deeplens.py

Project Structure

End2endImaging/
│
├── end2end_imaging/
│   ├── camera.py          # Camera: composes lens + sensor into a differentiable pipeline
│   ├── deeplens/          # Differentiable optics (lens models, surfaces, ray tracing)
│   ├── sensor/            # Sensor simulation (Bayer CFA, noise, ISP pipeline)
│   └── network/           # Neural networks (reconstruction, PSF surrogates, losses)
│
├── 0_hello_deeplens.py    # Code tutorials
├── ...
└── write_your_own_code.py

Community

Join our Slack workspace and WeChat Group (singeryang1999) to connect with our core contributors, receive the latest industry updates, and be part of our community. For any inquiries, contact Xinge Yang (xinge.yang@kaust.edu.sa).

Contribution

We welcome all contributions. To get started, please read our Contributing Guide or check out open issues. All project participants are expected to adhere to our Code of Conduct. A list of contributors can be viewed on the Contributors page.

Citation

If you use this project in your research, please cite the paper. See more in History of DeepLens.

@article{yang2024curriculum,
  title={Curriculum learning for ab initio deep learned refractive optics},
  author={Yang, Xinge and Fu, Qiang and Heidrich, Wolfgang},
  journal={Nature Communications},
  volume={15},
  number={1},
  pages={6572},
  year={2024},
  publisher={Nature Publishing Group UK London}
}
