End2endImaging is an open-source framework for end-to-end differentiable computational imaging. It models the full camera pipeline — differentiable optics, sensor simulation, and neural image processing — as a single computation graph in PyTorch, enabling optics-algorithm co-design through gradient-based optimization.
The core of End2endImaging is the Camera class, which composes a lens model and a sensor into a fully differentiable imaging pipeline. Gradients flow from downstream task losses (reconstruction, detection, classification) back through the neural network, sensor noise model, ISP, and into the optical design parameters — enabling hardware-software co-optimization.
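The gradient path described above can be sketched with a toy stand-in pipeline. The learnable blur width `sigma`, the noise level, and the one-layer network below are illustrative placeholders for GeoLens, the sensor model, and the reconstruction network; none of these names are End2endImaging API.

```python
import torch

# "Optical design parameter": a learnable blur width stands in for lens curvatures
sigma = torch.tensor(1.5, requires_grad=True)
net = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)  # stand-in reconstruction net

scene = torch.rand(1, 1, 16, 16)

# "Optics": a Gaussian blur whose kernel depends differentiably on sigma
x = torch.arange(-2, 3, dtype=torch.float32)
g = torch.exp(-x**2 / (2 * sigma**2))
g = g / g.sum()
kernel = (g[:, None] * g[None, :]).view(1, 1, 5, 5)
blurred = torch.nn.functional.conv2d(scene, kernel, padding=2)

# "Sensor": additive read noise
capture = blurred + 0.01 * torch.randn_like(blurred)

# The task loss drives gradients through the network AND into the optics parameter
loss = torch.nn.functional.mse_loss(net(capture), scene)
loss.backward()
```

After `backward()`, `sigma.grad` is populated, which is exactly the property that makes optics-algorithm co-design possible.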
Differentiable Optics (DeepLens)
The deeplens/ module provides differentiable lens models for optical simulation and design:
- GeoLens — Multi-element refractive lens via differentiable ray tracing. Supports Zemax/Code V/JSON I/O, automated lens design, Seidel aberration analysis, and tolerancing.
- HybridLens — Refractive lens + diffractive optical element (DOE). Coherent ray tracing + Angular Spectrum Method propagation.
- DiffractiveLens — Pure wave-optics lens using diffractive surfaces and scalar diffraction.
- PSFNetLens — Neural surrogate wrapping a GeoLens with an MLP for fast PSF prediction.
- ParaxialLens — Thin-lens model for depth-of-field and bokeh simulation.
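To illustrate the simplest of these models, the paraxial picture behind ParaxialLens can be written with standard 2x2 ray-transfer matrices (textbook optics, not library API). A ray is a `[height, angle]` pair; an on-axis object point at distance 2f images sharply at 2f behind the lens:

```python
import numpy as np

def thin_lens(f):
    # Refraction by an ideal thin lens of focal length f
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def propagate(d):
    # Free-space propagation over distance d
    return np.array([[1.0, d], [0.0, 1.0]])

f = 50.0                        # focal length (mm)
ray = np.array([0.0, 0.1])      # on-axis object point, slope 0.1
out = propagate(2 * f) @ thin_lens(f) @ propagate(2 * f) @ ray
# out[0] ~ 0: the ray re-crosses the axis at the image plane
```

GeoLens generalizes this idea to exact (non-paraxial) ray tracing through multi-element systems, with every surface parameter differentiable.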
The sensor/ module provides physically accurate sensor simulation with a Bayer CFA, a read/shot noise model, and a composable ISP pipeline (black level, white balance, demosaicing, color correction, gamma, tone mapping). Each stage is a differentiable torch.nn.Module.
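A minimal sketch of such a sensor model, assuming the usual physical convention (shot noise is Poisson in photoelectrons, read noise is Gaussian); the gain and noise values here are illustrative, and the ISP stages are written as plain functions where the library uses differentiable torch.nn.Module stages:

```python
import numpy as np

rng = np.random.default_rng(0)

def capture(irradiance, full_well=10000.0, read_noise_e=2.0):
    # Shot noise: Poisson in photoelectrons; read noise: additive Gaussian
    electrons = rng.poisson(irradiance * full_well).astype(float)
    electrons += rng.normal(0.0, read_noise_e, irradiance.shape)
    return np.clip(electrons / full_well, 0.0, 1.0)

def black_level(x, bl=0.02):
    # Subtract the sensor's black-level offset and renormalize
    return np.clip((x - bl) / (1.0 - bl), 0.0, 1.0)

def gamma(x, g=2.2):
    # Simple display gamma
    return x ** (1.0 / g)

raw = capture(np.full((4, 4), 0.25))
img = gamma(black_level(raw))
```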
The network/ module provides built-in reconstruction networks (NAFNet, UNet, Restormer) for restoring clean images from degraded sensor captures, plus PSF surrogate networks (MLP, SIREN) for fast PSF prediction during training.
- Kernel Acceleration. >10x speedup and >90% GPU memory reduction with custom GPU kernels (NVIDIA & AMD).
- Distributed Optimization. Distributed simulation for billions of rays and high-resolution (>100k) diffractive computations.
Jointly optimize lens optics and neural reconstruction networks using final image quality (or classification/detection/segmentation) as the training objective. Gradients flow end-to-end from task loss into optical design parameters.
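In practice, optical parameters usually tolerate much smaller steps than network weights, so the joint loop typically uses one optimizer with separate parameter groups. The sketch below uses toy stand-ins (the scalar "lens" parameter and linear net are not library API) to show the pattern:

```python
import torch

lens_params = [torch.nn.Parameter(torch.tensor(1.5))]  # stand-in for curvatures, thicknesses, ...
net = torch.nn.Linear(4, 4)                            # stand-in reconstruction network

opt = torch.optim.Adam([
    {"params": lens_params, "lr": 1e-4},       # small steps for optics
    {"params": net.parameters(), "lr": 1e-3},  # larger steps for the network
])

for _ in range(3):
    x = torch.rand(4)
    y = net(x * lens_params[0])                # toy "render, then reconstruct"
    loss = (y - x).pow(2).mean()               # task objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```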
Fully automated lens design from scratch using curriculum learning and differentiable optimization. Design camera lenses, cellphone lenses, and AR/VR optics with gradient descent. Try it with AutoLens!
Design hybrid refractive-diffractive lenses (DOEs, metalenses) with a differentiable ray-wave model combining geometric ray tracing and wave optics propagation.
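The wave-optics half of that ray-wave model is Angular Spectrum Method propagation, which advances a scalar field by a distance z in the Fourier domain. A minimal NumPy sketch (the function name and defaults are illustrative, not library API):

```python
import numpy as np

def asm_propagate(field, z, wavelength, dx):
    # Transfer function H(fx, fy) = exp(i*2*pi*z*sqrt(1/lambda^2 - fx^2 - fy^2))
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent components dropped
    H = np.exp(1j * kz * z)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# A uniform plane wave only accumulates phase, so its magnitude is preserved
field = np.ones((64, 64), dtype=complex)
out = asm_propagate(field, z=1e-3, wavelength=550e-9, dx=2e-6)
```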
Neural surrogate network for fast aberration-aware image simulation, enabling real-time PSF prediction for computational photography applications.
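One way such a surrogate can be structured, as a hedged sketch: an MLP mapping field coordinates to a flattened PSF patch, with a softmax so the predicted PSF is nonnegative and energy-normalized. The input/output sizes and class name here are assumptions, not End2endImaging API:

```python
import torch

class PSFSurrogate(torch.nn.Module):
    def __init__(self, psf_size=11):
        super().__init__()
        self.psf_size = psf_size
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(3, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, psf_size * psf_size),
        )

    def forward(self, coords):
        # coords: (B, 3) = normalized (x, y, depth) field coordinates
        psf = self.mlp(coords).softmax(dim=-1)  # nonnegative, sums to 1 per sample
        return psf.view(-1, self.psf_size, self.psf_size)

psf = PSFSurrogate()(torch.rand(2, 3))  # (2, 11, 11) batch of predicted PSFs
```

Once trained against ray-traced ground truth, a network like this replaces per-pixel ray tracing with a single forward pass.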
Clone this repo:
git clone https://github.com/vccimaging/End2endImaging
cd End2endImaging
Create a conda environment:
conda create -n end2end_env python=3.12
conda activate end2end_env
# Linux and Mac
pip install torch torchvision
# Windows
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu128
pip install -r requirements.txt
Alternatively, create the environment from the provided file:
conda env create -f environment.yml -n end2end_env
Run the demo code:
python 0_hello_deeplens.py
End2endImaging/
│
├── end2end_imaging/
│ ├── camera.py # Camera: composes lens + sensor into a differentiable pipeline
│ ├── deeplens/ # Differentiable optics (lens models, surfaces, ray tracing)
│ ├── sensor/ # Sensor simulation (Bayer CFA, noise, ISP pipeline)
│ └── network/ # Neural networks (reconstruction, PSF surrogates, losses)
│
├── 0_hello_deeplens.py # Code tutorials
├── ...
└── write_your_own_code.py
Join our Slack workspace and WeChat Group (singeryang1999) to connect with our core contributors, receive the latest industry updates, and be part of our community. For any inquiries, contact Xinge Yang (xinge.yang@kaust.edu.sa).
We welcome all contributions. To get started, please read our Contributing Guide or check out open questions. All project participants are expected to adhere to our Code of Conduct. A list of contributors can be viewed in Contributors.
If you use this project in your research, please cite the paper. See more in History of DeepLens.
@article{yang2024curriculum,
title={Curriculum learning for ab initio deep learned refractive optics},
author={Yang, Xinge and Fu, Qiang and Heidrich, Wolfgang},
journal={Nature Communications},
volume={15},
number={1},
pages={6572},
year={2024},
publisher={Nature Publishing Group UK London}
}