[ECCV2024] Watching it in Dark: A Target-aware Representation Learning Framework for High-Level Vision Tasks in Low Illumination
Yunan Li
Yihao Zhang
Shoude Li
Long Tian
Dou Quan
Chaoneng Li
Qiguang Miao
Xidian University; Xi'an Key Laboratory of Big Data and Intelligent Vision
Code | Paper | Supp
- First Release.
- Release Code of Image Classification.
  - ResNet18 on CODaN
  - ResNet50 on COCO&ExDark
- Release Code of Object Detection.
- Release Code of Action Recognition.
| Task | Model |
|---|---|
| Object Detection | Pre-Trained YOLOv5m |
| Object Detection | CUT Darken Model |
| Image Classification | Our Pre-trained Model |
| Image Classification | ResNet-18 Baseline |
| Image Classification | CUT Darken Model |
| Action Recognition | ... |
We utilized the VOCO dataset (a combination of parts of the VOC and COCO datasets; the exact composition can be found in the Supplementary, and we will also upload our training data in a few days). You can apply the Zero-DCE method to enhance the low-light data in the test night folder. For darkening the data in the train folder, you may either train the CUT model yourself on unpaired normal-light and low-light data to darken the normal-light data, or directly use our pre-trained model parameters.
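For reference, Zero-DCE brightens an image by iteratively applying per-pixel quadratic curves predicted by its DCE-Net. The sketch below shows only the curve-application step and assumes you already have the predicted curve parameters; use the official Zero-DCE code for the full pipeline.

```python
import torch

def apply_zero_dce_curves(x: torch.Tensor, curves: torch.Tensor) -> torch.Tensor:
    """Iteratively apply Zero-DCE's light-enhancement curves.

    x: low-light image in [0, 1], shape (B, 3, H, W)
    curves: curve parameters predicted by DCE-Net, shape (B, 3 * n_iter, H, W)
    """
    for a in torch.chunk(curves, curves.shape[1] // 3, dim=1):
        x = x + a * x * (1 - x)  # LE(x) = x + A * x * (1 - x)
    return x
```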
We use the pre-trained YOLOv5m model, which you can download and place directly in the ./weights folder.
You can run the following command to train the model:
```bash
python train_byol.py --weights weights/yolov5m.pt --cfg models/yolov5m.yaml --data data/Exdark_night.yaml --batch-size 8 --epochs 30 --imgsz 608 --hyp data/hyps/hyp.scratch-high.yaml --back_ratio 0.3 --byol_weight 0.1
```

`--back_ratio` specifies the background occlusion ratio, and `--byol_weight` specifies the weight for contrastive learning.
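As a rough mental model (this is not the repository's exact code), the total objective combines the standard YOLOv5 detection loss with a BYOL-style contrastive term scaled by `--byol_weight`, with a `--back_ratio` fraction of the background occluded before the contrastive views are formed. The function below is the standard BYOL regression loss; the commented combination line uses hypothetical names.

```python
import torch
import torch.nn.functional as F

def byol_loss(online_pred: torch.Tensor, target_proj: torch.Tensor) -> torch.Tensor:
    """Standard BYOL regression loss: 2 - 2 * cosine similarity between the
    online network's prediction and the target network's projection."""
    p = F.normalize(online_pred, dim=-1)
    z = F.normalize(target_proj, dim=-1)
    return (2 - 2 * (p * z).sum(dim=-1)).mean()

# Illustrative combination (names hypothetical, not from this repo):
# total_loss = detection_loss + byol_weight * byol_loss(p_online, z_target)
```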
You can run the following command to validate the model:
```bash
python val.py --data data/Exdark_night.yaml --batch-size 8 --weights runs/train/exp1/weights/best.pt --imgsz 608 --task test --verbose
```

We utilized the CODaN dataset. You can apply the Zero-DCE method to enhance the low-light data in the test_night folder. For darkening the data in the train folder, you may either train the CUT model yourself on unpaired normal-light and low-light data to darken the normal-light training data, or directly use our pre-trained model parameters.
Of course, you can also download our preprocessed CODaN dataset directly and put it under ./classification/resnet18/data/. In this version, the test_night_zdce folder contains the low-light test data enhanced with Zero-DCE, and the train_day2night folder contains the darkened training data.
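If you darken the data yourself, inference amounts to pushing each normal-light image through the trained CUT generator. A minimal sketch, assuming a PyTorch generator `netG` loaded from a CUT checkpoint (CUT-style models typically expect inputs normalized to [-1, 1]):

```python
import os
import torch
from PIL import Image
from torchvision import transforms

def darken_folder(netG: torch.nn.Module, src_dir: str, dst_dir: str,
                  device: str = "cuda") -> None:
    """Run every image in src_dir through a trained CUT generator."""
    to_tensor = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),  # map to [-1, 1]
    ])
    netG.eval().to(device)
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        img = to_tensor(Image.open(os.path.join(src_dir, name)).convert("RGB"))
        with torch.no_grad():
            fake_night = netG(img.unsqueeze(0).to(device))[0]
        out = (fake_night * 0.5 + 0.5).clamp(0, 1).cpu()  # back to [0, 1]
        transforms.ToPILImage()(out).save(os.path.join(dst_dir, name))

# e.g. darken_folder(netG, "data/train", "data/train_day2night")
```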
We use a pre-trained ResNet-18 as the baseline, which you can download and place in ./classification/resnet18/checkpoints/baseline_resnet.
Run train.sh in ./classification/resnet18 or the following command to start training:
```bash
python train.py --use_BYOL \
    --checkpoint 'checkpoints/baseline_resnet/model_best.pt' \
    --experiment 'your_own_folder'
```

Use `--checkpoint` to specify the pre-trained model and `--experiment` to set the storage location for model checkpoints and logs.
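For readers unfamiliar with what `--use_BYOL` enables: BYOL trains an online network against a target network that is an exponential moving average (EMA) of it. A minimal sketch of that mechanic, with illustrative names that are not the repository's:

```python
import copy
import torch

class BYOLWrapper(torch.nn.Module):
    """Illustrative two-branch BYOL setup around a backbone encoder."""

    def __init__(self, encoder: torch.nn.Module, tau: float = 0.996):
        super().__init__()
        self.online = encoder
        self.target = copy.deepcopy(encoder)
        for p in self.target.parameters():
            p.requires_grad = False  # target is updated only by EMA
        self.tau = tau

    @torch.no_grad()
    def update_target(self) -> None:
        # EMA update after each optimizer step: xi <- tau*xi + (1-tau)*theta
        for po, pt in zip(self.online.parameters(), self.target.parameters()):
            pt.data.mul_(self.tau).add_(po.data, alpha=1 - self.tau)
```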
Our training log is provided in ./classification/resnet18/checkpoints/our_train_log.txt, which you can use as a reference.
Run test.sh in ./classification/resnet18 to evaluate the model's performance or to validate our pre-trained model.
```bash
python test.py --checkpoint 'checkpoints/train/model_best.pt'
```
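Conceptually, the evaluation is a plain top-1 accuracy loop like the sketch below (names are illustrative, not test.py's actual code):

```python
import torch

@torch.no_grad()
def top1_accuracy(model: torch.nn.Module, loader, device: str = "cuda") -> float:
    """Fraction of test images whose highest-scoring class is correct."""
    model.eval()
    correct = total = 0
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=1)
        correct += (preds == labels.to(device)).sum().item()
        total += labels.numel()
    return correct / total
```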
ResNet-50 on COCO&ExDark: coming soon.

Action Recognition: coming soon.
If our work is useful for your research, please consider citing:
```bibtex
@inproceedings{li2025watching,
  title={Watching it in Dark: A Target-Aware Representation Learning Framework for High-Level Vision Tasks in Low Illumination},
  author={Li, Yunan and Zhang, Yihao and Li, Shoude and Tian, Long and Quan, Dou and Li, Chaoneng and Miao, Qiguang},
  booktitle={ECCV},
  year={2024}
}
```
This work is heavily based on CIConv, YOLOv5, ARID and Similarity Min-Max. Thanks to all the authors for their great work.

