Chin. Opt. Lett., Vol. 16, Issue 01, 2018 · DOI: 10.3788/COL201816.013501

Fusion of the low-light-level visible and infrared images for night-vision context enhancement
Jin Zhu, Weiqi Jin, Li Li, Zhenghao Han, and Xia Wang
School of Optoelectronics, [Beijing Institute of Technology], Beijing 100081, China

Chin. Opt. Lett., 2018, 16(01): pp.013501

Topic: Other areas of optics
Keywords (OCIS codes): 350.2660, 040.3780, 100.2980, 110.3080

To improve night-vision applications that use low-light-level visible and infrared imaging, a fusion framework for night-vision context enhancement (FNCE) is proposed. First, an adaptive brightness-stretching method is proposed to enhance the visible image. Then, a hybrid multi-scale decomposition with edge-preserving filtering is proposed to decompose the source images. Finally, the fused result is obtained by combining the decomposed images under three different rules. Experimental results demonstrate that the FNCE method performs better in terms of detail (edge) preservation, contrast, sharpness, and human visual perception, and therefore achieves better night-vision context enhancement.
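The three-step pipeline described in the abstract can be sketched as below. This is an illustrative reconstruction only, since the abstract gives no formulas: a simple percentile stretch stands in for the paper's adaptive brightness stretching, a box filter stands in for the edge-preserving (e.g., guided) filter used in the hybrid decomposition, and an average-base / max-absolute-detail scheme stands in for the paper's three fusion rules. All function names and parameters here are the author's assumptions.

```python
import numpy as np

def stretch(img, low=0.01, high=0.99):
    """Illustrative brightness stretch for the dim visible image:
    map the [low, high] percentile range to [0, 1] and clip.
    (The paper's actual adaptive rule is not given in the abstract.)"""
    lo, hi = np.quantile(img, [low, high])
    return np.clip((img - lo) / max(hi - lo, 1e-6), 0.0, 1.0)

def box_blur(img, r):
    """Separable box filter via an integral image -- a cheap stand-in
    for the paper's edge-preserving filter."""
    k = 2 * r + 1
    p = np.pad(img.astype(float), r, mode="edge")
    ii = np.pad(p, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    sums = ii[k:, k:] - ii[:-k, k:] - ii[k:, :-k] + ii[:-k, :-k]
    return sums / (k * k)

def fuse(vis, ir, r_fine=2, r_coarse=8):
    """Two-scale decomposition and fusion (hypothetical rules):
    average the base layers, pick the max-absolute detail per pixel."""
    vis = stretch(vis)                       # step 1: enhance visible image
    bV1, bI1 = box_blur(vis, r_fine), box_blur(ir, r_fine)
    bV2, bI2 = box_blur(vis, r_coarse), box_blur(ir, r_coarse)
    dV0, dI0 = vis - bV1, ir - bI1           # fine-scale detail layers
    dV1, dI1 = bV1 - bV2, bI1 - bI2          # coarse-scale detail layers
    base = 0.5 * (bV2 + bI2)                 # rule 1: average base layers
    d0 = np.where(np.abs(dV0) >= np.abs(dI0), dV0, dI0)  # rule 2: max-abs fine
    d1 = np.where(np.abs(dV1) >= np.abs(dI1), dV1, dI1)  # rule 3: max-abs coarse
    return np.clip(base + d1 + d0, 0.0, 1.0)
```

For example, `fuse(visible, infrared)` on two grayscale arrays in [0, 1] returns a fused image of the same shape; in a real implementation the box filter would be replaced by a guided filter so that strong edges survive the decomposition.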

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.



Posted online: 2017/12/5

Get Citation: Jin Zhu, Weiqi Jin, Li Li, Zhenghao Han, and Xia Wang, "Fusion of the low-light-level visible and infrared images for night-vision context enhancement," Chin. Opt. Lett. 16(01), 013501(2018)

Note: This work was supported by the National Natural Science Foundation of China (No. 61231014), the Foundation of the Army Armaments Department of China (No. 6140414050327), and the Foundation of the Science and Technology on Low-Light-Level Night Vision Laboratory (No. BJ2017001).


