Journal of Shanghai University
HDR image style transfer technique based on generative adversarial networks
Received date: 2018-05-21
Online published: 2018-08-31
In view of the complex and time-consuming synthesis process of high dynamic range (HDR) images, a novel HDR image style transfer technique based on generative adversarial networks is proposed. The process is as follows. First, two training sets are built for the generative adversarial networks: ordinary images paired with low-exposure HDR images, and ordinary images paired with high-exposure HDR images. Then, by training the generative adversarial networks, two generative models are established: one mapping ordinary images to low-exposure HDR images, and one mapping ordinary images to high-exposure HDR images. Finally, an input image is fed into both models; the generated high- and low-exposure images are combined with the original image to synthesize an HDR file, and tone mapping produces the final HDR-style image. This method not only effectively solves the problem of HDR image style transfer, but also demonstrates the advantages of generative adversarial networks in image editing.
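The three-stage pipeline in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the functions `g_low` and `g_high` are hypothetical placeholders standing in for the two trained GAN generators, the exposure merge uses a simple hat-weighted radiance estimate, and a global Reinhard-style operator stands in for the tone-mapping step.

```python
import numpy as np

def g_low(img):
    # Placeholder for the trained "ordinary -> low-exposure HDR" generator.
    return np.clip(img * 0.5, 0.0, 1.0)

def g_high(img):
    # Placeholder for the trained "ordinary -> high-exposure HDR" generator.
    return np.clip(img * 2.0, 0.0, 1.0)

def merge_exposures(images, exposures):
    # Weighted radiance estimate: mid-range pixels are weighted highest
    # (hat function), each image is normalized by its exposure time.
    acc = np.zeros_like(images[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposures):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weighting in [0, 1]
        acc += w * img / t
        wsum += w
    return acc / np.maximum(wsum, 1e-8)

def tone_map_reinhard(hdr):
    # Global Reinhard operator L / (1 + L), applied per channel.
    return hdr / (1.0 + hdr)

def hdr_style_transfer(img):
    # Stage 1-2: generate low/high-exposure variants with the two models.
    low, high = g_low(img), g_high(img)
    # Stage 3: merge with the original and tone-map the result.
    hdr = merge_exposures([low, img, high], exposures=[0.5, 1.0, 2.0])
    return tone_map_reinhard(hdr)

img = np.full((4, 4, 3), 0.6)   # toy "ordinary" input image in [0, 1]
out = hdr_style_transfer(img)
print(out.shape)                 # (4, 4, 3)
```

In practice the placeholder generators would be replaced by the trained image-to-image networks, and the merge and tone-mapping stages by standard HDR assembly tools.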
XIE Zhifeng, YE Guanhua, YAN Shuqi, HE Shaorong, DING Youdong. HDR image style transfer technique based on generative adversarial networks[J]. Journal of Shanghai University, 2018, 24(4): 524-534. DOI: 10.12066/j.issn.1007-2861.2058