Digital Film and Television Technology


Funding

Supported by the National Natural Science Foundation of China (61303093, 61402278, 61472245) and the Science and Technology Commission of Shanghai Municipality (16511101300)

HDR image style transfer technique based on generative adversarial networks

1. Shanghai Film Academy, Shanghai University, Shanghai 200072, China
2. Shanghai Engineering Research Center of Motion Picture Special Effects, Shanghai University, Shanghai 200072, China

Received date: 2018-05-21

  Online published: 2018-08-31


Cite this article

Xie Zhifeng, Ye Guanhua, Yan Shuqi, He Shaorong, Ding Youdong. HDR image style transfer technique based on generative adversarial networks[J]. Journal of Shanghai University (Natural Science Edition), 2018, 24(4): 524-534. DOI: 10.12066/j.issn.1007-2861.2058

Abstract

To simplify the complex and time-consuming synthesis process of high dynamic range (HDR) images, an HDR image style transfer technique based on generative adversarial networks is proposed. First, two training sets are built: ordinary images paired with low-exposure HDR images, and ordinary images paired with high-exposure HDR images. Then, adversarial training on these sets yields two generative models, one mapping ordinary images to low-exposure HDR images and the other mapping ordinary images to high-exposure HDR images. Finally, given an input picture, the high- and low-exposure images produced by the two models are combined with the original image to synthesize an HDR file, and tone mapping produces the final HDR style-transferred image. Experimental results show that this method not only effectively solves the HDR image style transfer problem, but also demonstrates the advantages of generative adversarial networks in image editing.
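The final synthesis step described above (merging the generated low- and high-exposure frames with the original into a radiance map, then tone mapping it back to a displayable image) can be sketched in NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names are invented, and the simple hat weighting and global Reinhard operator stand in for whatever exposure-merge and tone-mapping methods the paper actually uses (cf. the histogram-adjustment tone mapping of ref. [15]). In the paper's pipeline the low/high-exposure frames would come from the two trained generators; here they are simulated from a toy radiance signal.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Debevec-style weighted merge of differently exposed LDR images
    (pixel values in [0, 1]) into a linear HDR radiance map."""
    images = [np.asarray(im, dtype=np.float64) for im in images]

    # Hat weighting: trust mid-tones, down-weight under/over-exposed pixels
    # (weight is 0 exactly at the clipped extremes 0 and 1).
    def w(z):
        return 1.0 - np.abs(2.0 * z - 1.0)

    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for im, t in zip(images, exposure_times):
        weight = w(im)
        num += weight * im / t   # radiance estimate from this exposure
        den += weight
    return num / np.maximum(den, 1e-8)

def tonemap_reinhard(hdr):
    """Global Reinhard operator L / (1 + L): compresses radiance into [0, 1)."""
    return hdr / (1.0 + hdr)

# Toy usage: one "scene" radiance captured at three exposure times
# (standing in for the generated low-exposure, original, and
# generated high-exposure frames).
radiance = np.linspace(0.05, 3.5, 8)
times = [0.25, 1.0, 4.0]
ldr_stack = [np.clip(radiance * t, 0.0, 1.0) for t in times]

hdr = merge_exposures(ldr_stack, times)   # recovers the scene radiance
ldr_out = tonemap_reinhard(hdr)           # displayable [0, 1) image
```

Because clipped pixels receive zero weight, each radiance value is recovered from whichever exposures captured it unclipped, which is precisely why the pipeline needs both a low- and a high-exposure version of the scene.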

References

[1] Portilla J, Simoncelli E P. A parametric texture model based on joint statistics of complex wavelet coefficients[J]. International Journal of Computer Vision, 2000, 40(1): 49-70.
[2] Gatys L A, Ecker A S, Bethge M. A neural algorithm of artistic style[EB/OL]. [2018-06-26]. https://arxiv.org/abs/1508.06576.
[3] Gatys L A, Ecker A S, Bethge M. Image style transfer using convolutional neural networks[C] // IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016: 2414-2423.
[4] Johnson J, Alahi A, Li F F. Perceptual losses for real-time style transfer and super-resolution[C] // European Conference on Computer Vision. 2016: 694-711.
[5] Goodfellow I J, Pouget-Abadie J, Mirza M, et al. Generative adversarial networks[EB/OL]. [2018-06-26]. https://arxiv.org/abs/1406.2661.
[6] Mirza M, Osindero S. Conditional generative adversarial nets[EB/OL]. [2018-06-26]. https://arxiv.org/pdf/1411.1784v1.pdf.
[7] Springenberg J T. Unsupervised and semi-supervised learning with categorical generative adversarial networks[C] // International Conference on Learning Representations (ICLR). 2016: 2172-2180.
[8] Chen X, Duan Y, Houthooft R, et al. InfoGAN: interpretable representation learning by information maximizing generative adversarial nets[EB/OL]. [2018-08-12]. https://arxiv.org/pdf/1606.03657.pdf.
[9] Isola P, Zhu J Y, Zhou T H, et al. Image-to-image translation with conditional adversarial networks[C] // IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017: 5967-5976.
[10] Liu M Y, Tuzel O. Coupled generative adversarial networks[C] // Conference on Neural Information Processing Systems. 2016: 469-477.
[11] Zhu J Y, Park T, Isola P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C] // IEEE International Conference on Computer Vision. 2017: 2242-2251.
[12] Ulyanov D, Vedaldi A, Lempitsky V. Instance normalization: the missing ingredient for fast stylization[EB/OL]. (2018-06-01) [2018-06-26]. https://arxiv.org/pdf/1607.08022.pdf.
[13] Li C, Zhao X Y, Xiao L M, et al. Multi-layer perception image dehazing algorithm based on generative adversarial mapping networks[J]. Journal of Computer-Aided Design & Computer Graphics, 2017, 29(10): 1835-1843. (in Chinese)
[14] Liu Y J, Dou C H, Zhao Q L, et al. Sketch-based image retrieval with conditional generative adversarial networks[J]. Journal of Computer-Aided Design & Computer Graphics, 2017, 29(12): 2336-2342. (in Chinese)
[15] Duan J, Bressan M, Dance C, et al. Tone-mapping high dynamic range images by novel histogram adjustment[J]. Pattern Recognition, 2010, 43(5): 1847-1862.
[16] Cao Z Y, Niu S Z, Zhang J W. Face restoration algorithm based on semi-supervised learning generative adversarial networks[J]. Journal of Electronics & Information Technology, 2018, 40(2): 323-330. (in Chinese)
[17] Xie Z F, Tang S, Huang D J, et al. Photographic appearance enhancement via detail-based dictionary learning[J]. Journal of Computer Science & Technology, 2017, 32(3): 417-429.