Research Articles

Axis rules of VR film editing of scenes

Tingting ZHANG, Yan ZHANG, Feng TIAN

1. Shanghai Film Academy, Shanghai University, Shanghai 200072, China
2. Shanghai Engineering Research Center of Motion Picture Special Effects, Shanghai University, Shanghai 200072, China

Received date: 2019-04-23
Online published: 2019-12-31

Abstract

A virtual reality (VR) image viewpoint over-axis evaluation experiment was designed to explore the feasibility of applying the montage design elements of conventional film to VR movies. By analysing objective data (eye tracking and spatial markers) together with subjective data (visual continuity perception, spatial perception, immersion, and comfort), the influence of over-axis shot assembly in VR images on the audience is studied, and feasible axis rules are summarized. The experimental results show that VR images are not limited by the axis rule of traditional film; rather, they follow a distinct set of rules. When viewing VR images at different angles and different speeds, the objective spatial-marker deviation, the visual tracking degree, and the subjective scores all remained within acceptable limits. This demonstrates that shooting across the traditional axis is feasible in VR formats, and no discomfort was reported by any participant in this study.
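The abstract does not spell out how the two objective measures are computed. As a minimal sketch, assuming per-frame gaze and marker positions are available as direction vectors from the viewer (everything here, including the 10° tracking threshold, is an illustrative assumption rather than the paper's method), the spatial-marker deviation and a visual-tracking degree could be derived like this:

```python
import numpy as np

# Illustrative only: gaze_dirs would come from the HMD eye tracker and
# marker_dirs from the scene's spatial markers; neither the data layout
# nor the threshold below is specified by the paper.

def angular_deviation_deg(gaze_dirs: np.ndarray, marker_dirs: np.ndarray) -> np.ndarray:
    """Per-frame angle (degrees) between gaze and marker directions."""
    gaze = gaze_dirs / np.linalg.norm(gaze_dirs, axis=1, keepdims=True)
    marker = marker_dirs / np.linalg.norm(marker_dirs, axis=1, keepdims=True)
    cos = np.clip(np.sum(gaze * marker, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def tracking_degree(deviation_deg: np.ndarray, threshold_deg: float = 10.0) -> float:
    """Fraction of frames in which gaze stays within threshold_deg of the marker."""
    return float(np.mean(deviation_deg < threshold_deg))

# Compare tracking before and after a simulated over-axis cut.
rng = np.random.default_rng(0)
before = angular_deviation_deg(rng.normal(size=(300, 3)), rng.normal(size=(300, 3)))
after = angular_deviation_deg(rng.normal(size=(300, 3)), rng.normal(size=(300, 3)))
print(f"tracking degree before cut: {tracking_degree(before):.2f}, "
      f"after cut: {tracking_degree(after):.2f}")
```

Working on the viewing sphere (angles between directions rather than screen-space distances) sidesteps the distortion of equirectangular coordinates, which is why a deviation measure of this kind is a natural fit for 360° footage.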

Cite this article

Tingting ZHANG, Yan ZHANG, Feng TIAN. Axis rules of VR film editing of scenes[J]. Journal of Shanghai University, 2019, 25(6): 888-897. DOI: 10.12066/j.issn.1007-2861.2160
