DeepSpace: Mood-Based Image Texture Generation for Virtual Reality From Music

Misha Sra, Prashanth Vijayaraghavan, Ognjen (Oggi) Rudovic, Pattie Maes, Deb Roy; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2017, pp. 41-50

Abstract


Affective virtual spaces are of interest for many VR applications in wellbeing, art, education, and entertainment. Creating content for virtual environments is a laborious task involving skills like 3D modeling, texturing, animation, lighting, and programming. One way to facilitate content creation is to automate sub-processes like assigning textures and materials. To this end, we introduce the DeepSpace approach, which automatically creates and applies textures to objects in procedurally created 3D scenes. The main novelty of our approach is that it uses music to automatically create kaleidoscopic textures for virtual environments designed to elicit emotional responses in users. Specifically, DeepSpace exploits the modeling power of deep neural networks, which have shown strong performance in image generation tasks, to achieve mood-based image generation. Our study results indicate that the virtual environments created by DeepSpace elicit positive emotions and achieve high presence scores.
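
The abstract outlines a pipeline in which music-derived mood drives texture generation for procedurally built 3D scenes. As a rough, self-contained illustration only, the sketch below replaces the paper's deep neural generator with a toy noise-based kaleidoscope; the classify_mood thresholds, mood labels, palette values, and function names are hypothetical stand-ins and are not taken from the paper.

import numpy as np

def classify_mood(valence, arousal):
    # Hypothetical mapping from valence/arousal scores (e.g., extracted from
    # music) to a coarse mood label; the paper uses learned models instead.
    if valence >= 0.5:
        return "happy" if arousal >= 0.5 else "calm"
    return "angry" if arousal >= 0.5 else "sad"

# Hypothetical mood-to-palette mapping (RGB base colors in [0, 1]).
PALETTES = {
    "happy": (1.0, 0.8, 0.2),
    "calm":  (0.3, 0.7, 0.9),
    "angry": (0.9, 0.2, 0.1),
    "sad":   (0.2, 0.3, 0.6),
}

def kaleidoscopic_texture(mood, size=256, seed=0):
    # Stand-in for the paper's neural image generator: a random patch is
    # mirrored across both axes to produce 4-fold symmetry, then tinted
    # with the mood-dependent base color.
    rng = np.random.default_rng(seed)
    half = size // 2
    patch = rng.random((half, half))
    top = np.hstack([patch, patch[:, ::-1]])   # mirror left-right
    full = np.vstack([top, top[::-1, :]])      # mirror top-bottom
    r, g, b = PALETTES[mood]
    return np.stack([full * r, full * g, full * b], axis=-1)

if __name__ == "__main__":
    mood = classify_mood(valence=0.8, arousal=0.6)  # e.g., scores from music analysis
    texture = kaleidoscopic_texture(mood)
    print(mood, texture.shape)                      # happy (256, 256, 3)

The resulting array could then be written out as an image and assigned as a material texture to objects in a procedurally generated scene, which is the role the generated images play in the approach described above.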

Related Material


[bibtex]
@InProceedings{Sra_2017_CVPR_Workshops,
author = {Sra, Misha and Vijayaraghavan, Prashanth and Rudovic, Ognjen (Oggi) and Maes, Pattie and Roy, Deb},
title = {DeepSpace: Mood-Based Image Texture Generation for Virtual Reality From Music},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {July},
year = {2017}
}