Eye-tracking enables researchers to conduct complex analyses of human behavior. With the recent introduction of eye-tracking into consumer-grade virtual reality headsets, the barrier to entry for visual attention analysis in virtual environments has been lowered significantly. Whether for arranging artwork in a virtual museum, posting banners for virtual events, or placing advertisements in virtual worlds, analyzing visual attention patterns provides a powerful means of guiding visual element placement.
In this work, we propose a novel data-driven optimization approach for automatically analyzing visual attention and placing visual elements in 3D virtual environments. Using an eye-tracking virtual reality headset, we collect eye-tracking data, which we use to train a regression model for predicting visual attention. We then use the gaze durations predicted by our regressors to optimize the placement of visual elements with respect to specified visual attention and design goals. Through experiments in several virtual environments, we demonstrate the effectiveness of our optimization approach for predicting visual attention and for placing visual elements in different practical scenarios. Our approach is implemented as a plug-in that level designers can use to automatically populate 3D virtual environments with visual elements.
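To make the two-stage pipeline described above concrete, the following is a minimal sketch, not the authors' implementation: it fits a regressor mapping hypothetical placement features to predicted gaze duration, then searches candidate placements for the one whose predicted duration best matches a designer-specified attention target. The feature set, the synthetic training data, the target value, and the choice of scikit-learn's RandomForestRegressor are all illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stage 1: train a gaze-duration regressor from collected eye-tracking data.
# Hypothetical features per candidate placement: [distance to viewer path,
# angular offset from the typical view direction, local scene clutter].
X_train = rng.uniform(size=(500, 3))
# Hypothetical recorded gaze durations (seconds) for those placements.
y_train = 2.0 * np.exp(-3.0 * X_train[:, 0]) + rng.normal(0.0, 0.1, 500)

regressor = RandomForestRegressor(n_estimators=100, random_state=0)
regressor.fit(X_train, y_train)

# Stage 2: optimize placement against a visual-attention goal by sampling
# candidate placements and picking the one whose predicted gaze duration is
# closest to a designer-specified target (e.g., 1.5 seconds of attention).
target_duration = 1.5
candidates = rng.uniform(size=(1000, 3))
predicted = regressor.predict(candidates)
best = candidates[np.argmin(np.abs(predicted - target_duration))]

print("Chosen placement features:", best)

In practice the candidate search could be replaced by any optimizer over valid placements in the scene; the sampling loop here is only meant to show how the regressor's predictions feed the placement objective.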
@inproceedings{vr19,
  author    = {Alghofaili, Rawan and Solah, Michael and Huang, Haikun and Sawahata, Yasuhito and Pomplun, Marc and Yu, Lap-Fai},
  title     = {Optimizing Visual Element Placement via Visual Attention Analysis},
  booktitle = {IEEE Virtual Reality},
  year      = {2019},
  location  = {Osaka, Japan}
}
This research is supported by the National Science Foundation under award number 1565978. We thank the anonymous reviewers for their constructive comments.