Abstract: Finding one’s way is a fundamental daily activity and has been widely studied in the field of geospatial cognition. Immersive virtual reality (iVR) techniques provide new approaches for investigating wayfinding behavior and spatial knowledge acquisition. It is currently unclear, however, how wayfinding behavior and spatial knowledge acquisition in iVR differ from those in real-world environments (REs). We conducted an RE wayfinding experiment with twenty-five participants who performed a series of tasks. We then conducted an iVR experiment using the same experimental design with forty participants who completed the same tasks. Participants’ eye movements were recorded in both experiments. In addition, verbal reports and postexperiment questionnaires were collected. The results revealed that individuals’ wayfinding performance was largely the same between the two environments, whereas their visual attention exhibited significant differences. Participants processed visual information more efficiently in the RE but searched for visual information more efficiently in iVR. For spatial knowledge acquisition, participants’ distance estimation was more accurate in iVR than in the RE. Participants’ direction estimation and sketch map results, however, were not significantly different. This empirical evidence regarding the ecological validity of iVR might encourage further studies of the benefits of VR techniques in geospatial cognition research.
To cite this article: Dong, W.H., Qin, T., Yang, T.Y., Liao, H., Liu, B., Meng, L.Q., Liu, Y., Wayfinding Behavior and Spatial Knowledge Acquisition: Are They the Same in Virtual Reality and in Real-World Environments? Ann. Am. Assoc. Geogr., 21.
Abstract: Mobile phone data help us to understand human activities. Researchers have investigated the characteristics and relationships of human activities and regional function using information from physical and virtual spaces. However, how to establish location mapping between spaces to explore the relationships between mobile phone call activity and regional function remains unclear. In this paper, we employ a self-organizing map (SOM) to map locations with 24-dimensional activity attributes and identify relationships between users’ mobile phone call activities and regional functions. We apply mobile phone call data from Harbin, a city in northeast China, to build the location mapping relationships between user clusters of mobile phone call activity and points of interest (POI) composition in geographical space. The results indicate that for mobile phone call activities, mobile phone users are mapped to five locations that represent particular mobile phone call patterns. Regarding regional functions, we identified nine unique types of functional areas that are related to production, business, entertainment and education according to the patterns of users and POI proportions. We then explored the correlations between users and POIs for each type of area. The results of this research provide new insights into the relationships between human activity and regional functions.
To cite this article: Dong, W., Wang, S., Liu, Y., 2021. Mapping relationships between mobile phone call activity and regional function using self-organizing map. Computers, Environment and Urban Systems 87, 101624.
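The study above maps locations with 24-dimensional (hourly) activity attributes onto a self-organizing map. A minimal SOM sketch in NumPy illustrates the idea; the grid size, iteration counts, and the random "call activity profiles" below are illustrative assumptions, not the paper's implementation or data.

```python
import numpy as np

def train_som(data, grid_shape=(5, 5), n_iters=500, lr0=0.5, sigma0=2.0, seed=0):
    """Train a minimal SOM: each grid node holds a weight vector in input space."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    weights = rng.random((rows * cols, data.shape[1]))
    # Grid coordinates of each node, used by the neighborhood function.
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    for t in range(n_iters):
        lr = lr0 * np.exp(-t / n_iters)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / n_iters)  # shrinking neighborhood radius
        x = data[rng.integers(len(data))]      # one random training sample
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
        dist2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
        h = np.exp(-dist2 / (2 * sigma ** 2))  # Gaussian neighborhood weights
        weights += lr * h[:, None] * (x - weights)
    return weights

def map_locations(data, weights):
    """Assign each activity profile to its best-matching SOM node (a cluster id)."""
    return np.argmin(
        np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2), axis=1
    )

# Hypothetical 24-dimensional profiles (one value per hour of the day).
profiles = np.random.default_rng(1).random((200, 24))
weights = train_som(profiles)
clusters = map_locations(profiles, weights)
```

Profiles assigned to the same node (or nearby nodes) share similar temporal call patterns, which is what lets SOM nodes act as "locations" representing particular activity types.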
Abstract: Augmented reality (AR) navigation aids have become widely used in pedestrian navigation, yet few studies have verified their usability from the perspective of human spatial cognition, such as visual attention, cognitive processing, and spatial memory. We conducted an empirical study in which smartphone-based AR aids were compared with a common two-dimensional (2D) electronic map. We conducted eye-tracking wayfinding experiments in which 73 participants used either a 2D electronic map or AR navigation aids. We statistically compared participants’ wayfinding performance, visual attention, and route memory between the two groups (AR and 2D map navigation aids). The results showed that their wayfinding performance did not differ significantly. Regarding visual attention, the participants using AR tended to have significantly shorter fixation durations, greater saccade amplitudes, and smaller pupil sizes on average than the 2D map participants, which indicates lower average cognitive workloads throughout the wayfinding process. Considering attention on environmental objects, the participants using AR paid less visual attention to buildings but more to persons than the participants using 2D maps. Sketched-route results revealed that it was more difficult for AR participants to form a clear memory of the route. The aim of this study is to inspire more usability research on AR navigation.
To cite this article: Weihua Dong, Yulin Wu, Tong Qin, Xinran Bian, Yan Zhao, Yanrou He, Yawei Xu & Cheng Yu (2021): What is the difference between augmented reality and 2D navigation electronic maps in pedestrian wayfinding?, Cartography and Geographic Information Science.
Abstract: Navigation is a widespread geoinformation service and can be embedded in augmented reality (AR). In this work in progress, we aim to design a user interface for an AR-based indoor navigation system that not only guides users to destinations quickly and safely but also improves users’ spatial learning. We designed an interface for indoor navigation on HoloLens, gathered feedback from users, and found that arrows are an intuitive orientation aid. Semantic meanings embedded in icons are not self-explanatory, but icons with text can serve as virtual landmarks and help with spatial learning.
To cite this paper: Liu, B., & Meng, L. (2020, June). Doctoral Colloquium—Towards a Better User Interface of Augmented Reality Based Indoor Navigation Application. In 2020 6th International Conference of the Immersive Learning Research Network (iLRN) (pp. 392-394). IEEE.
The work-in-progress paper, “Towards a Better User Interface of Augmented Reality Based Indoor Navigation Application,” is available online: ieeexplore.ieee.org/abstract/document/9155198
Abstract: Traditional map interaction methods mainly rely on mouse-and-keyboard control or touch-device control; little research has explored map interaction through eye-movement control based on the human visual channel. Eye-movement data can reveal a person’s mental state and interests, so using them as input for map interaction can improve the reliability and convenience of applications. We first used a Tobii EyeX eye tracker to acquire eye-movement data as the input for map interaction and proposed a fixation-point filtering algorithm and a fixation polygon-location algorithm. We then designed gaze-controlled widgets and their interaction responses, which solved the problems of redundant fixation points and delayed interaction feedback in gaze-based map interaction, and developed a prototype system for gaze-controlled interactive maps. Finally, we evaluated the gaze-interaction algorithms and the prototype system by comparing users’ completion times and performance on the same map-browsing tasks under gaze control and mouse control.
To cite this paper: Zhu, L., Wang, S., Yuan, W., et al. Design of Interactive Maps Controlled by Eye Movements [J]. Geomatics and Information Science of Wuhan University, 2020, 45(5): 736-743.
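The fixation-point filtering step described above groups noisy raw gaze samples into stable fixations before they are used as interaction input. A common dispersion-threshold approach (I-DT) sketches the idea; the threshold values and sample data below are hypothetical, and this is not necessarily the paper's exact algorithm.

```python
def detect_fixations(gaze, max_dispersion=30.0, min_points=6):
    """Group raw gaze samples into fixations using a dispersion threshold (I-DT).

    gaze: sequence of (x, y) screen coordinates sampled at a fixed rate.
    Returns a list of fixation centroids (x, y).
    """
    fixations = []
    i = 0
    while i + min_points <= len(gaze):
        window = list(gaze[i:i + min_points])
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_dispersion:
            # Grow the window while dispersion stays under the threshold.
            j = i + min_points
            while j < len(gaze):
                xs.append(gaze[j][0]); ys.append(gaze[j][1])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                    xs.pop(); ys.pop()
                    break
                j += 1
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j
        else:
            i += 1  # no fixation starts here; slide the window forward
    return fixations

# A stable cluster of samples followed by a saccade to a second cluster.
samples = [(100 + k % 3, 200 + k % 2) for k in range(10)] + \
          [(400 + k % 3, 500 + k % 2) for k in range(10)]
print(len(detect_fixations(samples)))  # → 2
```

Filtering like this removes the fixation-point redundancy the abstract mentions, so that only stable fixations (rather than every raw sample) trigger map-widget responses.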
ABSTRACT: Indoor wayfinding is an important and complex daily activity. In this study, we aimed to explore the indoor wayfinding performance of pedestrians of different genders under time pressure. We conducted a wayfinding experiment in a real-world subway station in Beijing using eye-tracking and verbal protocol methods and analyzed wayfinding efficiency, strategies, and eye movement data from 38 participants. The results indicated that both male and female participants experienced more difficulty reading maps under time pressure. We also found that males consistently had higher efficiency when searching for information and could extract information from signage more efficiently than females when not under time pressure. Males were more adventurous and preferred to take risks under time pressure, while females consistently maintained a conservative strategy. These findings contribute to the understanding of gender differences in indoor wayfinding and cognition.
To cite this paper: Yixuan Zhou, Xueyan Cheng, Lei Zhu, Tong Qin, Weihua Dong & Jiping Liu (2020) How does gender affect indoor wayfinding under time pressure?, Cartography and Geographic Information Science, 47:4, 367-380, DOI: 10.1080/15230406.2020.1760940
ABSTRACT: This research is motivated by the widespread use of desktop environments in the lab and by the recent trend of conducting real-world eye-tracking experiments to investigate pedestrian navigation. Despite the existing significant differences between the real world and the desktop environments, how pedestrians’ visual behavior in real environments differs from that in desktop environments is still not well understood. Here, we report a study that recorded eye movements for a total of 82 participants while they were performing five common navigation tasks in an unfamiliar urban environment (N = 39) and in a desktop environment (N = 43). By analyzing where the participants allocated their visual attention, what objects they fixated on, and how they transferred their visual attention among objects during navigation, we found similarities and significant differences in the general fixation indicators, spatial fixation distributions and attention to the objects of interest. The results contribute to the ongoing debate over the validity of using desktop environments to investigate pedestrian navigation by providing insights into how pedestrians allocate their attention to visual stimuli to accomplish navigation tasks in the two environments.
To cite this paper: Weihua Dong, Hua Liao, Bing Liu, Zhicheng Zhan, Huiping Liu, Liqiu Meng & Yu Liu (2020) Comparing pedestrians’ gaze behavior in desktop and in real environments, Cartography and Geographic Information Science, DOI: 10.1080/15230406.2020.1762513
ABSTRACT: Maps based on virtual reality (VR) are evolving and are being increasingly used in the field of geography. However, the advantages of VR over desktop-based environments (DEs) in terms of users’ map use processes are not fully understood. In this study, an experiment was conducted in which 120 participants performed map use tasks using maps and globes in VR and DE. The participants’ eye movements and questionnaires were collected to compare differences in map use performance. We analyzed the general metrics and the information searching and processing metrics of participants (e.g., response time, RT; average fixation duration, AFD; average saccade duration, ASD; saccade frequency, SF) using maps and globes in the different environments. We found that the participants using VR processed information more efficiently (AFD_DE = 233.34 ms, AFD_VR = 173.09 ms), and the participants using DE had both a significantly shorter response time (RT_DE = 88.68 s, RT_VR = 124.05 s) and a shorter visual search time (ASD_DE = 60.78 ms, ASD_VR = 112.13 ms; SF_DE = 6.30, SF_VR = 2.07). We also found similarities in accuracy, satisfaction, and readability. These results are helpful for designing VR maps that can adapt to human cognition and reflect the advantages of VR.
To cite this paper: (2020) How does map use differ in virtual reality and desktop-based environments?, International Journal of Digital Earth, DOI: 10.1080/17538947.2020.1731617