Wayfinding Behavior and Spatial Knowledge Acquisition: Are They the Same in Virtual Reality and in Real-World Environments?

Abstract: Finding one’s way is a fundamental daily activity and has been widely studied in the field of geospatial cognition. Immersive virtual reality (iVR) techniques provide new approaches for investigating wayfinding behavior and spatial knowledge acquisition. It is currently unclear, however, how wayfinding behavior and spatial knowledge acquisition in iVR differ from those in real-world environments (REs). We conducted an RE wayfinding experiment with twenty-five participants who performed a series of tasks. We then conducted an iVR experiment using the same experimental design with forty participants who completed the same tasks. Participants’ eye movements were recorded in both experiments, and verbal reports and postexperiment questionnaires were also collected. The results revealed that individuals’ wayfinding performance was largely the same between the two environments, whereas their visual attention exhibited significant differences. Participants processed visual information more efficiently in RE but searched for visual information more efficiently in iVR. For spatial knowledge acquisition, participants’ distance estimation was more accurate in iVR than in RE, whereas their direction estimation and sketch map results were not significantly different. This empirical evidence regarding the ecological validity of iVR might encourage further studies of the benefits of VR techniques in geospatial cognition research.

To cite this article: Dong, W.H., Qin, T., Yang, T.Y., Liao, H., Liu, B., Meng, L.Q., Liu, Y., 2021. Wayfinding Behavior and Spatial Knowledge Acquisition: Are They the Same in Virtual Reality and in Real-World Environments? Ann. Am. Assoc. Geogr.

DOI: 10.1080/24694452.2021.1894088

Mapping relationships between mobile phone call activity and regional function using self-organizing map

Abstract: Mobile phone data help us to understand human activities. Researchers have investigated the characteristics and relationships of human activities and regional function using information from physical and virtual spaces. However, how to establish location mapping between spaces to explore the relationships between mobile phone call activity and regional function remains unclear. In this paper, we employ a self-organizing map (SOM) to map locations with 24-dimensional activity attributes and identify relationships between users’ mobile phone call activities and regional functions. We apply mobile phone call data from Harbin, a city in northeast China, to build the location mapping relationships between user clusters of mobile phone call activity and points of interest (POI) composition in geographical space. The results indicate that for mobile phone call activities, mobile phone users are mapped to five locations that represent particular mobile phone call patterns. Regarding regional functions, we identified nine unique types of functional areas that are related to production, business, entertainment and education according to the patterns of users and POI proportions. We then explored the correlations between users and POIs for each type of area. The results of this research provide new insights into the relationships between human activity and regional functions.
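The clustering step described above can be sketched in a few lines. The example below is a minimal, illustrative SOM clustering of 24-dimensional call-activity profiles; the synthetic data, the five-node map, and the use of the third-party minisom package are assumptions for illustration and do not reproduce the study’s data, preprocessing, or SOM configuration.

```python
# Minimal SOM sketch: map synthetic 24-dimensional hourly call-activity
# profiles onto five prototype patterns (assumes the `minisom` package).
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
activity = rng.random((1000, 24))                      # one 24-hour profile per user
activity /= activity.sum(axis=1, keepdims=True)        # normalize to daily proportions

som = MiniSom(x=5, y=1, input_len=24, sigma=0.8, learning_rate=0.5, random_seed=0)
som.train_random(activity, num_iteration=5000)

# Each user is assigned to its best-matching unit, i.e. one of five call patterns.
clusters = np.array([som.winner(v)[0] for v in activity])
for k in range(5):
    print(f"pattern {k}: {np.sum(clusters == k)} users")
```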

To cite this article: Dong, W., Wang, S., Liu, Y., 2021. Mapping relationships between mobile phone call activity and regional function using self-organizing map. Computers, Environment and Urban Systems 87, 101624.

DOI: 10.1016/j.compenvurbsys.2021.101624

What is the difference between augmented reality and 2D navigation electronic maps in pedestrian wayfinding?

Abstract: Augmented reality (AR) navigation aids have become widely used in pedestrian navigation, yet few studies have verified their usability from the perspective of human spatial cognition, such as visual attention, cognitive processing, and spatial memory. We conducted an empirical study in which smartphone-based AR aids were compared with a common two-dimensional (2D) electronic map. In the eye-tracking wayfinding experiments, 73 participants used either a 2D electronic map or AR navigation aids. We statistically compared participants’ wayfinding performance, visual attention, and route memory between the two groups (AR and 2D map navigation aids). The results showed that wayfinding performance did not differ significantly between the groups. Regarding visual attention, the participants using AR tended to have significantly shorter fixation durations, greater saccade amplitudes, and smaller pupil sizes on average than the 2D map participants, which indicates lower average cognitive workloads throughout the wayfinding process. Considering attention on environmental objects, the participants using AR paid less visual attention to buildings but more to persons than the participants using 2D maps. The sketched-route results revealed that it was more difficult for the AR participants to form a clear memory of the route. We hope this study will inspire more usability research on AR navigation.

 

To cite this article: Weihua Dong, Yulin Wu, Tong Qin, Xinran Bian, Yan Zhao, Yanrou He, Yawei Xu & Cheng Yu (2021): What is the difference between augmented reality and 2D navigation electronic maps in pedestrian wayfinding?, Cartography and Geographic Information Science.

DOI: 10.1080/15230406.2021.1871646

Doctoral candidate Bing Liu from the Chair of Cartography won the Best Doctoral Colloquium Paper Award at the 6th Immersive Learning Research Network Conference (iLRN 2020).

Abstract: Navigation is a widespread geoinformation service and can be embedded in augmented reality (AR). In this work-in-progress, we aim at a user interface for an AR-based indoor navigation system that not only guides users to destinations quickly and safely but also improves users’ spatial learning. We designed an interface for indoor navigation on HoloLens, gathered feedback from users, and found that arrows are an intuitive orientation aid. Semantic meanings embedded in icons are not self-explanatory, but icons with text can serve as virtual landmarks and help with spatial learning.

To cite this paper: Liu, B., & Meng, L. (2020, June). Doctoral Colloquium—Towards a Better User Interface of Augmented Reality Based Indoor Navigation Application. In 2020 6th International Conference of the Immersive Learning Research Network (iLRN) (pp. 392-394). IEEE.

Her work, titled “Towards a Better User Interface of Augmented Reality Based Indoor Navigation Application”, is available online as a work-in-progress paper: ieeexplore.ieee.org/abstract/document/9155198

How does gender affect indoor wayfinding under time pressure?

ABSTRACT: Indoor wayfinding is an important and complex daily activity. In this study, we aimed to explore the indoor wayfinding performance of pedestrians of different genders under time pressure. We conducted a wayfinding experiment in a real-world subway station in Beijing using eye-tracking and verbal protocol methods and analyzed wayfinding efficiency, strategies and eye movement data from 38 participants. The results indicated that both male and female participants experienced more difficulty reading maps under time pressure. We also found that males consistently had higher efficiency when searching for information and could extract information from signage more efficiently than females when not under time pressure. Males were more adventurous and preferred to take risks under time pressure, while females consistently maintained a conservative strategy. These findings contribute to the understanding of gender differences in indoor wayfinding and cognition.

To cite this paper:

Yixuan Zhou, Xueyan Cheng, Lei Zhu, Tong Qin, Weihua Dong & Jiping Liu (2020) How does gender affect indoor wayfinding under time pressure?, Cartography and Geographic Information Science, 47:4, 367-380, DOI: 10.1080/15230406.2020.1760940

Comparing pedestrians’ gaze behavior in desktop and in real environments

ABSTRACT: This research is motivated by the widespread use of desktop environments in the lab and by the recent trend of conducting real-world eye-tracking experiments to investigate pedestrian navigation. Despite the significant differences between real-world and desktop environments, how pedestrians’ visual behavior differs between them is still not well understood. Here, we report a study that recorded eye movements for a total of 82 participants while they performed five common navigation tasks in an unfamiliar urban environment (N = 39) and in a desktop environment (N = 43). By analyzing where the participants allocated their visual attention, what objects they fixated on, and how they transferred their visual attention among objects during navigation, we found both similarities and significant differences in the general fixation indicators, spatial fixation distributions and attention to objects of interest. The results contribute to the ongoing debate over the validity of using desktop environments to investigate pedestrian navigation by providing insights into how pedestrians allocate their attention to visual stimuli to accomplish navigation tasks in the two environments.

To cite this paper:

Weihua Dong, Hua Liao, Bing Liu, Zhicheng Zhan, Huiping Liu, Liqiu Meng & Yu Liu (2020) Comparing pedestrians’ gaze behavior in desktop and in real environments, Cartography and Geographic Information Science, DOI: 10.1080/15230406.2020.1762513

 

How does map use differ in virtual reality and desktop-based environments?

ABSTRACT: Maps based on virtual reality (VR) are evolving and are being increasingly used in the field of geography. However, the advantages of VR over desktop-based environments (DEs) in terms of users’ map use processes are not fully understood. In this study, an experiment was conducted in which 120 participants performed map use tasks using maps and globes in VR and DE. The participants’ eye movements and questionnaires were collected to compare differences in map use performance. We analyzed the participants’ general metrics and information searching and processing metrics (e.g. response time, RT; average fixation duration, AFD; average saccade duration, ASD; saccade frequency, SF) when using maps and globes in the different environments. We found that the participants using VR processed information more efficiently (AFD_DE = 233.34 ms, AFD_VR = 173.09 ms), and the participants using DE had both a significantly shorter response time (RT_DE = 88.68 s, RT_VR = 124.05 s) and a shorter visual search time (ASD_DE = 60.78 ms, ASD_VR = 112.13 ms; SF_DE = 6.30, SF_VR = 2.07). We also found similarities in accuracy, satisfaction and readability. These results are helpful for designing VR maps that can adapt to human cognition and reflect the advantages of VR.
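For readers unfamiliar with these eye-movement metrics, the following is a small, illustrative sketch of how RT, AFD, ASD and SF can be computed from a labeled event table. The column names, synthetic values and per-second definition of saccade frequency are assumptions for illustration, not the paper’s processing pipeline.

```python
# Sketch of the general eye-movement metrics named in the abstract (RT, AFD, ASD, SF),
# computed from a hypothetical event table; data and column names are illustrative only.
import pandas as pd

# One row per detected eye-movement event for a single trial.
events = pd.DataFrame({
    "event":    ["fixation", "saccade", "fixation", "saccade", "fixation"],
    "duration": [210.0, 55.0, 180.0, 62.0, 240.0],   # milliseconds
})
response_time_s = 92.4  # trial response time (RT), seconds, logged separately

afd = events.loc[events.event == "fixation", "duration"].mean()   # average fixation duration
asd = events.loc[events.event == "saccade",  "duration"].mean()   # average saccade duration
sf  = (events.event == "saccade").sum() / response_time_s         # saccades per second

print(f"RT = {response_time_s:.2f} s, AFD = {afd:.2f} ms, ASD = {asd:.2f} ms, SF = {sf:.2f}/s")
```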

To cite this paper:

Weihua Dong, Tianyu Yang, Hua Liao & Liqiu Meng (2020) How does map use differ in virtual reality and desktop-based environments?, International Journal of Digital Earth, DOI: 10.1080/17538947.2020.1731617

Assessing Similarities and Differences between Males and Females in Visual Behaviors in Spatial Orientation Tasks

Abstract: Spatial orientation is an important task in human wayfinding. Existing research indicates sex-related similarities and differences in performance and strategies when executing spatial orientation behaviors, but few studies have investigated the similarities and differences in visual behaviors between males and females. To address this research gap, we explored visual behavior similarities and differences between males and females using an eye-tracking method. We recruited 40 participants to perform spatial orientation tasks in a desktop environment and recorded their eye-tracking data during these tasks. The results indicate that there are no significant differences between sexes in the efficiency and accuracy of spatial orientation. In terms of visual behaviors, we found that males fixated significantly longer than females on roads. Males and females had similar fixation counts on buildings, signposts, maps, and other objects, and their fixation durations were similar across all five classes. Moreover, fixation duration was well fitted by an exponential function for both males and females. The base of the exponential function fitted to males’ fixation durations was significantly lower than that fitted to females’, while no significant difference was found in the coefficient of the exponential function. Females were more effective in switching from maps to signposts, but no differences were found in switches from maps to other classes. The newfound similarities and differences between males and females in visual behavior may aid in the design of better human-centered outdoor navigation applications.
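The exponential fit mentioned above can be illustrated with a brief sketch. The functional form y = c * b^x (coefficient c, base b), the synthetic fixation durations and the binning below are assumptions for illustration; the paper’s exact model and parameter estimates are not reproduced here.

```python
# Illustrative exponential fit of a fixation-duration distribution, y = c * b**x,
# where b is the "base" and c the "coefficient" referred to in the abstract.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
durations = rng.exponential(scale=250.0, size=2000)          # synthetic fixation durations (ms)
counts, edges = np.histogram(durations, bins=20, range=(0, 1000))
centers = (edges[:-1] + edges[1:]) / 2

def exp_model(x, c, b):
    return c * b ** x

(c_hat, b_hat), _ = curve_fit(exp_model, centers, counts,
                              p0=(counts.max(), 0.99),
                              bounds=([0.0, 0.0], [np.inf, 1.0]))
print(f"coefficient c = {c_hat:.1f}, base b = {b_hat:.4f}")
```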

To cite this paper:

Dong, W.; Zhan, Z.; Liao, H.; Meng, L.; Liu, J. Assessing Similarities and Differences between Males and Females in Visual Behaviors in Spatial Orientation Tasks. ISPRS Int. J. Geo-Inf. 2020, 9, 115.

Differences in the Gaze Behaviours of Pedestrians Navigating between Regular and Irregular Road Patterns

Abstract: While a road pattern influences wayfinding and navigation, its influence on the gaze behaviours of navigating pedestrians is not well documented. In this study, we compared gaze behaviour differences between regular and irregular road patterns using eye-tracking technology. Twenty-one participants performed orientation (ORI) and shortest route selection (SRS) tasks with both road patterns. We used accuracy of answers and response time to estimate overall performance, and time to first fixation, average fixation duration, fixation count and fixation duration to estimate gaze behaviour. The results showed that participants answered more accurately with irregular road patterns. For both tasks and both road patterns, the Label areas of interest (AOIs) (including shops and signs) received quicker or greater attention. The road patterns influenced gaze behaviour for both Road AOIs and Label AOIs but exhibited a greater influence on Road AOIs in both tasks. In summary, for orientation and route selection, users are more likely to rely on labels, and irregular road patterns are important. These findings may serve as an anchor point for determining how people’s gaze behaviours differ depending on road pattern and indicate that labels and unique road patterns should be highlighted for better wayfinding and navigation.

To cite this paper:

Liu, B.; Dong, W.; Zhan, Z.; Wang, S.; Meng, L. Differences in the Gaze Behaviours of Pedestrians Navigating between Regular and Irregular Road Patterns. ISPRS Int. J. Geo-Inf. 2020, 9, 45.

DOI: 10.3390/ijgi9010045

Comparing the roles of landmark visual salience and semantic salience in visual guidance during indoor wayfinding

Abstract: Landmark visual salience (characterized by features that contrast with their surroundings and visual peculiarities) and semantic salience (characterized by features with unusual or important meaning and content in the environment) are two important factors that affect an individual’s visual attention during wayfinding. However, empirical evidence regarding which factor dominates visual guidance during indoor wayfinding is rare, especially in real-world environments. In this study, we assumed that semantic salience dominates the guidance of visual attention, meaning that semantic salience would correlate with participants’ fixations more significantly than visual salience. Notably, in previous studies, semantic salience was shown to guide visual attention in static images or familiar scenes in a laboratory environment. To validate this assumption, we first collected the eye movement data of 22 participants as they found their way through a building. We then computed landmark visual salience and semantic salience using computer vision models and questionnaires, respectively. Finally, we conducted correlation tests to verify our assumption. The results failed to validate our assumption and show that the role of salience in visual guidance during a real-world wayfinding process differs from its role in perceiving static images or scenes in a laboratory. Visual salience dominates visual attention during indoor wayfinding, but the roles of salience in visual guidance are mixed across different landmark classes and tasks. The results provide new evidence for understanding how pedestrians visually interpret landmark information during real-world indoor wayfinding.
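The correlation test at the core of this design can be sketched briefly. The example below relates salience scores to a fixation measure with a Spearman correlation; the synthetic scores, the choice of fixation duration as the measure, and the use of Spearman’s rho are assumptions for illustration and do not reproduce the paper’s salience models or statistics.

```python
# Hedged sketch: correlate landmark salience scores with a fixation measure.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_landmarks = 30
visual_salience   = rng.random(n_landmarks)   # e.g. from a computer-vision saliency model
semantic_salience = rng.random(n_landmarks)   # e.g. from questionnaire ratings
fixation_duration = 500 * visual_salience + 100 * rng.random(n_landmarks)  # ms, synthetic

for name, scores in [("visual", visual_salience), ("semantic", semantic_salience)]:
    rho, p = spearmanr(scores, fixation_duration)
    print(f"{name} salience vs. fixation duration: rho = {rho:.2f}, p = {p:.3f}")
```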

To cite this paper:

Weihua Dong, Tong Qin, Hua Liao, Yu Liu & Jiping Liu (2019) Comparing the roles of landmark visual salience and semantic salience in visual guidance during indoor wayfinding, Cartography and Geographic Information Science.

 DOI: 10.1080/15230406.2019.1697965