Abstract: Modeling the visual saliency of driving scenes is an important research direction in intelligent driving. Existing visual saliency models built for static and virtual scenes cannot accommodate the real-time, dynamic, and task-driven nature of road scenes in real driving environments, and building a visual saliency model for dynamic scenes in real driving environments remains a research challenge. Starting from the characteristics of the driving environment and drivers' visual cognition, this paper extracts low-level, high-level, and dynamic visual features of road scenes and, combining them with two important influencing factors, speed and road curvature, builds a multi-feature logistic regression (LR) model to compute the visual saliency of driving scenes. Evaluated with the AUC metric, the model reaches an accuracy of 90.43% and clearly outperforms traditional algorithms.
Author: Tong Qin
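For readers who want a concrete starting point, the sketch below shows in broad strokes how a multi-feature logistic regression saliency model can be fit and evaluated with AUC. It is only an illustration under assumed data: the feature matrix, its five columns (standing in for the low-level, high-level, dynamic, speed, and curvature features), and the fixation labels are synthetic placeholders, not the paper's data or code.

```python
# Minimal sketch (not the paper's implementation): fit a multi-feature
# logistic regression saliency model and evaluate it with AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical feature matrix: one row per scene location, with columns for
# low-level, high-level and dynamic visual features plus speed and road curvature.
X = rng.normal(size=(10_000, 5))
y = (rng.random(10_000) < 0.3).astype(int)  # 1 = fixated (salient), 0 = not fixated

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# AUC over the predicted saliency probabilities, matching the abstract's metric.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUC = {auc:.4f}")
```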
A Naive Bayes Method for Eye-Movement Recognition of Linear Map Features
Abstract: Eye-tracking technology has been widely applied in human-computer interaction and in recognizing and predicting user behavior, but automatically recognizing users' map-reading eye-movement behavior remains challenging. This paper proposes a naive Bayes classification method to recognize users' eye-movement behavior when reading linear map features. We first collected eye-movement data from 25 participants as they read maps, then extracted 250 eye-movement features, discretized them, and ranked them with the minimum-redundancy maximum-relevance (mRMR) feature selection method. The results show that with the information-entropy criterion the classification accuracy peaks at 78.27% with m = 5 features, while with the information-difference criterion it peaks at 77.01% with m = 4 features. The proposed naive Bayes method outperforms existing methods in accuracy, and the reduced number of features substantially improves the algorithm's execution efficiency. The proposed method for recognizing map-reading eye-movement behavior lays a foundation for future research on gaze-controlled interactive maps.
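As a rough illustration of the pipeline described above (not the paper's code), the sketch below discretizes synthetic eye-movement features, ranks them with mutual information as a simplified stand-in for the mRMR criterion, and classifies with a naive Bayes model; the feature values, labels, and the choice of m are placeholders.

```python
# Sketch only: discretize eye-movement features, rank them, keep the top m,
# and classify reading behaviour with naive Bayes. Mutual information is used
# here as a simplified stand-in for the paper's mRMR ranking.
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.feature_selection import mutual_info_classif
from sklearn.naive_bayes import CategoricalNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))        # hypothetical eye-movement feature matrix
y = rng.integers(0, 2, size=500)      # 1 = reading a linear map feature, 0 = other

# Discretization step, as described in the abstract.
X_disc = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="quantile").fit_transform(X)

# Rank features and keep the top m (the paper reports m = 4 or 5 as optimal).
m = 5
scores = mutual_info_classif(X_disc, y, discrete_features=True, random_state=0)
top = np.argsort(scores)[::-1][:m]

acc = cross_val_score(CategoricalNB(min_categories=5), X_disc[:, top], y, cv=5).mean()
print(f"mean cross-validated accuracy with the top {m} features: {acc:.3f}")
```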
GIScience and remote sensing in natural resource and environmental research: Status quo and future perspectives.
Abstract: Geographic information science (GIScience) and remote sensing have long provided essential data and methodological support for research on natural resource and environmental problems. With increasing advances in information technology, natural resource and environmental science research faces the dual challenges of data and computational intensiveness. Therefore, the role of remote sensing and GIScience in the fields of natural resources and environmental science in this new information era is a key concern of researchers. This study clarifies the definitions and frameworks of these two disciplines and discusses their role in natural resource and environmental research. GIScience is the discipline that studies the abstract and formal expressions of the basic concepts and laws of geography, and its research framework mainly consists of geo-modeling, geo-analysis, and geo-computation. Remote sensing is a comprehensive technology that deals with the mechanisms of human effects on the natural ecological environment system by observing the Earth surface system. Its main areas include sensors and platforms, information processing and interpretation, and natural resource and environmental applications. GIScience and remote sensing provide data and methodological support for resource and environmental science research. They play essential roles in promoting the development of resource and environmental science and other related technologies. This paper forecasts ten future directions for GIScience and eight future directions for remote sensing, which aim to solve issues related to natural resources and the environment.
To cite this article: TAO, P., et al. 2021. GIScience and remote sensing in natural resource and environmental research: Status quo and future perspectives. Geography and Sustainability, 2(3), 207-215.
DOI: 10.1016/j.geosus.2021.08.004
Identifying map users with eye movement data from map-based spatial tasks: user privacy concerns
ABSTRACT: Individuals with different characteristics exhibit different eye movement patterns in map reading and wayfinding tasks. In this study, we aim to explore whether and to what extent map users’ eye movements can be used to detect who created them. Specifically, we focus on the use of gaze data for inferring users’ identities when users are performing map-based spatial tasks. We collected 32 participants’ eye movement data as they utilized maps to complete a series of self-localization and spatial orientation tasks. We extracted five sets of eye movement features and trained a random forest classifier. We used a leave-one-task-out approach to cross-validate the classifier and achieved the best identification rate of 89%, with a 2.7% equal error rate. This result is among the best performances reported in eye movement user identification studies. We evaluated the feature importance and found that basic statistical features (e.g. pupil size, saccade latency and fixation dispersion) yielded better performance than other feature sets (e.g. spatial fixation densities, saccade directions and saccade encodings). The results open the potential to develop personalized and adaptive gaze-based map interactions but also raise concerns about user privacy protection in data sharing and gaze-based geoapplications.
To cite this article: (2021) Identifying map users with eye movement data from map-based spatial tasks: user privacy concerns, Cartography and Geographic Information Science, DOI: 10.1080/15230406.2021.1980435
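A minimal sketch of the general approach, not the authors' pipeline: a random forest is trained on per-trial eye-movement feature vectors labelled by user and cross-validated so that each fold holds out all trials of one task (leave-one-task-out), here via scikit-learn's LeaveOneGroupOut. The array sizes, feature values, and number of tasks are assumptions, not the study's data.

```python
# Sketch only: identify users from eye-movement features with a random forest,
# cross-validated leave-one-task-out via LeaveOneGroupOut. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_users, n_tasks, n_features = 32, 6, 40              # assumed sizes, not the study's
X = rng.normal(size=(n_users * n_tasks, n_features))  # one feature vector per user-task trial
users = np.repeat(np.arange(n_users), n_tasks)        # target: which user produced the gaze data
tasks = np.tile(np.arange(n_tasks), n_users)          # group: which task the trial came from

clf = RandomForestClassifier(n_estimators=300, random_state=0)
# Each fold leaves out every trial of one task ("leave-one-task-out").
scores = cross_val_score(clf, X, users, groups=tasks, cv=LeaveOneGroupOut())
print(f"mean identification accuracy: {scores.mean():.3f}")

# Feature importances (e.g. pupil size, saccade latency, fixation dispersion)
# can then be inspected after fitting on all data.
clf.fit(X, users)
print(clf.feature_importances_.argsort()[::-1][:5])
```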
Wayfinding Behavior and Spatial Knowledge Acquisition: Are They the Same in Virtual Reality and in Real-World Environments?
Abstract: Finding one's way is a fundamental daily activity and has been widely studied in the field of geospatial cognition. Immersive virtual reality (iVR) techniques provide new approaches for investigating wayfinding behavior and spatial knowledge acquisition. It is currently unclear, however, how wayfinding behavior and spatial knowledge acquisition in iVR differ from those in real-world environments (REs). We conducted an RE wayfinding experiment with twenty-five participants who performed a series of tasks. We then conducted an iVR experiment using the same experimental design with forty participants who completed the same tasks. Participants' eye movements were recorded in both experiments, and verbal reports and post-experiment questionnaires were also collected. The results revealed that individuals' wayfinding performance was largely the same between the two environments, whereas their visual attention exhibited significant differences. Participants processed visual information more efficiently in the RE but searched for visual information more efficiently in iVR. For spatial knowledge acquisition, participants' distance estimation was more accurate in iVR than in the RE. Participants' direction estimation and sketch-map results were not significantly different, however. This empirical evidence regarding the ecological validity of iVR might encourage further studies of the benefits of VR techniques in geospatial cognition research.
To cite this article: Dong, W.H., Qin, T., Yang, T.Y., Liao, H., Liu, B., Meng, L.Q., Liu, Y., Wayfinding Behavior and Spatial Knowledge Acquisition: Are They the Same in Virtual Reality and in Real-World Environments? Ann. Am. Assoc. Geogr., 21.
Mapping relationships between mobile phone call activity and regional function using self-organizing map
Abstract: Mobile phone data help us to understand human activities. Researchers have investigated the characteristics and relationships of human activities and regional function using information from physical and virtual spaces. However, how to establish location mapping between spaces to explore the relationships between mobile phone call activity and regional function remains unclear. In this paper, we employ a self-organizing map (SOM) to map locations with 24-dimensional activity attributes and identify relationships between users’ mobile phone call activities and regional functions. We apply mobile phone call data from Harbin, a city in northeast China, to build the location mapping relationships between user clusters of mobile phone call activity and points of interest (POI) composition in geographical space. The results indicate that for mobile phone call activities, mobile phone users are mapped to five locations that represent particular mobile phone call patterns. Regarding regional functions, we identified nine unique types of functional areas that are related to production, business, entertainment and education according to the patterns of users and POI proportions. We then explored the correlations between users and POIs for each type of area. The results of this research provide new insights into the relationships between human activity and regional functions.
To cite this article: Dong, W., Wang, S., Liu, Y., 2021. Mapping relationships between mobile phone call activity and regional function using self-organizing map. Computers, Environment and Urban Systems 87, 101624.
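To make the mapping step concrete, the sketch below trains a self-organizing map on 24-dimensional hourly call-activity profiles and reads off each location's best-matching unit. It assumes the third-party MiniSom package and uses synthetic data; it is not the study's implementation, and the grid size and training parameters are illustrative.

```python
# Sketch only: map 24-dimensional hourly call-activity profiles onto a SOM grid.
# Requires the third-party MiniSom package (pip install minisom); data are synthetic.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
# Hypothetical input: one 24-dimensional hourly call-activity profile per location.
activity = rng.random((1000, 24))

som = MiniSom(x=10, y=10, input_len=24, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(activity)
som.train_random(activity, num_iteration=5000)

# Each location is assigned to the grid cell (best-matching unit) of its profile;
# cells can then be grouped into activity clusters and related to POI composition.
bmus = np.array([som.winner(v) for v in activity])
print(bmus[:5])
```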
What is the difference between augmented reality and 2D navigation electronic maps in pedestrian wayfinding?
Abstract: Augmented reality (AR) navigation aids have become widely used in pedestrian navigation, yet few studies have verified their usability from the perspective of human spatial cognition, such as visual attention, cognitive processing, and spatial memory. We conducted an empirical study in which smartphone-based AR aids were compared with a common two-dimensional (2D) electronic map. In eye-tracking wayfinding experiments, 73 participants used either a 2D electronic map or AR navigation aids, and we statistically compared wayfinding performance, visual attention, and route memory between the two groups. The results showed that wayfinding performance did not differ significantly. Regarding visual attention, the participants using AR tended to have significantly shorter fixation durations, greater saccade amplitudes, and smaller pupil sizes on average than the 2D map participants, which indicates lower average cognitive workloads throughout the wayfinding process. Considering attention on environmental objects, the participants using AR paid less visual attention to buildings but more to persons than the participants using 2D maps. Sketched-route results revealed that it was more difficult for the AR participants to form a clear memory of the route. We hope this study inspires more usability research on AR navigation.
To cite this article: Weihua Dong , Yulin Wu , Tong Qin , Xinran Bian , Yan Zhao , Yanrou He , Yawei Xu & Cheng Yu (2021): What is the difference between augmented reality and 2D navigation electronic maps in pedestrian wayfinding?, Cartography and Geographic Information Science.
Professor Weihua Dong was invited to serve as an editorial board member of the U.S. journal Cartography and Geographic Information Science (CaGIS)
Professor Weihua Dong won the First Prize of the 2020 Surveying and Mapping Science and Technology Award and the 2020 Young Surveying, Mapping and Geoinformation Science and Technology Innovation Talent Award
Recently, the Chinese Society for Geodesy, Photogrammetry and Cartography released the results of its 2020 Surveying and Mapping Science and Technology Award and its 2020 Young Surveying, Mapping and Geoinformation Science and Technology Innovation Talent Award. The project led by Professor Weihua Dong of our department, "Fundamental Research on Geospatial Cognition Based on Brain Neural Mechanisms", won the First Prize of the 2020 Surveying and Mapping Science and Technology Award, and Professor Dong also received the 2020 Young Surveying, Mapping and Geoinformation Science and Technology Innovation Talent Award.
The Surveying and Mapping Science and Technology Award mainly recognizes organizations and individuals that have made outstanding contributions to surveying and mapping research, technological innovation and development, the promotion and application of scientific and technological achievements, the industrialization of high technology, major engineering construction, and public-interest surveying and mapping science and technology in China.
Doctoral candidate Bing Liu from the Chair of Cartography won the Best Doctoral Colloquium Paper Award at the 6th Immersive Learning Research Network Conference (iLRN 2020).
Abstract: Navigation is a widespread geoinformation service and can be embedded in augmented reality (AR). In this work-in-progress, we aim at a user interface for an AR-based indoor navigation system that not only guides users to their destinations quickly and safely but also improves their spatial learning. We designed an interface for indoor navigation on HoloLens, gathered feedback from users, and found that arrows are an intuitive orientation aid. The semantic meanings embedded in icons are not self-explanatory, but icons with text can serve as virtual landmarks and help with spatial learning.
To cite this paper: Liu, B., & Meng, L. (2020, June). Doctoral Colloquium—Towards a Better User Interface of Augmented Reality Based Indoor Navigation Application. In 2020 6th International Conference of the Immersive Learning Research Network (iLRN) (pp. 392-394). IEEE.
Her work is titled "Towards a Better User Interface of Augmented Reality Based Indoor Navigation Application". The work-in-progress paper is available online: ieeexplore.ieee.org/abstract/document/9155198