A multi-feature approach for modeling the visual saliency of dynamic scenes for intelligent driving


Abstract: Visual saliency modeling of driving scenes is an important research direction in intelligent driving, especially in the areas of assisted and automated driving. Existing visual saliency modeling methods for static and virtual scenes cannot adapt to the real-time, dynamic, and task-driven characteristics of road scenes in real driving environments, so building a visual saliency model of dynamic road scenes in real driving environments remains a research challenge. Starting from the characteristics of the driving environment and drivers' visual cognition, this paper extracts low-level, high-level, and dynamic visual features of road scenes, and combines them with two important influencing factors, speed and road curvature, to build a visual saliency computation model of driving scenes based on logistic regression (LR). The model is evaluated with the AUC metric and achieves an accuracy of 90.43%, a significant advantage over traditional algorithms.

Key words: visual saliency, driving scene, driving environment, dynamics


To cite this article: ZHAN Zhicheng, DONG Weihua. A multi-feature approach for modeling visual saliency of dynamic scene for intelligent driving[J]. Acta Geodaetica et Cartographica Sinica, 2021, 50(11): 1500-1511.
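The abstract names the model (a multi-feature logistic regression scored by AUC) without showing it. The sketch below illustrates the general shape of such a pipeline in scikit-learn; the feature names, data, and coefficients are synthetic placeholders, not the paper's actual features or results.

```python
# Illustrative sketch only: features and labels are simulated, not the
# paper's driving-scene data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Each row is one scene region; the five columns stand in for the paper's
# feature groups: low-level, high-level, and dynamic visual features,
# plus the two influencing factors, speed and road curvature.
n = 500
X = rng.normal(size=(n, 5))
# Synthetic labels: 1 = region fixated by the driver (salient), 0 = not.
y = (X @ np.array([1.0, 1.5, 2.0, 0.5, 0.5])
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
saliency = model.predict_proba(X)[:, 1]   # per-region saliency score in [0, 1]
auc = roc_auc_score(y, saliency)          # AUC evaluation, as in the paper
```

In practice the model would be fit on fixation maps from real drivers and evaluated on held-out frames rather than on the training data as here.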

A naive Bayesian method for eye movement recognition of map linear elements


Abstract: Eye tracking technology has been widely used in human-computer interaction and in user behavior recognition and prediction, but automatically identifying users' eye movement behavior during map reading remains a challenge. This paper proposes a method based on the naive Bayesian classification model to identify users' eye movement behavior when they read linear map elements. We first conducted an eye tracking experiment with 25 participants to acquire an eye movement dataset of map reading. We then extracted and discretized 250 eye movement features and used the minimum redundancy maximum relevance (mRMR) algorithm to select and rank them. The results show that with the mutual information quotient criterion and m=5 selected features, the classification accuracy reaches a maximum of 78.27%; with the mutual information difference criterion and m=4 features, it reaches a maximum of 77.01%. The proposed method outperforms existing methods in accuracy, and the reduced number of features greatly improves the execution efficiency of the model. The eye movement recognition method for map reading behavior proposed in this study lays a foundation for future gaze-controlled interactive map research.

Key words: eye movement recognition, map reading behavior, naive Bayesian classifier, feature selection, minimum redundancy maximum relevance


To cite this article: DONG Weihua, WANG Shengkai, WANG Xueyuan, YANG Tianyu. A naive Bayesian method for eye movement recognition of map linear elements[J]. Acta Geodaetica et Cartographica Sinica, 2021, 50(6): 749-756.
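The pipeline described above (discretization, mRMR feature selection, naive Bayes classification) can be sketched as a toy NumPy/scikit-learn example. The data, feature count, and bin edges are invented, and the greedy mutual-information-quotient ranking is a common textbook formulation of mRMR, not necessarily the paper's exact implementation.

```python
# Hedged sketch of discretize -> mRMR -> naive Bayes; all data are synthetic.
import numpy as np
from sklearn.metrics import mutual_info_score
from sklearn.naive_bayes import CategoricalNB

rng = np.random.default_rng(1)
n, d = 300, 10                          # 10 toy features (the paper uses 250)
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int) # synthetic "line-reading" label
Xd = np.digitize(X, bins=[-0.5, 0.5])   # discretize each feature into 3 bins

def mrmr_miq(Xd, y, m):
    """Greedy mRMR ranking with the mutual information quotient (MIQ)."""
    relevance = np.array([mutual_info_score(Xd[:, j], y)
                          for j in range(Xd.shape[1])])
    selected = [int(np.argmax(relevance))]
    while len(selected) < m:
        best, best_score = None, -np.inf
        for j in range(Xd.shape[1]):
            if j in selected:
                continue
            # Average redundancy with already-selected features.
            redundancy = np.mean([mutual_info_score(Xd[:, j], Xd[:, s])
                                  for s in selected])
            score = relevance[j] / (redundancy + 1e-9)
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

features = mrmr_miq(Xd, y, m=4)         # m=4 or 5, per the reported results
clf = CategoricalNB().fit(Xd[:, features], y)
accuracy = clf.score(Xd[:, features], y)
```

The mutual information difference (MID) variant simply replaces the quotient `relevance / redundancy` with the difference `relevance - redundancy`.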

GIScience and remote sensing in natural resource and environmental research: Status quo and future perspectives

Abstract: Geographic information science (GIScience) and remote sensing have long provided essential data and methodological support for research on natural resource and environmental problems. With continuing advances in information technology, natural resource and environmental science research faces the dual challenges of data and computational intensiveness. Therefore, the role of remote sensing and GIScience in the fields of natural resources and environmental science in this new information era is a key concern of researchers. This study clarifies the definitions and frameworks of these two disciplines and discusses their roles in natural resource and environmental research. GIScience is the discipline that studies the abstract and formal expression of the basic concepts and laws of geography, and its research framework mainly consists of geo-modeling, geo-analysis, and geo-computation. Remote sensing is a comprehensive technology that observes the Earth surface system to study the mechanisms of human effects on the natural ecological environment. Its main areas include sensors and platforms, information processing and interpretation, and natural resource and environmental applications. GIScience and remote sensing provide data and methodological support for resource and environmental science research and play essential roles in promoting the development of resource and environmental science and related technologies. This paper forecasts ten future directions for GIScience and eight for remote sensing, which aim to solve issues related to natural resources and the environment.

To cite this article: TAO, P., et al. 2021. GIScience and remote sensing in natural resource and environmental research: Status quo and future perspectives. Geography and Sustainability, 2(3), 207-215.

DOI: 10.1016/j.geosus.2021.08.004

Identifying map users with eye movement data from map-based spatial tasks: user privacy concerns

ABSTRACT: Individuals with different characteristics exhibit different eye movement patterns in map reading and wayfinding tasks. In this study, we aim to explore whether and to what extent map users’ eye movements can be used to detect who created them. Specifically, we focus on the use of gaze data for inferring users’ identities when users are performing map-based spatial tasks. We collected 32 participants’ eye movement data as they utilized maps to complete a series of self-localization and spatial orientation tasks. We extracted five sets of eye movement features and trained a random forest classifier. We used a leave-one-task-out approach to cross-validate the classifier and achieved the best identification rate of 89%, with a 2.7% equal error rate. This result is among the best performances reported in eye movement user identification studies. We evaluated the feature importance and found that basic statistical features (e.g. pupil size, saccade latency and fixation dispersion) yielded better performance than other feature sets (e.g. spatial fixation densities, saccade directions and saccade encodings). The results open the potential to develop personalized and adaptive gaze-based map interactions but also raise concerns about user privacy protection in data sharing and gaze-based geoapplications.

To cite this article: Hua Liao, Weihua Dong & Zhicheng Zhan (2021) Identifying map users with eye movement data from map-based spatial tasks: user privacy concerns, Cartography and Geographic Information Science, DOI: 10.1080/15230406.2021.1980435
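The identification setup described above (per-task feature vectors, a random forest classifier, leave-one-task-out cross-validation) can be sketched as follows. The user count, feature dimensionality, and noise model are synthetic assumptions, so the resulting rate is illustrative rather than the paper's reported 89%.

```python
# Sketch of eye-movement user identification with leave-one-task-out CV.
# All data are simulated stand-ins for the study's gaze features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(2)
n_users, n_tasks, n_feats = 8, 5, 12

# Simulate user-specific gaze "signatures" (e.g. pupil size, saccade
# latency, fixation dispersion) that persist across tasks with some noise.
signatures = rng.normal(size=(n_users, n_feats))
X = np.vstack([signatures[u] + rng.normal(scale=0.3, size=n_feats)
               for u in range(n_users) for _ in range(n_tasks)])
y = np.repeat(np.arange(n_users), n_tasks)    # user identity labels
tasks = np.tile(np.arange(n_tasks), n_users)  # task id = CV group

# Leave-one-task-out: each fold holds out all samples from one task.
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                         groups=tasks, cv=LeaveOneGroupOut())
identification_rate = scores.mean()
```

Holding out by task (rather than by random sample) ensures the classifier is tested on gaze behavior from a task it never saw, which is the stricter evaluation the paper uses.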

Wayfinding Behavior and Spatial Knowledge Acquisition: Are They the Same in Virtual Reality and in Real-World Environments?

Abstract: Finding one’s way is a fundamental daily activity and has been widely studied in the field of geospatial cognition. Immersive virtual reality (iVR) techniques provide new approaches for investigating wayfinding behavior and spatial knowledge acquisition. It is currently unclear, however, how wayfinding behavior and spatial knowledge acquisition in iVR differ from those in real-world environments (REs). We conducted an RE wayfinding experiment with twenty-five participants who performed a series of tasks. We then conducted an iVR experiment using the same experimental design with forty participants who completed the same tasks. Participants’ eye movements were recorded in both experiments, and verbal reports and post-experiment questionnaires were also collected. The results revealed that individuals’ wayfinding performance was largely the same between the two environments, whereas their visual attention exhibited significant differences. Participants processed visual information more efficiently in the RE but searched for visual information more efficiently in iVR. For spatial knowledge acquisition, participants’ distance estimation was more accurate in iVR than in the RE; their direction estimation and sketch-map results, however, were not significantly different. This empirical evidence regarding the ecological validity of iVR might encourage further studies of the benefits of VR techniques in geospatial cognition research.

To cite this article: Dong, W.H., Qin, T., Yang, T.Y., Liao, H., Liu, B., Meng, L.Q., Liu, Y. Wayfinding Behavior and Spatial Knowledge Acquisition: Are They the Same in Virtual Reality and in Real-World Environments? Annals of the American Association of Geographers, 21.

DOI: 10.1080/24694452.2021.1894088

Mapping relationships between mobile phone call activity and regional function using self-organizing map

Abstract: Mobile phone data help us to understand human activities. Researchers have investigated the characteristics and relationships of human activities and regional function using information from physical and virtual spaces. However, how to establish location mapping between spaces to explore the relationships between mobile phone call activity and regional function remains unclear. In this paper, we employ a self-organizing map (SOM) to map locations with 24-dimensional activity attributes and identify relationships between users’ mobile phone call activities and regional functions. We apply mobile phone call data from Harbin, a city in northeast China, to build the location mapping relationships between user clusters of mobile phone call activity and points of interest (POI) composition in geographical space. The results indicate that for mobile phone call activities, mobile phone users are mapped to five locations that represent particular mobile phone call patterns. Regarding regional functions, we identified nine unique types of functional areas that are related to production, business, entertainment and education according to the patterns of users and POI proportions. We then explored the correlations between users and POIs for each type of area. The results of this research provide new insights into the relationships between human activity and regional functions.

To cite this article: Dong, W., Wang, S., Liu, Y., 2021. Mapping relationships between mobile phone call activity and regional function using self-organizing map. Computers, Environment and Urban Systems 87, 101624.

DOI: 10.1016/j.compenvurbsys.2021.101624
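A self-organizing map of the kind used above (mapping 24-dimensional call-activity profiles onto a small grid of nodes) can be sketched in plain NumPy. The grid size, learning schedule, and data below are illustrative choices, not the study's settings.

```python
# Minimal SOM sketch: toy 24-dimensional hourly call-activity profiles
# (one value per hour of day) mapped onto a 4x4 grid of prototype nodes.
import numpy as np

rng = np.random.default_rng(3)
n_users, dim = 200, 24
profiles = rng.random((n_users, dim))        # toy hourly call-volume profiles

grid_w, grid_h = 4, 4
weights = rng.random((grid_w * grid_h, dim)) # one prototype per SOM node
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)])

for t in range(1000):
    lr = 0.5 * (1 - t / 1000)                # decaying learning rate
    sigma = 2.0 * (1 - t / 1000) + 0.5       # decaying neighborhood radius
    x = profiles[rng.integers(n_users)]      # random training sample
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
    dist = ((coords - coords[bmu]) ** 2).sum(axis=1)   # grid distance to BMU
    h = np.exp(-dist / (2 * sigma ** 2))     # Gaussian neighborhood function
    weights += lr * h[:, None] * (x - weights)

# Assign every user to the node of their best-matching prototype,
# i.e. the cluster of users sharing a similar call-activity pattern.
clusters = np.argmin(((profiles[:, None, :] - weights[None, :, :]) ** 2)
                     .sum(axis=2), axis=1)
```

Each resulting node groups users with similar 24-hour activity shapes, which is the kind of cluster the study then relates to POI composition.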

What is the difference between augmented reality and 2D navigation electronic maps in pedestrian wayfinding?

Abstract: Augmented reality (AR) navigation aids have become widely used in pedestrian navigation, yet few studies have verified their usability from the perspective of human spatial cognition, such as visual attention, cognitive processing, and spatial memory. We conducted an empirical study in which smartphone-based AR aids were compared with a common two-dimensional (2D) electronic map: in eye-tracking wayfinding experiments, 73 participants used either a 2D electronic map or AR navigation aids. We statistically compared wayfinding performance, visual attention, and route memory between the two groups. The results showed that wayfinding performance did not differ significantly. Regarding visual attention, the participants using AR tended to have significantly shorter fixation durations, greater saccade amplitudes, and smaller pupil sizes on average than the 2D map participants, indicating lower average cognitive workloads throughout the wayfinding process. Considering attention on environmental objects, the participants using AR paid less visual attention to buildings but more to persons than the participants using 2D maps. Sketched-route results revealed that it was more difficult for AR participants to form a clear memory of the route. This study aims to inspire more usability research on AR navigation.


To cite this article: Weihua Dong, Yulin Wu, Tong Qin, Xinran Bian, Yan Zhao, Yanrou He, Yawei Xu & Cheng Yu (2021): What is the difference between augmented reality and 2D navigation electronic maps in pedestrian wayfinding?, Cartography and Geographic Information Science.

DOI: 10.1080/15230406.2021.1871646

Professor Dong Weihua wins the First Prize of the 2020 Surveying and Mapping Science and Technology Award and the 2020 Young Innovative Talents Award in Surveying, Mapping and Geoinformation Science and Technology

Recently, the Chinese Society for Geodesy, Photogrammetry and Cartography announced the results of its 2020 Surveying and Mapping Science and Technology Award and its 2020 Young Innovative Talents Award in Surveying, Mapping and Geoinformation Science and Technology. The project "Fundamental Research on Geospatial Cognition Based on Brain Neural Mechanisms," led by Professor Dong Weihua of our faculty, won the First Prize of the 2020 Surveying and Mapping Science and Technology Award. In addition, Professor Dong Weihua received the 2020 Young Innovative Talents Award in Surveying, Mapping and Geoinformation Science and Technology.

The Surveying and Mapping Science and Technology Award mainly recognizes organizations and individuals that have made outstanding contributions to surveying and mapping research, technological innovation and development, the popularization and application of scientific and technological achievements, high-tech industrialization, major engineering construction, and public-welfare surveying and mapping undertakings in China.

Doctoral candidate Bing Liu, from the Chair of Cartography, won the Best Doctoral Colloquium Paper Award at the 6th Immersive Learning Research Network Conference (iLRN 2020).

Abstract: Navigation is a widespread geoinformation service and can be embedded in augmented reality (AR). In this work in progress, we aim at a user interface for an AR-based indoor navigation system that not only guides users to destinations quickly and safely but also improves their spatial learning. We designed an interface for indoor navigation on HoloLens, gathered feedback from users, and found that arrows are an intuitive orientation aid. Semantic meanings embedded in icons are not self-explanatory, but icons with text can serve as virtual landmarks and help with spatial learning.

To cite this paper: Liu, B., & Meng, L. (2020, June). Doctoral Colloquium—Towards a Better User Interface of Augmented Reality Based Indoor Navigation Application. In 2020 6th International Conference of the Immersive Learning Research Network (iLRN) (pp. 392-394). IEEE.

Her work, "Towards a Better User Interface of Augmented Reality Based Indoor Navigation Application," is a work-in-progress paper available online: ieeexplore.ieee.org/abstract/document/9155198