Using eye tracking to evaluate the usability of flow maps

Abstract: Flow maps allow users to perceive not only the locations where interactions take place, but also the direction and volume of events. Previous studies have proposed numerous methods to produce flow maps. However, how to evaluate the usability of flow maps has not been well documented. In this study, we combined eye-tracking and questionnaire methods to evaluate the usability of flow maps through comparisons between (a) straight lines and curves and (b) line thicknesses and color gradients. The results show that curved flows are more effective than straight flows: maps with curved flows yielded more correct answers, more fixations, and higher percentages of fixations in areas of interest. Furthermore, we find that curved flows require longer finish times but exhibit shorter times to first fixation than straight flows. In addition, we find that using color gradients to indicate flow volume is significantly more effective than using different line thicknesses, which is mainly reflected in the higher number of correct answers in the color-gradient group. These empirical findings could help improve the usability of flow maps employed to visualize geo-data.

To cite this paper:

Dong, W.*; Wang, S.*; Chen, Y.; Meng, L. Using Eye Tracking to Evaluate the Usability of Flow Maps. ISPRS Int. J. Geo-Inf. 2018, 7, 281. doi: https://doi.org/10.3390/ijgi7070281

Inferring user tasks in pedestrian navigation from eye movement data in real-world environments

Abstract: Eye movement data convey a wealth of information that can be used to probe human behaviour and cognitive processes. To date, eye tracking studies have mainly focused on laboratory-based evaluations of cartographic interfaces; in contrast, little attention has been paid to eye movement data mining for real-world applications. In this study, we propose using machine-learning methods to infer user tasks from eye movement data in real-world pedestrian navigation scenarios. We conducted a real-world pedestrian navigation experiment in which we recorded eye movement data from 38 participants. We trained and cross-validated a random forest classifier for classifying five common navigation tasks using five types of eye movement features. The results show that the classifier can achieve an overall accuracy of 67%. We found that statistical eye movement features and saccade encoding features are more useful than the other investigated feature types for distinguishing user tasks. We also identified that the choice of classifier, the time window size and the eye movement features considered are all important factors that influence task inference performance. These results open the door to innovative real-world applications, such as navigation systems that provide task-related information depending on the task a user is performing.

To cite this paper:

Hua Liao, Weihua Dong*, Haosheng Huang, Georg Gartner & Huiping Liu (2018): Inferring user tasks in pedestrian navigation from eye movement data in real-world environments. International Journal of Geographical Information Science: 1-25. doi: https://doi.org/10.1080/13658816.2018.1482554
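
The classification pipeline described in the second abstract — a random forest trained and cross-validated on per-window eye movement features to predict one of five navigation tasks — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic feature matrix, feature count, window count, and class separation used here are all hypothetical stand-ins for the real eye movement features (e.g. fixation and saccade statistics) extracted from the recorded data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical stand-ins: 500 time windows, 10 eye-movement features
# per window, and 5 navigation task labels (values are synthetic).
rng = np.random.default_rng(42)
n_windows, n_features, n_tasks = 500, 10, 5

X = rng.normal(size=(n_windows, n_features))  # per-window feature vectors
y = rng.integers(0, n_tasks, size=n_windows)  # task label per window
X += y[:, None] * 0.3  # shift class means so tasks are partially separable

# Random forest classifier, evaluated with 5-fold cross-validation
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

In the actual study, `X` would hold the five investigated types of eye movement features computed over sliding time windows, and the window size itself would be varied, since the abstract notes it is an important factor in task inference performance.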