Where To Look? Automating Attending Behaviors of Virtual Human Characters

Author

Chopra-Khullar, Sonu

Abstract

This research proposes a computational framework for generating visual attending behavior in an embodied simulated human agent. Such behaviors directly control eye and head motions, and guide other actions such as locomotion and reach. The implementation of these concepts, referred to as the AVA, draws on empirical and qualitative observations known from psychology, human factors and computer vision. Deliberate behaviors, the analogs of scanpaths in visual psychology, compete with involuntary attention capture and lapses into idling or free viewing. Insights provided by implementing this framework are: a defined set of parameters that impact the observable effects of attention, a defined vocabulary of looking behaviors for certain motor and cognitive activity, a defined hierarchy of three levels of eye behavior (endogenous, exogenous and idling) and a proposed method of how these types interact.
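The abstract's three-level hierarchy (endogenous, exogenous, idling) and its competition between deliberate scanpaths and involuntary capture can be pictured as a simple arbitration loop. The sketch below is a hypothetical illustration only: the class names, the salience threshold, and the priority rule are assumptions for clarity, not the AVA's actual implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Stimulus:
    """An environmental event that may capture attention (hypothetical)."""
    position: Vec3
    salience: float  # assumed 0..1 strength of involuntary attention capture

@dataclass
class GazeController:
    """Toy arbitration over the three behavior levels named in the abstract."""
    capture_threshold: float = 0.7  # assumed salience needed to interrupt a task
    scanpath: List[Vec3] = field(default_factory=list)  # deliberate gaze targets

    def next_gaze_target(self) -> Tuple[str, Optional[Vec3]]:
        return self.arbitrate([])

    def arbitrate(self, stimuli: List[Stimulus]) -> Tuple[str, Optional[Vec3]]:
        # Exogenous: a sufficiently salient stimulus wins regardless of the task.
        if stimuli:
            peak = max(stimuli, key=lambda s: s.salience)
            if peak.salience >= self.capture_threshold:
                return ("exogenous", peak.position)
        # Endogenous: otherwise follow the deliberate scanpath for the task.
        if self.scanpath:
            return ("endogenous", self.scanpath.pop(0))
        # Idling: no task target and nothing salient -> lapse into free viewing.
        return ("idle", None)
```

In this reading, the "proposed method of how these types interact" amounts to a priority scheme: exogenous capture can preempt endogenous scanning, and idling is the default when neither applies.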

Date of presentation

1999-05-01

Collection

Center for Human Modeling and Simulation

Comments

Postprint version. Published in Autonomous Agents and Multi-Agent Systems, Volume 4, Issue 1-2, March 2001, pages 16-23. Publisher URL: http://dx.doi.org/10.1023/A:1010010528443
