Toward Shared Autonomy Control Schemes for Human-Robot Systems: Action Primitive Recognition Using Eye Gaze Features

Blog Article

The functional independence of individuals with upper limb impairment could be enhanced by teleoperated robots that can assist with activities of daily living. However, robot control is not always intuitive for the operator. In this work, eye gaze was leveraged as a natural way to infer human intent and advance action recognition for shared autonomy control schemes. We introduced a classifier structure for recognizing low-level action primitives that incorporates novel three-dimensional gaze-related features. We defined an action primitive as a triplet comprising a verb, target object, and hand object.
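The triplet definition maps naturally onto a small data structure. The sketch below is illustrative only; the class and field names, and the example values, are ours rather than the authors' code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActionPrimitive:
    """An action primitive as defined above: a (verb, target object,
    hand object) triplet. Names and example values are hypothetical."""
    verb: str           # e.g., "reach", "pour", "stir"
    target_object: str  # object the action is directed toward
    hand_object: str    # object currently held in the hand ("none" if empty)

# Hypothetical example from a powdered-drink activity:
primitive = ActionPrimitive(verb="pour", target_object="cup", hand_object="packet")
```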

A recurrent neural network was trained to recognize the verb and target object, and was tested on three different activities. For a representative activity (making a powdered drink), the average recognition accuracy was 77% for the verb and 83% for the target object. Using a non-specific approach to classifying and indexing objects in the workspace, we observed a modest level of generalizability of the action primitive classifier across activities, including those for which the classifier was not trained. The novel input features of gaze object angle and its rate of change were especially useful for accurately recognizing action primitives and reducing the observational latency of the classifier.
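To make the pipeline concrete, here is a minimal Python sketch of the two ideas above: a gaze object angle feature computed from 3D gaze data, and a recurrent network with separate output heads for the verb and the target object. The function, class, feature layout, and layer sizes are all our assumptions, not the authors' published implementation.

```python
import numpy as np
import torch
import torch.nn as nn

def gaze_object_angle(gaze_dir: np.ndarray, eye_pos: np.ndarray,
                      object_pos: np.ndarray) -> float:
    """Angle (radians) between the 3D gaze direction and the ray from the
    eye to an object's centroid -- one plausible reading of the 'gaze object
    angle' feature; the authors' exact definition may differ."""
    to_object = object_pos - eye_pos
    cos_theta = np.dot(gaze_dir, to_object) / (
        np.linalg.norm(gaze_dir) * np.linalg.norm(to_object))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

class ActionPrimitiveRNN(nn.Module):
    """Sketch of a recurrent classifier over per-frame gaze features, with
    one head for the verb and one for the target-object index. The LSTM
    choice and hidden size are assumptions."""
    def __init__(self, n_features: int, n_verbs: int, n_objects: int,
                 hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.verb_head = nn.Linear(hidden, n_verbs)
        self.object_head = nn.Linear(hidden, n_objects)

    def forward(self, x: torch.Tensor):
        # x: (batch, time, n_features) -- e.g., gaze object angle, its
        # finite-difference rate of change, and other gaze-related features.
        out, _ = self.lstm(x)
        last = out[:, -1, :]  # final hidden state summarizes the sequence
        return self.verb_head(last), self.object_head(last)

# Toy forward pass: 2 sequences, 30 frames, 5 gaze features per frame
model = ActionPrimitiveRNN(n_features=5, n_verbs=6, n_objects=8)
verb_logits, object_logits = model(torch.randn(2, 30, 5))
```

Indexing target objects by position in the workspace rather than by identity, as the non-specific approach above suggests, is what lets a classifier of this shape transfer to activities with unseen objects.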
