
Understanding Behavior Through First Person Vision

August 20, 2015

James M. Rehg, PhD
Deputy Director, MD2K Center
Professor, School of Interactive Computing
Georgia Institute of Technology

About the Webinar:

Recent progress in miniaturizing digital cameras and improving battery life has created a growing market for wearable cameras, exemplified by products such as GoPro and Google Glass. At the same time, the field of computer vision, which is concerned with the automatic extraction of information about the world from images and video, has also made rapid progress, driven by the growing availability of image data, increased computational power, and the emergence of machine learning methods such as deep learning.

The analysis of video captured from body-worn cameras is an emerging subfield of computer vision known as First Person Vision (FPV). FPV provides new opportunities to model and analyze human behavior, create personalized records of visual experiences, and improve the treatment of a broad range of mental and physical health conditions. This is the second talk in a two-part series. In it, I will focus on specific FPV technologies in the context of MD2K.

Learning Objectives:

Following the presentation, attendees will be able to:

  • Describe the cues for video analysis that are available in the first person setting and outline their use in predicting visual attention and recognizing actions
  • Compare and contrast methods for skimming collections of first person video based on selectively dropping frames and detecting specific events of interest (a rough sketch of the frame-selection idea follows this list)
  • Describe the use of first person vision to measure social behavior in the context of a behavioral therapy program for children with autism
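To make the "selectively dropping frames" idea concrete, here is a minimal Python/OpenCV sketch of greedy, hyperlapse-style frame selection. It is an illustration under simplifying assumptions, not the method presented in the talk or in the Joshi et al. paper listed below (which solves a global optimization over camera-motion reprojection costs); the function names, the mean-absolute-difference cost, and the speedup and window parameters are all invented for this example.

    import cv2
    import numpy as np

    def frame_cost(gray_a, gray_b):
        # Crude matching cost: mean absolute pixel difference between two
        # grayscale frames. (The published method uses homography-based
        # reprojection error; this stand-in only illustrates the loop.)
        return float(np.mean(cv2.absdiff(gray_a, gray_b)))

    def select_frames(path, speedup=8, window=2):
        # Greedy selection: within a small window around each nominal
        # target frame (last kept frame + speedup), keep the candidate
        # that best matches the last kept frame, trading exact timing
        # for smoother apparent motion.
        cap = cv2.VideoCapture(path)
        frames = []
        ok, frame = cap.read()
        while ok:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
            ok, frame = cap.read()
        cap.release()
        if not frames:
            return []

        kept = [0]
        i = 0
        while i + speedup < len(frames):
            target = i + speedup
            candidates = range(max(i + 1, target - window),
                               min(len(frames), target + window + 1))
            best = min(candidates,
                       key=lambda j: frame_cost(frames[i], frames[j]))
            kept.append(best)
            i = best
        return kept  # indices of frames to keep in the skimmed video

Writing the frames at the returned indices back out through a video writer yields the skimmed clip; swapping frame_cost for a motion-based cost is what separates a naive timelapse from a stabilized hyperlapse.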

Recommended Reading:

  • Yin Li, Alireza Fathi, and James M. Rehg. Learning to Predict Gaze in Egocentric Video. In Proc. International Conference on Computer Vision (ICCV), 2013.
  • Zhefan Ye, Yin Li, Yun Liu, Chanel Bridges, Agata Rozga, and James M. Rehg. Detecting Bids for Eye Contact Using a Wearable Camera. In Proc. IEEE International Conference on Automatic Face and Gesture Recognition (FG), 2015.
  • Neel Joshi, Wolf Kienzle, Mike Toelle, Matt Uyttendaele, and Michael F. Cohen. Real-Time Hyperlapse Creation via Optimal Frame Selection. In Proc. ACM SIGGRAPH 2015.

About James Rehg:

Dr. James M. Rehg (pronounced “ray”) is a Professor in the School of Interactive Computing at the Georgia Institute of Technology, where he is co-Director of the Computational Perception Lab (CPL) and Director of the Center for Behavioral Imaging. He also serves as the Deputy Director of the NIH Center of Excellence on Mobile Sensor Data-to-Knowledge (MD2K).
