EPIC@ICCV19
The Fifth International Workshop on Egocentric Perception, Interaction and Computing
Seoul, Saturday Nov. 2, 2019

Program

The open access proceedings of the workshop are available at the CVF open access repository.

The workshop will be held within the ICCV 2019 conference as a half-day event on the morning of Saturday, Nov. 2, 2019.

Place: Room 307A
Date: Saturday, Nov. 2, 2019
Time: 8:30-12:45

The poster session will be held in Section 1 (poster numbers 41-63). The poster number assigned to each paper is indicated in the program below.

8:30 - 8:45 Welcome
8:45 - 9:30 Invited Talk by Marc Pollefeys (ETH Zurich, CH)
9:30 - 10:00 Oral Session I
Simultaneous Segmentation and Recognition: Towards more accurate Ego Gesture Recognition
Tejo Chalasani (Trinity College Dublin), Aljosa Smolic (Trinity College Dublin)
EgoVQA - An Egocentric Video Question Answering Benchmark Dataset
Chenyou Fan (Google)
10:00 - 11:00 Coffee Break and Poster Session (Section 1)
Full Papers
41. Simultaneous Segmentation and Recognition: Towards more accurate Ego Gesture Recognition
Tejo Chalasani (Trinity College Dublin), Aljosa Smolic (Trinity College Dublin)
42. EgoVQA - An Egocentric Video Question Answering Benchmark Dataset
Chenyou Fan (Google)
43. Learning Spatiotemporal Attention for Egocentric Action Recognition
Minlong Lu (Simon Fraser University), Danping Liao (Zhejiang University), Ze-Nian Li (Simon Fraser University)
44. Seeing and Hearing Egocentric Actions: How Much Can We Learn?
Alejandro Cartas (University of Barcelona), Jordi Luque, Petia Radeva, Carlos Segura, Mariella Dimiccoli
45. Multitask Learning to Improve Egocentric Action Recognition
George Kapidis (Utrecht University), Ronald Poppe (Utrecht University), Elsbeth van Dam (Noldus IT), Lucas Noldus (Noldus IT), Remco C. Veltkamp (Utrecht University)
46. Weakly-supervised Degree of Eye-closeness Estimation
Eyasu Zemene (Qualcomm), Shuai Zhang (Qualcomm AI Research), Bijan Forutanpour (Qualcomm), Yingyong Qi (Qualcomm), Ning Bi (Qualcomm)
47. Ego-Semantic Labeling of Scene from Depth Image for Visually Impaired and Blind People
Chayma Zatout (USTHB University), Slimane Larabi (USTHB University), Ilyes Mendili (USTHB University), Barnabé Soedji (USTHB University)
48. An Analysis of how Driver Experience Affects Eye-Gaze Behavior for Robotic Wheelchair Operation
Yamato Maekawa (Nagoya University), Naoki Akai (Nagoya University), Takatsugu Hirayama (Nagoya University), Luis Yoichi Morales (Nagoya University), Daisuke Deguchi (Nagoya University), Yasutomo Kawanishi (Nagoya University), Ichiro Ide (Nagoya University), Hiroshi Murase (Nagoya University)
49. Manipulation-skill Assessment from Videos with Spatial Attention Network
Zhenqiang Li (The University of Tokyo), Yifei Huang (The University of Tokyo), Minjie Cai (Hunan University), Yoichi Sato (The University of Tokyo)
50. The applicability of Cycle GANs for pupil and eyelid segmentation, data generation and image refinement
Wolfgang Fuhl (University of Tübingen), David Geisler (University of Tübingen), Wolfgang Rosenstiel (University of Tübingen), Enkelejda Kasneci (University of Tübingen)
51. EPIC-Tent: An Egocentric Video Dataset for Camping Tent Assembly
Youngkyoon Jang (University of Bristol), Brian Sullivan (University of Bristol), Casimir Ludwig (University of Bristol), Iain Gilchrist (University of Bristol), Dima Damen (University of Bristol), Walterio Mayol-Cuevas (Bristol University)
52. First-person camera system to evaluate Tender Dementia-care skill
Atsushi Nakazawa (Kyoto University), Miwako Honda (Tokyo Medical Center)
53. Assessment of Optical See-Through Head Mounted Display Calibration for interactive Augmented Reality
Giorgio Ballestin (University of Genoa, Italy), Manuela Chessa (University of Genoa, Italy), Fabio Solari (University of Genoa, Italy)
Extended Abstracts
54. Early Estimation of User's Intention of Tele-Operation Using Object Affordance and Hand Motion in a Dual First-Person Vision
Motoki Kojima (Toyohashi University of Technology), Jun Miura (Toyohashi University of Technology)
55. Dog-Centric Activity Recognition by Integrating Appearance, Motion and Sound
Tsuyohito Araki (University of Electro-Communications, Tokyo), Ryunosuke Hamada (Tohoku University), Kazunori Ohno (Tohoku University), Keiji Yanai (University of Electro-Communications, Tokyo)
Invited Papers from ICCV / ICCV Workshops
56. EPIC-Fusion: Audio-Visual Temporal Binding for Egocentric Action Recognition
Evangelos Kazakos (University of Bristol, UK), Arsha Nagrani (University of Oxford, UK), Andrew Zisserman (University of Oxford, UK), Dima Damen (University of Bristol, UK)
57. Fine-Grained Action Retrieval Through Multiple Parts-of-Speech Embeddings
Michael Wray (University of Bristol, UK), Diane Larlus (Naver Labs Europe), Gabriela Csurka (Naver Labs Europe), Dima Damen (University of Bristol, UK)
58. Retro-Actions: Learning 'Close' by Time-Reversing 'Open' Videos
Will Price (University of Bristol, UK), Dima Damen (University of Bristol, UK)
59. What Would you Expect? Anticipating Egocentric Actions with Rolling-Unrolling LSTMs and Modality Attention
Antonino Furnari (University of Catania, Italy), Giovanni Maria Farinella (University of Catania, Italy)
60. xR-EgoPose: Egocentric 3D Human Pose from an HMD Camera
Denis Tome (University College London & Facebook Reality Lab), Patrick Peluse (Facebook Reality Lab), Lourdes Agapito (University College London), Hernan Badino (Facebook Reality Lab)
61. Ego-Pose Estimation and Forecasting as Real-Time PD Control
Ye Yuan (Carnegie Mellon University), Kris Kitani (Carnegie Mellon University)
62. Grounded Human-Object Interaction Hotspots from Video
Tushar Nagarajan (UT Austin), Christoph Feichtenhofer (Facebook AI Research), Kristen Grauman (Facebook AI Research)
11:00 - 11:45 Invited Talk by Oswald Lanz (Fondazione Bruno Kessler, Trento, Italy)
11:45 - 12:30 Oral Session II
Learning Spatiotemporal Attention for Egocentric Action Recognition
Minlong Lu (Simon Fraser University), Danping Liao (Zhejiang University), Ze-Nian Li (Simon Fraser University)
Seeing and Hearing Egocentric Actions: How Much Can We Learn?
Alejandro Cartas (University of Barcelona), Jordi Luque, Petia Radeva, Carlos Segura, Mariella Dimiccoli
Multitask Learning to Improve Egocentric Action Recognition
George Kapidis (Utrecht University), Ronald Poppe (Utrecht University), Elsbeth van Dam (Noldus IT), Lucas Noldus (Noldus IT), Remco C. Veltkamp (Utrecht University)
12:30 - 12:45 Closing Remarks