EPIC@CVPR2020
The Sixth International Workshop on Egocentric Perception, Interaction and Computing
Online, June 15 (AM), 2020

MTurk Funds for Egocentric Datasets

Thanks to a generous donation from Amazon’s Mechanical Turk and Augmented AI (A2I) teams, we are pleased to offer a new opportunity to generate data for egocentric vision research.

As part of the 6th International Workshop on Egocentric Perception, Interaction and Computing (EPIC@CVPR2020), to be held online on 15th June, we are issuing a call for proposals to create new public data and/or annotations useful for egocentric vision research.

Egocentric visual perception concerns imagery captured from a platform, human or artificial, that senses the world from an outward-facing perspective. Examples include the EPIC Kitchens dataset or data from robotic platforms interacting with the environment.

Proposals may consider annotating newly collected data or using existing data that has not been annotated in the proposed way.

Importantly, any data that is produced with the funds must be made publicly available to further support research into egocentric perception.

Recipients

We received a number of high-quality proposals, so the funding was split among the following recipients:

Classifying Cycling Hazards in Egocentric Data
Jayson Haebich (City University of Hong Kong); Christian Sandor (City University of Hong Kong); Alvaro Cassinelli (City University of Hong Kong)
This dataset contains 5-10 second egocentric video segments of hazardous cycling situations and associated IMU data. These videos are annotated with classifications of the cause of the hazard and the type of surface the cyclist is travelling on. The dataset is available here.

Understanding Dyadic Interactions from Egocentric Multi-Views
Cristina Palmero (Universitat de Barcelona)*; Javier Selva (Universitat de Barcelona); Zejian Zhang (Universitat de Barcelona); Julio Cezar S. Silveira Jacques Junior (Universitat Oberta de Catalunya (UOC) & Computer Vision Center (CVC)); David Leiva (Universitat de Barcelona); Sergio Escalera (Computer Vision Center (UAB) & University of Barcelona)
A multi-view egocentric dataset of non-scripted, face-to-face dyadic interactions. It consists of recordings and profiling data of 147 subjects, distributed across 188 dyadic sessions, performing competitive and collaborative tasks under different behavior elicitation conditions and cognitive workloads.

Pixel-Wise Labelling of RGB-D THU-READ dataset
Ester Gonzalez-Sosa (Nokia Bell Labs)*; Diego Gonzalez-Morin (Nokia Bell Labs); Andrija Gajic (Universidad Autonoma de Madrid); Marcos Escudero-Viñolo (Universidad Autónoma de Madrid); Alvaro Villegas (Nokia Bell Labs)
Pixel-Wise THU-READ Labelling contains segmentation masks for a representative subset of the original 960 RGB-D egocentric videos, covering 35 classes: the human body and 34 different object classes. For more information, please contact ester.gonzalez@nokia-bell-labs.com

Call for Proposals

We invite proposals of up to 3 pages covering the following:

  1. Summary. A summary of the proposal.
  2. What’s new and Why? What is new about this data/annotation and why it will be of use to the community.
  3. How Amazon Mechanical Turk (AMT) will be used. Describe in as much detail as possible what MTurk workers will be asked to do, including how you plan to handle response ambiguity and data quality (a minimal illustrative sketch follows this list). As supporting evidence, you may also reference any existing datasets the authors have already released that used AMT.
  4. Impact. Potential impact of the new data.
  5. Storage. Where the new data would be stored.
  6. Timings. When the data would be made publicly available (estimated in days after completing the AMT phase).
  7. License. Describe any license that is expected to be attached to the data. We are looking to fund work that is as permissive as possible for research purposes so as to help the community.
  8. Confirmation. A statement confirming that the authors of the proposal have permission to release any collected data and that the data will be released shortly after annotation/collection.
  9. Funds. Total funds requested in USD.
  10. Names and affiliations. The names and affiliations of those responsible for the proposal, indicating a named principal who will also serve as the corresponding person.
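
Regarding point 3, a common way to handle response ambiguity is to collect several independent judgments per item and accept a label only when enough workers agree. The sketch below is purely illustrative (the labels and agreement threshold are hypothetical, not a required scheme): it shows a plain majority vote over redundant MTurk judgments in Python.

    # Minimal sketch: majority-vote aggregation of redundant MTurk judgments.
    # Labels and the agreement threshold are hypothetical placeholders; proposals
    # should describe their own quality-control scheme (e.g. gold questions).
    from collections import Counter

    def aggregate(judgments, min_agreement=2):
        """Return (label, count) if enough workers agree, else None (ambiguous item)."""
        label, count = Counter(judgments).most_common(1)[0]
        return (label, count) if count >= min_agreement else None

    # Three workers annotated the same clip; two agree, so the item is accepted.
    print(aggregate(["pothole", "pothole", "vehicle"]))     # -> ('pothole', 2)
    # No label reaches the threshold, so the item is flagged for re-annotation.
    print(aggregate(["pothole", "vehicle", "pedestrian"]))  # -> None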

Submission

Submissions will be managed through the EPIC@CVPR2020 CMT website. Please use the EPIC Workshop - Dataset Proposal track on CMT.

Members of the Panel

Hazel Doughty, Walterio Mayol-Cuevas, Dima Damen, Antonino Furnari, Giovanni Maria Farinella, David Crandall, Kristen Grauman.

Currently available funds and expected number of proposals to be funded

We have currently been given 10K USD, and while we are seeking further funds, we expect the current amount to help annotate one large dataset or several smaller ones.

Judging Criteria

No workshop organizer or panel member may submit a proposal. Any actual or perceived conflict of interest with a workshop organizer or panel member must be declared in the proposal.

Proposals received by the deadline will be shortlisted by workshop organizers/panel.

Shortlisted proposals will be offered the opportunity to give a short presentation during the EPIC@CVPR workshop (either in person or remotely), where the panel may ask clarifying questions. The panel will be tasked with considering the potential impact on the egocentric community and with maximizing the number of proposals funded given the available resources.

The panel reserves the right to declare the competition void if, in its opinion, no proposal meets the expected level of potential impact or provides sufficient detail to be judged. The panel may also delay the final decision until further details are available, make a partial award to any proposal, or encourage similar proposals to be merged as a condition of funding.

By submitting a proposal, you and anyone associated with it agree to abide by the panel’s decision. You and anyone associated with it agree that the panel’s decision is final, and that neither the panel nor the workshop or conference organizers will be liable for anything or entertain any follow-up discussions.

FAQ

If our proposal is accepted, how will we receive the funds?

You will receive access to an Amazon Mechanical Turk account containing the awarded funds.

How should the MTurk tasks be created?

Authors are responsible for building the MTurk tasks and can use any external tools, as long as the task can be run using Amazon Mechanical Turk.
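
As an illustration only, an externally built task can be published to MTurk programmatically, for example via the AWS SDK. The sketch below assumes Python with boto3 and a hypothetical annotation page hosted at https://example.com/annotate; the title, reward, and other parameters are placeholders, not recommendations.

    # Minimal sketch: publishing an externally hosted annotation task as an MTurk HIT.
    # Assumes boto3 is installed and AWS credentials are configured; the URL and
    # reward below are hypothetical placeholders.
    import boto3

    mturk = boto3.client("mturk", region_name="us-east-1")

    # An ExternalQuestion points MTurk at a task page hosted elsewhere, so any
    # external tool can be used as long as the task runs inside Mechanical Turk.
    external_question = """
    <ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
      <ExternalURL>https://example.com/annotate?clip_id=0001</ExternalURL>
      <FrameHeight>800</FrameHeight>
    </ExternalQuestion>
    """

    response = mturk.create_hit(
        Title="Label actions in a short egocentric video clip",
        Description="Watch a 5-10 second clip and select the action being performed.",
        Keywords="video, annotation, egocentric",
        Reward="0.10",                     # USD per assignment (placeholder)
        MaxAssignments=3,                  # several workers per clip for quality control
        LifetimeInSeconds=7 * 24 * 3600,   # how long the HIT stays available
        AssignmentDurationInSeconds=600,   # time allotted per assignment
        Question=external_question,
    )
    print("Created HIT:", response["HIT"]["HITId"])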

For annotating newly collected data, must the data already be collected?

The data need not have been collected already. If it has not been collected by the time of shortlisting, we expect a plan to be in place for data collection, and we will release the funds once the task is ready to be submitted to MTurk.

For existing datasets, should we contact the original authors?

Authors need to self-certify in the proposal (point 8) that they have the right to use and release whatever data (existing or new) they plan to use. In many cases, public datasets are released for research purposes, which in general covers their use provided it complies with their license. It is also possible that the data released by the authors consists only of the annotations (as in the case of YouTube videos) and not the original data, which may also simplify things.