###################################################
CALL FOR TASK PARTICIPATION
NTCIR-13 Lifelog Access and Retrieval (Lifelog) Task
Twitter: https://twitter.com/NTCIRLifelog
Registration: http://ntcir.nii.ac.jp/NTCIR13Regist/
Important Dates:
2016
29 Nov: Phase I dry-run data release
19 Dec: Phase I dry-run submissions
29 Dec: Phase I dry-run results
2017
21 Jan: Phase I full data release & formal run concepts/topics release
15 Mar: Phase I: formal run due
01 Apr: Phase I: formal run results
01 May: Phase II: data, dry-run topics release
14 Jun: Phase II Search API open
30 Jun: Phase II dry-run submissions
30 Jun: Phase II: formal run topics release
21 Jul: Phase II registration deadline
15 Aug: Phase II: formal run due
01 Sep: Phase II: formal run results release
01 Oct: Draft papers due
01 Nov: Camera ready due
=== OVERVIEW ===
Digital recording of life experience (lifelogging) is gaining popularity as new types of wearable sensors can generate a rich archive of life experience. Such wearable sensors include wearable cameras, fitness trackers and various mobile devices. The objective of the NTCIR Lifelog task is to encourage research in this field and to understand the current state-of-the-art in lifelog retrieval.
A real-world lifelog dataset will be distributed to task participants. This dataset will consist of at least 45 days of data from two active lifeloggers. Much of the data will be recorded 24/7, though multimedia data will only be recorded during waking hours. The dataset will contain wearable camera images, biometrics records, human activity logs, and computer usage logs. A set of real-world topics will accompany the dataset.
Task participants must submit a paper to the NTCIR-13 Conference, and at least one member of each participating group must attend either the conference at NII, Tokyo, or a satellite event in Europe in December 2017 to present their work.
=== TASKS ===
NTCIR-13 Lifelog includes four subtasks; participants may take part in any of them independently.
Phase I
(November 2016 - March 2017)
Lifelog Annotation Task (LAT)
The aim of this subtask is to explore the most effective computer vision algorithms for accurately describing the visual content of lifelog images. A small ontology of important lifelog concepts (activities, environments and objects) will be generated, and the task will require the development of automated approaches to annotating these concepts. Both image content and the provided metadata and external evidence sources can be used to generate the annotations. Participants with the best-performing annotation outputs are encouraged to release them for other participants to use in the other three subtasks.
Phase II
(April - September 2017)
Lifelog Semantic Access Task (LSAT)
In this subtask, participants must retrieve specific moments in a lifelogger's life. We define moments as semantic events, or activities, that the individual was involved in. The task is best compared to a known-item search task, as known from TRECVID.
Example search tasks include:
- Find the moment(s) where I was boarding an A380.
- Find the moment(s) where I am in my kitchen.
- Find the moment(s) where I am playing with my phone.
- Find the moment(s) where I am preparing breakfast.
Tasks can be undertaken in an interactive (user-in-the-loop) or automatic (automatic query processing) manner. Submissions should indicate the time (in minutes) of all instances of ranked results that best match the topic description; for interactive runs, the time taken to find each result is also required. The main evaluation metric is NDCG@N. Depending on feedback received from participants during the dry run, these metrics may be revised.
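For participants unfamiliar with the metric, the following is a minimal sketch of how NDCG@N is conventionally computed from a ranked list of graded relevance judgements (the function names and the exact gain/discount formulation here are illustrative assumptions; the official evaluation tooling is provided by the organisers):

```python
import math

def dcg(relevances):
    # Discounted cumulative gain: rel_i / log2(rank_i + 1), ranks starting at 1.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg_at_n(ranked_relevances, n):
    # NDCG@N: DCG of the top-N ranked results, normalised by the DCG of
    # an ideal (relevance-sorted) ranking of the same judgements.
    top = ranked_relevances[:n]
    ideal = sorted(ranked_relevances, reverse=True)[:n]
    ideal_dcg = dcg(ideal)
    return dcg(top) / ideal_dcg if ideal_dcg > 0 else 0.0
```

A perfect ranking scores 1.0, and placing relevant results lower in the list reduces the score toward 0.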
Lifelog Insight Task (LIT)
The aim of this subtask is to gain insights into the lifelogger's life. It follows the idea of the Quantified Self movement, which focuses on visualising knowledge mined from self-tracking data to provide "self-knowledge through numbers". Participants are requested to generate new types of visualisations and insights into the lives of the lifeloggers in the form of a themed diary. This subtask is not evaluated in the traditional sense; instead, participants will be expected to present their work in a special session at NTCIR. An event segmentation will be provided for the data.
Example tasks include:
- Provide insights on my social interactions or diet.
- Provide insights on the relationship between my diet, exercise and my reported blood sugar levels
- Provide insights on the relationship between my work schedule and my reported mood
Event Segmentation Task (LEST)
The aim of this subtask is to examine approaches to event segmentation of continuous lifelog stream data. Events have long been proposed as the standard unit of retrieval, and a number of suggestions have been made as to how to segment lifelog data into events. The proposal here is to release a training set of manually segmented lifelog data and to evaluate how well participants can segment a test set. A sliding-window facility will be used to calculate the accuracy of submissions.
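As an illustration of how a sliding-window comparison of segmentations might work, the sketch below scores predicted event boundaries against ground-truth boundaries with a fixed tolerance window. The function name, timestamp representation, and window size are hypothetical; the official tolerance and scoring procedure are defined by the organisers:

```python
def boundary_accuracy(predicted, ground_truth, window=300):
    """Fraction of ground-truth event boundaries (timestamps in seconds)
    matched by a predicted boundary within +/- `window` seconds.
    Each predicted boundary may match at most one ground-truth boundary."""
    matched = 0
    used = set()
    for gt in ground_truth:
        for i, p in enumerate(predicted):
            if i not in used and abs(p - gt) <= window:
                matched += 1
                used.add(i)
                break
    return matched / len(ground_truth) if ground_truth else 1.0
```

For example, with a 300-second window, a prediction at t=100 would match a true boundary at t=120, while a true boundary at t=5000 with no nearby prediction would count as missed.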
Please visit the NTCIR Lifelog website for more information about the task:
http://ntcir-lifelog.computing.dcu.ie/
=== ORGANIZERS ===
Cathal Gurrin (Dublin City University, Ireland)
Hideo Joho (University of Tsukuba, Japan)
Frank Hopfgartner (University of Glasgow, UK)
Liting Zhou (Dublin City University, Ireland)
Rami Albatal (Heystaks, Ireland)
=== CONTACT ===
ntcir-lifelog at computing dot dcu dot ie