NTCIR-Lifelog is a core task of the NTCIR-18 conference. This core task aims to advance the state-of-the-art research in lifelog analytics and retrieval as an application of information retrieval. This NTCIR lifelog task is one of two international lifelog-related challenges, the other being the annual ACM Lifelog Search Challenge.
At NTCIR-18, the lifelog task consists of three distinct sub-tasks.
- LSAT - Lifelog Semantic Access sub-task is a known-item search task that can be undertaken in a non-interactive (automatic) manner. In this sub-task, a textual, visual, or text-visual query is given, and participants have to retrieve a number of specific moments from a lifelogger's life that match the query. We define moments as semantic events, or activities, that happened throughout the day. All previous editions of the NTCIR-Lifelog task have included an LSAT sub-task.
- LIT - Lifelog Insight sub-task explores knowledge mining and visualisation of lifelogs by setting general challenges for participants to address. Specifically, this year we are focusing on the detection of a set of activities of daily living from the 18-month dataset. The sub-task follows the idea of the Quantified Self movement, which focuses on visualising knowledge mined from self-tracking data to provide "self-knowledge through numbers". Participants are requested to generate new types of visualisations and insights about the life of the lifelogger by producing a themed diary or themed insights. Submissions are not evaluated in the traditional sense; rather, they will form the basis of an interactive session at NTCIR-18 and can be described in the accompanying paper.
- LQAT - Lifelog Question Answering sub-task encourages comparative progress on the important topic of question answering over lifelogs. For this sub-task, we will use the existing LSC collection (the same as for the LSAT & LIT sub-tasks) and will release a set of topics for which we seek the submission of a direct answer in text form. The answers will be judged manually to account for small variances between answers.
----
Related Links:
- For those seeking information on previous editions of the task, we have archived the earlier task web pages:
- NTCIR12-Lifelogging web pages
- NTCIR13-Lifelogging web pages
- NTCIR14-Lifelogging web pages
- NTCIR16-Lifelogging web pages
- NTCIR17-Lifelogging web pages
How to Participate:
- In order to take part in the NTCIR-18 Lifelog Task, you should first register for NTCIR-18 at this link. Then you should download the dataset if you do not already have it. Following this, you should access the topic lists for the LSAT and LQAT sub-tasks and submit your runs according to the instructions on this website. Results of your submissions will then be provided to you, and you will submit a draft paper by 1 March 2025. Final versions of the papers are due on 1 May 2025. The conference takes place at NII in Tokyo from 10-13 June 2025. It is anticipated that most participants will attend in person, but online participation will also be facilitated.
Recommended Reading:
- The following is a list of publications related to the NTCIR-Lifelog Benchmarking Activity:
- [1] Experiments in Lifelog Organisation and Retrieval at NTCIR. Cathal Gurrin, Hideo Joho, Frank Hopfgartner, Liting Zhou, Rami Albatal, Graham Healy, Duc-Tien Dang Nguyen. Evaluating Information Retrieval and Access Tasks. 2020. Springer.
- [2] Comparing Approaches to Interactive Lifelog Search at the Lifelog Search Challenge (LSC2018). Cathal Gurrin, Klaus Schoeffmann, Hideo Joho, Andreas Leibetseder, Liting Zhou, Aaron Duane, Duc-Tien Dang-Nguyen, Michael Riegler, Luca Piras, Minh-Triet Tran, Jakub Lokoč, Wolfgang Hürst. ITE Transactions on Media Technology and Applications. 7.2 (2019)
- Full details of the paper submission process and the templates to use will be made available in early 2025. Please note that the deadlines for the Lifelog task are separate from the default NTCIR deadlines. Please note also that the paper limit is 8 pages.
- When referring to the NTCIR18-Lifelog6 task, please cite the following publication (details in February 2025). Please note that these details are likely to be updated closer to the actual publication date. We will circulate a draft version of this paper once all runs are received and processed. At the moment, you can assume that the paper will describe the challenge and its sub-tasks, the dataset, and a comparison of the submitted runs.