Each participating group should contact the organisers to get the collection used in the task. The required forms must be filled in as per the dataset instructions.
Full details of submission will be provided in September 2024.
The official topics for the LSAT and LQAT sub-tasks are available for download.
LSAT Submissions
A submitted run for the LSAT task takes the form of a single CSV file per run. Please note that each group can submit up to 10 runs, each as an individual file. The submission files should be sent (one email per group) to ntcirlifelog@gmail.com by the due date with the title 'NTCIR-Lifelog LSAT Submission'. The submission file should be named as follows: GroupID-RunID-[Interactive or Automatic].txt, where GroupID is the registration ID of your group at NTCIR, RunID is the identifier of the run (e.g. DCULSAT01, DCULSAT02, etc.), and the final label indicates whether the run is Automatic or Interactive.
For every topic, every image considered relevant should have one line in the CSV file. For some topics there will be only one relevant item (one line in the submission); for others there will be many relevant items (many lines in the submission), up to a maximum of 100. It is also possible that no relevant items are found for a topic, in which case the file should contain no entries for that topic.
The format of the CSV file for an automatic run would be as follows:
GROUP-ID, RUN-ID, TOPIC-ID, IMAGE-ID, SCORE (in decreasing order of relevance)
...
DCU, DCULSAT01, 16001, u1_2016-08-15_112559, 1.0
DCU, DCULSAT01, 16001, u1_2016-08-15_120354, 1.0
...
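A minimal sketch of writing one run file in this format (in Python) is given below. The ranked_results dictionary, the group ID DCU, the run ID DCULSAT01 and the output file name are illustrative assumptions; only the column order and the 100-image cap follow the description above.

# Minimal sketch: write an LSAT run file in the CSV format described above.
# ranked_results, the group/run IDs and the file name are hypothetical examples.
ranked_results = {
    "16001": [("u1_2016-08-15_112559", 1.0), ("u1_2016-08-15_120354", 0.9)],
    # ... one entry per topic for which relevant images were found
}

with open("DCU-DCULSAT01-Automatic.txt", "w") as f:
    for topic_id, images in ranked_results.items():
        # Images are assumed to be already sorted by decreasing score;
        # at most 100 lines are written per topic.
        for image_id, score in images[:100]:
            f.write(f"DCU, DCULSAT01, {topic_id}, {image_id}, {score}\n")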
In total there will be 24 topics for the lifelog LSAT task. There are two types of topics, ad-hoc and known-item; all topics are new and have not previously been used in the LSC'22, LSC'23 or LSC'24 competitions.
- ADHOC - topics that may have many relevant moments in the collection.
- KNOWNITEM - topics with one (or a few) relevant moments in the collection.
The trec-eval programme will be employed to generate result scores for each run. Relevance judgements will be generated using a pooled approach, whereby human judges will manually assess each submitted image for each topic, up to a maximum of 100 images per topic, per run, per participant. The relevance judgements will be binary and will be informed by the immediate context of each image where necessary.
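Since trec-eval expects runs in the standard TREC format (topic, Q0, document ID, rank, score, run tag) rather than the CSV layout above, participants who wish to score their own runs locally would need a small conversion step. The sketch below is a hypothetical converter; the function name, file paths and the idea of local scoring are assumptions, not part of the official evaluation pipeline.

import csv

def csv_to_trec_run(csv_path, out_path, run_tag):
    # Read the submission CSV (GROUP-ID, RUN-ID, TOPIC-ID, IMAGE-ID, SCORE)
    # and group the rows by topic.
    by_topic = {}
    with open(csv_path, newline="") as f:
        for group_id, run_id, topic_id, image_id, score in csv.reader(f, skipinitialspace=True):
            by_topic.setdefault(topic_id, []).append((image_id, float(score)))
    with open(out_path, "w") as out:
        for topic_id, images in sorted(by_topic.items()):
            # Rank images within each topic by decreasing score, as trec-eval expects.
            images.sort(key=lambda pair: -pair[1])
            for rank, (image_id, score) in enumerate(images, start=1):
                out.write(f"{topic_id} Q0 {image_id} {rank} {score} {run_tag}\n")

# Usage (file names are illustrative):
# csv_to_trec_run("DCU-DCULSAT01-Automatic.txt", "DCULSAT01.run", "DCULSAT01")
# trec_eval lsat_qrels.txt DCULSAT01.run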
LIT Submissions
The aim of this subtask is to gain insights into the lifelogger's lifestyle and lived experience. It follows the idea of the Quantified Self movement that focuses on the visualization of knowledge mined from self-tracking data to provide "self-knowledge through numbers". Participants are requested to generate new types of visualisations and insights about the life of the lifelogger by generating a themed diary or themed insights. The submissions are not evaluated in the traditional sense, but will form the basis of an interactive session at NTCIR-18 and can be described in the accompanying paper and displayed on a poster during NTCIR-18 in Tokyo.
LQAT Submissions
A submitted run for the LQAT subtask takes the form of a single CSV file per run. Please note that each group can submit up to 10 runs, each as an individual file. The submission files should be sent (one email per group) to ntcirlifelog@gmail.com by the due date with the title 'NTCIR-Lifelog LQAT Submission'. The submission file should be named as follows: GroupID-RunID-[Interactive or Automatic].txt, where GroupID is the registration ID of your group at NTCIR, RunID is the identifier of the run (e.g. DCULQAT01, DCULQAT02, etc.), and the final label indicates whether the run is Automatic or Interactive. Automatic runs are runs in which the query is presented to the system and the results are generated with no further user input. Interactive runs, on the other hand, are runs in which a user is an active participant in the result generation process.
For every topic, a textual answer should be provided, with one line per answered topic in the CSV file. If no answer is found for a topic, the line for that topic is simply omitted.
The format of the CSV file for an automatic run would be as follows:
GROUP-ID, RUN-ID, TOPIC-ID, "ANSWER"
...
DCU, DCULQAT01, 16001, "green"
DCU, DCULQAT01, 16002, "friday"
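A minimal sketch of producing a run file in this format (in Python) is given below. The answers dictionary, the group ID DCU, the run ID DCULQAT01 and the output file name are illustrative assumptions; only the one-line-per-answered-topic layout follows the description above.

# Minimal sketch: write an LQAT run file in the CSV format described above.
# The answers dictionary, the group/run IDs and the file name are hypothetical examples.
answers = {
    "16001": "green",
    "16002": "friday",
    # topics with no answer are simply omitted
}

with open("DCU-DCULQAT01-Automatic.txt", "w") as f:
    for topic_id, answer in answers.items():
        f.write(f'DCU, DCULQAT01, {topic_id}, "{answer}"\n')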
Following submission, each participating team must prepare a paper describing their experimental approach and results. The organisers will prepare their own Overview Paper, which should be referenced by all participants.