Each participating group should contact the organisers to obtain the collection used in the task. The required forms must be completed as per the dataset instructions. Once the dataset has been downloaded, the participating team
can index the content in whatever form it wishes. There is no restriction on the type of data processing, enrichment or indexing process to be employed. Some participants may choose to index the provided metadata into a conventional
inverted index, while others may choose to enrich the provided metadata using automated or semi-automated means and then index the data according to their preference.
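As an illustration of the first of these options, the sketch below builds a toy inverted index over per-image metadata in Python; the metadata layout (a mapping from image IDs to descriptive terms) is purely hypothetical and is not the format of the provided collection.
\begin{verbatim}
from collections import defaultdict

def build_inverted_index(metadata):
    """Build a term -> set(image_id) inverted index.

    `metadata` is assumed to map an image ID (e.g. "u1_2016-08-15_112559")
    to a list of descriptive terms; this layout is hypothetical and only
    illustrates the idea of indexing the provided metadata.
    """
    index = defaultdict(set)
    for image_id, terms in metadata.items():
        for term in terms:
            index[term.lower()].add(image_id)
    return index

def search(index, query):
    """Return image IDs containing every query term (simple boolean AND)."""
    terms = [t.lower() for t in query.split()]
    results = [index.get(t, set()) for t in terms]
    return set.intersection(*results) if results else set()

# Hypothetical usage:
# index = build_inverted_index({"u1_2016-08-15_112559": ["desk", "apple"]})
# print(search(index, "apple desk"))
\end{verbatim}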
Since there is only one subtask in NTCIR-16 Lifelog-4, the participant must decide between two types of run to submit. Each run must be either automatic or interactive and must be labeled accordingly. The unit of retrieval (ranking)
is the image ID (without the JPG file extension).
- Automatic runs assume that there is no user involvement in the search process beyond specifying the initial query, which can only happen once for each topic. The search system generates a ranked list of up to 100 images for
each topic. There is no time limit on how long an automatic run may take. We assume that any human involvement in generating a query from the topic is a once-off process that is neither iterative nor dependent on the
results of a previous execution of a query for that topic (i.e. no human-influenced relevance feedback mechanism can be applied to an automatic run). The submission file format includes SCORE to capture the score of every
image as returned by the ranking algorithm. For automatic runs the SECONDS-ELAPSED column should always have a value of 0, since it is only relevant for interactive runs.
- Interactive runs assume that there is a user involved in the search process who generates a query and selects which moments are considered correct for each topic. This may be a single phase, or may involve multiple phases of relevance feedback or query
reformulation. In interactive runs, the maximum time allowed for any topic is 300 seconds. The submission file format includes SECONDS-ELAPSED to capture the time taken to find every moment. As with automatic runs,
no more than 100 images may be submitted for any topic. In interactive runs, the SCORE value should be equal to 1. The SECONDS-ELAPSED value should be equal to the number of seconds (from zero) that
it took the user to find that particular item in the submission. For example, if a user in an interactive run found one potentially relevant item at 5 seconds, another at 15 seconds and a third at 255 seconds, then there would
be three lines in the CSV file for that topic, each with a different value in the SECONDS-ELAPSED column. It is important to record this value accurately, since it will be used to calculate run performance at different
time cutoffs (e.g. 10 seconds, 60 seconds, etc.); see the sketch after this list.
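To make the role of SECONDS-ELAPSED concrete, the following minimal Python sketch (not the official scoring script) counts how many submitted items for a topic were found within a set of example time cutoffs; the cutoff values are illustrative.
\begin{verbatim}
def items_within_cutoffs(elapsed_seconds, cutoffs=(10, 60, 120, 300)):
    """Count submitted items found within each time cutoff.

    `elapsed_seconds` is a list of SECONDS-ELAPSED values for one topic
    of an interactive run; the cutoffs are examples, not the official ones.
    """
    return {c: sum(1 for s in elapsed_seconds if s <= c) for c in cutoffs}

# Using the example from the text (items found at 5 s, 15 s and 255 s):
# items_within_cutoffs([5, 15, 255]) -> {10: 1, 60: 2, 120: 2, 300: 3}
\end{verbatim}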
A submitted run for the LSAT task takes the form of a single CSV file per run. Please note that each group can submit up to 10 runs, each as an individual file. The submission files should be sent (one email per group) to cathal.gurrin@dcu.ie by the
due date with the title 'NTCIR-Lifelog LSAT Submission'. The submission file should be named as follows: GroupID-RunID-[Interactive or Automatic].txt, where GroupID is the registration ID of your group at NTCIR, RunID is the identifier
of the run (e.g. DCULSAT01 or DCULSAT02, etc.), and the final label indicates whether the run is Automatic or Interactive. For every topic, every image considered relevant should have one line in the CSV file. For some topics there will be only one relevant item (one
line in the submission), for others there will be many relevant items (many lines in the submission), up to 100. It is also possible that no relevant items are found for a topic, in which case there should be no entry in the file for that topic.
The format of the CSV file for an automatic run would be as follows:

\begin{verbatim}
GROUP-ID, RUN-ID, TOPIC-ID, IMAGE-ID, SECONDS-ELAPSED, SCORE
...
DCU, DCULSAT01, 16001, u1_2016-08-15_112559, 0, 1.0
DCU, DCULSAT01, 16001, u1_2016-08-15_120354, 0, 1.0
...
\end{verbatim}

The format of the CSV file for an interactive run would be as follows:

\begin{verbatim}
GROUP-ID, RUN-ID, TOPIC-ID, IMAGE-ID, SECONDS-ELAPSED, SCORE
...
DCU, DCULSAT01, 16001, u1_2016-08-15_112559, 33, 1.0
DCU, DCULSAT01, 16001, u1_2016-08-15_120354, 54, 1.0
DCU, DCULSAT01, 16001, u1_2016-08-15_120412, 243, 1.0
...
\end{verbatim}
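As a convenience, the following is a minimal Python sketch of writing a run in this CSV format; the function name, argument layout and example filename are illustrative only and are not prescribed by the task.
\begin{verbatim}
def write_submission(path, group_id, run_id, rows, interactive=False):
    """Write one LSAT run in the submission CSV format described above.

    `rows` is assumed to be a list of (topic_id, image_id, seconds_elapsed,
    score) tuples produced by your own retrieval system; this layout is
    illustrative, not part of the task definition.
    """
    with open(path, "w") as f:
        for topic_id, image_id, seconds, score in rows:
            if not interactive:
                seconds = 0      # automatic runs: SECONDS-ELAPSED is always 0
            else:
                score = 1.0      # interactive runs: SCORE is always 1
            fields = [group_id, run_id, topic_id, image_id, seconds, score]
            f.write(", ".join(str(x) for x in fields) + "\n")

# Hypothetical usage for an automatic run:
# write_submission("DCU-DCULSAT01-Automatic.txt", "DCU", "DCULSAT01",
#                  [(16001, "u1_2016-08-15_112559", 0, 0.93)])
\end{verbatim}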
In total there are 48 topics for this lifelog LSAT task, and they are available now for download. There are two types of topics, adhoc and knownitem:
- ADHOC - topics that may have many relevant moments in the collection. These topics are all new.
- KNOWNITEM - topics with one (or a few) relevant moments in the collection. These topics are reused from the LSC'21 challenge.
The format of a topic is as follows:
- ID - a unique identifier for every topic.
- Type - identifies each topic as being either adhoc or knownitem.
- UID - a user identifier. Always u1 (user 1) for this collection.
- Title - a title of the query, used for identification.
- Description - a descriptive query that represents the information need of the user.
- Narrative - additional details about the information need that help to define what is correct and incorrect.
Therefore a sample query would be as follows:
\begin{verbatim}
ID:          16000
Type:        knownitem
UID:         u1
Title:       Eating fruit
Description: Find the time when I was eating both mandarins and apples
             at my work desk.
Narrative:   The lifelogger was at work and eating mandarin oranges and
             apples at his desk. Any moment showing such activities is
             considered relevant. Eating both fruits in a location other
             than the lifelogger's desk is not considered to be relevant.
\end{verbatim}
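For illustration, a topic can be held in a simple container such as the following Python dataclass; since the distribution format of the topic file is not specified here, how the fields are loaded is left open and the class is only a convenience.
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class Topic:
    """One LSAT topic, mirroring the fields described above."""
    id: int
    type: str        # "adhoc" or "knownitem"
    uid: str         # always "u1" for this collection
    title: str
    description: str
    narrative: str

# The sample topic from the text:
sample = Topic(
    id=16000,
    type="knownitem",
    uid="u1",
    title="Eating fruit",
    description="Find the time when I was eating both mandarins "
                "and apples at my work desk.",
    narrative="The lifelogger was at work and eating mandarin oranges "
              "and apples at his desk...",
)
\end{verbatim}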
The trec_eval programme will be employed to generate result scores for each run. Relevance judgements will be generated using a pooled approach whereby human judges will manually evaluate each submitted image for each topic, up to a maximum
of 100 images per topic, per run, per participant. The relevance judgements will be binary and will be informed by the immediate context of each image where important.
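For teams who wish to score their own runs locally before submission, one possible approach (an assumption, not the official evaluation procedure) is to convert the submission CSV into the standard TREC run format and pass it to trec_eval together with a qrels file; the file names below are placeholders.
\begin{verbatim}
def csv_to_trec_run(csv_path, run_path):
    """Convert an LSAT submission CSV into the standard TREC run format
    (topic Q0 doc_id rank score run_tag), so it can be scored locally with
    trec_eval, e.g.:  trec_eval qrels.txt run.txt
    The qrels file is produced by the organisers and is not distributed here.
    """
    rows = []
    with open(csv_path) as f:
        for line in f:
            group, run, topic, image, seconds, score = \
                [x.strip() for x in line.split(",")]
            rows.append((topic, image, float(score), run))
    with open(run_path, "w") as out:
        rank = {}
        # Rank images per topic by descending score, as trec_eval expects.
        for topic, image, score, run in sorted(rows, key=lambda r: (r[0], -r[2])):
            rank[topic] = rank.get(topic, 0) + 1
            out.write(f"{topic} Q0 {image} {rank[topic]} {score} {run}\n")
\end{verbatim}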
Following submission, each participating team must prepare a paper describing their experimental approach and scores. The organisers will prepare their own Overview Paper, which should be referenced by all participants:

\begin{verbatim}
@inproceedings{gurrin-ntcir16-lifelog4,
  title     = {Overview of the NTCIR-16 Lifelog-4 Task},
  author    = {Gurrin, Cathal and Hopfgartner, Frank and Dang-Nguyen, Duc-Tien
               and Nguyen, Thanh-Binh and Healy, Graham and Albatal, Rami
               and Zhou, Liting},
  booktitle = {Proceedings of the 16th NTCIR Conference on Evaluation of
               Information Access Technologies},
  series    = {NTCIR-16},
  address   = {Tokyo, Japan},
  year      = {2022}
}
\end{verbatim}