NTCIR-13 Lifelog is complete. This is an archived site. Please see the NTCIR-14 Lifelog site for up-to-date information.
NTCIR-Lifelog is a core task of the NTCIR-13 conference. This core task aims to advance state-of-the-art research in lifelogging as an application of information retrieval. The methodology employed for the lifelog task at NTCIR-13 is based on the methodology successfully deployed at NTCIR-12. New multimodal datasets (one large and one small) will be gathered. The large dataset will be generated by real lifeloggers, anonymised and distributed to the participants. A set of information needs (topics), guided by the lifestyle activities proposed by Kahneman, will be generated by the lifeloggers and distributed to participants as training and test topics. Ground truth data will be generated directly by the lifeloggers who gather the dataset and will form the basis for evaluating participant submissions.
We plan to divide the challenge into the following four sub-tasks:
- Lifelog Semantic Access Task (LSAT), a known-item search task that can be undertaken in an interactive or automatic manner
- Lifelog Event Segmentation Task (LES), a sub-task that uses the same data as the LSAT sub-task, but explores how best to segment the lifelog data into a discrete set of retrievable units (events); a minimal segmentation sketch is given after this list
- Lifelog Insight Task (LIT), an exploratory task that is concerned with knowledge mining from lifelogs and aims to be a fun task that can attract new researchers to the area.
- Lifelog Annotation Task (LAT), a sub-task that explores the most effective computer vision algorithms for accurately describing the visual content of lifelog images.
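To illustrate what the LES sub-task asks of participants, the following is a minimal sketch (not an official baseline) of one naive approach: grouping a stream of timestamped wearable-camera images into events wherever the gap between consecutive captures exceeds a threshold. The record fields and the 10-minute threshold are illustrative assumptions only; a real submission would also exploit location, motion and visual-similarity signals.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List


@dataclass
class Capture:
    image_id: str        # identifier of a wearable-camera image (hypothetical field)
    timestamp: datetime  # capture time of the image


def segment_by_time_gap(captures: List[Capture],
                        max_gap: timedelta = timedelta(minutes=10)) -> List[List[Capture]]:
    """Group consecutive captures into events: start a new event whenever the
    gap to the previous capture exceeds max_gap."""
    events: List[List[Capture]] = []
    for capture in sorted(captures, key=lambda c: c.timestamp):
        if events and capture.timestamp - events[-1][-1].timestamp <= max_gap:
            events[-1].append(capture)   # continue the current event
        else:
            events.append([capture])     # start a new event
    return events
```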
---
Lifelogging represents a phenomenon whereby individuals can digitally record their own daily lives in varying amounts of detail and for a variety of purposes. In a sense it represents a comprehensive black box of a person's life activities and offers great potential to mine or infer valuable knowledge about those activities, given the availability of appropriate software.
Although there are many definitions in the literature, we define lifelogging to be: a form of pervasive computing which utilises software and sensors to generate a permanent, private and unified multimedia record of the totality of an individual's life experience and makes it available in a secure and pervasive manner. A key aspect of this definition is that the lifelog should archive the totality of an individual's experiences, i.e., following Bell and Gemmell's vision of total capture. This means that engaging in the process of lifelogging will result in the capture of significant quantities of rich multimedia data about the lifelogger, but by necessity, also about the environment of the lifelogger, and the people and objects contained therein. It is important to consider that lifelogging is typically carried out ambiently, or passively, without the lifelogger having to initiate anything, and it is distinct from the current mass data acquisition activities of some web organisations. The fact that lifelogging captures data ambiently results in the additional problems of non-curation or non-filtering, and the individual lifelogger may not even be aware of all the data - or the implications of keeping this data - that exists in the lifelog.
Since lifelogging is concerned with passively sensing the totality of an individual's life experience, there is likely to be a wide range of lifelogging tools in use at this point in time. As a starting point, we consider the following:
- Passive Visual Capture. Utilising wearable devices such as the Narrative Clip allows for the continuous and automatic capture of life activities as a visual sequence of digital images (up to 1,500 per day). These wearable cameras are typically worn on a lanyard around the neck or clipped to clothing, and as such capture images from the viewpoint of the individual at a frequency of several per minute.
- Personal Biometrics. There are many consumer-oriented personal sensing devices for monitoring everyday activities, used by interested parties such as the quantified-self community. Such devices monitor human performance, for example activity levels (number of steps taken, distance travelled, caloric output), sleep duration, and so on.
- Mobile Device Context. This refers to using the smartphone to continuously and passively capture the user's context, as the smartphone can now record location, acceleration and movement, WiFi signal strength, and readings from various other sensors. Coupled with a new generation of 'smart watches', the smartphone will be able to capture much of a user's life activity.
- Communication Activities. Passively logging our (electronic) communications with others - such as our SMS messages, instant messages, phone calls, video calls, social network activities and email exchanges - is also possible and can form part of a lifelog.
- Data Creation/Access Activities. Logging the data we consume and create: all the words typed, web pages read, YouTube videos watched, and so on. The Stuff-I've-Seen system from Microsoft Research is an example of such a logging tool.
- Environmental Context and Media. Lifelogging is mostly, but not exclusively, about recording with wearable technology. Sensors in the home, or surveillance cameras in the environment, can also capture user activities and environmental context. For example, a system that retrieves and summarises multimedia data from a home-like environment can generate a lifelog of the home. A minimal sketch of combining such heterogeneous sources into a single record follows this list.
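As one illustration of how the heterogeneous sources above might be fused into the "unified multimedia record" described earlier, the following is a minimal sketch of a per-minute lifelog record. All field names are illustrative assumptions and do not reflect the schema of the dataset released for the task.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional


@dataclass
class LifelogMinute:
    """One minute of a lifelog, fusing the source types listed above.
    All fields are hypothetical; the released dataset defines its own schema."""
    start: datetime                                        # start of the minute
    images: List[str] = field(default_factory=list)        # passive visual capture (image file names)
    steps: Optional[int] = None                            # personal biometrics
    heart_rate: Optional[int] = None                       # personal biometrics
    location: Optional[str] = None                         # mobile device context (e.g. a semantic place name)
    apps_used: List[str] = field(default_factory=list)     # data creation/access activities
    messages_sent: int = 0                                 # communication activities
```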
In this lifelogging task, we aim to begin the process of supporting the IR community in developing novel lifelog retrieval and visualisation systems.
---
Historical Links:
- For those seeking the previous tasks, we have archived the old NTCIR-12 Lifelog web pages.
- See also the slides we used at the kickoff event on 23rd August 2016.