
CINet: A Learning Based Approach to Incremental Context Modeling in Robots


An overview of how incremental context modeling is addressed as a learning problem. When a new scene is encountered, the objects in it are detected (object detection is not a contribution of this paper), and the Latent Dirichlet Allocation (LDA) model is updated. The updated model is fed as input to a Recurrent Neural Network, which predicts whether to increment the number of contexts.

Abstract

There have been several attempts at modeling context in robots. However, these attempts either assume a fixed number of contexts or use a rule-based approach to determine when to increment the number of contexts. In this paper, we propose to pose the task of incrementing the number of contexts as a learning problem, which we solve using a Recurrent Neural Network. We show that the network successfully (with 98% testing accuracy) learns to predict when to increment, and demonstrate, in a scene modeling problem (where the correct number of contexts is not known), that the robot increments the number of contexts in an expected manner (i.e., the entropy of the system is reduced). We also present how the incremental model can be used for various scene reasoning tasks.

Paper

    Fethiye Irmak Doğan*, İlker Bozcan*, Mehmet Celik, Sinan Kalkan
    * Equal contribution
    CINet: A Learning Based Approach to Incremental Context Modeling in Robots (IROS 2018)
    [paper]

Methods

In order to obtain a dataset, we used Latent Dirichlet Allocation (LDA), which, being a generative model, allows one to sample artificial data with a varying number of contexts (topics). Our dataset therefore consists of scenes generated with k contexts. We then trained LDA models with k0 contexts such that k0 ≤ k. Finally, we used the probabilities of contexts given objects, obtained from these LDA models, to train the LSTM model.
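The following is a minimal sketch of this pipeline, assuming scikit-learn's LatentDirichletAllocation for the topic model and a small PyTorch LSTM for the increment decision; the object vocabulary, dataset sizes, network architecture, and training procedure here are illustrative placeholders and differ from those used in the paper.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import LatentDirichletAllocation

# --- Artificial data: sample scenes (bags of objects) from a k-context generative model ---
# Hypothetical settings; the paper's vocabulary and context counts differ.
n_objects, k_true, n_scenes = 50, 5, 200
rng = np.random.default_rng(0)
topic_object = rng.dirichlet(np.ones(n_objects) * 0.1, size=k_true)    # p(object | context)
scenes = np.zeros((n_scenes, n_objects))
for i in range(n_scenes):
    theta = rng.dirichlet(np.ones(k_true) * 0.5)                       # p(context | scene)
    for _ in range(10):                                                # 10 objects per scene
        ctx = rng.choice(k_true, p=theta)
        obj = rng.choice(n_objects, p=topic_object[ctx])
        scenes[i, obj] += 1

# --- Fit an LDA model with k0 <= k contexts and extract context probabilities ---
k0 = 3
lda = LatentDirichletAllocation(n_components=k0, random_state=0).fit(scenes)
context_probs = lda.transform(scenes)            # shape: (n_scenes, k0)

# --- LSTM that reads the sequence of context probabilities and predicts
#     whether the number of contexts should be incremented (binary label) ---
class IncrementNet(nn.Module):
    def __init__(self, n_contexts, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_contexts, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, time, n_contexts)
        _, (h, _) = self.lstm(x)
        return torch.sigmoid(self.head(h[-1]))   # probability of incrementing

net = IncrementNet(k0)
x = torch.tensor(context_probs, dtype=torch.float32).unsqueeze(0)      # one sequence
label = torch.tensor([[1.0]])                    # here k0 < k_true, so the target is "increment"
loss = nn.BCELoss()(net(x), label)
loss.backward()                                  # one illustrative training step
```

In this sketch the increment label is known because the data are generated with a known k; the trained network can then be applied to models where the correct number of contexts is unknown, as in the scene modeling experiments below.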

Experiments and Results

Artificially Generated Dataset:

TABLE I: Training and test accuracies for the different models. Accuracy is the percentage of correct increment decisions calculated over the artificial data.
Figure 1: Probability of incrementing contexts for various states of an LDA model on the artificial data. The ground-truth number of contexts is (a) 5, (b) 7, (c) 10, (d) 15 and (e) 20, respectively. Note that the network was trained on LDA models with up to 10 contexts.
Figure 2: How the entropy of the system changes on the artificial dataset with respect to the change in the number of contexts. The graph is for the subset of the data with 5 contexts (arbitrarily selected from the dataset).
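The exact definition of the system entropy is not reproduced on this page; the sketch below assumes it is the mean Shannon entropy of the per-scene context distributions p(context | scene) produced by the LDA model, which would be expected to drop as the number of contexts approaches the true number.

```python
import numpy as np

def system_entropy(context_probs, eps=1e-12):
    """Mean Shannon entropy of per-scene context distributions p(context | scene).

    A hypothetical proxy for the 'entropy of the system': as the number of
    contexts better matches the data, each scene is explained by fewer, more
    confident contexts and the mean entropy decreases.
    """
    p = np.clip(context_probs, eps, 1.0)
    return float(np.mean(-(p * np.log(p)).sum(axis=1)))

# Example comparison across LDA models with increasing numbers of contexts,
# reusing the `scenes` matrix from the sketch above:
# for k0 in range(2, 11):
#     probs = LatentDirichletAllocation(n_components=k0).fit_transform(scenes)
#     print(k0, system_entropy(probs))
```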

Real Dataset:

Figure 3: Probability of incrementing contexts for various states of the LDA model on the real data.
Figure 4: How the entropy of the system changes on the real dataset with respect to the change in the number of contexts. There are 25 sub-categories (which serve as a baseline for the number of contexts).

Video


© 2018 KOVAN Research Labs ‒ Department of Computer Engineering @ Middle East Technical University ‒ Üniversiteler Mahallesi, Dumlupınar Bulvarı No:1 06800 Çankaya Ankara/TÜRKİYE.
