A Deep Incremental Boltzmann Machine for Modeling Context in Robots


Figure 1: An overview of the proposed system. Our model receives scenes one at a time and updates its structure when necessary, either by adding a new context node or by adding a context layer that combines existing contexts.

Abstract

Modeling context is an essential capability for robots that are to be as adaptive as possible in challenging environments. Although there are many context modeling efforts, they assume a fixed structure and a fixed number of contexts. In this paper, we propose an incremental deep model that extends Restricted Boltzmann Machines. Our model receives one scene at a time and gradually extends the contextual model when necessary, either by adding a new context or by adding a new context layer to form a hierarchy. We show on a scene classification benchmark that our method converges to a good estimate of the contexts of the scenes and performs better than or on par with other incremental and non-incremental models on several tasks.

Paper

    Fethiye Irmak Doğan, Hande Çelikkanat, Sinan Kalkan
    A Deep Incremental Boltzmann Machine for Modeling Context in Robots (accepted to ICRA 2018)
    [paper]

Our Models: An Incremental Restricted Boltzmann Machine (iRBM) and a Deep Incremental Boltzmann Machine (diBM)

An Incremental Restricted Boltzmann Machine (iRBM):

Our model starts with a single hidden neuron. As it is fed new scenes (v) over time, the model gradually falls short in representing p(v), and its current confidence drifts away from its baseline confidence. When that happens, a new hidden neuron is added to increase the model's capacity, as illustrated in the sketch below.
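The following Python sketch illustrates this growth loop under simplifying assumptions: the confidence measure (a reconstruction-error proxy), the drift threshold, and the CD-1 update are illustrative stand-ins for the quantities defined in the paper, not the paper's exact implementation.

import numpy as np

class IncrementalRBM:
    """One-hidden-layer RBM that grows a neuron when its confidence drifts."""

    def __init__(self, n_visible, drift_threshold=0.1, lr=0.01):
        self.n_visible = n_visible
        self.n_hidden = 1                      # start with a single hidden neuron
        self.W = np.random.normal(0.0, 0.01, (n_visible, 1))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(1)
        self.lr = lr
        self.drift_threshold = drift_threshold
        self.baseline = None                   # baseline confidence, set lazily

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def _confidence(self, v):
        # Illustrative confidence proxy: negative reconstruction error of v.
        h = self._sigmoid(v @ self.W + self.b_h)
        v_rec = self._sigmoid(h @ self.W.T + self.b_v)
        return -np.mean((v - v_rec) ** 2)

    def _grow(self):
        # Add one hidden neuron: one new weight column and one new hidden bias.
        self.W = np.hstack([self.W, np.random.normal(0.0, 0.01, (self.n_visible, 1))])
        self.b_h = np.append(self.b_h, 0.0)
        self.n_hidden += 1
        self.baseline = None                   # re-estimate the baseline after growth

    def observe(self, v):
        # One CD-1 update on the incoming scene v (vector of object occurrences).
        v = np.asarray(v, dtype=float)
        h_prob = self._sigmoid(v @ self.W + self.b_h)
        h_state = (np.random.rand(self.n_hidden) < h_prob).astype(float)
        v_rec = self._sigmoid(h_state @ self.W.T + self.b_v)
        h_rec = self._sigmoid(v_rec @ self.W + self.b_h)
        self.W += self.lr * (np.outer(v, h_prob) - np.outer(v_rec, h_rec))
        self.b_v += self.lr * (v - v_rec)
        self.b_h += self.lr * (h_prob - h_rec)

        # Add a hidden neuron when confidence drifts too far from the baseline.
        conf = self._confidence(v)
        if self.baseline is None:
            self.baseline = conf
        elif self.baseline - conf > self.drift_threshold:
            self._grow()
        else:
            self.baseline = 0.9 * self.baseline + 0.1 * conf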

A Deep Incremental Boltzmann Machine (diBM):

Our model starts with a single hidden layer containing one hidden neuron. Neurons are added to the hidden layers as in the iRBM; when the current confidence of the final hidden layer (f) drifts away from its baseline, we add a new hidden layer on top of layer f. A sketch of this layer-growing step follows.
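A corresponding sketch of the layer-growing rule, reusing the IncrementalRBM class above. The drift test on the final layer and the handling of lower layers that keep growing are simplifying assumptions rather than the paper's exact procedure.

import numpy as np  # assumes the IncrementalRBM sketch above is in scope

class DeepIncrementalBM:
    def __init__(self, n_visible, drift_threshold=0.1):
        self.layers = [IncrementalRBM(n_visible, drift_threshold)]
        self.drift_threshold = drift_threshold
        self.top_baseline = None

    @staticmethod
    def _match_visible(rbm, dim):
        # If the layer below has grown new neurons, extend this layer's visible side.
        extra = dim - rbm.n_visible
        if extra > 0:
            rbm.W = np.vstack([rbm.W, np.random.normal(0.0, 0.01, (extra, rbm.n_hidden))])
            rbm.b_v = np.append(rbm.b_v, np.zeros(extra))
            rbm.n_visible = dim

    def observe(self, v):
        # Propagate the scene upward; each layer trains and may grow neurons.
        x = np.asarray(v, dtype=float)
        for rbm in self.layers:
            self._match_visible(rbm, x.shape[0])
            top_input = x                      # remember the final layer's input
            rbm.observe(x)
            x = rbm._sigmoid(x @ rbm.W + rbm.b_h)

        # Drift test on the current confidence of the final hidden layer f.
        top = self.layers[-1]
        conf = top._confidence(top_input)
        if self.top_baseline is None:
            self.top_baseline = conf
        elif self.top_baseline - conf > self.drift_threshold:
            # Add a new hidden layer (one neuron) as the next layer after f.
            self.layers.append(IncrementalRBM(top.n_hidden, self.drift_threshold))
            self.top_baseline = None
        else:
            self.top_baseline = 0.9 * self.top_baseline + 0.1 * conf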

Experiments and Results

Number of Contexts:

Figure 2: Number of contexts obtained with online learning on the SUN RGB-D dataset with 8 equally distributed contexts and 200 scenes from each context.
Figure 3: Number of hidden layers obtained with online learning on the SUN RGB-D dataset with 8 equally distributed contexts and 200 scenes.

Entropy of the Models:

Figure 4: Entropy change of the different models on the NYU Depth dataset.
Figure 5: Number of contexts for the different models on the NYU Depth dataset.

Qualitative Inspection of Context Coherence (Hidden Nodes):

TABLE I: The 10 most probable objects for the best 3 hidden units on a subset of the SUN RGB-D dataset (8 contexts, 1600 scenes).

Partially Damaged Scene Reconstruction:

TABLE II: Reconstruction performance of the methods for a corruption rate (α) of 40% on the testing subset of the SUN RGB-D dataset.
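For intuition, the sketch below shows a generic way a trained (i)RBM can fill in a partially corrupted scene vector with one bottom-up/top-down pass, reusing the IncrementalRBM sketch above. The masking protocol, the corruption-rate argument alpha, and the error measure are illustrative assumptions, not the paper's evaluation code.

import numpy as np

def reconstruct(rbm, v, alpha=0.4, seed=0):
    # Corrupt a fraction alpha of the visible units, then reconstruct them.
    rng = np.random.default_rng(seed)
    v = np.asarray(v, dtype=float)
    mask = rng.random(v.shape[0]) < alpha      # which entries are damaged
    corrupted = v.copy()
    corrupted[mask] = 0.0                      # e.g. drop those object observations

    # Infer hidden activations from the damaged scene, then project back down.
    h = rbm._sigmoid(corrupted @ rbm.W + rbm.b_h)
    v_rec = rbm._sigmoid(h @ rbm.W.T + rbm.b_v)

    # Keep the uncorrupted entries and fill in only the damaged ones.
    restored = np.where(mask, v_rec, v)
    error = np.mean((restored[mask] - v[mask]) ** 2) if mask.any() else 0.0
    return restored, error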

Paper Video


Spotlight Presentation


© KOVAN Research Labs ‒ Department of Computer Engineering @ Middle East Technical University ‒ Üniversiteler Mahallesi, Dumlupınar Bulvarı No:1 06800 Çankaya Ankara/TÜRKİYE.