A Deep Incremental Boltzmann Machine for Modeling Context in Robots
Modeling context is an essential capability for robots that must remain adaptive in challenging environments. Although there have been many context-modeling efforts, they assume a fixed structure and a fixed number of contexts. In this paper, we propose an incremental deep model that extends Restricted Boltzmann Machines. Our model receives one scene at a time and gradually extends the contextual model when necessary, either by adding a new context or by adding a new context layer to form a hierarchy. On a scene-classification benchmark, we show that our method converges to a good estimate of the contexts of the scenes and performs better than, or on par with, other incremental and non-incremental models on several tasks.
Accepted by ICRA 2018.
Our Models: An Incremental Restricted Boltzmann Machine (iRBM) and a Deep Incremental Boltzmann Machine (diBM)
An Incremental Restricted Boltzmann Machine (iRBM):
Our model starts with a single hidden neuron. As the model is fed new scenes (v) over time, it gradually falls short in representing p(v), and its current confidence drifts away from its baseline confidence. When this drift is detected, a new hidden neuron is added to increase the model's capacity.
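The neuron-growing rule above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes binary visible units, CD-1 training, the negative free energy as a confidence proxy, and illustrative hyper-parameters (`lr`, `drift_threshold`, the moving-average baseline update); the class and method names are hypothetical.

```python
import numpy as np

class IncrementalRBM:
    """Sketch of an RBM that starts with one hidden neuron and grows
    when its confidence in new scenes drifts away from a baseline."""

    def __init__(self, n_visible, lr=0.05, drift_threshold=0.2):
        self.n_visible = n_visible
        self.lr = lr
        self.drift_threshold = drift_threshold
        self.W = np.random.randn(n_visible, 1) * 0.01  # one hidden unit
        self.b = np.zeros(n_visible)                   # visible biases
        self.c = np.zeros(1)                           # hidden biases
        self.baseline = None                           # baseline confidence

    def _sigmoid(self, x):
        return 1.0 / (1.0 + np.exp(-x))

    def free_energy(self, v):
        # F(v) = -b.v - sum_j log(1 + exp(c_j + v.W_j)); lower F means
        # the model represents v better
        return -v @ self.b - np.sum(np.logaddexp(0, self.c + v @ self.W))

    def _cd1(self, v):
        # one step of contrastive divergence (CD-1)
        h_prob = self._sigmoid(self.c + v @ self.W)
        h = (np.random.rand(*h_prob.shape) < h_prob).astype(float)
        v_recon = self._sigmoid(self.b + h @ self.W.T)
        h_recon = self._sigmoid(self.c + v_recon @ self.W)
        self.W += self.lr * (np.outer(v, h_prob) - np.outer(v_recon, h_recon))
        self.b += self.lr * (v - v_recon)
        self.c += self.lr * (h_prob - h_recon)

    def _grow(self):
        # add one hidden neuron with small random incoming weights
        self.W = np.hstack([self.W, np.random.randn(self.n_visible, 1) * 0.01])
        self.c = np.append(self.c, 0.0)

    def observe(self, v):
        """Feed one scene; grow when confidence drifts from the baseline."""
        conf = -self.free_energy(v)  # confidence proxy (an assumption)
        if self.baseline is None:
            self.baseline = conf
        if self.baseline - conf > self.drift_threshold:
            self._grow()
        self._cd1(v)
        # exponential-moving-average baseline update (an assumption)
        self.baseline = 0.9 * self.baseline + 0.1 * conf

rng = np.random.default_rng(0)
rbm = IncrementalRBM(n_visible=6)
for _ in range(50):
    rbm.observe((rng.random(6) > 0.5).astype(float))
print(rbm.W.shape)  # (n_visible, number of hidden units grown so far)
```

The key design point is that capacity is added only on demand: the network stays as small as the stream of scenes allows, instead of fixing the number of hidden units in advance.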
A Deep Incremental Boltzmann Machine (diBM):
Our model starts with one hidden layer containing one hidden neuron. Even after more neurons are added to the hidden layers as in the iRBM, the current confidence of the final hidden layer (f) may still drift away from its baseline. When that happens, we add a new hidden layer on top of layer f.
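The layer-growing rule can be sketched as below. This sketch isolates the structural decision only and elides RBM training; the confidence proxy (how saturated the final-layer activations are), the threshold, and all names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class DeepIncrementalBM:
    """Sketch of the diBM layer-growing rule: each layer is an
    (input_dim x hidden_dim) weight matrix, and a new one-neuron layer
    is stacked on top when final-layer confidence drifts too far."""

    def __init__(self, n_visible, drift_threshold=0.2):
        self.drift_threshold = drift_threshold
        # start with a single hidden layer containing one neuron
        self.layers = [np.random.randn(n_visible, 1) * 0.01]
        self.baseline = None  # baseline confidence of the final layer

    def _up_pass(self, v):
        # propagate a scene up to the final hidden layer f
        h = v
        for W in self.layers:
            h = 1.0 / (1.0 + np.exp(-(h @ W)))
        return h

    def _final_layer_confidence(self, v):
        # proxy confidence: how decisive (far from 0.5) the final-layer
        # activations are -- an assumption standing in for the paper's
        # confidence measure
        return float(np.mean(np.abs(self._up_pass(v) - 0.5)))

    def observe(self, v):
        conf = self._final_layer_confidence(v)
        if self.baseline is None:
            self.baseline = conf
        if abs(conf - self.baseline) > self.drift_threshold:
            # stack a new hidden layer on top of the current final layer f
            top_dim = self.layers[-1].shape[1]
            self.layers.append(np.random.randn(top_dim, 1) * 0.01)
        # exponential-moving-average baseline update (an assumption)
        self.baseline = 0.9 * self.baseline + 0.1 * conf

model = DeepIncrementalBM(n_visible=6)
for _ in range(20):
    model.observe(np.random.rand(6))
print(len(model.layers))  # depth of the hierarchy after 20 scenes
```

Growing depth-wise in this way lets the hierarchy emerge from the data stream, rather than being fixed before training as in a conventional deep Boltzmann machine.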
Experiments and Results
Number of Contexts:
Entropy of the Models:
Qualitative Inspection of Context Coherence (Hidden Nodes):
Partially Damaged Scene Reconstruction: