Generalised belief propagation

Inference in cyclic networks is not straightforward. Belief propagation can still be applied, but it may yield incorrect inferences. Generalised belief propagation was proposed as an improvement over belief propagation. In this post, I walk through generalised belief propagation with an example.


Belief propagation

Belief propagation, also known as the sum-product algorithm, is an inference framework. In the context of Bayesian networks, this algorithm is often used to compute marginal probability distributions. In this post, I walk through the computations involved in belief propagation.
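As a rough illustration of the sum-product idea, written here for a pairwise Markov random field with unary potentials \(\phi_i\) and pairwise potentials \(\psi_{ij}\) (generic notation, not necessarily that of the post): a node passes to each neighbour a message summarising what it has heard from its other neighbours, and the marginal of a variable is proportional to the product of its incoming messages.

$$ m_{i \to j}(x_j) \propto \sum_{x_i} \psi_{ij}(x_i, x_j)\, \phi_i(x_i) \prod_{k \in N(i) \setminus \{j\}} m_{k \to i}(x_i), \qquad p(x_i) \propto \phi_i(x_i) \prod_{k \in N(i)} m_{k \to i}(x_i) $$

On a tree these updates give exact marginals; on a cyclic network they are only approximate, which is what motivates the generalised variant above.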




Long Short-Term Memory Network and Back-Propagation through Time

Recurrent neural networks (RNNs) are neural networks designed to model sequential data. RNNs are often used in speech recognition and natural language processing. In this blog post, I discuss one of the most popular RNNs, the long short-term memory (LSTM) network. Then I briefly describe a training procedure for an LSTM.
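For reference, one common formulation of the LSTM cell (a standard textbook version; the post may use slightly different notation) updates the cell state \(c_t\) and hidden state \(h_t\) as

$$ f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f), \quad i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i), \quad o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o) $$

$$ \tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c), \quad c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \quad h_t = o_t \odot \tanh(c_t) $$

where \(\sigma\) is the logistic function and \(\odot\) denotes element-wise multiplication. Back-propagation through time then differentiates the loss through these recurrences, unrolled over the sequence.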


Hidden Markov Model

Deep learning is quite popular in sequence modelling, but this blog post discusses a more traditional model: the hidden Markov model.

(Updated on 18 March 2017)
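As a quick sketch of the structure (generic notation; the post defines its own): a hidden Markov model assumes a latent state sequence \(z_{1:T}\) with Markov dynamics, with each observation depending only on the current state,

$$ p(x_{1:T}, z_{1:T}) = p(z_1)\, p(x_1 \mid z_1) \prod_{t=2}^{T} p(z_t \mid z_{t-1})\, p(x_t \mid z_t) $$

and inference typically means computing quantities such as \(p(z_t \mid x_{1:T})\) with the forward-backward algorithm.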


Hierarchical Bayes

Sometimes, data contain multiple entries per observation unit. For example, researchers interested in the effectiveness of medical treatments may test several treatments in several different cities. It may then make sense to assume that the effectiveness is broadly consistent across cities but varies slightly between them, because population characteristics differ from city to city; one city may have a younger population than another, for instance. In such cases, hierarchical Bayes can help. In this blog post, I review hierarchical Bayes.
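One minimal sketch of such a model (my own illustrative notation, with a normal likelihood purely for concreteness): let \(y_{ic}\) be the outcome for patient \(i\) in city \(c\), with a city-level effect \(\theta_c\) drawn from a shared population-level distribution,

$$ y_{ic} \sim \mathcal{N}(\theta_c, \sigma^2), \qquad \theta_c \sim \mathcal{N}(\mu, \tau^2) $$

so that the cities share statistical strength through the hyperparameters \(\mu\) and \(\tau^2\) while still being allowed to differ from one another.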


Dirichlet process mixture model as a cognitive model of category learning

In cognitive science/psychology, the Dirichlet process mixture model (DPMM) is considered a rational model of category learning (Anderson, 1991; Sanborn, Griffiths & Navarro, 2010). That is, the DPMM is used to approximate how humans learn to categorise objects.

In this post, I review the DPMM, assuming that all the features are discrete. The implementation in C++ can be found here.
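A rough sketch of the clustering prior behind the DPMM (the standard Chinese-restaurant-process form; the parameterisation may differ from the one used in the post): item \(n\) joins an existing cluster in proportion to that cluster's current size, or starts a new cluster in proportion to the concentration parameter \(\alpha\),

$$ p(z_n = k \mid z_{1:n-1}) \propto \begin{cases} n_k & \text{for an existing cluster } k, \\ \alpha & \text{for a new cluster,} \end{cases} $$

where \(n_k\) is the number of earlier items assigned to cluster \(k\). This is what lets the model introduce new categories as new kinds of objects are encountered.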