A quick tour of tMoE

Introduction

TMoE (t Mixture-of-Experts) provides a flexible and robust modelling framework for heterogeneous data that may exhibit heavy-tailed distributions and be corrupted by atypical observations. TMoE consists of a network of K t-distributed expert regressors (polynomials of degree p) gated by a softmax gating network (polynomials of degree q), and is parameterised by:

  • The gating network parameters alpha’s of the softmax net.
  • The experts network parameters: the location parameters (regression coefficients) beta’s, the scale parameters sigma’s, and the degrees-of-freedom (robustness) parameters nu’s.

TMoE thus generalises mixtures of (normal, t) distributions, as well as mixtures of regressions with these distributions. For example, when q = 0 we retrieve mixtures of t (or normal) regressions, and when both p = 0 and q = 0 we obtain a mixture of t (or normal) distributions. The model also reduces to the standard t (or normal) distribution when only one expert is used (K = 1).
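
In the standard formulation of the t mixture-of-experts (as given in the references returned by citation("meteorits")), these parameters combine into the conditional density

\[
f(y \mid x; \Psi) \;=\; \sum_{k=1}^{K} \pi_k(x; \alpha)\, t_{\nu_k}\!\left(y;\, \mu(x; \beta_k),\, \sigma_k^2\right),
\]

where \(\mu(x; \beta_k) = \beta_{k0} + \beta_{k1} x + \dots + \beta_{kp} x^p\) is the mean of expert k, \(t_{\nu}\) denotes the location-scale t density with \(\nu\) degrees of freedom, and the softmax gates are

\[
\pi_k(x; \alpha) \;=\; \frac{\exp(\alpha_{k0} + \alpha_{k1} x + \dots + \alpha_{kq} x^q)}{\sum_{l=1}^{K} \exp(\alpha_{l0} + \alpha_{l1} x + \dots + \alpha_{lq} x^q)},
\]

with the usual convention \(\alpha_K = 0\) for identifiability.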

Model estimation/learning is performed with a dedicated expectation conditional maximization (ECM) algorithm that maximizes the observed-data log-likelihood. We provide simulated examples to illustrate the use of the model in model-based clustering of heterogeneous regression data and in fitting non-linear regression functions.
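
Concretely, for an i.i.d. sample \((x_1, y_1), \dots, (x_n, y_n)\), ECM maximizes

\[
\log L(\Psi) \;=\; \sum_{i=1}^{n} \log \sum_{k=1}^{K} \pi_k(x_i; \alpha)\, t_{\nu_k}\!\left(y_i;\, \mu(x_i; \beta_k),\, \sigma_k^2\right),
\]

alternating an E-step (posterior expert memberships together with the latent scale weights of the t distribution) with conditional M-steps that update, in turn, the gating coefficients (via IRLS), the regression coefficients, the scales, and the degrees of freedom.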

This vignette was written in R Markdown, using the knitr package for production.

See help(package = "meteorits") for further details, and citation("meteorits") for the references.

Application to a simulated dataset

Generate sample

n <- 500 # Size of the sample
alphak <- matrix(c(0, 8), ncol = 1) # Parameters of the gating network
betak <- matrix(c(0, -2.5, 0, 2.5), ncol = 2) # Regression coefficients of the experts
sigmak <- c(0.5, 0.5) # Standard deviations of the experts
nuk <- c(5, 7) # Degrees of freedom of the experts network t densities
x <- seq.int(from = -1, to = 1, length.out = n) # Inputs (predictors)

# Generate sample of size n
sample <- sampleUnivTMoE(alphak = alphak, betak = betak, sigmak = sigmak, 
                         nuk = nuk, x = x)
y <- sample$y
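
Before fitting, a quick base-R scatter plot (not part of the package API) gives a feel for the two heavy-tailed components:

# Visualize the simulated sample
plot(x, y, pch = 16, cex = 0.6, xlab = "x", ylab = "y",
     main = "Sample of size n = 500 from a two-component tMoE")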

Set up tMoE model parameters

K <- 2 # Number of regressors/experts
p <- 1 # Order of the polynomial regression (regressors/experts)
q <- 1 # Order of the logistic regression (gating network)

Set up EM parameters

n_tries <- 1
max_iter <- 1500
threshold <- 1e-5
verbose <- TRUE
verbose_IRLS <- FALSE
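
Here a single EM run is used (n_tries <- 1). Because the log-likelihood surface is multimodal, several randomly initialised runs can be requested instead; as we understand n_tries, the run achieving the highest log-likelihood is the one returned:

# e.g., run EM from 10 initializations and keep the best solution
n_tries <- 10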

Estimation

tmoe <- emTMoE(X = x, Y = y, K, p, q, n_tries, max_iter, 
               threshold, verbose, verbose_IRLS)
## EM - tMoE: Iteration: 1 | log-likelihood: -509.750768757613
## EM - tMoE: Iteration: 2 | log-likelihood: -505.318312231173
## EM - tMoE: Iteration: 3 | log-likelihood: -503.303200086803
## EM - tMoE: Iteration: 4 | log-likelihood: -501.547540972697
## EM - tMoE: Iteration: 5 | log-likelihood: -500.111935966035
## EM - tMoE: Iteration: 6 | log-likelihood: -499.005989879639
## EM - tMoE: Iteration: 7 | log-likelihood: -498.188899959906
## EM - tMoE: Iteration: 8 | log-likelihood: -497.602051936467
## EM - tMoE: Iteration: 9 | log-likelihood: -497.1884865711
## EM - tMoE: Iteration: 10 | log-likelihood: -496.900725635636
## EM - tMoE: Iteration: 11 | log-likelihood: -496.702212568277
## EM - tMoE: Iteration: 12 | log-likelihood: -496.566066947855
## EM - tMoE: Iteration: 13 | log-likelihood: -496.473070685174
## EM - tMoE: Iteration: 14 | log-likelihood: -496.409726818939
## EM - tMoE: Iteration: 15 | log-likelihood: -496.366666212537
## EM - tMoE: Iteration: 16 | log-likelihood: -496.337435568798
## EM - tMoE: Iteration: 17 | log-likelihood: -496.317613460454
## EM - tMoE: Iteration: 18 | log-likelihood: -496.304181665092
## EM - tMoE: Iteration: 19 | log-likelihood: -496.295085134814
## EM - tMoE: Iteration: 20 | log-likelihood: -496.288927191214
## EM - tMoE: Iteration: 21 | log-likelihood: -496.284759860706

Summary

tmoe$summary()
## -------------------------------------
## Fitted t Mixture-of-Experts model
## -------------------------------------
## 
## tMoE model with K = 2 experts:
## 
##  log-likelihood df       AIC       BIC       ICL
##       -496.2848 10 -506.2848 -527.3578 -527.3576
## 
## Clustering table (Number of observations in each expert):
## 
##   1   2 
## 249 251 
## 
## Regression coefficients:
## 
##     Beta(k = 1) Beta(k = 2)
## 1     0.1556474   0.2320383
## X^1   2.7602279  -2.8271625
## 
## Variances:
## 
##  Sigma2(k = 1) Sigma2(k = 2)
##      0.2318897     0.4267446
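
Beyond the printed summary, the fitted object also exposes the estimates and the partition programmatically. The field names below are assumptions based on the package's class layout, not verified API; check help(package = "meteorits") before relying on them:

# Hypothetical accessors -- field names are assumptions
tmoe$param$beta      # expert regression coefficients
tmoe$param$sigma2    # expert variances
tmoe$param$nu        # estimated degrees of freedom
head(tmoe$stat$klas) # hard cluster labels (MAP rule on the posterior probabilities)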

Plots

Mean curve

tmoe$plot(what = "meancurve")

Confidence regions

tmoe$plot(what = "confregions")

Clusters

tmoe$plot(what = "clusters")

Log-likelihood

tmoe$plot(what = "loglikelihood")

Application to a real dataset

Load data

library(MASS)
data("mcycle")
x <- mcycle$times
y <- mcycle$accel
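
The mcycle data record head acceleration (accel, in g) against time (times, in milliseconds after impact) in simulated motorcycle crashes used to test helmets; they are a classic benchmark for non-linear regression with heteroskedastic noise. A quick base-R look at the raw measurements:

# Plot the motorcycle data before fitting
plot(x, y, pch = 16, cex = 0.6,
     xlab = "Time (ms)", ylab = "Acceleration (g)")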

Set up tMoE model parameters

K <- 4 # Number of regressors/experts
p <- 2 # Order of the polynomial regression (regressors/experts)
q <- 1 # Order of the logistic regression (gating network)

Set up EM parameters

n_tries <- 1
max_iter <- 1500
threshold <- 1e-5
verbose <- TRUE
verbose_IRLS <- FALSE
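
The setting K = 4 above is one reasonable choice for these data; information criteria can be used to compare candidates. A minimal sketch of such a comparison, assuming the fitted object stores its BIC under a stat field (the accessor is an assumption, and verbose is disabled to keep the loop quiet):

# Hypothetical model-selection loop; fit$stat$BIC is an assumed accessor
for (K_cand in 2:5) {
  fit <- emTMoE(X = x, Y = y, K = K_cand, p = p, q = q,
                n_tries = n_tries, max_iter = max_iter,
                threshold = threshold, verbose = FALSE)
  cat("K =", K_cand, "| BIC =", fit$stat$BIC, "\n")
}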

Estimation

tmoe <- emTMoE(X = x, Y = y, K, p, q, n_tries, max_iter, 
               threshold, verbose, verbose_IRLS)
## EM - tMoE: Iteration: 1 | log-likelihood: -584.072418706954
## EM - tMoE: Iteration: 2 | log-likelihood: -579.48557089341
## EM - tMoE: Iteration: 3 | log-likelihood: -577.891680085229
## EM - tMoE: Iteration: 4 | log-likelihood: -575.477680460292
## EM - tMoE: Iteration: 5 | log-likelihood: -569.322998867093
## EM - tMoE: Iteration: 6 | log-likelihood: -562.229703805906
## EM - tMoE: Iteration: 7 | log-likelihood: -558.414934331097
## EM - tMoE: Iteration: 8 | log-likelihood: -557.181772778067
## EM - tMoE: Iteration: 9 | log-likelihood: -556.318806570285
## EM - tMoE: Iteration: 10 | log-likelihood: -555.423094329129
## EM - tMoE: Iteration: 11 | log-likelihood: -554.471514266266
## EM - tMoE: Iteration: 12 | log-likelihood: -553.506338437062
## EM - tMoE: Iteration: 13 | log-likelihood: -552.663141876798
## EM - tMoE: Iteration: 14 | log-likelihood: -552.06309105666
## EM - tMoE: Iteration: 15 | log-likelihood: -551.669232005789
## EM - tMoE: Iteration: 16 | log-likelihood: -551.410168678443
## EM - tMoE: Iteration: 17 | log-likelihood: -551.240546357276
## EM - tMoE: Iteration: 18 | log-likelihood: -551.130660579196
## EM - tMoE: Iteration: 19 | log-likelihood: -551.059711111903
## EM - tMoE: Iteration: 20 | log-likelihood: -551.013914619384
## EM - tMoE: Iteration: 21 | log-likelihood: -550.984210955596
## EM - tMoE: Iteration: 22 | log-likelihood: -550.964801766827
## EM - tMoE: Iteration: 23 | log-likelihood: -550.951996605352
## EM - tMoE: Iteration: 24 | log-likelihood: -550.94344834037
## EM - tMoE: Iteration: 25 | log-likelihood: -550.937673772606
## EM - tMoE: Iteration: 26 | log-likelihood: -550.933713650656

Summary

tmoe$summary()
## -------------------------------------
## Fitted t Mixture-of-Experts model
## -------------------------------------
## 
## tMoE model with K = 4 experts:
## 
##  log-likelihood df       AIC       BIC       ICL
##       -550.9337 26 -576.9337 -614.5083 -614.5043
## 
## Clustering table (Number of observations in each expert):
## 
##  1  2  3  4 
## 28 37 31 37 
## 
## Regression coefficients:
## 
##      Beta(k = 1) Beta(k = 2)  Beta(k = 3) Beta(k = 4)
## 1   -1.050047262  991.866525 -1816.986895 301.0450434
## X^1 -0.101482320 -103.835124   111.968180 -12.5161930
## X^2 -0.008687259    2.431056    -1.679079   0.1284358
## 
## Variances:
## 
##  Sigma2(k = 1) Sigma2(k = 2) Sigma2(k = 3) Sigma2(k = 4)
##       1.653741      453.0279      560.5597      524.8276

Plots

Mean curve

tmoe$plot(what = "meancurve")

Confidence regions

tmoe$plot(what = "confregions")

Clusters

tmoe$plot(what = "clusters")

Log-likelihood

tmoe$plot(what = "loglikelihood")