Multi-Task Learning (MTL) is a machine learning paradigm that aims to learn multiple related tasks simultaneously, with information shared across tasks. The hope is that, with the help of the other tasks, the model for each task can be trained better, leading to improved generalization performance. One practical example of MTL is learning multiple classification tasks simultaneously, each of which is a binary handwritten letter classification problem, such as "c" versus "e", "g" versus "y", etc.
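The idea of sharing information across tasks can be sketched with hard parameter sharing, one common MTL architecture: a shared layer produces features used by every task, while each task keeps its own output head. The following is a minimal illustrative sketch (all names and dimensions are assumptions, not a specific method from the literature):

```python
import numpy as np

# Illustrative sketch of hard parameter sharing for MTL:
# one shared hidden layer feeds several task-specific output heads,
# e.g. separate "c" vs "e" and "g" vs "y" letter classifiers.
rng = np.random.default_rng(0)

n_features, n_hidden, n_tasks = 16, 8, 2
W_shared = rng.normal(size=(n_features, n_hidden))                # shared across all tasks
heads = [rng.normal(size=(n_hidden, 1)) for _ in range(n_tasks)]  # one head per task

def predict(x, task):
    """Binary score for one task: shared representation -> task-specific head."""
    h = np.tanh(x @ W_shared)                        # representation shared by every task
    return 1.0 / (1.0 + np.exp(-(h @ heads[task])))  # sigmoid score for this task

x = rng.normal(size=(1, n_features))  # one (hypothetical) letter-image feature vector
scores = [float(predict(x, t)) for t in range(n_tasks)]
print(scores)  # one probability per task, all computed from the same shared layer
```

Because `W_shared` is updated by the training signals of every task, each task effectively benefits from the others' data, which is the source of the hoped-for generalization gain.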