Multi-Task Learning

Multi-Task Learning (MTL) is a machine learning paradigm that aims to learn multiple related tasks simultaneously, with information being shared across tasks. The hope is that, with the help of the other tasks, the model for each task can be trained better, leading to improved generalization performance. One practical example of MTL is learning multiple classification tasks simultaneously, where each task is a handwritten letter classification problem, such as "c" versus "e", "g" versus "y", etc.
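
To make the setup concrete, below is a minimal NumPy sketch of one common way information can be shared across tasks: a shared linear feature map trained jointly with one head per binary task. This is only an illustration of the generic MTL setup described above, not the model from [1] or [2]; the synthetic data, dimensions, and names such as `W_shared` and `task_loss` are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# One synthetic binary dataset per task (labels in {-1, +1}), standing in
# for letter pairs such as "c" vs "e" and "g" vs "y".
T, n, d, h = 3, 100, 64, 16   # tasks, samples per task, input dim, shared dim
data = [(rng.normal(size=(n, d)), rng.choice([-1.0, 1.0], size=n))
        for _ in range(T)]

# Shared feature map (the part through which tasks share information),
# plus one task-specific linear head per task.
W_shared = rng.normal(scale=0.1, size=(d, h))
heads = rng.normal(scale=0.1, size=(T, h))

def task_loss(t):
    """Logistic loss of task t under the current shared parameters."""
    X, y = data[t]
    scores = X @ W_shared @ heads[t]
    return np.mean(np.log1p(np.exp(-y * scores)))

# The per-task losses L_1, ..., L_T that an MTL objective then combines.
losses = np.array([task_loss(t) for t in range(T)])
print("per-task losses:", losses)
```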

Given T tasks, one widely adopted MTL approach is to optimize the average of the T task objective functions, in the hope that the average performance across the T tasks improves. Most previous work falls into this category, with different information-sharing strategies being proposed. However, in our work [1], named Pareto-Path Multi-Task Learning, we observed that, interestingly, a simple non-linear combination of the task objectives (namely the Lp-(pseudo)norm) almost always achieves better performance than the averaged task objectives. The reason for this phenomenon is theoretically explained in [2]. Observing that such a model is a special case of minimizing a conic combination of the task objectives, in [2] we proposed a general Conic Multi-Task Learning framework. Built on a sound theoretical foundation, the conic MTL model finds the conic combination coefficients such that better generalization performance is guaranteed. Some of our other work in MTL is included in [3] and [4].
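
The sketch below contrasts the three ways of combining the per-task losses mentioned above: the uniform average, the Lp-(pseudo)norm, and a general conic combination. It is only a schematic of the objective forms, assuming a vector of nonnegative task losses; the exact parameterizations in [1] and [2] may differ, and in [2] the conic coefficients are chosen to tighten a generalization bound rather than supplied by hand as they are here.

```python
import numpy as np

def average_objective(losses):
    """Standard MTL objective: uniform average of the T task losses."""
    return float(np.mean(losses))

def lp_norm_objective(losses, p=2.0):
    """Lp-(pseudo)norm combination of the task losses, as in [1]:
    (sum_t L_t^p)^(1/p). For p >= 1 this is a norm; for 0 < p < 1 a
    pseudonorm. p = 1 recovers the (unscaled) sum of losses."""
    losses = np.asarray(losses)
    return float(np.sum(losses ** p) ** (1.0 / p))

def conic_objective(losses, alphas):
    """Conic combination sum_t alpha_t * L_t with alpha_t >= 0, the
    general form of which the Lp model is a special case; the alphas
    here are illustrative placeholders."""
    alphas = np.asarray(alphas)
    assert np.all(alphas >= 0), "conic coefficients must be nonnegative"
    return float(alphas @ np.asarray(losses))

losses = np.array([0.8, 0.3, 0.5])              # illustrative per-task losses
print(average_objective(losses))                 # 0.533...
print(lp_norm_objective(losses, p=2.0))          # ~0.990
print(conic_objective(losses, [0.5, 1.0, 2.0]))  # 1.7
```

Note that for p > 1 the Lp combination weights poorly performing tasks more heavily than the plain average does, which gives one intuition for why a non-linear combination can behave differently from the averaged objective.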
