Tuning Pruning in Sparse Nonnegative Matrix Factorization

Abstract  Nonnegative matrix factorization (NMF) has become a popular tool for exploratory analysis due to its part-based, easily interpretable representation. Sparseness is commonly invoked in NMF (SNMF) by regularizing with the L1-norm, both to alleviate the non-uniqueness of the NMF representation and to promote sparse (i.e., part-based) representations. While sparseness can prune excess components, and thereby potentially also establish the number of components, it is an open problem what constitutes an adequate degree of sparseness, i.e., how to tune the pruning. In a hierarchical Bayesian framework, SNMF corresponds to imposing an exponential prior, while the regularization strength can be expressed in terms of the hyperparameters of these priors. Thus, within the Bayesian modelling framework, Automatic Relevance Determination (ARD) can learn these pruning strengths from data. We demonstrate on three benchmark NMF data sets how the proposed ARD framework can be used to tune the pruning and thereby also estimate the NMF model order.
Type  Conference paper [With referee] 
Conference  European Signal Processing Conference 2009 (EUSIPCO'09) 
Year  2009 
Electronic version(s)  [pdf] 
BibTeX data  [bibtex] 
IMM Group(s)  Intelligent Signal Processing 
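The scheme described in the abstract can be illustrated with a minimal sketch: multiplicative updates for L1-regularized NMF in which the per-component regularization strengths lambda_k are re-estimated from the data in an ARD-like fashion (here via the MAP estimate under an exponential prior). This is a hypothetical simplification for illustration, not the paper's exact hierarchical Bayesian algorithm; the function name and update details are assumptions.

```python
import numpy as np

def ard_sparse_nmf(V, K=10, n_iter=200, eps=1e-9, seed=0):
    """Sketch of sparse NMF, V ~ W @ H, with per-component L1 strengths
    lambda_k learned from data (simplified ARD-style re-estimation)."""
    rng = np.random.default_rng(seed)
    M, N = V.shape
    W = rng.random((M, K)) + eps
    H = rng.random((K, N)) + eps
    lam = np.ones(K)  # per-component pruning strengths
    for _ in range(n_iter):
        # Multiplicative update for H with per-component L1 penalty lambda_k
        H *= (W.T @ V) / (W.T @ W @ H + lam[:, None] + eps)
        # Standard multiplicative update for W
        W *= (V @ H.T) / (W @ (H @ H.T) + eps)
        # Normalize columns of W so the L1 penalty acts on H alone
        norms = np.linalg.norm(W, axis=0) + eps
        W /= norms
        H *= norms[:, None]
        # ARD-style re-estimation: MAP of lambda_k under an exponential
        # prior p(H_kn | lambda_k) = lambda_k * exp(-lambda_k * H_kn)
        lam = N / (H.sum(axis=1) + eps)
    return W, H, lam
```

Components whose rows of H shrink toward zero receive ever larger lambda_k, which suppresses them further; surviving components with lambda_k bounded away from its ceiling indicate the effective model order.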