Tuning Pruning in Sparse Non-negative Matrix Factorization

Abstract: Non-negative matrix factorization (NMF) has become a popular tool for exploratory analysis due to its part-based, easily interpretable representation. Sparseness is commonly imposed in NMF (SNMF) through L1-norm regularization, both to alleviate the non-uniqueness of the NMF representation and to promote sparse (i.e., part-based) representations. While sparseness can prune excess components, and thereby potentially also establish the number of components, it remains an open problem what constitutes an adequate degree of sparseness, i.e., how to tune the pruning. In a hierarchical Bayesian framework, SNMF corresponds to imposing an exponential prior, and the regularization strength can be expressed in terms of the hyper-parameters of these priors. Thus, within the Bayesian modelling framework, Automatic Relevance Determination (ARD) can learn these pruning strengths from data. We demonstrate on three benchmark NMF data sets how the proposed ARD framework can be used to tune the pruning and thereby also estimate the NMF model order.
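The abstract's idea of letting ARD tune the per-component L1 pruning strength can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact algorithm: it uses standard multiplicative SNMF updates for the objective ||V - WH||_F^2 + sum_k lambda_k sum_n H[k,n], a heuristic column normalization of W to stop scale from leaking out of H, and a simple MAP re-estimate of each exponential rate lambda_k; all hyper-parameter choices (eps, iteration count, the pruning threshold) are assumptions.

```python
import numpy as np

def snmf_ard(V, K, n_iter=500, eps=1e-9, seed=0):
    """Sketch: sparse NMF with ARD-tuned per-component L1 penalties.

    Components whose rate lambda_k diverges have their H row driven
    toward zero and are effectively pruned, so the number of surviving
    components estimates the model order.
    """
    rng = np.random.default_rng(seed)
    M, N = V.shape
    W = rng.random((M, K)) + eps
    H = rng.random((K, N)) + eps
    lam = np.ones(K)  # exponential-prior rates = L1 strengths per component
    for _ in range(n_iter):
        # Multiplicative H update with per-row L1 penalty lambda_k.
        H *= (W.T @ V) / (W.T @ W @ H + lam[:, None] + eps)
        # Multiplicative W update, then normalize columns (heuristic) so
        # scale cannot move from H into W and defeat the sparsity prior.
        W *= (V @ H.T) / (W @ (H @ H.T) + eps)
        W /= np.linalg.norm(W, axis=0, keepdims=True) + eps
        # ARD step: re-estimate each exponential rate from the data.
        lam = N / (H.sum(axis=1) + eps)
    active = H.sum(axis=1) > N * eps  # components surviving the pruning
    return W, H, lam, active
```

Starting from an over-complete K and reading off the active components gives the data-driven model-order estimate the abstract describes.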
Type: Conference paper [With referee]
Conference: European Signal Processing Conference 2009 (EUSIPCO'09)
Electronic version(s): [pdf]
BibTeX data: [bibtex]
IMM Group(s): Intelligent Signal Processing