@MASTERSTHESIS{IMM2003-02035,
  author   = "A. Meng",
  title    = "Robust adaptive filtering using information theoretical methods",
  year     = "2003",
  school   = "Informatics and Mathematical Modelling, Technical University of Denmark, {DTU}",
  address  = "Richard Petersens Plads, Building 321, {DK-}2800 Kgs. Lyngby",
  note     = "Supervisor: Jan Larsen",
  url      = "http://www2.compute.dtu.dk/pubdb/pubs/2035-full.html",
  abstract = "This project has mainly focused on supervised adaptive filters. Different norm-based robust adaptive algorithms are introduced and discussed. Information theoretical methods are introduced by minimizing a {KL-}divergence between the true joint data distribution and the model joint distribution. This leads to the Shannon generalization error, which is shown to be a generalization of the norm criterion. A similar measure, the Renyi generalization error, is introduced and used as an information theoretical cost function. The generalization error and its relation to regularization are discussed. Two algorithms based on the Renyi generalization error are derived using a Gauss-Newton approach. An existing information theoretical algorithm known as the Stochastic Information Gradient (SIG) is derived and discussed. Selected norm-based and information theoretical methods are tested in a real setup with data sets from an open loop measurement of a hearing aid placed in an artificial ear. Abstract in Danish: This project has mainly dealt with robust adaptive filters. A few norm-based methods have been introduced and discussed. The information theoretical methods are introduced by minimizing the Kullback-Leibler divergence between the true joint distribution and the model joint distribution. This minimization leads to an objective function, which in the project is called the Shannon generalization error. The Shannon generalization error is a generalization of the classical norm methods. It is shown that the Shannon generalization error can be rewritten as a Renyi generalization error, which is a corresponding information measure. The Renyi generalization error will be used as the objective function. A connection exists between the generalization error and regularization, which will be shown. Two algorithms are derived from the Renyi generalization error using a Gauss-Newton method. An existing stochastic method (SIG) was derived from the Renyi generalization error. Selected norm-based and information theoretical methods are tested with data sets from an open loop measurement of a hearing aid placed in an artificial ear."
}