An Effective Self-Taught Low-Rank Coding Framework Using MM-ALM Algorithms

K. Sreenivasulu, S. Choudaiah

Abstract


The absence of labeled data presents a common challenge in many computer vision and machine learning tasks. Semi-supervised learning and transfer learning methods address this challenge by exploiting auxiliary samples from the same domain or from a different domain, respectively. Self-taught learning, a special type of transfer learning, places fewer restrictions on the choice of auxiliary data and has shown promising performance in visual learning. However, existing self-taught learning methods usually ignore the structure information in the data. In this paper, we focus on building a self-taught coding framework that can effectively utilize the rich low-level pattern information abstracted from the auxiliary domain to characterize the high-level structural information in the target domain. By leveraging a high-quality dictionary learned across the auxiliary and target domains, the proposed approach learns expressive codings for the samples in the target domain. Since many types of visual data have been shown to contain subspace structures, a low-rank constraint is introduced into the coding objective to better characterize the structure of the given target set. The resulting representation learning framework, called self-taught low-rank (S-Low) coding, can be formulated as a nonconvex rank-minimization and dictionary learning problem, and we devise an efficient majorization-minimization augmented Lagrange multiplier (MM-ALM) algorithm to solve it. Based on the S-Low coding mechanism, both unsupervised and supervised visual learning algorithms are derived. Extensive experiments on five benchmark data sets demonstrate the effectiveness of our approach.
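To make the role of the low-rank constraint concrete, the following is a minimal sketch of a coding objective of this kind. The notation is our own illustrative assumption rather than the paper's exact formulation: X_A and X_T denote the auxiliary and target samples, D the dictionary shared across both domains, Z the coding matrix for the target samples, E a sample-specific residual, and \lambda a trade-off weight:

\min_{D,\,Z,\,E}\ \operatorname{rank}(Z) + \lambda\,\|E\|_{2,1}
\qquad \text{s.t.} \qquad X_T = DZ + E,

where D is learned jointly over [X_A,\ X_T]. Because direct rank minimization is nonconvex (and NP-hard in general), such objectives are commonly attacked through a surrogate such as a weighted nuclear norm: majorization-minimization supplies the surrogate at each outer iteration, and an augmented Lagrange multiplier method solves the resulting convex subproblem.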
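As an illustration of how an MM-ALM solver for this kind of objective can be organized, below is a minimal Python/NumPy sketch in the style of inexact-ALM solvers for low-rank representation. Everything here is an assumption for illustration, not the authors' implementation: the dictionary D is held fixed (the paper also learns it), rank(Z) is majorized by a weighted nuclear norm with the common 1/(sigma + eps) reweighting rule, and all function names and parameter values are hypothetical.

import numpy as np

def weighted_svt(M, w):
    # Proximal step for the weighted nuclear norm: shrink each
    # singular value of M by its own weight, then reconstruct.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - w, 0.0)) @ Vt

def l21_shrink(M, tau):
    # Proximal step for tau * ||.||_{2,1}: scale each column toward zero.
    norms = np.maximum(np.linalg.norm(M, axis=0), 1e-12)
    return M * np.maximum(1.0 - tau / norms, 0.0)

def s_low_codings(X, D, lam=0.1, outer=4, inner=100,
                  rho=1.1, mu_max=1e6, eps=1e-3):
    # Hypothetical MM-ALM loop for
    #   min_{Z,E} rank(Z) + lam * ||E||_{2,1}  s.t.  X = D @ Z + E,
    # with rank(Z) majorized by a weighted nuclear norm that is
    # re-weighted after every outer (MM) pass. D is kept fixed here.
    n, m = D.shape[1], X.shape[1]
    Z = np.zeros((n, m))
    J = Z.copy()                            # auxiliary copy of Z for the SVT step
    E = np.zeros_like(X)
    Y1 = np.zeros_like(X)                   # multiplier for X = D @ Z + E
    Y2 = np.zeros_like(Z)                   # multiplier for Z = J
    w = np.ones(min(n, m))                  # plain nuclear norm on the first pass
    G = np.linalg.inv(np.eye(n) + D.T @ D)  # cached for the Z-update
    for _ in range(outer):                  # MM: refresh the majorizer
        mu = 1e-2
        for _ in range(inner):              # inexact ALM on the surrogate
            J = weighted_svt(Z + Y2 / mu, w / mu)
            Z = G @ (D.T @ (X - E + Y1 / mu) + J - Y2 / mu)
            E = l21_shrink(X - D @ Z + Y1 / mu, lam / mu)
            Y1 += mu * (X - D @ Z - E)      # dual ascent on the constraints
            Y2 += mu * (Z - J)
            mu = min(rho * mu, mu_max)
        s = np.linalg.svd(Z, compute_uv=False)
        w = 1.0 / (s + eps)                 # heavier shrinkage on small singular values
    return Z, E

A full S-Low implementation would also update the dictionary D over both auxiliary and target samples within the same loop, and would derive the downstream unsupervised or supervised learners from the resulting codings Z; the sketch stops at the codings for brevity.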

