Hongjun Wang
Research Associate
Supervisor of Master's Candidates
- Education Level: PhD graduate
- Degree: Doctor of Engineering
- Office Address: Room 31529, Teaching Building No. 3, Xipu Campus
- Professional Title: Research Associate
- Alma Mater: Sichuan University
- School/Department: School of Computing and Artificial Intelligence
- Discipline: Electronic Information; Software Engineering; Computer Application Technology
Contact Information
- Postal Address:
- Email:
- Paper Publications
Micro-supervised Disturbance Learning: A Perspective of Representation Probability Distribution
- Impact Factor: 26.7
- Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
- Place of Publication: United States
- Key Words: Clustering, micro-supervised disturbance learning, representation probability distribution, small-perturbation
- Abstract: Existing representation learning methods based on Euclidean distance exhibit instability under a broad set of conditions. Furthermore, the scarcity and high cost of labels prompt us to explore more expressive representation learning methods that depend on as few labels as possible. To address these issues, the small-perturbation ideology is first introduced into representation learning models based on the representation probability distribution. Positive small-perturbation information (SPI), which depends on only two labels per cluster, is used to stimulate the representation probability distribution, and two variant models are proposed to fine-tune the expected representation distribution of the Restricted Boltzmann Machine (RBM): the Micro-supervised Disturbance Gaussian-binary RBM (Micro-DGRBM) and the Micro-supervised Disturbance RBM (Micro-DRBM). In Contrastive Divergence (CD) learning, the Kullback-Leibler (KL) divergence of SPI is minimized within the same cluster to make the representation probability distributions more similar, and maximized across different clusters to make them more dissimilar. To explore the representation learning capability under the continuous stimulation of SPI, we present a deep Micro-supervised Disturbance Learning (Micro-DL) framework based on the Micro-DGRBM and Micro-DRBM models and compare it with a similar deep structure that has no external stimulation. Experimental results demonstrate that the proposed deep Micro-DL architecture outperforms the baseline method, the most closely related shallow models, and deep frameworks for clustering. (An illustrative sketch of the SPI objective appears after the publication details below.)
- Co-authors: Jing Liu, Hongjun Wang, Hua Meng, Zhiguo Gong
- First Author: Jielei Chu
- Indexed by: SCI
- Corresponding Author: Tianrui Li
- Discipline: Engineering
- Document Type: J
- Volume: 45
- Issue: 6
- Page Number: 7542-7558
- ISSN No.: 0162-8828
- Translation or Not: no
- Date of Publication: 2022-11-29
- Included Journals: SCI
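
Illustrative note: the abstract above describes the SPI objective only in words. The Python snippet below is a minimal, hypothetical sketch of that idea, not the authors' implementation; the function names (`bernoulli_kl`, `spi_disturbance`), the Bernoulli form of the hidden-unit representation distribution, and the sign convention for the disturbance term are all assumptions. It shows the KL divergence of representation probability distributions being minimized for a labelled pair from the same cluster and maximized for a pair from different clusters, which is the role SPI plays alongside CD learning.

```python
import numpy as np

def bernoulli_kl(p, q, eps=1e-8):
    """KL divergence between two vectors of Bernoulli activation probabilities,
    used here as a stand-in for the RBM hidden-layer representation distribution."""
    p = np.clip(p, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    return float(np.sum(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))))

def spi_disturbance(h_a, h_b, same_cluster):
    """Hypothetical SPI term added to the CD objective: pull the representation
    distributions of a labelled same-cluster pair together (minimize KL) and push
    a different-cluster pair apart (maximize KL); the sign convention is assumed."""
    kl = bernoulli_kl(h_a, h_b)
    return kl if same_cluster else -kl

# Toy usage with made-up hidden probabilities for two labelled pairs
# (two labels per cluster, as the abstract states).
h1 = np.array([0.9, 0.1, 0.8])   # labelled sample, cluster A
h2 = np.array([0.85, 0.2, 0.75]) # labelled sample, cluster A
h3 = np.array([0.1, 0.9, 0.2])   # labelled sample, cluster B
disturbance = spi_disturbance(h1, h2, True) + spi_disturbance(h1, h3, False)
print(disturbance)  # negative values favour compact, well-separated clusters
```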