Wang Hongjun
Research Associate
Supervisor of Master's Candidates
- Education Level: PhD graduate
- Degree: Doctor of Engineering
- Business Address: Room 31529, Teaching Building 3, Xipu Campus
- Professional Title: Research Associate
- Alma Mater: Sichuan University
- School/Department: School of Computing and Artificial Intelligence
- Discipline: Electronic Information; Software Engineering; Computer Application Technology
Contact Information
- Postal Address:
- Email:
Paper Publications
The detection of distributional discrepancy for language GANs
- DOI number: 10.1080/09540091.2022.2080182
- Affiliation of Author(s): Southwest Jiaotong University
- Journal: Connection Science
- Place of Publication: England
- Key Words: text generation; generative adversarial nets; distributional discrepancy
- Abstract: A pre-trained neural language model (LM) is usually used to generate text. Due to exposure bias, the generated text is not as good as real text. Many researchers have claimed that Generative Adversarial Nets (GANs) alleviate this issue by feeding reward signals from a discriminator back into updates of the LM (the generator). However, others have argued that GANs do not work, based on evaluating the generated texts with quality-diversity metrics such as BLEU versus self-BLEU, and language model score versus reverse language model score. Unfortunately, these two-dimensional metrics are not reliable. Furthermore, existing methods assess only the final generated texts, neglecting dynamic evaluation of the adversarial learning process. Unlike the above-mentioned methods, we adopted recent metric functions that measure the distributional discrepancy between real and generated text. In addition, we designed a comprehensive experiment to investigate performance during the learning process. First, we evaluated a language model with two such functions and identified a large discrepancy. We then tried several methods that use the detected discrepancy signal to improve the generator. Experimenting with two language GANs on two benchmark datasets, we found that the distributional discrepancy increases with more adversarial learning rounds. Our research provides convincing evidence that language GANs fail.
- Co-authors: Peng Jin, Ping Cai, Hongjun Wang, Jiajun Chen
- First Author: Xingyuan Chen
- Indexed by: SCI
- Corresponding Author: Xinyu Dai
- Discipline: Engineering
- Document Type: J
- Volume: 34
- Issue: 1
- Page Number: 1736-1750
- ISSN No.: 0954-0091
- Translation or Not: no
- Date of Publication: 2022-06-14
- Included Journals: SCI
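The distributional-discrepancy idea described in the abstract can be illustrated with a minimal sketch. The paper's actual metric functions are learned (neural) estimators; as a simplification, the sketch below estimates the discrepancy between a "real" and a "generated" corpus as the total variation distance between their unigram frequency distributions. The tiny corpora, whitespace tokenization, and the choice of total variation are illustrative assumptions, not the authors' method.

```python
from collections import Counter

def unigram_dist(texts):
    """Relative unigram frequencies over a whitespace-tokenized corpus."""
    counts = Counter(tok for text in texts for tok in text.split())
    total = sum(counts.values())
    return {word: c / total for word, c in counts.items()}

def total_variation(p, q):
    """Total variation distance between two unigram distributions.

    Ranges from 0.0 (identical distributions) to 1.0 (disjoint support);
    a larger value means a larger gap between real and generated text.
    """
    vocab = set(p) | set(q)
    return 0.5 * sum(abs(p.get(w, 0.0) - q.get(w, 0.0)) for w in vocab)

# Toy corpora standing in for real and generator-sampled text.
real = ["the cat sat on the mat", "the dog ran in the park"]
fake = ["the the the cat cat", "mat mat dog dog dog"]

discrepancy = total_variation(unigram_dist(real), unigram_dist(fake))
print(round(discrepancy, 3))  # → 0.45
```

Tracking such a discrepancy across adversarial training rounds, rather than only on the final samples, is the dynamic evaluation the abstract argues for: if the value grows as training proceeds, the GAN is moving the generator away from the real distribution rather than toward it.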