Hongjun Wang (王红军), Associate Researcher

Supervisor of Master's Students

Personal Information

Education: Doctoral graduate

Degree: Doctor of Engineering (Ph.D.)

Office: Room 31529, Teaching Building No. 3, Xipu Campus

Alma Mater: Sichuan University

Disciplines: Electronic Information; Software Engineering; Computer Application Technology

Affiliation: School of Computing and Artificial Intelligence

How to Apply to Study under This Supervisor

You are welcome to apply to become a graduate student supervised by Dr. Hongjun Wang. There are several ways to apply:

1. Take part in the Southwest Jiaotong University summer camp. When submitting your supervisor preference, choose Dr. Hongjun Wang; all of your application information will be forwarded to him, and he will contact you once he has reviewed it. (Click here to register for the summer camp.)

2. If you can obtain the recommendation-for-exemption qualification from your home university, you are welcome to apply as a recommended exempt student: submit your application through the pre-registration system for recommended exempt students and choose Dr. Hongjun Wang as your intended supervisor; he will contact you after seeing your information. (Click here for recommended-exempt-student pre-registration.)

3. Take the national unified postgraduate entrance examination and apply for a program and research direction in which Dr. Hongjun Wang recruits students; after advancing to the re-examination stage, choose him when submitting your supervisor preference.

4. If you are interested in pursuing a doctoral degree under Dr. Hongjun Wang, you can apply through the application-and-assessment scheme or the unified doctoral entrance examination.


Publications


The detection of distributional discrepancy for language GANs

DOI: 10.1080/09540091.2022.2080182

Affiliation: Southwest Jiaotong University

Journal: Connection Science

Place of Publication: England

Keywords: Text generation; generative adversarial nets; distributional discrepancy

Abstract: A pre-trained neural language model (LM) is usually used to generate text. Due to exposure bias, the generated text is not as good as real text. Many researchers have claimed that Generative Adversarial Nets (GANs) alleviate this issue by feeding reward signals from a discriminator back to update the LM (the generator). However, other researchers have argued that GANs do not work, based on evaluating the generated texts with two-dimensional quality-diversity metrics such as BLEU versus self-BLEU and language model score versus reverse language model score. Unfortunately, these two-dimensional metrics are not reliable. Furthermore, existing methods assess only the final generated texts and neglect dynamic evaluation of the adversarial learning process. Unlike the above methods, we adopt recent metric functions that measure the distributional discrepancy between real and generated text. In addition, we design a comprehensive experiment to investigate performance during the learning process. First, we evaluate a language model with two such functions and identify a large discrepancy. Then, we try several methods that use the detected discrepancy signal to improve the generator. Experimenting with two language GANs on two benchmark datasets, we find that the distributional discrepancy increases with more adversarial learning rounds. Our research provides convincing evidence that language GANs fail.

(An illustrative sketch of a classifier-based discrepancy estimate is given after this record.)

Co-authors: Peng Jin, Ping Cai, Hongjun Wang, Jiajun Chen

First Author: Xingyuan Chen

Paper Type: SCI

Corresponding Author: Xinyu Dai

Discipline: Engineering

Document Type: Journal article (J)

Volume: 34

Issue: 1

Pages: 1736-1750

ISSN: 0954-0091

Translated Work:

Date of Publication: 2022-06-14

Indexed in: SCI
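
The abstract above centers on estimating the distributional discrepancy between real and generated text and tracking it across adversarial training rounds. The sketch below is a minimal, generic illustration of one classifier-based way to estimate such a discrepancy: train a classifier to separate real from generated samples and read its held-out accuracy as a discrepancy signal. It is not the metric functions used in the paper; the function name, features, and toy data are assumptions made for this example only.

```python
# Minimal sketch (not the paper's method): estimate how distinguishable
# real and generated texts are with a simple classifier. Chance-level
# held-out accuracy (~0.5) suggests small discrepancy; accuracy near 1.0
# suggests a large discrepancy.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def discrepancy_score(real_texts, generated_texts, seed=0):
    """Return a value in [0, 1]: ~0 means the two sets look alike,
    ~1 means the classifier separates them almost perfectly."""
    texts = list(real_texts) + list(generated_texts)
    labels = [0] * len(real_texts) + [1] * len(generated_texts)

    x_train, x_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.3, stratify=labels, random_state=seed
    )

    # Simple bag-of-words features keep the sketch self-contained;
    # a stronger neural critic could be substituted here.
    vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vectorizer.fit_transform(x_train), y_train)

    acc = accuracy_score(y_test, clf.predict(vectorizer.transform(x_test)))
    # Rescale: 0.5 accuracy (chance) -> 0 discrepancy, 1.0 accuracy -> 1.0.
    return max(0.0, 2.0 * acc - 1.0)


if __name__ == "__main__":
    # Toy data: "generated" texts are word-shuffled versions of real ones.
    real = ["the cat sat on the mat"] * 50 + ["a dog ran in the park"] * 50
    fake = ["cat the mat sat on the"] * 50 + ["park dog a the in ran"] * 50
    print(f"discrepancy ≈ {discrepancy_score(real, fake):.2f}")
```

In the setting described by the abstract, a discrepancy signal of this kind would be computed repeatedly during adversarial training to check whether the generator actually moves closer to the real-text distribution rather than only at the end.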