Abstract
Although adversarial inverse reinforcement learning is effective for training dialogue policies in multi-domain task-oriented dialogue systems, it frequently fails to balance the performance of the reward estimator and the policy generator. During optimization, the reward estimator often overwhelms the policy generator, yielding excessively uninformative gradients. We propose the variational reward estimator bottleneck (VRB), a novel and effective regularization strategy that constrains unproductive information flow between the inputs and the reward estimator. The VRB captures discriminative features by imposing an information bottleneck on the mutual information between the inputs and the estimator's internal representation. Quantitative analysis on a multi-domain task-oriented dialogue dataset demonstrates that the VRB significantly outperforms previous approaches.
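Since the abstract describes the regularizer only at a high level, the following is a minimal sketch of how an information bottleneck of this kind is typically attached to an adversarial reward estimator (in the style of the variational discriminator bottleneck). It assumes PyTorch; the names `VRBEstimator`, `estimator_step`, `i_c`, and `beta_lr` are illustrative placeholders, not identifiers from the paper.

```python
import torch
import torch.nn as nn

class VRBEstimator(nn.Module):
    """Reward estimator with a variational bottleneck: the input is
    encoded into a stochastic latent z, and the KL divergence between
    q(z|x) and a standard normal prior is penalized so that only
    discriminative information reaches the scoring head."""

    def __init__(self, input_dim, hidden_dim=128, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        self.score = nn.Linear(latent_dim, 1)  # logit: expert vs. generated

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        # KL( N(mu, sigma^2) || N(0, I) ), averaged over the batch
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=1).mean()
        return self.score(z).squeeze(-1), kl


def estimator_step(model, optim, expert_x, policy_x, beta, i_c=0.5, beta_lr=1e-5):
    """One update of the bottlenecked reward estimator. beta is a
    Lagrange multiplier adapted by dual gradient ascent so the average
    KL stays near the target capacity i_c."""
    bce = nn.BCEWithLogitsLoss()
    logit_e, kl_e = model(expert_x)
    logit_p, kl_p = model(policy_x)
    kl = 0.5 * (kl_e + kl_p)
    loss = (bce(logit_e, torch.ones_like(logit_e))     # expert turns labeled 1
            + bce(logit_p, torch.zeros_like(logit_p))  # generated turns labeled 0
            + beta * kl)                               # bottleneck penalty
    optim.zero_grad()
    loss.backward()
    optim.step()
    beta = max(0.0, beta + beta_lr * (kl.item() - i_c))  # tighten/relax the constraint
    return beta
```

The multiplier `beta` grows when the encoder's KL exceeds the target capacity and shrinks otherwise, which caps how much input information the estimator can encode and, in turn, keeps it from overwhelming the policy generator with near-saturated, uninformative gradients.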
| Original language | English |
| --- | --- |
| Article number | 6624 |
| Journal | Applied Sciences (Switzerland) |
| Volume | 11 |
| Issue number | 14 |
| DOIs | |
| Publication status | Published - 2021 Jul 2 |
Keywords
- Dialogue policy
- Inverse reinforcement learning
- Reinforcement learning
- Task-oriented dialogue
ASJC Scopus subject areas
- Materials Science (all)
- Instrumentation
- Engineering (all)
- Process Chemistry and Technology
- Computer Science Applications
- Fluid Flow and Transfer Processes