July 2019
Henry Elder, ADAPT Centre
Dublin City University
Supervisors: Jennifer Foster, Alexander O'Connor
Introduction
Research Methods
Current Progress
Conclusion
https://www.vphrase.com/blog/natural-language-generation-explained/
Stephen Merity. 2016. “Peeking into the Neural Network Architecture Used for Google’s Neural Machine Translation.” https://smerity.com/articles/2016/google_nmt_arch.html
Ehud Reiter and Robert Dale. 2000. “Building Natural Language Generation Systems”
How can we generate intermediate representations as part of the content selection step?
Source unknown
You will find this local gem near Cotto in the riverside area.
Automated metrics - Not that informative or useful!
N-gram overlap e.g. BLEU
Perplexity, edit distance
Human evaluation
Count-based metrics
Ranking
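To make the n-gram overlap family concrete, here is a minimal sentence-level BLEU sketch in plain Python. It is an illustration only, with add-one smoothing on each precision so a missing n-gram order does not zero the score; the official BLEU is corpus-level and unsmoothed, so its numbers will differ.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(reference, hypothesis, max_n=4):
    """Sentence-level BLEU sketch: smoothed modified n-gram
    precisions combined with a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hypothesis, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clip each hypothesis n-gram count by its reference count
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        # Add-one smoothing keeps the geometric mean non-zero
        precisions.append((overlap + 1) / (total + 1))
    # Brevity penalty discourages overly short hypotheses
    bp = min(1.0, math.exp(1 - len(reference) / len(hypothesis)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

An identical hypothesis scores 1.0; dropping or changing words lowers the score through both the clipped precisions and the brevity penalty.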
Results: Top scores in all automated and human evaluation metrics on the seen subtask, out of 9 systems, but relatively worse performance than other systems on the unseen subtask
Lessons learned: The baseline seq2seq model achieved very strong performance, especially given the graph-like nature of the inputs
Results: Joint submission with Harvard NLP achieved the top METEOR, ROUGE, and CIDEr scores out of 60 systems. Our own diversity-enhancing approach performed worse than the baseline
Results: Top scores in all automated metrics, and first for readability in human evaluation for English, out of 8 systems. The only team to enter the deep track
Target Sentence: This happened very quickly, and I wanted to make sure that I let everyone know before I left.
Results: Ranked second out of 23 systems on the automated evaluation leaderboard, but only 6th in human evaluation
Multi-stage neural NLG systems are capable of improving the quality of generated utterances (in English).