#31: MLDublin meets Criteo @ Huckletree D2
We’re joined by Diarmuid and some of the team from the Criteo AI Lab.
Flavian presented his work, which was submitted to RecSys 2018 and won the award for best long paper.
Our world is dynamic and three-dimensional. Understanding the 3D layout of scenes and the motion of objects is crucial for operating successfully in such an environment. This talk will cover two lines of recent research in this direction: end-to-end learning of motion and 3D structure, and synthetic depth/3D data generation platforms.
The encoder-decoder architecture has facilitated Neural Machine Translation. For NMT, an input sentence is encoded into a context vector via an RNN -- the last hidden state represents the context. Subsequently, this context is used in a decoder RNN to compute the target sentence. While recent advances have led to the development of the multi-headed attention architecture (known as the Transformer), RNN-based encoder-decoder models are still widely used, not only in MT but in other related tasks. This talk covers basic and advanced encoder-decoder architectures for MT along with available implementations, and investigates which other tasks can be solved with the same or similar architectures, together with the extra requirements these tasks impose. Among these tasks are automatic post-editing, translation quality estimation and others.
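To make the encode/decode split concrete, here is a minimal sketch of an RNN encoder-decoder as described above. It is not code from the talk: the use of PyTorch, GRU cells, the vocabulary and hidden sizes, and the teacher-forced decoding are all illustrative assumptions.

```python
# Minimal RNN encoder-decoder sketch (illustrative; not the speaker's implementation).
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, hidden=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, hidden)
        self.tgt_emb = nn.Embedding(tgt_vocab, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt):
        # Encode: the encoder RNN's final hidden state serves as the context vector.
        _, context = self.encoder(self.src_emb(src))
        # Decode: the context initialises the decoder RNN, which predicts the target
        # sentence (here with teacher forcing on the ground-truth target tokens).
        dec_out, _ = self.decoder(self.tgt_emb(tgt), context)
        return self.out(dec_out)  # logits over the target vocabulary

# Toy usage with random token ids: batch of 2, source length 7, target length 5.
model = EncoderDecoder(src_vocab=1000, tgt_vocab=1200)
src = torch.randint(0, 1000, (2, 7))
tgt = torch.randint(0, 1200, (2, 5))
logits = model(src, tgt)
print(logits.shape)  # torch.Size([2, 5, 1200])
```

Attention-based models (including the Transformer) replace the single fixed context vector with a weighted view over all encoder states, but the encoder-decoder structure sketched here is the same.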