• Speaker: Antoine Bordes
  • Paper: Memory Networks
  • Note - lots of results and tasks discussed! Only took notes on a few.
  • Data can be found at fb.ai/babi.
  • A true dialog agent should:
    • combine knowledge and reason to fulfill complex tasks
    • handle open-ended conversations
    • be able to learn and acquire knowledge
  • Interested in training dialog agents in an end-to-end way.
  • Memory networks:
    • Class of models that combine a large memory with a learning component that can read from and write to it
    • incorporate reasoning with attention over memory (RAM) - see the read-step sketch after this list
    • most ML models have only limited memory, which is all that's needed for low-level tasks
    • but memory is required for more complex tasks like story understanding and dialog
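    • A minimal sketch of one attention-over-memory "hop" (NumPy; names and shapes are illustrative - the real model learns the embeddings end-to-end and stacks several hops):

        import numpy as np

        def softmax(x):
            e = np.exp(x - x.max())
            return e / e.sum()

        def memory_hop(query, memories):
            # memories: (n_facts, d) embedded story sentences; query: (d,) embedded question
            scores = memories @ query       # match the query against every memory slot
            attention = softmax(scores)     # soft address over the memory
            return attention @ memories     # weighted read, fed to the next hop or answer layer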
  • bAbI tasks
    • Set of 20 tasks testing basic reasoning for question answering from stories
    • short stories generated from a simulation - a simple command format is rendered into a story (example below)
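    • Roughly the released bAbI file format (task 1, single supporting fact; fields on the question line are tab-separated, and the trailing number points at the supporting sentence):

        1 Mary moved to the bathroom.
        2 John went to the hallway.
        3 Where is Mary?    bathroom    1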
  • Memory nets perform well on synthetic bAbI tasks
  • Children's Books Test (CBT)
    • Story understanding dataset based on 118 children's books from Project Gutenberg
    • Keep 20 sentences of context, remove 1 word from the 21st sentence; select the missing word from a list of candidate words of the same part of speech (toy scoring sketch below)
    • 100k training, 10k test
    • LSTMs offer best performance for verbs, prepositions
    • Memory nets offer best performance for named entities, common nouns
  • Recent work has focused on predicting named entities and common nouns - LSTMs already do very well elsewhere, so that's where the room for improvement is.
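  • A toy sketch of how a CBT query can be scored (Python; model.score is a hypothetical function returning how well a completed sentence fits the 20-sentence context; the dataset marks the blank with "XXXXX"):

      def best_candidate(context, query, candidates, model):
          # Fill the blank with each candidate word and keep the best-scoring completion.
          return max(candidates,
                     key=lambda c: model.score(context, query.replace("XXXXX", c)))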
  • Open-domain question answering
    • Answer questions on any topic
    • have some kind of knowledge base
    • e.g. 'What year was the movie Blade Runner released?' (toy KB lookup sketch at the end of this list)
    • information extraction - extract missing facts from raw text
    • Can questions be answered directly from text, without a knowledge base?
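    • A toy illustration of the KB-lookup route, for contrast with answering directly from raw text (hypothetical triple store; Blade Runner's 1982 release is the fact being looked up):

        # Minimal triple store: (entity, relation) -> answer
        kb = {("Blade Runner", "release_year"): "1982"}

        def answer(entity, relation):
            return kb.get((entity, relation), "unknown")

        print(answer("Blade Runner", "release_year"))  # -> 1982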