Richard is Chief Scientist at Salesforce, which he joined through the acquisition of his startup MetaMind. Previously, Richard was a professor in the Stanford CS department.
decaNLP - A Benchmark for Generalized NLP
Why Unified Multi-Task Models for NLP?
Multi-task learning is a blocker for general NLP systems.
Unified models can decide how to transfer knowledge (domain adaptation, weight sharing, transfer learning, and zero-shot learning).
Unified AND multi-task models can:
More easily adapt to new tasks.
Make deploying to production X times simpler.
Lower the bar for more people to solve new tasks.
Potentially move towards continual learning.
The 3 Major NLP Task Categories
1. Sequence tagging: named entity recognition, aspect-specific sentiment.
2. Text classification: dialogue state tracking, sentiment classification.