Despite constant advances and seemingly super-human performance on constrained domains, state-of-the-art models for NLP are imperfect. These imperfections, coupled with today's advances being driven by (seemingly black-box) neural models, leave researchers and practitioners scratching their heads asking, why did my model make this prediction?
We present AllenNLP Interpret, a toolkit built on top of AllenNLP for interactive model interpretations. The toolkit makes it easy to apply gradient-based saliency maps and adversarial attacks to new models, as well as to develop new interpretation methods. AllenNLP Interpret contains three components: a suite of interpretation techniques applicable to most models, APIs for developing new interpretation methods (e.g., APIs to obtain input gradients), and reusable front-end components for visualizing the interpretation results.
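To illustrate the gradient-based saliency idea mentioned above, here is a minimal numpy sketch: score each input token by the norm of the gradient of the model's output with respect to that token's embedding, then normalize the scores to sum to one. This is not the AllenNLP Interpret API; the toy model, its random weights, and the example tokens are all made-up placeholders standing in for a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-layer model: score = sum_i v . tanh(W e_i), where e_i is the
# embedding of token i. The weights are random placeholders standing in
# for a trained model.
dim, hidden = 8, 4
W = rng.normal(size=(hidden, dim))
v = rng.normal(size=hidden)

def saliency(embeddings):
    """L2 norm of d(score)/d(e_i) per token, normalized to sum to 1."""
    norms = []
    for e in embeddings:
        h = np.tanh(W @ e)
        grad = W.T @ (v * (1.0 - h ** 2))  # analytic gradient w.r.t. e
        norms.append(np.linalg.norm(grad))
    norms = np.array(norms)
    return norms / norms.sum()

tokens = ["the", "movie", "was", "great"]
embeddings = rng.normal(size=(len(tokens), dim))
sal = saliency(embeddings)
for tok, s in zip(tokens, sal):
    print(f"{tok:>6}: {s:.3f}")
```

In a real setting the per-token gradients come from a backward pass through the network rather than a closed-form expression, but the interpretation step is the same: larger normalized gradient norm means the prediction is more sensitive to that token.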
@inproceedings{Wallace2019AllenNLP,
  author    = {Eric Wallace and Jens Tuyls and Junlin Wang and Sanjay Subramanian and Matt Gardner and Sameer Singh},
  booktitle = {},
  year      = {},
  title     = {AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models}
}