Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing
TextFlint is a multilingual robustness evaluation platform for natural language processing. It unifies text transformation, subpopulation, adversarial attack, and their combinations to provide a comprehensive robustness analysis. So far, TextFlint supports 13 NLP tasks.
If you’re looking for robustness evaluation results of SOTA models, you may want to check the TextFlint IO page.
Features
Full coverage of transformation types, including 20 general transformations, 8 subpopulations and 60 task-specific transformations, as well as thousands of their combinations.
Subpopulation, which identifies the specific part of the dataset on which the target model performs poorly.
Adversarial attack aims to find a perturbation of an input text that is able to fool the given model.
Complete analytical report to accurately explain where your model’s shortcomings are, such as the problems in lexical rules or syntactic rules.
Online Demo
You can test most of the transformations directly on our online demo.
Setup
Requires Python version >= 3.7; we recommend installing with pip:
pip install textflint
Once TextFlint is installed, you can run it via the command line (textflint ...) or integrate it inside another NLP project.
Usage
Workflow
The general workflow of TextFlint is displayed above. Evaluating a target model can be divided into three steps:
For input preparation, the original dataset for testing, which is to be loaded by Dataset, should first be formatted as a series of JSON objects (see the sketch after these steps). You can use the built-in Dataset following this instruction. The TextFlint configuration is specified by Config, and the target model is loaded as a FlintModel.
In adversarial sample generation, multi-perspective transformations (i.e., the 80+ Transformations, Subpopulations, and AttackRecipes) are performed on the Dataset to generate transformed samples. To ensure the semantic and grammatical correctness of the transformed samples, Validator calculates a confidence for each sample and filters out unacceptable ones.
Lastly, Analyzer collects the evaluation results and ReportGenerator automatically generates a comprehensive report of model robustness.
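To make the input-preparation step concrete, the sketch below writes a tiny Sentiment Analysis (SA) test set with one JSON object per line (one-object-per-line is assumed here). The field names "x" (input text) and "y" (label) are used for illustration; the exact fields required by each task are described in the IO format document.

import json

# Two toy SA samples; "x" holds the input text and "y" the gold label.
samples = [
    {"x": "Titanic is my favorite movie.", "y": "pos"},
    {"x": "The plot is thin and the pacing is worse.", "y": "neg"},
]

# One JSON object per line (a common layout for "a series of JSON objects").
with open("sa_test.json", "w", encoding="utf-8") as out:
    for sample in samples:
        out.write(json.dumps(sample) + "\n")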
For example, on the Sentiment Analysis (SA) task, TextFlint produces a statistical chart of the performance of XLNET with different types of Transformation/Subpopulation/AttackRecipe on the IMDB dataset.
We release tutorials of performing the whole TextFlint pipeline on various tasks.
Quick Start
Using TextFlint to verify the robustness of a specific model is as simple as running a single command, where input_file is the input file in CSV or JSON format and config.json is a configuration file with generation and target-model options. The transformed datasets are saved to your output directory according to config.json.
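As a sketch (the --dataset and --config flag names are assumptions; check the command-line help for the exact options), the invocation would look like:

textflint --dataset input_file --config config.json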
Based on the design of decoupling sample generation and model verification, TextFlint can be used inside another NLP project with just a few lines of code.
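A minimal sketch of such an integration is shown below. It assumes a top-level Engine class with a run(data, config) entry point; treat the import path and method signature as assumptions and consult the TextFlint documentation for the exact API.

from textflint import Engine  # assumed top-level entry point

data_path = "sa_test.json"   # dataset in the JSON format sketched above
config_path = "config.json"  # generation and target-model options

# Run the whole pipeline: load the data, apply the configured transformations,
# evaluate the target model, and write the transformed data and the report.
engine = Engine()
engine.run(data_path, config_path)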
For more input and output instructions of TextFlint, please refer to the IO format document.
Architecture
Input layer: receives textual datasets and models as input, represented as Dataset and FlintModel respectively.
DataSet: a container that provides efficient and handy operation interfaces for Sample. Dataset supports loading, verifying, and saving data in JSON or CSV format for various NLP tasks.
FlintModel: a target model used in an adversarial attack.
Generation layer: consists of four main parts:
Subpopulation: generates a subset of a DataSet.
Transformation: transforms each sample of Dataset if it can be transformed.
AttackRecipe: attacks the FlintModel and generates a DataSet of adversarial examples.
Validator: verifies the quality of samples generated by Transformation and AttackRecipe.
TextFlint provides an interface to integrate the easy-to-use adversarial attack recipes implemented on top of textattack. Users can refer to textattack for more information about the supported AttackRecipes.
Report layer: analyzes model testing results and provides a robustness report for users.
Contributing
We welcome community contributions to TextFlint in the form of bugfixes 🛠️ and new features 💡! If you want to contribute, please first read our contribution guideline.
Citation
If you are using TextFlint for your work, please kindly cite our ACL 2021 demo paper, "TextFlint: Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing".