R2ATA
Reasoning Robustness to Adversarial Typo Attacks
About
Reasoning Robustness to Adversarial Typo Attacks (R2ATA) is a specialized evaluation benchmark designed to assess the robustness of language models against adversarial typographical errors. R2ATA focuses on three common reasoning tasks: GSM8K, BBH, and MMLU. These tasks have been augmented with adversarial typos to create challenging test cases that expose vulnerabilities in model reasoning and comprehension.
Features
Our R2ATA benchmark includes a diverse set of questions and prompts from mathematics, common sense, and multi-domain knowledge tasks, ensuring a comprehensive evaluation of language model capabilities.
The adversarial typos consist of four error types: Proximity Error, Double Typing Error, Omission Error, and Whitespace Error.
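As an illustration, the four error types could be realized roughly as follows. This is a minimal sketch, not the benchmark's actual perturbation code; the function names and the (partial) QWERTY-neighbour map are assumptions introduced here for clarity.

```python
import random

# Illustrative (partial) map of QWERTY-adjacent keys; a real implementation
# would cover the whole keyboard. This map is an assumption, not R2ATA's.
QWERTY_NEIGHBOURS = {
    "a": "qwsz", "s": "awedxz", "d": "serfcx", "e": "wsdr", "o": "iklp",
}

def proximity_error(word, i, rng):
    """Replace the i-th character with an adjacent key on a QWERTY layout."""
    ch = word[i].lower()
    if ch in QWERTY_NEIGHBOURS:
        return word[:i] + rng.choice(QWERTY_NEIGHBOURS[ch]) + word[i + 1:]
    return word

def double_typing_error(word, i):
    """Duplicate the i-th character, as if the key were pressed twice."""
    return word[:i + 1] + word[i] + word[i + 1:]

def omission_error(word, i):
    """Drop the i-th character entirely."""
    return word[:i] + word[i + 1:]

def whitespace_error(word, i):
    """Insert a spurious space inside the word."""
    return word[:i] + " " + word[i:]
```

For example, applying `double_typing_error("model", 2)` yields `"moddel"`, and `omission_error("model", 2)` yields `"moel"`.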
Easy Evaluation
Our R2ATA benchmark provides a seamless, automated evaluation pipeline for assessing the robustness of language models against adversarial typographical errors. Users can integrate their models into our system, which automatically loads the R2ATA dataset and tests the models' performance under these challenging conditions.
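Conceptually, such a pipeline reduces to scoring a model's answers over the perturbed examples. The sketch below shows the shape of that loop under assumed field names (`question`, `answer`) and a stub `model_answer` callable; it is not R2ATA's actual interface.

```python
def evaluate(examples, model_answer):
    """Return exact-match accuracy of `model_answer` over (question, answer) pairs.

    `examples` is a list of dicts with assumed keys "question" and "answer";
    `model_answer` is any callable mapping a question string to an answer string.
    """
    correct = sum(
        1 for ex in examples
        if model_answer(ex["question"]).strip() == ex["answer"].strip()
    )
    return correct / len(examples)

# Toy run: one typo-perturbed question and a stub model that always answers "4".
examples = [
    {"question": "Waht is 2 + 2?", "answer": "4"},  # adversarial typo in prompt
    {"question": "What is 3 + 3?", "answer": "6"},
]
accuracy = evaluate(examples, lambda q: "4")  # stub model is right on 1 of 2
```

Comparing this accuracy on the perturbed questions against the same model's accuracy on the clean originals quantifies the robustness gap the benchmark is designed to expose.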