-        model_wrapper: A TextAttack model wrapper compatible with the selected goal function.
-        goal_function_type (str, optional): One of:
-            - "targeted_classification": targeted attack on a classification model (default).
-            - "targeted_strict": stricter targeted attack on a classification model.
-            - "targeted_bonus": targeted attack on a classification model that adds a bonus score of 1 if the prediction for the target class is the maximum over all classes.
-            - "named_entity_recognition": token-level targeted attack on a NER model.
-            - "logit_sum": untargeted attack minimizing the total logits.
-            - "minimize_bleu": attack minimizing the BLEU score between the original and perturbed translations.
-            - "maximize_levenshtein": attack maximizing the Levenshtein distance between the original and perturbed translations.
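As a point of reference for the `"maximize_levenshtein"` goal, the metric being maximized is ordinary edit distance. A minimal pure-Python sketch (illustrative only; the attack itself may rely on a library implementation):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    # Classic dynamic-programming formulation: prev holds distances
    # from a[:i-1] to every prefix of b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

# A perturbation scores higher under this goal when the translation of the
# perturbed input drifts further from the translation of the original:
print(levenshtein("kitten", "sitting"))  # → 3
```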
-        perturbation_type (str, optional): One of:
-            - "homoglyphs" (default)
-            - "invisible"
-            - "deletions"
-            - "reorderings"
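The four perturbation families can be illustrated with plain Unicode; the specific code points below are representative examples, not necessarily the exact set the attack draws from:

```python
text = "bank"

# Homoglyphs: swap a character for a visually near-identical one
# (Latin "a" U+0061 -> Cyrillic "а" U+0430).
homoglyph = text.replace("a", "\u0430")

# Invisible characters: inject a zero-width space (U+200B).
invisible = text[:2] + "\u200b" + text[2:]

# Deletions: insert a character followed by a backspace control (U+0008);
# many renderers hide the pair, but a model still sees both code points.
deletion = text[:2] + "x\u0008" + text[2:]

# Reorderings: bidirectional control characters (U+202E/U+202C) change the
# underlying code-point sequence while the rendered order can look unchanged.
reordering = "\u202e" + text[::-1] + "\u202c"

for variant in (homoglyph, invisible, deletion, reordering):
    # Each variant renders (almost) identically to "bank" yet compares unequal.
    print(variant != text)
```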
-        allow_skip (bool): If set to False, the attack runs even when the unperturbed input string already achieves the goal. Set to False in the paper.
-        perturbs (int): Maximum number of perturbations allowed per input string. Values from 1 to 5 were used in the paper.
-        popsize (int): Population size for differential evolution. Set to 32 in the paper.
-        maxiter (int): Maximum number of generations for differential evolution. Set to 10 in the paper.
-        **goal_function_kwargs: Additional arguments passed to the goal function.
-
-    Returns:
-        textattack.Attack: Configured Attack instance.
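The `popsize` and `maxiter` parameters map directly onto a standard differential-evolution loop. As a sketch of how those settings behave, here is SciPy's optimizer (not the project's own search code) with a toy objective standing in for a goal-function query against the victim model:

```python
import numpy as np
from scipy.optimize import differential_evolution

def objective(x):
    # Toy stand-in for a goal-function score; in the real attack, x would
    # encode perturbation positions/choices and this would query the model.
    return float(np.sum(x ** 2))

result = differential_evolution(
    objective,
    bounds=[(-5.0, 5.0)] * 3,
    popsize=32,   # population size used in the paper
    maxiter=10,   # maximum generations used in the paper
    seed=0,
)
print(result.x, result.fun)
```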
+    Parameters
+    ----------
+    model_wrapper : ModelWrapper
+        A TextAttack model wrapper compatible with the selected goal function.
+    goal_function_type : str, optional
+        Goal function type. One of:
+
+        - ``"targeted_classification"``: targeted attack on a classification model (default).