Security Evaluation of Pattern Classifiers under Attack

M V Vidyadhari, K Kiranmai, K Rama Krishniah, D Sarath Babu

Abstract


Pattern classification systems are commonly used in adversarial applications, such as biometric authentication, network intrusion detection, and spam filtering, in which data can be intentionally manipulated by humans to undermine their operation. As this adversarial scenario is not taken into account by classical design methods, pattern classification systems may exhibit vulnerabilities whose exploitation may severely affect their performance and consequently limit their practical utility. Extending pattern classification theory and design methods to adversarial settings is thus a novel and very relevant research direction, which has not yet been pursued in a systematic way. In this paper, we address one of the main open issues: evaluating at design phase the security of pattern classifiers, namely, the performance degradation they may incur under potential attacks during operation. We propose a framework for empirical evaluation of classifier security that formalizes and generalizes the main ideas proposed in the literature, and give examples of its use in three real applications. Reported results show that security evaluation can provide a more complete understanding of the classifier's behavior in adversarial environments, and lead to better design choices.
Index Terms—Pattern classification; adversarial classification; performance evaluation; security evaluation; robustness evaluation.
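The empirical security evaluation described in the abstract can be illustrated with a minimal sketch: train a classifier on clean data, then measure how its accuracy degrades when an adversary is allowed to manipulate test samples. This is not the authors' framework; the toy data, the nearest-centroid classifier, and the greedy feature-flipping evasion attack below are all hypothetical illustrations of the general idea.

```python
# Hedged sketch: empirical security evaluation of a toy classifier under a
# simulated evasion attack. All data and the attack model are illustrative.
import random

random.seed(0)
N_FEATURES = 10

def make_sample(label):
    # Malicious samples (label 1) tend to have more active binary features.
    p = 0.7 if label == 1 else 0.3
    return [1 if random.random() < p else 0 for _ in range(N_FEATURES)], label

train = [make_sample(i % 2) for i in range(200)]
test = [make_sample(i % 2) for i in range(100)]

def centroid(samples):
    xs = [x for x, _ in samples]
    return [sum(col) / len(xs) for col in zip(*xs)]

# "Train" a nearest-centroid classifier: one centroid per class.
c0 = centroid([s for s in train if s[1] == 0])
c1 = centroid([s for s in train if s[1] == 1])

def dist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def predict(x):
    return 0 if dist(x, c0) <= dist(x, c1) else 1

def accuracy(samples):
    return sum(predict(x) == y for x, y in samples) / len(samples)

def evade(x, k):
    # Simulated attack: flip up to k features, greedily choosing the flip
    # that most moves the sample toward the benign centroid.
    x = list(x)
    for _ in range(k):
        best = None
        for i in range(len(x)):
            cand = list(x)
            cand[i] = 1 - cand[i]
            score = dist(cand, c0) - dist(cand, c1)
            if best is None or score < best[0]:
                best = (score, cand)
        x = best[1]
    return x

# Security evaluation: compare performance on clean vs. attacked test data.
attacked = [(evade(x, 2) if y == 1 else x, y) for x, y in test]
print(f"accuracy, no attack:    {accuracy(test):.2f}")
print(f"accuracy, under attack: {accuracy(attacked):.2f}")
```

The gap between the two accuracy figures is the kind of design-phase measurement the paper argues for: a classifier that looks accurate on clean data may degrade sharply once an attacker adapts to it.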


Copyright (c) 2016 M V Vidyadhari, K Kiranmai, K Rama Krishniah, D Sarath Babu

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

 


