Making malware classification more robust to adversarial attacks
Lead Research Organisation:
King's College London
Department Name: Informatics
Abstract
Modern malware classification uses various techniques to gauge whether a given sample is malicious or benign. Recent advances in AI have paved the way for machine learning models (such as artificial neural networks) to play a substantial role in this task. However, it is now well known that such models are susceptible to adversarial examples: inputs deliberately crafted so that the classifier misclassifies them. Various techniques have been proposed to make classifiers more resilient against such attacks, but most of this work has been carried out in the image recognition domain rather than in malware classification, and the resulting defences do not transfer optimally. Resilience is vital in malware classification, and we therefore aim to devise techniques that counter the effects of adversarial examples in this domain.
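To make the notion of an adversarial example concrete, here is a minimal sketch against a toy linear classifier. All weights and feature values are illustrative assumptions, not anything from the project; the perturbation follows the well-known fast gradient sign method (FGSM) idea of stepping each feature against the sign of the model's gradient.

```python
import numpy as np

# Toy linear "malware classifier": score > 0 -> malicious, else benign.
# Weights and bias are made up for illustration; a real model is learned.
w = np.array([1.0, -2.0, 0.5])
b = -0.2

def predict(x):
    return "malicious" if x @ w + b > 0 else "benign"

# A feature vector the classifier correctly flags as malicious.
x = np.array([1.0, 0.1, 0.3])        # score = 0.75 -> "malicious"

# FGSM-style perturbation: for a linear model the gradient of the score
# w.r.t. x is just w, so stepping against sign(w) lowers the score.
eps = 0.6
x_adv = x - eps * np.sign(w)

print(predict(x_adv))                # -> "benign": the sample now evades
```

Note that each feature moved by only ±0.6, yet the verdict flips; in the malware setting the extra constraint (not shown here) is that the perturbed sample must still be functioning malware.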
---
There are various techniques we wish to research, such as systems based on hybrid models, regulatory systems, and, more novel still, machine learning-based approaches to countering adversarial examples. Because little work exists in this area, I am currently conducting a deep literature review to gauge what has been done, since the problem remains largely unexplored in this domain.
A proposed research methodology is below:
1. Extensive literature review of the areas in the research objective and of existing work.
2. Experimental evaluation of existing defences and other, more novel techniques to understand the current state of the art in practice.
3. Identification of problems in existing techniques and of opportunities for enhancement.
4. Development of techniques to counter the misclassification of adversarial examples.
5. Evaluation and analysis of the developed techniques, with comparison against existing techniques.
6. Discussion of the results and reporting through the final thesis.
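As a flavour of the defences step 2 would evaluate, here is a minimal sketch of one simple published idea, feature squeezing: coarsely quantise inputs before classification so that small adversarial perturbations are rounded away. The classifier, weights, step size, and feature values are all illustrative assumptions.

```python
import numpy as np

# Toy linear classifier (illustrative weights only): score > 0 -> malicious.
w = np.array([1.0, -2.0, 0.5])
b = -0.2

def predict(x):
    return "malicious" if x @ w + b > 0 else "benign"

def squeeze(x, step=1.0):
    # Feature squeezing: snap each feature to a coarse grid so that
    # small adversarial perturbations are rounded away.
    return np.round(x / step) * step

x = np.array([1.0, 0.1, 0.3])        # classified "malicious"
x_adv = x - 0.35 * np.sign(w)        # small FGSM-style perturbation

print(predict(x_adv))                # -> "benign": evades the raw model
print(predict(squeeze(x_adv)))       # -> "malicious": defence recovers it
```

The defence succeeds here because the perturbation (±0.35 per feature) is smaller than half the quantisation step; a stronger attack, or an attacker aware of the defence, can still break it, which is exactly the kind of gap steps 3 and 4 target.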
Organisations
People | ORCID iD
---|---
Jose Such (Primary Supervisor) |
Aqib Rashid (Student) |
Studentship Projects
Project Reference | Relationship | Related To | Start | End | Student Name
---|---|---|---|---|---
EP/R513064/1 | | | 30/09/2018 | 29/09/2023 |
2320321 | Studentship | EP/R513064/1 | 30/09/2019 | 30/03/2023 | Aqib Rashid