# FDD Defense: Adversarial Attacks and Defenses on Fault Detection and Diagnosis Models

## Introduction

The smart manufacturing trend brings Artificial Intelligence technologies into industrial processes; one example is deep learning models that diagnose the current state of a technological process. Recent studies have demonstrated that small input perturbations, known as adversarial attacks, can significantly degrade the predictions of such models. This is critical in industrial systems, where AI-based decisions may control physical equipment. fdd-defense helps evaluate the robustness of technological process diagnosis models to adversarial attacks and provides defense methods against them.

fdd-defense is a Python library of adversarial attacks on Fault Detection and Diagnosis (FDD) models and of defense methods against such attacks. This repository contains the original implementation of the methods from the paper *Adversarial Attacks and Defenses in Fault Detection and Diagnosis: A Comprehensive Benchmark on the Tennessee Eastman Process*.

## Installing

To install fdd-defense, run the following command:

```bash
pip install git+https://github.com/AIRI-Institute/fdd-defense.git
```

## Usage

```python
from fdd_defense.models import MLP
from fdd_defense.attackers import NoAttacker, FGSMAttacker
from fdd_defense.defenders import NoDefenceDefender, AdversarialTrainingDefender
from fdd_defense.utils import accuracy
from fddbenchmark import FDDDataset
from sklearn.preprocessing import StandardScaler

# Download and scale the TEP dataset
dataset = FDDDataset(name='reinartz_tep')
scaler = StandardScaler()
scaler.fit(dataset.df[dataset.train_mask])
dataset.df[:] = scaler.transform(dataset.df)

# Define and train an FDD model
model = MLP(
    window_size=50,
    step_size=1,
    device='cuda',
    batch_size=128,
    num_epochs=10
)
model.fit(dataset)

# Test the FDD model on original data without defense
attacker = NoAttacker(model, eps=0.05)
defender = NoDefenceDefender(model)
acc = accuracy(attacker, defender, step_size=1)
print(f'Accuracy: {acc:.4f}')

# Test the FDD model under the FGSM attack without defense
attacker = FGSMAttacker(model, eps=0.05)
defender = NoDefenceDefender(model)
acc = accuracy(attacker, defender, step_size=1)
print(f'Accuracy: {acc:.4f}')

# Test the FDD model under the FGSM attack with the adversarial training defense
attacker = FGSMAttacker(model, eps=0.05)
defender = AdversarialTrainingDefender(model, attacker)
acc = accuracy(attacker, defender, step_size=1)
print(f'Accuracy: {acc:.4f}')
```
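The `eps` argument is the attack's perturbation budget. Building directly on the example above (no classes beyond those already imported), a minimal sketch for sweeping `eps` to trace how accuracy degrades under FGSM:

```python
# Sweep the FGSM perturbation budget; reuses the trained `model`
# and the imports from the example above.
for eps in [0.01, 0.05, 0.1, 0.2]:
    attacker = FGSMAttacker(model, eps=eps)
    defender = NoDefenceDefender(model)
    acc = accuracy(attacker, defender, step_size=1)
    print(f'eps={eps}: accuracy={acc:.4f}')
```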

## Implemented methods

### FDD models

| FDD model | Reference |
|---|---|
| Linear | Pandya, D., Upadhyay, S. H., & Harsha, S. P. (2014). Fault diagnosis of rolling element bearing by using multinomial logistic regression and wavelet packet transform. Soft Computing, 18, 255-266. |
| Boosting | Ruder, Sebastian. "An overview of gradient descent optimization algorithms." arXiv preprint arXiv:1609.04747 (2016). |
| MLP | Khoualdia, T., Lakehal, A., Chelli, Z., Khoualdia, K., & Nessaib, K. (2021). Optimized multi layer perceptron artificial neural network based fault diagnosis of induction motor using vibration signals. Diagnostyka, 22. |
| GRU, TCN | Lomov, Ildar, et al. "Fault detection in Tennessee Eastman process with temporal deep learning models." Journal of Industrial Information Integration 23 (2021): 100216. |
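All of these models consume the multivariate sensor record through a sliding window, which is what the `window_size` and `step_size` arguments in the usage example control. The snippet below is only a generic illustration of such windowing (an assumption made for intuition, not the library's internal code):

```python
import numpy as np

# Hypothetical illustration: a record of shape (time, sensors) becomes
# windows of shape (window_size, sensors), taken every step_size steps.
def sliding_windows(record: np.ndarray, window_size: int, step_size: int) -> np.ndarray:
    return np.stack([
        record[start:start + window_size]
        for start in range(0, len(record) - window_size + 1, step_size)
    ])

record = np.random.randn(200, 52)  # e.g., 52 sensors of the TEP dataset
windows = sliding_windows(record, window_size=50, step_size=1)
print(windows.shape)  # (151, 50, 52)
```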

### Adversarial attacks

| Adversarial attack | Reference |
|---|---|
| Noise | Zhuo, Yue, Zhenqin Yin, and Zhiqiang Ge. "Attack and defense: Adversarial security of data-driven FDC systems." IEEE Transactions on Industrial Informatics 19.1 (2022): 5-19. |
| FGSM | Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples." arXiv preprint arXiv:1412.6572 (2014). |
| PGD | Madry, Aleksander, et al. "Towards deep learning models resistant to adversarial attacks." arXiv preprint arXiv:1706.06083 (2017). |
| DeepFool | Moosavi-Dezfooli, Seyed-Mohsen, Alhussein Fawzi, and Pascal Frossard. "DeepFool: a simple and accurate method to fool deep neural networks." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016. |
| Carlini & Wagner | Carlini, Nicholas, and David Wagner. "Towards evaluating the robustness of neural networks." 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017. |
| Distillation black-box | Cui, Weiyu, et al. "Substitute model generation for black-box adversarial attack based on knowledge distillation." 2020 IEEE International Conference on Image Processing (ICIP). IEEE, 2020. |
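For intuition, the two gradient-based attacks in the table have compact standard formulations (as in the cited papers; $\mathcal{L}$ is the model's loss, $\varepsilon$ the perturbation budget, $\alpha$ the step size, and $\Pi$ the projection onto the $\varepsilon$-ball around the original input $x$). FGSM takes a single signed-gradient step:

$$x_{\mathrm{adv}} = x + \varepsilon \cdot \mathrm{sign}\bigl(\nabla_x \mathcal{L}(f(x), y)\bigr),$$

while PGD iterates smaller steps with projection:

$$x^{(t+1)} = \Pi_{\|x' - x\|_\infty \le \varepsilon}\Bigl(x^{(t)} + \alpha \cdot \mathrm{sign}\bigl(\nabla_x \mathcal{L}(f(x^{(t)}), y)\bigr)\Bigr).$$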

### Defense methods

| Defense method | Reference |
|---|---|
| Adversarial training | Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples." arXiv preprint arXiv:1412.6572 (2014). |
| Data quantization | Xu, Weilin, David Evans, and Yanjun Qi. "Feature squeezing: Detecting adversarial examples in deep neural networks." arXiv preprint arXiv:1704.01155 (2017). |
| Gradient regularization | Finlay, Chris, and Adam M. Oberman. "Scaleable input gradient regularization for adversarial robustness." Machine Learning with Applications 3 (2021): 100017. |
| Defensive distillation | Papernot, Nicolas, et al. "Distillation as a defense to adversarial perturbations against deep neural networks." 2016 IEEE Symposium on Security and Privacy (SP). IEEE, 2016. |
| ATQ | Pozdnyakov, Vitaliy, et al. "Adversarial Attacks and Defenses in Fault Detection and Diagnosis: A Comprehensive Benchmark on the Tennessee Eastman Process." IEEE Open Journal of the Industrial Electronics Society (2024). |
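As a concrete example of the simplest defense in the table, data quantization (feature squeezing) reduces the bit depth of the inputs so that small adversarial perturbations are mostly rounded away. The sketch below is a generic illustration of the idea on inputs scaled to [0, 1]; it is not the `fdd_defense` implementation:

```python
import numpy as np

# Reduce inputs to 2**bits levels; small perturbations usually round
# back to the same level (feature squeezing, Xu et al., 2017).
def quantize(x: np.ndarray, bits: int = 4) -> np.ndarray:
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

x = np.random.rand(50, 52)                             # a window of scaled readings
x_adv = x + 0.01 * np.sign(np.random.randn(*x.shape))  # small perturbation
print(np.mean(quantize(x_adv) == quantize(x)))         # most entries unchanged
```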

## Testing

To test the library, run the following command from the root directory:

```bash
pytest fdd_defense/tests
```

## Running experiments

To reproduce the results from the paper, open the notebook `experiments.ipynb` and follow the instructions.

## Citation

Please cite our paper as follows:

```bibtex
@article{pozdnyakov2024adversarial,
  title={Adversarial Attacks and Defenses in Fault Detection and Diagnosis: A Comprehensive Benchmark on the Tennessee Eastman Process},
  author={Pozdnyakov, Vitaliy and Kovalenko, Aleksandr and Makarov, Ilya and Drobyshevskiy, Mikhail and Lukyanov, Kirill},
  journal={IEEE Open Journal of the Industrial Electronics Society},
  year={2024},
  publisher={IEEE}
}
```
