Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Editors
    Content of Artificial Intelligence Security in our journal
    Artificial Intelligence Security Framework
    JING Hui-yun, WEI Wei, ZHOU Chuan, HE Xin
    Computer Science    2021, 48 (7): 1-8.   DOI: 10.11896/jsjkx.210300306
    With the advent of artificial intelligence, all walks of life have begun to deploy AI systems according to their own business needs, accelerating the large-scale construction and widespread application of artificial intelligence worldwide. However, security risks in AI infrastructure, design and development, and integrated applications arise alongside this growth. To mitigate these risks, countries around the world have formulated AI ethical norms and improved laws, regulations, and industry management to govern AI safety. Within such governance, the AI security technology system has important guiding significance: it is an essential part of AI security governance and critical support for implementing AI ethical norms and meeting legal and regulatory requirements. However, a general AI security framework is still lacking worldwide, and security risks remain prominent and fragmented. It is therefore urgent to summarize the security risks that exist in each stage of the AI life cycle. To address these problems, this paper proposes an AI security framework covering AI security goals, graded AI security capabilities, and AI security technology and management systems, with the aim of providing valuable references for the community to improve the safety and protection capabilities of artificial intelligence.
    Survey on Artificial Intelligence Model Watermarking
    XIE Chen-qi, ZHANG Bao-wen, YI Ping
    Computer Science    2021, 48 (7): 9-16.   DOI: 10.11896/jsjkx.201200204
    In recent years, with the rapid development of artificial intelligence, AI models have been applied to speech, image, and other fields with remarkable results. However, these trained models are easy to copy and redistribute. To protect the intellectual property of models, a series of copyright-protection algorithms and techniques has emerged, one of which is model watermarking. Once a model is stolen, its copyright can be proved by verifying the watermark, thereby maintaining the owner's intellectual property rights and protecting the model. Although this technology has become a research hot spot in recent years, no unified framework has yet formed. To aid understanding, this paper surveys current research on model watermarking, discusses the mainstream watermarking algorithms, analyzes research progress in this direction, reproduces and compares several typical algorithms, and finally offers suggestions for future research.
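A minimal sketch of the trigger-set ("backdoor") watermarking idea that the survey discusses: the owner embeds pre-chosen labels on a secret trigger set during training, then proves ownership by checking how often a suspect model reproduces them. All names, the toy models, and the 0.9 threshold are illustrative assumptions, not from the paper.

```python
import numpy as np

def verify_watermark(model_fn, trigger_inputs, trigger_labels, threshold=0.9):
    """Claim ownership if the suspect model matches the secret trigger labels
    at least `threshold` of the time."""
    preds = np.array([model_fn(x) for x in trigger_inputs])
    match_rate = float(np.mean(preds == np.array(trigger_labels)))
    return match_rate >= threshold, match_rate

# Toy demonstration: a "stolen" model that memorised the trigger set
# versus an independently trained one.
trigger_inputs = [np.full(4, i) for i in range(10)]
trigger_labels = [i % 2 for i in range(10)]
stolen_model = lambda x: int(x[0]) % 2   # reproduces the trigger labels
clean_model = lambda x: 0                # unrelated model

owned, rate = verify_watermark(stolen_model, trigger_inputs, trigger_labels)
```

The verification is query-only, which is what makes such schemes usable against black-box suspect models.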
    Security Evaluation Method for Risk of Adversarial Attack on Face Detection
    JING Hui-yun, ZHOU Chuan, HE Xin
    Computer Science    2021, 48 (7): 17-24.   DOI: 10.11896/jsjkx.210300305
    Face detection is a classic problem in computer vision. Driven by artificial intelligence and big data, it has taken on new vitality, showing important application value and great prospects in face payment, identity authentication, beauty cameras, intelligent security, and other fields. However, as the deployment of face detection accelerates, its security risks and hidden dangers have become increasingly prominent. This paper therefore analyzes and summarizes the security risks that current face detection models face at each stage of their life cycle. Among these, adversarial attacks have received extensive attention because they pose a serious threat to the availability and reliability of face detection and may disable the face detection module altogether. Current adversarial attacks on face detection mainly focus on the white-box setting. However, white-box attacks require full knowledge of the internal structure and all parameters of a specific face detection model, and, to protect business secrets and corporate interests, the structure and parameters of commercially deployed face detection models in the real physical world are usually inaccessible. This makes it almost impossible to attack commercial face detection models in the real world with white-box methods. To solve this problem, this paper proposes a black-box physical adversarial attack method for face detection. Using the idea of ensemble learning, it extracts the common attention heat map of many face detection models and then attacks that shared heat map. Experiments show that the method successfully evades black-box face detection models deployed on mobile terminals, including the face detection modules of built-in camera software, face payment software, and beauty camera software. This demonstrates that the method can help evaluate the security of face detection models in the real world.
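An illustrative sketch of the ensemble step described above: average the attention (saliency) maps of several white-box surrogate detectors into one shared map, then confine the perturbation to its hottest region. The saliency maps, budget `eps`, and `top_frac` are invented stand-ins; the paper's actual attack operates on physical, printable patches.

```python
import numpy as np

def public_heatmap(heatmaps):
    """Average per-model attention maps into one shared heat map."""
    return np.mean(np.stack(heatmaps), axis=0)

def attack_hot_region(image, heatmap, eps=0.1, top_frac=0.2):
    """Perturb only the top-`top_frac` hottest pixels of the shared map."""
    k = int(top_frac * heatmap.size)
    thresh = np.sort(heatmap.ravel())[-k]          # k-th largest value
    mask = (heatmap >= thresh).astype(image.dtype)
    return image + eps * mask                      # perturbation on hot pixels only

rng = np.random.default_rng(0)
maps = [rng.random((8, 8)) for _ in range(3)]      # three surrogate models
shared = public_heatmap(maps)
adv = attack_hot_region(np.zeros((8, 8)), shared)
```

Concentrating the perturbation where many surrogates attend is what gives the attack a chance to transfer to unseen black-box detectors.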
    Feature Gradient-based Adversarial Attack on Modulation Recognition-oriented Deep Neural Networks
    WANG Chao, WEI Xiang-lin, TIAN Qing, JIAO Xiang, WEI Nan, DUAN Qiang
    Computer Science    2021, 48 (7): 25-32.   DOI: 10.11896/jsjkx.210300299
    Deep neural network (DNN)-based automatic modulation recognition (AMR) outperforms traditional AMR methods in automatic feature extraction and recognition accuracy, with less manual intervention. However, practitioners designing AMR-oriented DNN (ADNN) models treat high recognition accuracy as the first priority, while security is usually neglected. Against this backdrop, from the perspective of AI security, this paper presents a novel feature gradient-based adversarial attack method on ADNN models. Compared with the traditional label gradient-based attack, the proposed method better attacks the temporal and spatial features extracted by ADNN models. Experimental results on an open dataset show that the proposed method outperforms the label gradient-based method in attack success ratio and transferability in both white-box and black-box settings.
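A minimal FGSM-style sketch of the gradient-based attack family that the method above belongs to: for a toy linear "recognizer" f(x) = w·x with squared loss, the input gradient is available in closed form, and one attack step moves the input by eps in the sign of that gradient. The linear model and loss are stand-ins for the paper's ADNN, not its method.

```python
import numpy as np

def fgsm_linear(x, w, y_true, eps=0.05):
    """One FGSM step against f(x) = w.x with loss 0.5*(f(x) - y)^2."""
    grad = (w @ x - y_true) * w          # dL/dx for the linear model
    return x + eps * np.sign(grad)       # bounded, sign-of-gradient step

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
y = 1.0                                   # true label/score
x_adv = fgsm_linear(x, w, y)
loss = lambda x_: 0.5 * (w @ x_ - y) ** 2
```

The feature-gradient variant in the paper applies the same principle to intermediate feature maps rather than the output label's loss.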
    Differential Privacy Protection Machine Learning Method Based on Features Mapping
    CHEN Tian-rong, LING Jie
    Computer Science    2021, 48 (7): 33-39.   DOI: 10.11896/jsjkx.201200224
    Differential privacy algorithms for image classification improve the privacy protection of machine learning models by adding noise, but this easily degrades classification accuracy. To address this problem, a differential privacy protection machine learning method based on feature mapping is proposed. The method combines a pre-trained neural network with shadow model training to map the feature vectors of the original samples into a high-dimensional vector space in the form of differential vectors, shortening the distance between samples in that space, thereby reducing the leakage of private information caused by model updates and improving both the privacy protection and the classification capability of the model. Experimental results on the MNIST and CIFAR-10 datasets show that, for the ε-differential privacy model with ε equal to 0.01 and 0.11, classification accuracy reaches 99% and 96%, respectively, indicating that, compared with DP-SGD and other commonly used differential privacy algorithms, the model trained by this method maintains stronger classification capability under a lower privacy budget. Moreover, the success rate of inference attacks against this model on the two datasets drops to 10%, showing that, compared with a traditional CNN image classification model, its defense against inference attacks is greatly improved.
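A hedged sketch of the basic differential-privacy building block behind methods like the one above and DP-SGD: clip each per-sample vector to a fixed L2 norm, then add calibrated Gaussian noise before it leaves the trusted side. The clip norm and noise multiplier are illustrative choices, not the paper's parameters.

```python
import numpy as np

def clip(v, clip_norm=1.0):
    """Scale v down so its L2 norm is at most clip_norm (sensitivity bound)."""
    return v * min(1.0, clip_norm / max(np.linalg.norm(v), 1e-12))

def privatize(vectors, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Gaussian mechanism: clip, then add N(0, (noise_multiplier*clip_norm)^2)."""
    rng = rng or np.random.default_rng(0)
    noisy = [clip(v, clip_norm)
             + rng.normal(0.0, noise_multiplier * clip_norm, size=v.shape)
             for v in vectors]
    return np.stack(noisy)

feats = [np.array([3.0, 4.0]), np.array([0.1, 0.2])]   # norms 5.0 and ~0.22
priv = privatize(feats)
```

Clipping bounds each sample's contribution (its sensitivity), which is what lets the Gaussian noise scale translate into a formal ε guarantee.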
    Intelligent Penetration Testing Path Discovery Based on Deep Reinforcement Learning
    ZHOU Shi-cheng, LIU Jing-ju, ZHONG Xiao-feng, LU Can-ju
    Computer Science    2021, 48 (7): 40-46.   DOI: 10.11896/jsjkx.210400057
    Penetration testing is a general method of network security testing that simulates hacker attacks. Traditional penetration testing relies mainly on manual operations, with high time and labor costs. Intelligent penetration testing is the future direction of development, aiming at more efficient and lower-cost network security protection. Penetration testing path discovery is a key issue in this research: its purpose is to discover, in time, vulnerabilities in the network and the penetration paths a potential attacker might take, enabling targeted defense. This paper combines deep reinforcement learning with penetration testing: the agent is trained in simulated network scenarios, the penetration testing process is modeled as a Markov decision process, and an improved deep reinforcement learning algorithm, Noisy-Double-Dueling-DQNper, is proposed. The algorithm integrates a prioritized experience replay mechanism, double DQN, dueling DQN, and a noisy network mechanism. Comparative experiments on network scenarios of different scales show that the algorithm converges faster than the traditional DQN (Deep Q Network) algorithm and its improved variants and can be applied to larger network scenarios.
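A toy sketch of the MDP formulation described above: states are hosts, actions are lateral moves/exploits, and reaching the target host yields a reward. Plain tabular Q-learning stands in for the paper's Noisy-Double-Dueling-DQNper; the three-host topology and rewards below are invented for illustration.

```python
import random

GRAPH = {"entry": ["web"], "web": ["db", "entry"], "db": []}   # "db" is the goal
ACTIONS = GRAPH

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in GRAPH for a in ACTIONS[s]}
    for _ in range(episodes):
        s = "entry"
        while s != "db":
            acts = ACTIONS[s]
            # epsilon-greedy action selection
            a = rng.choice(acts) if rng.random() < eps \
                else max(acts, key=lambda a: Q[(s, a)])
            r = 10.0 if a == "db" else -1.0       # goal reward vs step cost
            nxt = a
            best_next = max((Q[(nxt, b)] for b in ACTIONS[nxt]), default=0.0)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = nxt
    return Q

Q = train()
path_choice = max(ACTIONS["web"], key=lambda a: Q[("web", a)])
```

The learned greedy policy (entry → web → db) is exactly the "penetration path" the agent is asked to discover; the paper's contribution is making this scale to large networks.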
    DRL-IDS:Deep Reinforcement Learning Based Intrusion Detection System for Industrial Internet of Things
    LI Bei-bei, SONG Jia-rui, DU Qing-yun, HE Jun-jiang
    Computer Science    2021, 48 (7): 47-54.   DOI: 10.11896/jsjkx.210400021
    In recent years, the Industrial Internet of Things (IIoT) has developed rapidly. While enabling industrial digitization, automation, and intelligence, the IIoT has introduced tremendous cyber threats, and its complex, heterogeneous, and distributed environment creates a brand-new attack surface for intruders. Traditional intrusion detection techniques no longer meet the needs of the current IIoT environment. This paper proposes an intrusion detection system for the IIoT based on a deep reinforcement learning algorithm, Proximal Policy Optimization 2.0 (PPO2). The proposed system combines the perceptual ability of deep learning with the decision-making ability of reinforcement learning and can effectively detect multiple types of cyber attacks on the IIoT. First, a LightGBM-based feature selection algorithm filters the most effective feature set from IIoT data. Then, the hidden layers of a multilayer perceptron are used as the shared network structure of the value network and policy network in the PPO2 algorithm. Finally, the PPO2 algorithm builds the intrusion detection model, and ReLU (Rectified Linear Unit) is employed for the classification output. Extensive experiments on a real IIoT dataset released by Oak Ridge National Laboratory, sponsored by the U.S. Department of Energy, show that the proposed system achieves 99.09% accuracy in detecting multiple types of network attacks on the IIoT, and that it outperforms intrusion detection systems based on state-of-the-art deep learning models (e.g., LSTM, CNN, RNN) and deep reinforcement learning models (e.g., DDQN and DQN) in accuracy, precision, recall, and F1 score.
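The pipeline above first filters features by importance (LightGBM in the paper). As a dependency-free stand-in, this sketch ranks features by absolute Pearson correlation with the label and keeps the top k; the synthetic data plants the signal in one known column.

```python
import numpy as np

def select_top_k(X, y, k):
    """Return indices of the k features most |correlated| with y."""
    scores = []
    for j in range(X.shape[1]):
        col = X[:, j]
        denom = col.std() * y.std()
        cov = np.mean((col - col.mean()) * (y - y.mean()))
        scores.append(abs(cov / denom) if denom else 0.0)
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200).astype(float)
X = rng.normal(size=(200, 5))
X[:, 2] = y + 0.1 * rng.normal(size=200)   # feature 2 carries the signal
top = select_top_k(X, y, 2)
```

In the paper, LightGBM's split-gain importances play this ranking role, which handles non-linear feature/label relationships that plain correlation misses.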
    Adversarial Attacks Threatened Network Traffic Classification Based on CNN
    YANG Yang, CHEN Wei, ZHANG Dan-yi, WANG Dan-ni, SONG Shuang
    Computer Science    2021, 48 (7): 55-61.   DOI: 10.11896/jsjkx.210100095
    Deep learning algorithms are widely used in network traffic classification with good results: convolutional neural networks not only greatly improve classification accuracy but also simplify the classification process. However, neural networks face security threats such as adversarial attacks, and the impact of these threats on neural network-based traffic classification needs further study and verification. This paper proposes an adversarial attack method against CNN-based network traffic classification: by adding perturbations that are difficult for human eyes to recognize to the deep learning input images converted from network traffic, it makes the CNN misclassify the traffic. For this attack, the paper also proposes a defense based on mixed adversarial training, which combines the adversarial traffic samples generated by the attack with the original traffic samples to enhance the robustness of the classification model. The methods are evaluated on public datasets. Experimental results show that the proposed attack causes a sharp drop in the accuracy of CNN-based traffic classification, and the proposed mixed adversarial training effectively resists the attack, improving the robustness of the traffic classification model.
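A sketch of the mixed adversarial training idea: each training batch is the union of clean traffic samples and adversarially perturbed copies of some of them, with labels unchanged, so the classifier learns both distributions. The generic bounded perturbation and 50/50 split are illustrative stand-ins for the paper's image-domain attack.

```python
import numpy as np

def make_mixed_batch(X, y, perturb, adv_fraction=0.5, rng=None):
    """Replace a random `adv_fraction` of samples with adversarial versions."""
    rng = rng or np.random.default_rng(0)
    n_adv = int(adv_fraction * len(X))
    idx = rng.choice(len(X), size=n_adv, replace=False)
    X_mixed = X.copy()
    X_mixed[idx] = perturb(X_mixed[idx])
    # labels are unchanged: adversarial samples keep their true class
    return X_mixed, y

X = np.zeros((10, 4))
y = np.arange(10) % 2
X_mixed, y_mixed = make_mixed_batch(X, y, lambda b: b + 0.1)
```

Keeping the true labels on perturbed samples is the defining choice of adversarial training: the model is taught that the perturbed input still belongs to its original class.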
    Detection of Abnormal Flow of Imbalanced Samples Based on Variational Autoencoder
    ZHANG Ren-jie, CHEN Wei, HANG Meng-xin, WU Li-fa
    Computer Science    2021, 48 (7): 62-69.   DOI: 10.11896/jsjkx.200600022
    With the rapid development of machine learning technology, more and more machine learning algorithms are used to detect and analyze attack traffic. However, attack traffic often accounts for a very small portion of network traffic, so the positive and negative samples of the training set are often imbalanced, which hurts model training. Aiming at this problem, an imbalanced sample generation method based on a variational autoencoder (VAE) is proposed. The idea is not to expand all minority samples, but to analyze them and expand only the small number of boundary samples most likely to confuse the classifier. First, the KNN algorithm screens the minority samples closest to the majority samples; second, the DBSCAN algorithm clusters the samples selected by KNN into one or more sub-clusters; then, a VAE network model learns and expands the minority samples in these sub-clusters, and the expanded samples are added to the original samples to build a new training set; finally, the new training set is used to train a decision tree classifier to detect abnormal traffic. Recall and F1 score are chosen as evaluation indicators, comparing the original samples, SMOTE-generated samples, and the proposed method's samples. Experimental results show that the decision tree trained with the proposed method improves recall and F1 score across the four anomaly types, with the F1 score improving by up to 20.9% over the original samples and the SMOTE method.
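A sketch of the first stage of the pipeline above: among the minority (attack) samples, keep only the "boundary" ones whose nearest neighbours are mostly majority samples; these are what the paper then clusters with DBSCAN and expands with a VAE (both omitted here). k and the majority fraction threshold are illustrative.

```python
import numpy as np

def boundary_minority(X_min, X_maj, k=3, maj_frac=0.5):
    """Indices of minority samples whose k-NN are mostly majority samples."""
    X_all = np.vstack([X_min, X_maj])
    labels = np.array([0] * len(X_min) + [1] * len(X_maj))   # 1 = majority
    keep = []
    for i, x in enumerate(X_min):
        d = np.linalg.norm(X_all - x, axis=1)
        d[i] = np.inf                          # exclude the sample itself
        nn = labels[np.argsort(d)[:k]]
        if nn.mean() >= maj_frac:              # mostly majority neighbours
            keep.append(i)
    return keep

X_maj = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]])
X_min = np.array([[0.05, 0.05],                        # boundary sample
                  [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]]) # well-separated cluster
idx = boundary_minority(X_min, X_maj)
```

Only the first minority sample sits inside the majority cloud, so only it qualifies for expansion; the far cluster is already separable and is left alone.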
    SQL Injection Attack Detection Method Based on Information Carrying
    CHENG Xi, CAO Xiao-mei
    Computer Science    2021, 48 (7): 70-76.   DOI: 10.11896/jsjkx.200600010
    At present, the accuracy of SQL injection attack detection based on traditional machine learning still needs improvement. The main reason is that selecting too many features when extracting feature vectors causes the model to overfit and hurts the efficiency of the algorithm, whereas selecting too few features produces many false positives and false negatives. To solve this problem, the paper proposes SQLIA-IC, a SQL injection attack detection method based on information carrying. SQLIA-IC adds a marker and a content matching module on top of machine learning detection: the marker detects sensitive information in the sample, and the content matching module matches the sample's feature items to make a secondary judgment. To improve detection efficiency, an information value is used to summarize the results of the machine learning detector and the marker, and the content matching module matches dynamically according to the information value carried by the sample. Simulation results show that, compared with traditional machine learning methods, the proposed method improves accuracy by 2.62% on average, precision by 4.35% on average, and recall by 0.96% on average, while the time cost increases by only about 5 ms, showing that it detects SQL injection attacks efficiently and effectively.
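A sketch of the "content matching" second stage described above: after the machine-learning score, suspicious requests are re-checked against known SQL-injection feature items. The pattern list is a small illustrative subset and the score/threshold combination is an assumption, not the paper's feature set or fusion rule.

```python
import re

INJECTION_PATTERNS = [
    r"(?i)\bunion\b.+\bselect\b",      # UNION-based injection
    r"(?i)\bor\b\s+1\s*=\s*1",         # tautology condition
    r"--",                             # SQL comment cutting off the query
    r"(?i)\bdrop\b\s+\btable\b",       # destructive statement
]

def content_match(sample):
    """Return the list of feature items the sample matches."""
    return [p for p in INJECTION_PATTERNS if re.search(p, sample)]

def second_stage(sample, ml_score, threshold=0.5):
    """Combine the ML score with content matching for the final verdict."""
    return ml_score >= threshold or bool(content_match(sample))

benign = "SELECT name FROM users WHERE id = 42"
attack = "id = 0 OR 1=1 --"
```

The second stage catches attacks the ML model scored low, at the cost of an extra regex pass, which matches the small (~5 ms) overhead the paper reports.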
    Deepfake Videos Detection Method Based on i_ResNet34 Model and Data Augmentation
    BAO Yu-xuan, LU Tian-liang, DU Yan-hui, SHI Da
    Computer Science    2021, 48 (7): 77-85.   DOI: 10.11896/jsjkx.210300258
    Existing Deepfake video detection methods are weak at extracting facial features. This paper therefore proposes an improved ResNet (i_ResNet34) model and three data augmentation methods based on information dropping. First, the ResNet is optimized by replacing ordinary convolutions with group convolutions, extracting richer facial features without increasing model parameters. Then, the shortcut branch of the dashed residual structure is improved by using a max pooling layer for downsampling, reducing the loss of facial feature information in video frames. Next, a channel attention layer is introduced after the convolution layer to increase the weights of channels that extract key features and improve the channel correlation of the feature map. Finally, the i_ResNet34 model is trained on the original dataset and on datasets expanded with the three information-dropping augmentation methods, achieving 99.33% and 98.67% detection accuracy on the FaceSwap and Deepfakes datasets of FaceForensics++, respectively, outperforming existing mainstream algorithms and verifying the effectiveness of the proposed method.
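The quick arithmetic behind the group-convolution substitution described above: splitting the channels into g groups divides the convolution's weight count by g, which is why the swap can widen feature extraction "without increasing model parameters". The layer sizes below are illustrative, not i_ResNet34's.

```python
def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a k x k 2D convolution (bias ignored)."""
    assert c_in % groups == 0 and c_out % groups == 0
    # each group maps c_in/groups channels to c_out/groups channels
    return (c_in // groups) * (c_out // groups) * k * k * groups

ordinary = conv_params(64, 128, 3)             # standard convolution
grouped = conv_params(64, 128, 3, groups=4)    # 4-group convolution
```

With the saved parameter budget, a model can afford more channels or extra layers at the same size, which is the trade the paper exploits.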
    Deepfake Video Detection Based on 3D Convolutional Neural Networks
    XING Hao, LI Ming
    Computer Science    2021, 48 (7): 86-92.   DOI: 10.11896/jsjkx.210200127
    In recent years, "Deepfake" has attracted widespread attention. People find it difficult to distinguish Deepfake videos, yet such forged videos pose huge potential threats to society, for example when used to fabricate fake news. It is therefore necessary to find a method to identify these synthetic videos. To solve this problem, a Deepfake video detection model based on 3D convolutional neural networks (3D CNNs) is proposed. The model exploits the inconsistency of temporal and spatial features in Deepfake videos, which 3D CNNs can effectively capture. Experimental results show that the model achieves a high accuracy rate and strong robustness on the Deepfake Detection Challenge dataset and the Celeb-DF dataset: its detection accuracy reaches 96.25% and its AUC value reaches 0.92, and it alleviates the problem of poor generalization. Comparison with existing Deepfake detection models shows that the proposed model is superior in detection accuracy and AUC value, verifying its effectiveness.
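A 3D convolution slides over time as well as space, which is how the model above captures temporal inconsistencies across video frames. This small helper computes the output shape of one 3D convolution layer from the standard formula; the 16-frame 112x112 clip size is illustrative, not the paper's configuration.

```python
def conv3d_out(shape, kernel, stride=(1, 1, 1), pad=(0, 0, 0)):
    """(T, H, W) output size: floor((s + 2p - k) / stride) + 1 per axis."""
    return tuple((s + 2 * p - k) // st + 1
                 for s, k, st, p in zip(shape, kernel, stride, pad))

# 16 frames of 112x112 video through a 3x3x3 kernel with padding 1:
out = conv3d_out((16, 112, 112), (3, 3, 3), pad=(1, 1, 1))
```

The temporal kernel extent (3 frames here) is what lets each output activation see motion, unlike a 2D CNN applied frame by frame.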
Page 1 of 1, 12 records