Computer Science ›› 2025, Vol. 52 ›› Issue (5): 241-247. doi: 10.11896/jsjkx.240700059
• Artificial Intelligence •
SI Yuehang¹, CHENG Qing¹,², HUANG Jincai¹
| [1] | PAN Jiahao, FENG Xiang, YU Huiqun. SM-PHT: Robust, Scalable, and Efficient Method for Multi-task Reinforcement Learning [J]. Computer Science, 2026, 53(4): 366-376. |
| [2] | WU Qiaorui, LUO Li, ZHAO Cairong. LLM-augmented Training Framework with Cycle-Consistency Constraints [J]. Computer Science, 2026, 53(4): 377-383. |
| [3] | SONG Jianhua, LIU Chun, ZHANG Yan. Lightweight Camouflaged Object Detection Model Based on Structured Knowledge Distillation [J]. Computer Science, 2026, 53(4): 299-307. |
| [4] | SUN Mingxu, LIANG Gang, WU Yifei, HU Haixin. Chinese Hate Speech Detection Incorporating Hate Object Features and Variant Word Restoration Mechanism [J]. Computer Science, 2026, 53(2): 289-299. |
| [5] | ZHANG Haopeng, SHI Zheng, LIU Feng, SONG Wanru. CPViG-Net: Students’ Classroom Behavior Recognition Based on Cross-stage Visual Graph Convolution [J]. Computer Science, 2026, 53(2): 57-66. |
| [6] | JIANG Yunliang, JIN Senyang, ZHANG Xiongtao, LIU Kaining, SHEN Qing. Multi-scale Multi-granularity Decoupled Distillation Fuzzy Classifier and Its Application in Epileptic EEG Signal Detection [J]. Computer Science, 2025, 52(9): 37-46. |
| [7] | DENG Jiayan, TIAN Shirui, LIU Xiangli, OUYANG Hongwei, JIAO Yunjia, DUAN Mingxing. Trajectory Prediction Method Based on Multi-stage Pedestrian Feature Mining [J]. Computer Science, 2025, 52(9): 241-248. |
| [8] | LIU Le, XIAO Rong, YANG Xiao. Application of Decoupled Knowledge Distillation Method in Document-level Relation Extraction [J]. Computer Science, 2025, 52(8): 277-287. |
| [9] | ZHANG Hang, WEI Shoulin, YIN Jibin. TalentDepth: A Monocular Depth Estimation Model for Complex Weather Scenarios Based on Multiscale Attention Mechanism [J]. Computer Science, 2025, 52(6A): 240900126-7. |
| [10] | ZHOU Yi, MAO Kuanmin. Research on Individual Identification of Cattle Based on YOLO-Unet Combined Network [J]. Computer Science, 2025, 52(4): 194-201. |
| [11] | HE Liren, PENG Bo, CHI Mingmin. Unsupervised Multi-class Anomaly Detection Based on Prototype Reverse Distillation [J]. Computer Science, 2025, 52(2): 202-211. |
| [12] | HU Peng, XIA Xiaohua, ZHONG Yuquan. Road Crack Detection Method for Embedded Applications [J]. Computer Science, 2025, 52(12): 175-188. |
| [13] | ZHAO Tong, CHEN Xuebin, WANG Liu, JING Zhongrui, ZHONG Qi. Backdoor Attack Method for Federated Learning Based on Knowledge Distillation [J]. Computer Science, 2025, 52(11): 434-443. |
| [14] | TAN Zhiwen, XU Ruzhi, WANG Naiyu, LUO Dan. Differential Privacy Federated Learning Method Based on Knowledge Distillation [J]. Computer Science, 2024, 51(6A): 230600002-8. |
| [15] | SHI Songhao, WANG Xiaodan, YANG Chunxiao, WANG Yifei. SAR Image Target Recognition Based on Cross Domain Few Shot Learning [J]. Computer Science, 2024, 51(6A): 230800136-7. |