Computer Science ›› 2021, Vol. 48 ›› Issue (8): 220-225.doi: 10.11896/jsjkx.200900045

• Artificial Intelligence •

DragDL: An Easy-to-Use Graphical DL Model Construction System

TANG Shi-zheng, ZHANG Yan-feng   

  1. School of Computer Science and Engineering,Northeastern University,Shenyang 110000,China
  • Received: 2020-09-05  Revised: 2020-12-18  Published: 2021-08-10
  • About author: TANG Shi-zheng, born in 1994, postgraduate. His main research interests include data mining and deep learning (tangsz1023@qq.com).
    ZHANG Yan-feng, born in 1982, professor, Ph.D supervisor, is a senior member of China Computer Federation. His main research interests include big data mining, large-scale machine learning and distributed systems.
  • Supported by:
    National Natural Science Foundation of China (61672141), Key R&D Program of Liaoning Province (2020JH2/10100037) and Fundamental Research Funds for the Central Universities (N181605017, N181604016).

Abstract: Deep learning has broad applications across many fields, but users still face two obstacles when applying it. First, deep learning has a complex theoretical background; non-professional users lack the background knowledge in modeling and tuning needed to build performance-optimized models. Second, modules such as data preprocessing, model training, and prediction often involve complicated programming, which raises the barrier to entry for non-professional users without a programming background. To address these two usability issues, this paper proposes DragDL, an easy-to-use graphical deep learning model construction system. The purpose of DragDL is to reduce the difficulty of data preprocessing, model training, monitoring, online prediction, and other tasks for users. The system is built on the PaddlePaddle framework. It supports constructing a deep learning network structure on a canvas by dragging graphical operators, provides inference and prediction functions, and abstracts the data preprocessing process into a dataflow graph that is easy for users to understand and debug. The system also provides visualization of performance metrics during training. In addition, DragDL offers a library of classic models, allowing users to build new DL networks by fine-tuning existing classic networks. DragDL is deployed as a centralized server with a Web client; the server provides a virtual machine service for submitted tasks and supports large-scale asynchronous task scheduling for concurrent processing.
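The abstract describes abstracting the preprocessing pipeline into a dataflow graph of operators wired together on a canvas. As a rough illustration of the idea (not DragDL's actual API — the `OpNode` and `run_graph` names and the three-step pipeline below are hypothetical), such a graph can be represented as named operator nodes with upstream dependencies and executed in topological order:

```python
# Illustrative sketch of a canvas-style dataflow graph: each graphical
# operator becomes a node; execution follows topological order (Kahn's
# algorithm). Names here are invented for illustration only.
from collections import deque

class OpNode:
    """One operator on the canvas: a name, a function, and upstream inputs."""
    def __init__(self, name, fn, inputs=()):
        self.name = name
        self.fn = fn
        self.inputs = list(inputs)  # names of upstream nodes

def run_graph(nodes, source_data):
    """Run all nodes once their upstream dependencies have produced output."""
    by_name = {n.name: n for n in nodes}
    indegree = {n.name: len(n.inputs) for n in nodes}
    downstream = {n.name: [] for n in nodes}
    for n in nodes:
        for up in n.inputs:
            downstream[up].append(n.name)
    ready = deque(name for name, d in indegree.items() if d == 0)
    results = {}
    while ready:
        name = ready.popleft()
        node = by_name[name]
        # Source nodes (no inputs) receive the raw data directly.
        args = [results[up] for up in node.inputs] or [source_data]
        results[name] = node.fn(*args)
        for down in downstream[name]:
            indegree[down] -= 1
            if indegree[down] == 0:
                ready.append(down)
    return results

# A three-step preprocessing pipeline: load -> normalize -> batch.
pipeline = [
    OpNode("load", lambda d: d),
    OpNode("normalize", lambda xs: [x / max(xs) for x in xs], inputs=["load"]),
    OpNode("batch", lambda xs: [xs[i:i + 2] for i in range(0, len(xs), 2)],
           inputs=["normalize"]),
]
out = run_graph(pipeline, [2.0, 4.0, 8.0, 16.0])
print(out["batch"])  # [[0.125, 0.25], [0.5, 1.0]]
```

Because every node's outputs are kept by name, intermediate results can be inspected at any point in the graph — which is what makes a dataflow representation convenient for users to understand and debug, as the abstract notes.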

Key words: Dataflow graph, Deep learning, Graphical programming, PaddlePaddle, Pre-trained model

CLC Number: TP319