Compressed Learning of Deep Neural Networks for OpenCL-Capable Embedded Systems
- Title
- Compressed Learning of Deep Neural Networks for OpenCL-Capable Embedded Systems
- Author
- 이상근
- Keywords
- compressed learning; regularization; proximal point algorithm; debiasing; embedded systems; OpenCL
- Issue Date
- 2019-04
- Publisher
- MDPI
- Citation
- APPLIED SCIENCES-BASEL, v. 9, No. 8, Article no. 1669
- Abstract
- Deep neural networks (DNNs) have been quite successful in solving many complex learning problems. However, DNNs tend to have a large number of learning parameters, leading to large memory and computation requirements. In this paper, we propose a model compression framework for efficient training and inference of deep neural networks on embedded systems. Our framework provides data structures and kernels for OpenCL-based parallel forward and backward computation in a compressed form. In particular, our method learns sparse representations of parameters using ℓ1-based sparse coding while training, storing them in compressed sparse matrices. Unlike previous works, our method does not require a pre-trained model as an input and therefore can be more versatile for different application environments. Even though the use of ℓ1-based sparse coding for model compression is not new, we show that it can be far more effective than previously reported when we use proximal point algorithms and the technique of debiasing. Our experiments show that our method can produce minimal learning models suitable for small embedded devices.
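The abstract's core recipe, ℓ1 regularization solved with a proximal (soft-thresholding) step followed by debiasing, can be illustrated on a toy least-squares problem. This is a minimal sketch in NumPy, not the paper's OpenCL implementation; the function names (`soft_threshold`, `ista_with_debiasing`) and the use of plain ISTA are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1: shrinks each entry toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_with_debiasing(A, b, lam, step, n_iter=500):
    """Proximal gradient (ISTA) for min_x 0.5||Ax - b||^2 + lam*||x||_1,
    followed by a debiasing step: an unregularized least-squares refit
    restricted to the nonzero support, which removes the shrinkage bias
    that the ℓ1 penalty introduces on the surviving coefficients."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                 # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)  # proximal step
    support = np.abs(x) > 1e-10
    x_debiased = np.zeros_like(x)
    if support.any():
        # Debiasing: re-solve least squares using only the selected columns.
        x_debiased[support] = np.linalg.lstsq(A[:, support], b, rcond=None)[0]
    return x, x_debiased
```

Because the debiased vector minimizes the residual over all vectors sharing the lasso solution's support, it never fits the data worse than the shrunken ℓ1 solution on that support; in the paper's setting, the analogous refit is what makes aggressive sparsification viable without losing accuracy.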
- URI
- https://www.mdpi.com/2076-3417/9/8/1669
- https://repository.hanyang.ac.kr/handle/20.500.11754/113147
- ISSN
- 2076-3417
- DOI
- 10.3390/app9081669
- Appears in Collections:
- COLLEGE OF COMPUTING[E](College of Software Convergence) > COMPUTER SCIENCE(School of Software) > Articles
- Files in This Item:
- 2019.04_이상근_Compressed Learning of Deep Neural Networks for OpenCL-Capable Embedded Systems.pdf