To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference

Qin, Q., Ren, J., Yu, J. et al. (6 more authors) (2019) To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference. In: 2018 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Ubiquitous Computing & Communications, Big Data & Cloud Computing, Social Computing & Networking, Sustainable Computing & Communications (ISPA/IUCC/BDCloud/SocialCom/SustainCom), 11-13 Dec 2018, Melbourne, Australia. IEEE, pp. 729-736. ISBN 978-1-7281-1141-4

Metadata

Copyright, Publisher and Additional Information: © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Keywords: Deep learning; embedded systems; parallelism; energy efficiency; deep inference
Dates:
  • Published (online): 21 March 2019
  • Published: 21 March 2019
Institution: The University of Leeds
Academic Units: The University of Leeds > Faculty of Engineering & Physical Sciences (Leeds) > School of Computing (Leeds)
Depositing User: Symplectic Publications
Date Deposited: 24 Jun 2020 12:55
Last Modified: 24 Jun 2020 12:55
Status: Published
Publisher: IEEE
Identification Number: https://doi.org/10.1109/bdcloud.2018.00110