Tutorials

Notice for Tutorial registrants

  • The registration fee entitles you to attend one full-day or two half-day tutorials on May 22-23.
  • Please select one full-day tutorial or two half-day tutorials by clicking the “Select Your Tutorials Here” button below.
  • To register for additional tutorial(s), please send an e-mail to info@iscas2021.org.


Full-day Tutorial

1

Revolutionizing AI Hardware with Processing-in-Memory: From Architectures to Circuits


May 23, 2021 09:00~18:00 (KST)


Joo-Young Kim, KAIST, Korea

Bongjin Kim, University of California, Santa Barbara, USA

Tony Tae-Hyoung Kim, Nanyang Technological University, Singapore


Details
Abstract.

Artificial intelligence (AI) and machine learning (ML) technologies are revolutionizing many fields of study as well as a wide range of industry sectors such as information technology, mobile communication, automotive, and manufacturing. As more industries adopt the technology, we face an ever-increasing demand for a new type of hardware that enables faster and more energy-efficient processing of AI workloads. Traditional compute-centric computers such as CPUs and GPUs, which fetch data from memory devices to on-chip processing cores, have improved their compute performance rapidly with the scaling of process technology. However, in the era of AI and ML, most workloads involve simple but data-intensive processing between large-scale model parameters and activations, so data transfer between the storage and compute devices becomes the bottleneck of the system (the von Neumann bottleneck). Memory-centric computing takes the opposite approach to solve this data movement problem. Instead of fetching data from storage to compute, the data stays in the memory while the processing units are merged into it, so computations are done in place without moving any data. This tutorial consists of three parts. In Part I, we will briefly summarize the challenges facing the latest AI accelerators. Then, we will go through various processing-in-memory (PIM) architectures, notable circuit techniques for PIM, and a holistic approach to a practical and feasible PIM-based solution for AI hardware. In Part II, we will introduce SRAM-based PIM and its challenges, and explain various state-of-the-art SRAM-based PIM architectures and circuits. In Part III, we will present ReRAM-based PIM design, covering devices to architectures, and discuss several recent PIM works using ReRAM.
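To make the in-memory computing idea concrete, here is a toy Python model of an analog in-memory dot product (our illustration, not material from the tutorial; all array sizes and the ADC resolution are illustrative assumptions). Weights stay put as cell states in the array, input activations drive all rows at once, and a low-resolution column ADC digitizes the accumulated value, so no weight ever travels to a separate compute unit.

```python
import numpy as np

# Toy behavioral model of an analog in-memory dot product.
# Weights remain in the array as cell states; activations drive the rows;
# each column's accumulated value is read by a low-resolution ADC.

rng = np.random.default_rng(0)

ROWS, COLS = 64, 8           # 64 inputs, 8 output columns (illustrative)
ADC_BITS = 4                 # low-resolution column ADC (illustrative)

weights = rng.integers(-1, 2, size=(ROWS, COLS))  # ternary cell states {-1,0,1}
acts = rng.integers(0, 2, size=ROWS)              # binary input activations

# Ideal analog accumulation: all rows sum on each column simultaneously.
analog = acts @ weights

# The column ADC quantizes the accumulated value to 2**ADC_BITS levels.
full_scale = ROWS
levels = 2 ** ADC_BITS
codes = np.round((analog + full_scale) / (2 * full_scale) * (levels - 1))
sensed = codes / (levels - 1) * 2 * full_scale - full_scale

print("ideal :", analog)
print("sensed:", np.round(sensed).astype(int))
```

The gap between the ideal and sensed values hints at why ADC resolution and circuit non-idealities dominate PIM design discussions.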


Contents.
  • 1. Part I
    1-1. AI Accelerators and Challenges
    1-2. Processing-In-Memory (PIM) Architectures
    1-3. DRAM-based PIM Circuits
    1-4. PIM Solution
  • 2. Part II
    2-1. SRAM-based PIM Basics
    2-2. Challenges and Limitations in SRAM-based PIM
    2-3. State-of-the-art SRAM-based PIM
  • 3. Part III
    3-1. Resistive RAM (RRAM) Design Basics
    3-2. Challenges and Limitations in RRAM-based PIM
    3-3. State-of-the-art RRAM-based Compute-In-Memory

Biographies.

Prof. Joo-Young Kim received the B.S., M.S., and Ph.D. degrees in Electrical Engineering from the Korea Advanced Institute of Science and Technology (KAIST) in 2005, 2007, and 2010, respectively. He is currently an Assistant Professor in the School of Electrical Engineering at KAIST and the Director of the AI Semiconductor Systems (AISS) research center. His research interests span various aspects of hardware design, including VLSI design, computer architecture, FPGAs, domain-specific accelerators, hardware/software co-design, and agile hardware development. Before joining KAIST, he was a Senior Hardware Engineering Lead at Microsoft Azure, working on hardware acceleration for its hyper-scale big data analytics platform, Azure Data Lake. Before that, he was one of the initial members of the Catapult project at Microsoft Research, where he deployed a fabric of FPGAs in data centers to accelerate critical cloud services such as machine learning, data storage, and networking. He is a recipient of the 2016 and 2014 IEEE Micro Top Picks Awards, the 2010 and 2008 DAC/ISSCC Student Design Contest Awards, and the 2006 A-SSCC Student Design Contest Award. He serves as an Associate Editor of the IEEE Transactions on Circuits and Systems I: Regular Papers (2020-2021).


Prof. Bongjin Kim received the B.S. and M.S. degrees from Pohang University of Science and Technology (POSTECH), Pohang, Korea, in 2004 and 2006, respectively, and the Ph.D. degree from the University of Minnesota, Minneapolis, MN, USA, in 2014. After his Ph.D., he worked on design techniques and methodologies for communication circuits and microarchitectures at Rambus, as a senior technical staff member, and at Stanford University, as a postdoctoral research fellow. After three years (2017 to 2020) as an assistant professor at Nanyang Technological University in Singapore, he joined the Department of Electrical and Computer Engineering (ECE) at the University of California, Santa Barbara. From 2006 to 2010, he was with System LSI, Samsung Electronics, Yongin, South Korea. In 2012, he joined Wireless Business, Texas Instruments, Dallas, TX, USA, as an SRC summer intern, and in 2013 he joined the Mixed-Signal Communication IC Design Group, IBM T. J. Watson Research Center, as a research summer intern. He was an engineering intern and then a senior technical staff member in the Memory and Interface Division, Rambus Inc., Sunnyvale, CA, USA, from 2014 to 2016. Prof. Kim is the recipient of a Doctoral Dissertation Fellowship Award, an ISLPED Low Power Design Contest Award, and an Intel/IBM/Catalyst Foundation CICC Student Award. His research has appeared in top circuit conferences and journals, including ISSCC, the VLSI Symposium, CICC, and JSSC. His current research focuses on memory-centric computing circuits/architectures using embedded memories for artificial intelligence and machine learning, and on alternative computing solutions for combinatorial optimization problems.


Prof. Tony Tae-Hyoung Kim received the B.S. and M.S. degrees in electrical engineering from Korea University, Seoul, Korea, in 1999 and 2001, respectively, and the Ph.D. degree in electrical and computer engineering from the University of Minnesota, Minneapolis, MN, USA, in 2009. From 2001 to 2005, he worked for Samsung Electronics, where he performed research on the design of high-speed SRAMs, clock generators, and I/O interface circuits. During the summers of 2007 to 2009, he was with the IBM T. J. Watson Research Center and Broadcom Corporation, where he performed research on isolated NBTI/PBTI measurement circuits and SRAM mismatch measurement test structures, and on battery-backed memory design, respectively. In November 2009, he joined Nanyang Technological University, where he is currently an associate professor. His current research interests include low-power and high-performance digital, mixed-mode, and memory circuit design; ultra-low-voltage sub-threshold circuit design for energy efficiency; variation- and aging-tolerant circuits and systems; approximate computing; and circuit techniques for 3D ICs. He received the Best Demo Award at 2016 IEEE APCCAS, the International Low Power Design Contest Award at 2016 IEEE/ACM ISLPED, Best Paper Awards at the 2014 and 2011 ISOCC, the 2008 AMD/CICC Student Scholarship Award, the 2008 Departmental Research Fellowship from the University of Minnesota, the 2008 IEEE DAC/ISSCC Student Design Contest Award, the 2008 Samsung Humantec Thesis Award (Bronze Prize), the 2005 ETRI Journal Paper of the Year Award, the 2001 Samsung Humantec Thesis Award (Honor Prize), and the 1999 Samsung Humantec Thesis Award (Silver Prize). He is an author/co-author of over 160 journal and conference papers and holds 17 US and Korean patents. He serves as an Associate Editor of the IEEE Transactions on Very Large Scale Integration (VLSI) Systems, IEEE Access, and the IEIE Journal of Semiconductor Technology and Science (JSTS). He has also served on the technical committees of various conferences, including the IEEE Asian Solid-State Circuits Conference (A-SSCC), IEEE International Symposium on Circuits and Systems (ISCAS), IEEE Asia and South Pacific Design Automation Conference (ASP-DAC), and IEEE/ACM International Symposium on Low Power Electronics and Design (ISLPED). He was the Chair of the IEEE SSCS Singapore Chapter in 2015-2016. He is a senior member of the IEEE.

Half-day Tutorials

1

Approximate Computing: from Circuits to Emerging Applications


May 22, 2021 09:00~12:00 (KST)


Weiqiang Liu, Nanjing University of Aeronautics and Astronautics, China

Fabrizio Lombardi, Northeastern University, USA


Details
Abstract.

Computing systems are conventionally designed to operate as accurately as possible. However, this trend faces severe technology challenges, such as power dissipation, circuit reliability, and performance. A number of pervasive computing applications (such as machine learning, pattern recognition, digital signal processing, communication, robotics, and multimedia) are inherently error-tolerant or error-resilient, i.e., in general, they require acceptable rather than fully exact results. Approximate computing has been proposed for highly energy-efficient systems targeting the above-mentioned emerging error-tolerant applications; it consists of processing data approximately (inexactly) to save power and achieve high performance, while keeping results at an acceptable level for subsequent use. This tutorial starts with the motivation for approximate computing and then reviews current techniques for approximate hardware design. It will cover the following topics: 1) approximate arithmetic circuit designs; 2) algorithmic and approximate-circuit co-design methods; and 3) applications of approximate computing, such as deep neural networks (DNNs) and security, presented in detail. Directions for future work in approximate computing will also be provided. The tutorial will be presented and tailored to the CAS community and its technical interests.
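As a concrete taste of the approximate adders covered below, the following sketch implements one classic design, a lower-part OR adder (LOA), in Python. The bit-width, the number of approximated bits, and the dropped carry-in are illustrative choices for this sketch, not the specific circuits the tutorial presents.

```python
# Minimal sketch of a lower-part OR adder (LOA): the low k bits are
# approximated by a bitwise OR (no carry chain), while the upper bits
# use an exact adder. Parameters are illustrative.

def loa_add(a: int, b: int, k: int = 4) -> int:
    """Approximate a + b by OR-ing the k least-significant bits."""
    mask = (1 << k) - 1
    low = (a & mask) | (b & mask)          # approximate lower part, no carries
    high = ((a >> k) + (b >> k)) << k      # exact upper part, carry-in dropped
    return high | low

if __name__ == "__main__":
    import random
    random.seed(1)
    errs = []
    for _ in range(10000):
        a, b = random.randrange(1 << 16), random.randrange(1 << 16)
        errs.append(abs(loa_add(a, b) - (a + b)))
    print("mean abs error:", sum(errs) / len(errs))  # small vs. the 2**16 range
```

Skipping the lower carry chain is what saves power and delay; the price is a bounded error that error-tolerant applications can absorb.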


Contents.
  • 1. Motivation
  • 2. Techniques for Approximate Computing
  • 3. Approximate Arithmetic Designs
    3-1. Approximate Adders
    3-2. Approximate Multipliers
    3-3. Approximate Dividers
    3-4. Approximate CORDIC
  • 4. Approximate Hardware and Algorithm Co-design
    4-1. DNN Accelerator with Approximate Multipliers
    4-2. Hardware and Algorithm Co-designs for Large-Scale DNNs
  • 5. Security and Approximate Computing
    5-1. Security in Approximate Computing
    5-2. Approximate Computing for Security
  • 6. Discussion and Potential Future Works

Biographies.

Weiqiang Liu is currently a full Professor and the Vice Dean of the College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics (NUAA), Nanjing, China. He received the B.Sc. degree in Information Engineering from NUAA and the Ph.D. degree in Electronic Engineering from Queen's University Belfast (QUB), Belfast, United Kingdom, in 2006 and 2012, respectively. He has served as an Associate Editor of the IEEE Transactions on Computers (May 2015 to April 2019), the IEEE Transactions on Circuits and Systems I: Regular Papers (January 2020 to December 2021), and the IEEE Transactions on Emerging Topics in Computing (February 2020 to January 2022), and as a Guest Editor of the Proceedings of the IEEE (Special Issue on Approximate Computing) and the Elsevier Microelectronics Journal (Special Issue on Approximate Computing: Circuits, Architectures and Algorithms). He is the Program Committee Co-Chair of IEEE ARITH 2020 and has been a TPC member for a number of international conferences, including ARITH, DATE, ASAP, ISCAS, ASP-DAC, ICCD, ISVLSI, GLSVLSI, and SiPS. His research interests include approximate computing, emerging technologies in computing systems, computer arithmetic, and hardware security. He has published one research book with Artech House and over 110 leading journal and conference papers (including two papers in the Proceedings of the IEEE and over 40 IEEE Transactions papers). One of his papers was selected as the Feature Paper of the December 2017 issue of the IEEE TC. He was awarded the prestigious Excellent Young Scholar Award by NSFC in 2020. He is a Senior Member of the IEEE.


Fabrizio Lombardi graduated in 1977 from the University of Essex (UK) with a B.Sc. (Hons.) in Electronic Engineering. In 1977 he joined the Microwave Research Unit at University College London, where he received the Master in Microwaves and Modern Optics (1978), the Diploma in Microwave Engineering (1978), and the Ph.D. from the University of London (1982). He is currently the holder of the International Test Conference (ITC) Endowed Chair at Northeastern University, Boston. Dr. Lombardi is a member of the Executive Board of both the IEEE Nanotechnology Council (NTC) and the IEEE Computer Society (CS). He has been a two-term member of the Board of Governors of the IEEE CS (2012-2017) and an appointed member of its Future Directions Committee (2014-2017). In the past, Dr. Lombardi has been a two-term Editor-in-Chief (2007-2010), Associate Editor-in-Chief (2000-2006), and Associate Editor (1996-2000) of the IEEE Transactions on Computers; the inaugural two-term Editor-in-Chief of the IEEE Transactions on Emerging Topics in Computing (2013-2017); Editor-in-Chief of the IEEE Transactions on Nanotechnology (2014-2019); and a Guest Editor of the Proceedings of the IEEE (Special Issue on Approximate Computing). He was the recipient of the 2011 Meritorious Service Award and was elevated to Golden Core membership in the same year by the IEEE CS; he was the Chair of the 2016 and 2017 IEEE CS Fellow Evaluation Committees. He has been awarded the 2019 NTC Distinguished Service Award and the 2019 “Spirit of the CS” Award, and he received best paper awards at technical events/meetings such as IEEE DFT and IEEE/ACM NANOARCH. His research interests are emerging technologies, memory systems, VLSI design, and fault/defect tolerance of digital systems. He has published extensively in these areas and has coauthored/edited ten books. He is a Life Fellow of the IEEE. He is currently the Vice President for Publications of both the IEEE Computer Society (CS) and the IEEE Nanotechnology Council (NTC). In 2021, he will be the 2nd Vice President of the CS, a member of the IEEE Publication Services and Products Board (PSPB) (2019-2023), and the President-Elect of the IEEE NTC (and thus its 2022-2023 President).

2

Emerging Electromagnetic-Acoustic Sensing and Imaging beyond Radar and Ultrasound Systems


May 22, 2021 09:00~12:00 (KST)


Yuanjin Zheng, Nanyang Technological University, Singapore

Fei Gao, ShanghaiTech University, China


Details
Abstract.

Traditional electromagnetic sensing techniques (e.g., radar and lidar) and acoustic imaging techniques (e.g., microphone and ultrasound) are widely applied in the military, automotive, consumer, medical, and healthcare fields. The emerging Electromagnetic-Acoustic (EMA) technique combines the merits of electromagnetic sensing with acoustic imaging, and goes further by fusing the two sensing modalities. In this tutorial, we will discuss the implementations, functions, and limitations of the respective sensors, from circuits to systems, and demonstrate their emerging applications. The first session presents the realization of three types of EMA sensors: (1) low-power phased-array radar chips for Synthetic Aperture Radar (SAR) imaging, (2) EMA sensing systems for non-destructive testing (NDT), and (3) photoacoustic sensors for blood oxygen and blood glucose sensing. The second session focuses on photoacoustic imaging (PAI) from three perspectives: (1) circuits and systems, (2) algorithms and AI, and (3) biomedical applications. On circuits and systems, we will introduce several novel hardware designs that enable low-cost portable PAI systems, such as miniaturized laser sources and reduced data-acquisition channel counts. On algorithms and AI, we will discuss signal/image processing algorithms and deep learning frameworks for de-noising, image reconstruction, quantification, classification, segmentation, and PAI-assisted light/sound treatment. On biomedical applications, we will briefly introduce several application scenarios of PAI, including breast cancer screening and microscopic skin imaging.
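As one concrete example of the reconstruction algorithms the PAI session builds on, the sketch below implements plain delay-and-sum beamforming in Python. The array geometry, sampling rate, and random placeholder RF data are all illustrative assumptions for this sketch, not details of the presenters' systems.

```python
import numpy as np

# Minimal delay-and-sum (DAS) photoacoustic reconstruction sketch.
# Each pixel sums, across channels, the RF sample at the acoustic
# time of flight from that pixel to each sensor.

C = 1500.0                  # assumed speed of sound in tissue, m/s
FS = 40e6                   # assumed ADC sampling rate, Hz
N_SENSORS, N_SAMPLES = 64, 2048

# linear array at y = 0 with 0.3 mm pitch (hypothetical geometry)
sensor_x = (np.arange(N_SENSORS) - N_SENSORS / 2) * 0.3e-3
rf = np.random.randn(N_SENSORS, N_SAMPLES)   # placeholder RF data

def das_pixel(px: float, py: float) -> float:
    """Sum each channel's sample at the time of flight to pixel (px, py)."""
    dist = np.hypot(sensor_x - px, py)         # sensor-to-pixel distances
    idx = np.round(dist / C * FS).astype(int)  # time of flight in samples
    valid = idx < N_SAMPLES
    return rf[np.flatnonzero(valid), idx[valid]].sum()

# reconstruct a small 2-D region below the array
xs = np.linspace(-5e-3, 5e-3, 50)
ys = np.linspace(5e-3, 15e-3, 50)
image = np.array([[das_pixel(x, y) for x in xs] for y in ys])
print(image.shape)  # (50, 50)
```

The deep learning methods covered in the session typically replace or post-process exactly this kind of classical beamformer.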


Contents.
  • 1. Introduction of Electromagnetic-Acoustic Sensing and Imaging Circuits and Systems
  • 2. Low-power phased-array radar chips for Synthetic Aperture Radar (SAR) imaging
  • 3. EMA sensing systems for non-destructive testing (NDT)
  • 4. Photoacoustic sensors for blood oxygen and blood glucose sensing
  • 5. Photoacoustic imaging: circuits and systems
  • 6. Photoacoustic imaging: algorithms and AI
  • 7. Photoacoustic imaging: biomedical applications
  • 8. Conclusions

Biographies.

Dr. Yuanjin Zheng received his B.Eng. from Xi'an Jiaotong University, P. R. China, in 1993 with first-class honors, his M.Eng. from Xi'an Jiaotong University in 1996 with the best graduate student thesis award, and his Ph.D. from Nanyang Technological University, Singapore, in 2001. From July 1996 to April 1998, he worked at the National Key Lab of Optical Communication Technology, University of Electronic Science and Technology of China. He joined the Institute of Microelectronics, A*STAR, in 2001, where he rose to group technical manager. There he led the development of various wireless systems and CMOS integrated circuits, including Bluetooth, WLAN, WCDMA, UWB, RF SAW/MEMS, radar, wireless implant sensors, and wearable interface circuits. In July 2009, he joined Nanyang Technological University, where he is now Director of the Center of Integrated Circuits and Systems, working on radar system development, hybrid circuit and device (GaN, SAW, MEMS) designs, and flexible noninvasive sensor circuits and systems for monitoring ECG, EEG, SpO2, SaO2, neural spikes, and blood glucose. He has authored or coauthored over 400 international journal and conference papers, filed 26 patents, and written 5 book chapters. He is currently an associate editor of the Journal of Circuits, the Journal of X-Acoustics: Sensing and Imaging, and the IEEE Transactions on Biomedical Circuits and Systems. He has helped organize dozens of IEEE conferences as TPC chair and session chair.


Dr. Fei Gao received his bachelor's degree in Microelectronics from Xi'an Jiaotong University in 2009 and his Ph.D. degree in Electrical and Electronic Engineering from Nanyang Technological University, Singapore, in 2015. He worked as a postdoctoral researcher at Nanyang Technological University and Stanford University in 2015-2016. He joined the School of Information Science and Technology, ShanghaiTech University, as an assistant professor in January 2017 and established the Hybrid Imaging System Laboratory (www.hislab.cn). During his Ph.D. study, he received an integrated-circuit scholarship from the Singapore government and the Chinese Government Award for Outstanding Self-financed Students Abroad (2014). His Ph.D. thesis was selected for the Springer Thesis Award in 2016. He has published more than 50 papers in top journals, such as Photoacoustics, IEEE TBME, IEEE TMI, IEEE JSTQE, IEEE TCAS-II, IEEE TBioCAS, IEEE Sens. J., IEEE Photon. J., and IEEE Sens. Lett., and more than 60 papers in top conferences including MICCAI, ISBI, ISCAS, BioCAS, EMBC, and IUS. One of his papers was selected for oral presentation at MICCAI 2019 (53 out of 1700 submissions). In 2017, he was awarded the Shanghai Eastern Scholar Professorship, and in 2018 and 2019 he received the Excellent Research Award from ShanghaiTech University. His interdisciplinary research topics include hybrid imaging physics, biomedical and clinical applications, and biomedical circuits, systems, and algorithm design.

3

Advances in Design and Implementation of End-to-End Learned Image and Video Compression


May 22, 2021 09:00~12:00 (KST)


Wen-Hsiao Peng, National Chiao Tung University, Taiwan

Heming Sun, Waseda University, Japan


Details
Abstract.

DCT-based transform coding has been the basis of international standards (ISO JPEG, ITU H.261/264/265, ISO MPEG-2/4/H, and many others) for nearly 30 years. Although researchers are still trying to improve its efficiency by fine-tuning its components and parameters, the basic structure has not changed in the past two decades.

Recently developed deep learning technology may provide a new direction for constructing high-compression image/video coding systems. Recent results, particularly from the Challenge on Learned Image Compression (CLIC) at CVPR, indicate that this new type of scheme (often trained end-to-end) has good potential for further improving compression efficiency.

In the first part of this tutorial, we shall (1) briefly summarize the progress of this topic in the past three or so years, including an overview of CLIC results and the JPEG AI Call-for-Evidence Challenge on Learning-based Image Coding (issued in early 2020). Because Deep Neural Network (DNN)-based image compression is a new area, several techniques and structures have been tested. Recently published autoencoder-based schemes can achieve PSNR similar to VVC Intra (the state-of-the-art codec standardized and published in 2020) with superior subjective quality, especially at very low bit rates. In the second part, we shall (2) address the detailed design concepts of image compression algorithms using the autoencoder structure. We will also explore some recent low-complexity and hardware-oriented methods, such as a fixed-point learned image compression framework. In the third part, we shall switch gears to (3) explore the emerging area of DNN-based video compression. Recent publications in this area indicate that end-to-end trained video compression can achieve rate-distortion performance comparable or superior to HEVC/H.265. CLIC at CVPR 2020 also created, for the first time, a track dedicated to P-frame coding.
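For readers unfamiliar with the autoencoder-based schemes mentioned above, the following minimal PyTorch sketch shows the core training idea: encode, quantize (approximated by additive uniform noise during training), decode, and minimize a rate-distortion objective. The tiny network and the crude rate proxy are our illustrative assumptions; published codecs use learned entropy models and far larger architectures.

```python
import torch
import torch.nn as nn

# Minimal sketch of the end-to-end learned image compression objective:
# minimize rate + lambda * distortion over an autoencoder.

class TinyCodec(nn.Module):
    def __init__(self, ch: int = 32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, ch, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(ch, ch, 5, stride=2, padding=2))
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, 5, 2, 2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 3, 5, 2, 2, output_padding=1))

    def forward(self, x):
        y = self.enc(x)
        # training-time proxy for quantization: additive uniform noise
        y_hat = y + torch.empty_like(y).uniform_(-0.5, 0.5)
        return self.dec(y_hat), y_hat

model = TinyCodec()
x = torch.rand(1, 3, 64, 64)
x_hat, y_hat = model(x)

lam = 0.01
distortion = ((x - x_hat) ** 2).mean()         # MSE distortion term
# crude rate proxy; real systems learn an entropy model for y_hat
rate = torch.log2(1.0 + y_hat.abs()).mean()
loss = rate + lam * distortion                 # rate-distortion objective
loss.backward()
print(float(loss))
```

Sweeping lambda traces out the rate-distortion curve, which is how learned codecs are compared against BPG, HEVC, and VVC in CLIC-style evaluations.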


Contents.

1. Overview of Learning-based Image and Video Compression

  • 1-1. Introduction to neural network-based image and video compression
  • 1-2. Recent developments in CLIC and JPEG AI Call-for-Evidence
    [Learning outcomes] At the end of this module, the attendees will be able to:
      ◦ Contrast the learned and traditional image/video codecs.
      ◦ Tell the compression performance and complexity characteristics of learned image/video codecs, as compared to the traditional codecs (e.g., BPG and VVC).

2. End-to-End Learned Image Compression

  • 2-1. Elements of DNN-based image compression systems
  • 2-2. Review of a few notable systems
  • 2-3. Considerations for low-complexity implementation
    [Learning outcomes] At the end of this module, the attendees will be able to:
      ◦ List the common elements in end-to-end learned image codecs.
      ◦ Identify key prior works in this area.
      ◦ Describe the implementation challenges and some possible solutions.

3. End-to-End Learned Video Compression

  • 3-1. Elements of DNN-based video compression systems
  • 3-2. Motion estimation and compensation
  • 3-3. Review of a few notable systems
  • 3-4. Future outlook
    [Learning outcomes] At the end of this module, the attendees will be able to:
      ◦ List the common elements in end-to-end learned video codecs.
      ◦ Describe how inter-frame prediction is done differently in learned codecs.
      ◦ Identify key prior works in this area.
      ◦ Indicate the current developments and future trends.

Biographies.

Dr. Wen-Hsiao Peng (M’09-SM’13) received his Ph.D. degree from National Chiao Tung University (NCTU), Taiwan, in 2005. He was with the Intel Microprocessor Research Laboratory, USA, from 2000 to 2001, where he was involved in the development of ISO/IEC MPEG-4 fine granularity scalability. Since 2003, he has actively participated in the ISO/IEC and ITU-T video coding standardization process and contributed to the development of the SVC, HEVC, and SCC standards. He was a Visiting Scholar with the IBM Thomas J. Watson Research Center, USA, from 2015 to 2016. He is currently a Professor with the Computer Science Department, NCTU. He has authored over 75 journal/conference papers and over 60 ISO/IEC and ITU-T standards contributions.

His research interests include learning-based video/image compression, deep/machine learning, multimedia analytics, and computer vision.

Dr. Peng is Chair of IEEE CASS Visual Signal Processing (VSPC) Technical Committee. He was Technical Program Co-chair for 2021 IEEE VCIP, 2011 IEEE VCIP, 2017 IEEE ISPACS, and 2018 APSIPA ASC; Publication Chair for 2019 IEEE ICIP; Area Chair/Session Chair/Tutorial Speaker/Special Session Organizer for IEEE ICME, IEEE VCIP, and APSIPA ASC; and Track/Session Chair and Review Committee Member for IEEE ISCAS. He serves as AEiC for Digital Communications for IEEE JETCAS and Associate Editor for IEEE TCSVT. He was Lead Guest Editor, Guest Editor and SEB Member for IEEE JETCAS, and Guest Editor for IEEE TCAS-II. He was Distinguished Lecturer of APSIPA. Dr. Peng is also a Fellow of the Higher Education Academy (FHEA).


Dr. Heming Sun received the B.E. degree in electronic engineering from Shanghai Jiao Tong University, Shanghai, China, in 2011, and M.E. degrees from Waseda University and Shanghai Jiao Tong University, in 2012 and 2014, respectively, through a double-degree program. In 2017 he earned his Ph.D. degree from Waseda University through the Embodiment Informatics program supported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT), where he is currently a junior researcher. He was a researcher at NEC Central Research Laboratories from 2017 to 2018, and was selected as a Japan Science and Technology Agency (JST) PRESTO Researcher for 2019 to 2023. His interests are in algorithms and VLSI architectures for image/video processing and neural networks. He participated in the 8K HEVC decoder chip design that won the ISSCC 2016 Takuo Sugano Award for Outstanding Far-East Paper, and his module design won the VLSI Design and Education Center (VDEC), University of Tokyo, VDEC Design Award. He also actively participates in learned image/video coding competitions and won the Silver Award in the Picture Coding Symposium Grand Challenge on Short Video Coding.

He has published over 50 journal and conference papers (e.g., in TMM, JSSC, TCAS-I, ISSCC, CVPR, VCIP, and ISCAS). He held a special session on "Neural Network Technology in Future Image/Video Coding" at the Picture Coding Symposium 2019 and was invited by the Information Processing Society of Japan to give a talk on "Deep Learning Method for Image Compression". He has also served as a reviewer for many flagship CAS Society journals, including TCSVT, TCAS-I, and TCAS-II.

4

AI-Managed Analog/Digital Interfaces – Application to Cognitive Radio Digitizers


May 22, 2021 15:00~18:00 (KST)


José M. de la Rosa, Institute of Microelectronics of Seville, Spain


Details
Abstract.

Embedding Artificial Intelligence (AI) in integrated circuits is one of the technology drivers towards an intelligent society. Nowadays, the vast majority of electronic devices benefit from digital signal processing, which can be enhanced by the action of AI algorithms. As analog/digital interfaces move closer and closer to the point where information is either acquired or transmitted, so-called AI-managed data converters are becoming key building blocks in an increasing number of interconnected cyber-physical systems. In this scenario, this tutorial gives an overview of cutting-edge circuits and systems techniques to implement efficient AI-managed Analog-to-Digital Converters (ADCs). The state of the art, design trends, and challenges will be surveyed, going from system-level architectures and optimization-based design methods, to Artificial Neural Network (ANN) algorithms and digital-based/scaling-friendly analog circuit techniques, as well as the pros and cons derived from their integration in deep nanometer processes. Software-defined electronics and Cognitive Radio (CR) systems intended for 5G/6G communications will be considered as an application scenario, and some case studies – mostly based on AI-managed Sigma-Delta (SD) ADCs – will be shown. The tutorial is addressed to a general audience interested in learning the fundamentals, state of the art, and research opportunities derived from the use of AI-based digitizers, as one of the technology pillars towards the digital transformation of our society.
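As background for the SD-based case studies, here is a minimal behavioral model of a first-order Sigma-Delta modulator in Python; the oversampling ratio, test signal, and simple moving-average decimator are illustrative assumptions, not the circuits shown in the tutorial.

```python
import numpy as np

# Behavioral model of a first-order Sigma-Delta modulator:
#   v[n] = v[n-1] + x[n] - y[n-1];  y[n] = sign(v[n])
# The 1-bit stream, once low-pass filtered, recovers the input.

OSR = 64                     # oversampling ratio (illustrative)
N = 4096
t = np.arange(N)
x = 0.5 * np.sin(2 * np.pi * t / (2 * OSR * 8))   # slow input, |x| < 1

v, y = 0.0, 0.0
bits = np.empty(N)
for n in range(N):
    v += x[n] - y                    # integrate the feedback error
    y = 1.0 if v >= 0 else -1.0      # 1-bit quantizer
    bits[n] = y

# crude decimation: moving-average filter over one OSR window
rec = np.convolve(bits, np.ones(OSR) / OSR, mode="same")
print("rms error:", np.sqrt(np.mean((rec - x) ** 2)))
```

The loop pushes quantization noise to high frequencies (noise shaping), which is exactly the property the AI-managed techniques in the case studies exploit and tune.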


Contents.
  • 1. AI-Managed ADCs: Fundamentals, State-of-the-Art & Trends
    1. Introduction and motivation: towards the digital transformation
    2. A quick look at the fundamentals and state of the art on ADCs
    3. Emerging ADC circuits and systems techniques, trends and challenges
    4. AI-managed ADCs: basic concepts on AI algorithms and neural networks
  • 2. Application to Cognitive-Radio Digitizers
    1. Towards Software-Defined-Radio terminals and Cognitive Radio
    2. Digital-assisted analog circuits for AI-managed mobile terminals
    3. AI/Machine-learning techniques for CR digitizers
    4. Case studies: application to AI-managed CR (SD-based) receivers

Biography.

José M. de la Rosa (Fellow, IEEE) received the M.S. degree in Physics in 1993 and the Ph.D. degree in Microelectronics in 2000, both from the University of Seville, Spain. Since 1993 he has been working at the Institute of Microelectronics of Seville (IMSE), which is in turn part of the Spanish Microelectronics Center (CNM) of the Spanish National Council of Scientific Research (CSIC). He is presently the vice-director of IMSE and a Full Professor at the Dept. of Electronics and Electromagnetism of the University of Seville. His main research interests are in the field of analog and mixed-signal integrated circuits, especially high-performance data converters, including the analysis, behavioral modeling, design, and design automation of such circuits. In these topics, Dr. de la Rosa has participated in a number of national and European research and industrial projects, and has co-authored some 250 international publications, including journal and conference papers, and 6 books, the most recent being Sigma-Delta Converters: Practical Design Guide (Wiley-IEEE Press, 2nd Edition, 2018). Dr. de la Rosa served as a Distinguished Lecturer of the IEEE Circuits and Systems Society (CASS) during the 2017-2018 term, is a member of the Analog Signal Processing Technical Committee of IEEE-CASS, and was the Chair of the Spain Chapter of IEEE-CASS (2016-2017 term). He is currently the Editor-in-Chief of the IEEE Transactions on Circuits and Systems II: Express Briefs, where he served as Deputy EiC from 2016 to 2019. He also served as an Associate Editor of the IEEE Transactions on Circuits and Systems I: Regular Papers, where he received the 2012-2013 Best Associate Editor Award and was Guest Editor for the Special Issues on the Custom Integrated Circuits Conference (CICC) in 2013 and 2014. He also served as Guest Editor of the Special Issue of the IEEE Journal on Emerging and Selected Topics in Circuits and Systems on Next-Generation Delta-Sigma Converters. He is a member of the Steering Committee of IEEE MWSCAS and has been involved in the organizing and technical committees of diverse international conferences, among others IEEE ISCAS, IEEE MWSCAS, IEEE ICECS, IEEE LASCAS, IFIP/IEEE VLSI-SoC, and DATE. He served as TPC chair of IEEE MWSCAS 2012, IEEE ICECS 2012, IEEE LASCAS 2015, and IEEE ISICAS (2018, 2019). He has been a member of the Executive Committee of the IEEE Spain Section (2014-2015 and 2016-2017 terms), where he served as Membership Development Officer during the 2016-2017 term. (More details about the speaker are available at www.imse-cnm.csic.es/~jrosa.)

5

Machine Learning Methods in VLSI Computer-Aided Design


May 22, 2021 15:00~18:00 (KST)


Ibrahim (Abe) M. Elfadel, Khalifa University, Abu Dhabi, UAE


Details
Abstract.

The objective of this tutorial is to give a broad yet up-to-date overview of machine learning as applied in the area of VLSI computer-aided design. The tutorial will be based on the content of the Springer book "Machine Learning in VLSI CAD" (Editors: Elfadel, Boning, and Li, 2019). The targeted audience is the ISCAS community of professionals and graduate students who are interested in the application of machine learning frameworks, methodologies, algorithms, and techniques in the context of computer-aided design for very-large-scale integrated circuits. The tutorial will address the various levels of the VLSI CAD methodology flow, from high-level synthesis to silicon fabrication, and will cover the fascinating variety of machine learning methods the VLSI CAD community has used in lithography, physical design, yield prediction, post-silicon performance analysis, reliability, failure analysis, power analysis, analog design, logic synthesis, verification, and neuromorphic design. Furthermore, it will provide an in-depth analysis of the usage of machine learning in VLSI CAD for compact modeling, library characterization, and performance prediction. It will also discuss the use of machine learning techniques in the context of analog and digital synthesis, demonstrate how to formulate VLSI CAD objectives as machine learning problems, and provide a comprehensive treatment of their efficient solutions. Finally, it will address the tradeoffs between the cost of collecting post-silicon data and CAD model prediction accuracy, and provide a methodology for using prior data to reduce the cost of data collection in the design, testing, and validation of both analog and digital VLSI designs.
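To illustrate one recurring pattern behind several of the chapters listed below, the following sketch (our example, not from the book) fits a cheap Gaussian-process surrogate to a handful of "simulations" of a made-up delay function and then queries the surrogate with uncertainty estimates; this is the general workflow behind performance prediction, yield modeling, and response surface modeling.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Surrogate-modeling sketch: replace an expensive simulator with a
# cheap learned model fitted on a few sampled runs.

rng = np.random.default_rng(0)

def spice_delay(params):
    """Placeholder for an expensive SPICE run (made-up delay function)."""
    vth, leff = params[:, 0], params[:, 1]
    return 1.0 / ((1.0 - vth) ** 1.3 * leff ** -0.5)

# 30 "simulation" runs over hypothetical (Vth, Leff) ranges
X_train = rng.uniform([0.2, 0.8], [0.5, 1.2], size=(30, 2))
y_train = spice_delay(X_train)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
gp.fit(X_train, y_train)

X_test = rng.uniform([0.2, 0.8], [0.5, 1.2], size=(5, 2))
pred, std = gp.predict(X_test, return_std=True)   # mean and uncertainty
print(np.c_[pred, std, spice_delay(X_test)])
```

The predictive uncertainty is what makes such models useful for deciding where the next expensive simulation or measurement is worth spending, the data-collection tradeoff the final part of the tutorial addresses.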



Contents.
  • 1. Introduction to Machine Learning in VLSI Computer-Aided Design
  • 2. Part I: Machine Learning for VLSI Lithography and Physical Design
    2-1. Machine Learning for Compact Lithographic Process Models
    2-2. Machine Learning for Mask Synthesis
    2-3. Machine Learning in Physical Verification
  • 3. Part II: Machine Learning for VLSI Manufacturing, Yield, and Reliability
    3-1. Machine Learning Approaches for IC Manufacturing Yield Enhancement
    3-2. Statistical Learning for Wafer-Level Correlation Modeling and Its Applications
    3-3. Efficient Process Variation Characterization using Compressive Sensing
    3-4. Machine Learning for VLSI Chip Testing
    3-5. Machine Learning for VLSI Chip Aging Analysis
    3-6. Extreme Statistics in VLSI Memories
    3-7. Fast Statistical Analysis of Rare VLSI Failure Events
    3-8. Rare Events and Learning from Limited Data in VLSI CAD
  • 4. Part III: Machine Learning for VLSI Analog Design
    4-1. Analog Circuit Performance Modeling using Bayesian Learning
    4-2. Sparse Relevance Kernel Machines and their use for Performance Estimation
    4-3. Projection Pursuit for Analog Response Surface Modeling
    4-4. Uncertainty Quantification of Analog Designs using Machine Learning
    4-5. Compact Modeling using Technology Priors
  • 5. Part IV: Machine Learning for VLSI Logic Design and Optimization
    5-1. AutoML System for Logic Synthesis of High-End Processors
    5-2. Statistical Library Characterization using Bayesian Methods
    5-3. Power Proxies Using Least-Angle Regression for Multi-core Processors
    5-4. Assertion Mining Algorithms using Machine Learning
  • 6. The Way Forward: Machine Learning CAD for Machine Learning VLSI


Biography.

Ibrahim (Abe) M. Elfadel is Professor of Electrical and Computer Engineering at the Khalifa University of Science and Technology, Abu Dhabi, UAE. Between May 2014 and May 2019, he was the Program Manager of TwinLab MEMS, a joint collaboration with GLOBALFOUNDRIES and the Singapore Institute of Microelectronics on microelectromechanical systems. Between May 2013 and May 2018, he was the founding co-director of the Abu Dhabi Center of Excellence on Energy-Efficient Electronic Systems (ACE4S). Between November 2012 and October 2015, he was the founding co-director of Mubadala's TwinLab 3DSC, a joint research center on 3D integrated circuits with the Technical University of Dresden, Germany. He also headed the Masdar Institute Center for Microsystems (iMicro) from November 2013 until March 2016. From 1996 to 2010, he was with the corporate CAD organizations at IBM Research and the IBM Systems and Technology Group, Yorktown Heights, NY, where he was involved in the research, development, and deployment of CAD tools and methodologies for IBM's high-end microprocessors. His current research interests span a broad spectrum of topics in the area of cyber-physical systems, including IoT platform prototyping; IoT communications; energy-efficient edge and cloud computing; low-power, embedded digital-signal processing; MEMS sensors and energy harvesters; 3D integration; and CAD for VLSI and MEMS. Dr. Elfadel is the recipient of six Invention Achievement Awards, one Outstanding Technical Achievement Award, and one Research Division Award, all from IBM, for his contributions in the area of VLSI CAD. He is the inventor or co-inventor of 50 issued US patents with several more pending. In 2014, he was the co-recipient of the D. O. Pederson Best Paper Award from the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. In 2018, he received (with Prof. Mohammed Ismail) the SRC Board of Directors Special Award for “pioneering semiconductor research in Abu Dhabi.” Dr. Elfadel is the lead co-editor of three Springer books: "3D Stacked Chips: From Emerging Processes to Heterogeneous Systems" (2016), "The IoT Physical Layer: Design and Implementation" (2019), and "Machine Learning in VLSI CAD" (2019). From 2009 to 2013, he served as an Associate Editor of the IEEE Transactions on Computer-Aided Design. He is currently serving as an Associate Editor of the IEEE Transactions on VLSI Systems and on the Editorial Board of the Microelectronics Journal (Elsevier). Dr. Elfadel has also served on the Technical Program Committees of several leading conferences, including ISCAS, DAC, ICCAD, ASP-DAC, DATE, ICCD, ICECS, and MWSCAS. He is on the Steering Committee of VLSI-SoC and was the General Co-chair of its 25th edition (VLSI-SoC 2017) in Abu Dhabi, UAE. Most recently, he was the co-organizer of the ICCAD co-located workshop on Accelerator Computer-Aided Design (ACCAD 2019 and 2020), with a focus on machine learning CAD for machine learning VLSI. Dr. Elfadel received his Ph.D. from MIT in 1993.

6

Vibration/Motion-Powered IoT: Electromechanical Dynamics and Low-Power Circuit-and-System Designs


May 23, 2021 09:00~12:00 (KST)


Junrui Liang, ShanghaiTech University, China

Dong Sam Ha, Virginia Polytechnic Institute and State University, USA


Details
Abstract.

Energy harvesting technology has been extensively investigated during the last decade, with the aim of providing maintenance-free, long-lasting replacements for capacity-limited and environmentally unfriendly chemical batteries. Ambient vibrations are among the most promising energy sources for powering future massively distributed IoT devices. The co-design challenges of vibration-powered IoT devices arise from their mechanical, electrical, and cyber constraints, which cannot be solved independently by researchers from any single engineering background. Interdisciplinary research efforts are necessary for the implementation and optimization of such cyber-electromechanically coupled systems.

In this tutorial, we will provide a comprehensive introduction to these cutting-edge technologies, from the analysis and design of vibration energy harvesting systems to their application in massively distributed low-power IoT devices. The first part introduces the system overview and the joint dynamic model of electromechanical vibration energy harvesting systems. The second part elaborates the design criteria of power-conditioning interface circuits, which are derived from the electromechanically coupled dynamics, as well as the evolutionary trajectory of existing interface circuits. The third part investigates typical circuit-and-system solutions, with an emphasis on their integrated-circuit (IC) implementations. The fourth part discusses applications and possible holistic optimization considering the mechanical, electrical, and cyber constraints.

Participants attending this tutorial will not only gain ideas on low-power circuit-and-system designs for transiently or intermittently powered IoT systems but also learn to think beyond disciplinary boundaries towards the holistic co-design of a cyber-electromechanically coupled system.
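For readers who want a feel for the joint dynamic model of the first part, the sketch below integrates a commonly used lumped-parameter model of a piezoelectric harvester loaded by a resistor; all parameter values are illustrative assumptions, not numbers from the tutorial.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Common lumped-parameter model of a piezoelectric vibration harvester:
#   M x'' + C x' + K x + alpha*v = F(t)     (mechanical side)
#   Cp v'  + v/R = alpha * x'               (electrical side)
# All constants below are illustrative.

M, C, K = 1e-3, 0.05, 400.0        # kg, N*s/m, N/m
ALPHA, CP, R = 1e-4, 50e-9, 1e5    # N/V coupling, F, ohm
W = np.sqrt(K / M)                 # drive near mechanical resonance

def rhs(t, s):
    x, xd, v = s
    force = 1e-3 * np.sin(W * t)   # sinusoidal base excitation
    xdd = (force - C * xd - K * x - ALPHA * v) / M
    vd = (ALPHA * xd - v / R) / CP
    return [xd, xdd, vd]

sol = solve_ivp(rhs, (0, 2), [0, 0, 0], max_step=1e-4)
power = sol.y[2] ** 2 / R          # instantaneous power in the load
print("avg harvested power (W):", power[len(power) // 2:].mean())
```

Swapping the resistive load for a switched rectifier in this model is precisely where the interface-circuit design criteria of the second part come from.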



Contents.
  • 1. System overview and dynamic model
  • 2. Interface circuit development
  • 3. Systematic solutions and IC designs
  • 4. Applications and possible optimization
  • 5. Brainstorming with the audience


Biographies.

Junrui Liang (S’09–M’10–SM’20) has been an Assistant Professor at the School of Information Science and Technology, ShanghaiTech University, China, since 2013. He is also the Director of the Center for Intelligent Power and Energy Systems (CiPES) at ShanghaiTech University. He received his B.E. and M.E. degrees from Shanghai Jiao Tong University, China, in 2004 and 2007, respectively, and his Ph.D. degree from The Chinese University of Hong Kong, China, in 2010. Dr. Liang is an Associate Editor of the journal IET Circuits, Devices & Systems, was the General Chair of the 2nd International Conference on Vibration and Energy Harvesting Applications (VEH 2019), and is a member of the Technical Committee on Power and Energy Circuits and Systems (PECAS) of the IEEE Circuits and Systems Society and of the Energy Harvesting Technical Committee (EHTC), Adaptive Structures and Material Systems Branch, ASME Aerospace Division. He has also served as a Program Committee Member of the SPIE Smart Structures + Nondestructive Evaluation Conference and as a Review Committee Member (RCM) and Special Session Organizer at the IEEE International Symposium on Circuits and Systems (ISCAS). Dr. Liang's research interests include energy conversion and power conditioning circuits, kinetic energy harvesting and vibration suppression, mechatronics, and IoT devices. His research has led to over 80 technical papers in international journals and conference proceedings and two China patent applications. Dr. Liang is a recipient of a Best Student Contributions Award at the 19th International Conference on Adaptive Structures and Technologies (2008), two Best Paper Awards at the IEEE International Conference on Information and Automation (2009 and 2010), the Postgraduate Research Output Award from The Chinese University of Hong Kong (2011), and the Excellent Research Award from ShanghaiTech University (2018).



Dong Sam Ha (M’86–SM’97–F’08–LF’18) received the B.S. degree in electrical engineering from Seoul National University, Seoul, South Korea, in 1974, and the M.S. and Ph.D. degrees in electrical and computer engineering from the University of Iowa, Iowa City, IA, USA, in 1984 and 1986, respectively. Since Fall 1986, he has been a faculty member of the Bradley Department of Electrical and Computer Engineering, Virginia Polytechnic Institute and State University (Virginia Tech), Blacksburg, VA, USA, where he is a Professor and Director of the Multifunctional Integrated Circuits and Systems Group, composed of five faculty members and about 30 graduate students. Along with his students, he has developed four computer-aided design tools for digital circuit testing and CMOS standard cell libraries; the source code for the four tools and the cell libraries have each been distributed to about 380 universities and research institutions worldwide. His research interests include analog and RF circuits and systems. His group currently focuses on power management circuits for energy harvesting and high-temperature RF circuits and systems for downhole communications, engine monitoring, and NASA Venus exploration.

7

Artificial Intelligence & Deep Learning Hardware Accelerators for Smart Technology and Intelligent Society


May 23, 2021 09:00~12:00 (KST)


Shiho Kim, Yonsei University, South Korea

Ashutosh Mishra, Yonsei University, South Korea

Hyunbin Park, Samsung Electronics, South Korea


Details
Abstract.

Smart technology and an intelligent society are demands of this era. Artificial Intelligence (AI) and deep learning (DL) algorithms play a vital role in catering to these demands and meeting the expectations of the smart world and intelligent systems. However, the performance of these algorithms ultimately depends on efficient hardware implementation. Therefore, AI accelerators are a state-of-the-art research area for circuits-and-systems designers and academics. In this tutorial, we will explain the demands on and requirements of AI accelerators, together with their multifaceted applications. We will focus broadly on the hardware aspects of AI accelerators for training, inference, mobile devices, and autonomous vehicles (AVs). These areas will be covered in a half-day tutorial comprising three separate sessions. The first session will cover a general overview of AI accelerators for training as well as inference. The other two sessions will provide insight into state-of-the-art (SOTA) technological developments in AI hardware for the neural processing units (NPUs) used in smartphones and for AI hardware used in autonomous vehicles, respectively.
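As a minimal illustration of the kind of hardware organization the first session surveys, the following Python toy emulates an output-stationary systolic-style matrix multiply, the dataflow at the heart of many inference accelerators; the matrix sizes and int8 operand width are illustrative assumptions.

```python
import numpy as np

# Toy output-stationary "systolic" matrix multiply: each output element
# accumulates in place (one accumulator per processing element) while
# operand wavefronts stream through, one per emulated cycle.

def systolic_matmul(A: np.ndarray, W: np.ndarray) -> np.ndarray:
    M, K = A.shape
    K2, N = W.shape
    assert K == K2
    acc = np.zeros((M, N), dtype=np.int32)      # one accumulator per PE
    for k in range(K):                          # one wavefront per cycle
        # every PE (i, j) performs one MAC with the operands passing it
        acc += np.outer(A[:, k].astype(np.int32), W[k, :].astype(np.int32))
    return acc

A = np.random.randint(-128, 128, size=(4, 8), dtype=np.int8)
W = np.random.randint(-128, 128, size=(8, 4), dtype=np.int8)
assert (systolic_matmul(A, W) == A.astype(np.int32) @ W.astype(np.int32)).all()
```

The point of the dataflow is that int8 operands are reused across an entire row or column of processing elements per cycle, which is what lets NPUs deliver many MACs per byte fetched from memory.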



Contents.

1. AI Accelerators (Training & Inference)

  • Overview, challenges, SOTA AI accelerators


2. On-device AI of Smartphone

  • Overview of On-device AI for Smartphone: Hardware, Software, Model Optimization and Benchmarking


3. AI Accelerators for Autonomous Vehicles

  • Overview, Survey and challenges, HW Accelerators and AI for AVs


Biographies.

Shiho Kim is a Professor in the School of Integrated Technology, Yonsei University, South Korea. He received his B.S. degree in Electronic Engineering from Yonsei University, South Korea, in 1986, and his M.S. and Ph.D. degrees in Electrical Engineering from KAIST, South Korea, in 1988 and 1995, respectively. His main research interests include the development of software and hardware technologies for intelligent vehicles and reinforcement learning for autonomous vehicles. He is a member of the editorial boards and a reviewer for various journals and international conferences, and has so far organized two international conferences as Technical Chair/General Chair. He is a member of IEIE (Institute of Electronics and Information Engineers of Korea) and KSAE (Korean Society of Automotive Engineers), vice president of KINGC (Korean Institute of Next Generation Computing), and a senior member of the IEEE. He has coauthored over 100 papers and holds more than 50 patents in the field of information and communication technology.


Dr. Ashutosh Mishra received his B.Tech. degree in Electronics and Communication Engineering from Uttar Pradesh Technical University, Lucknow, India, in 2008, his M.Tech. degree in Microelectronics & VLSI Design from the National Institute of Technology Allahabad, India, in 2011, and his Ph.D. degree in Electronics Engineering from the Indian Institute of Technology (BHU), Varanasi, India, in 2018. He has worked as an Assistant Professor in Electronics & Telecommunication Engineering at the National Institute of Technology Raipur, India. He is a recipient of the 2019 Korea Research Fellowship (KRF) provided by the National Research Foundation of Korea through the Ministry of Science & ICT, South Korea. Currently, he is working as research faculty and a postdoctoral researcher in the Seamless Transportation Lab, School of Integrated Technology, Yonsei University, South Korea. His research interests include smart sensors, intelligent systems, autonomous vehicles, and artificial intelligence.


Dr. Hyunbin Park received his B.S. degree from the School of Electrical and Electronics Engineering, Yonsei University, in 2013, and his M.S. and Ph.D. from the School of Integrated Technology, Yonsei University, South Korea, in 2019. He then worked as a postdoctoral researcher at the School of Integrated Technology, Yonsei University. Currently, he is a Staff Engineer at Samsung Electronics, South Korea. His expertise includes the design of inference accelerators for deep neural networks and in-camera deep learning processors. His research interests are in NPU designs for smartphones, autonomous vehicle processors, and hardware accelerators for artificial intelligence and deep learning.

8

Spiking Neural Networks - Algorithms, Circuits, and Systems Based on CMOS and Emerging Devices


May 23, 2021 15:00~18:00 (KST)


Bipin Rajendran, King’s College London, UK

Osvaldo Simeone, King’s College London, UK

Angeliki Pantazi, IBM Research Zurich, Switzerland

Yulia Sandamirskaya, Intel Labs, Intel


Details
Abstract.

The tutorial will provide an overview of the fundamentals and the state of the art in the area of brain-inspired Spiking Neural Networks (SNNs). The main topics to be covered include models, algorithms, and hardware implementations for SNNs, along with applications, use cases, and future challenges. The tutorial will first discuss the technological landscape, including the main players, applications, and use cases. Then, models for spiking neurons in SNNs will be presented, emphasizing the differences with respect to standard artificial neural networks and recurrent neural networks and differentiating between deterministic and probabilistic models. Learning algorithms are presented next, covering conversion-based, backpropagation-based, and probabilistic learning approaches, as well as bio-inspired local learning and adaptation. Hardware implementations of these models are then presented, covering conventional CMOS and post-CMOS technologies. Applications of these neuromorphic platforms will then be covered, before concluding with an outlook, including future challenges.
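As a concrete reference for the neuron models discussed here, below is a minimal discrete-time leaky integrate-and-fire (LIF) neuron in Python; the leak factor, threshold, and input statistics are illustrative assumptions.

```python
import numpy as np

# Minimal discrete-time leaky integrate-and-fire (LIF) neuron:
#   v[t+1] = beta * v[t] + I[t]; emit a spike and reset when v >= threshold.
# This deterministic model is the usual starting point before the
# probabilistic variants covered later in the tutorial.

BETA, V_TH = 0.9, 1.0          # leak factor and firing threshold
T = 100
rng = np.random.default_rng(42)
current = rng.uniform(0.0, 0.25, size=T)   # random input current

v = 0.0
spikes = np.zeros(T, dtype=int)
for t in range(T):
    v = BETA * v + current[t]   # leaky integration of the input
    if v >= V_TH:               # threshold crossing emits a spike
        spikes[t] = 1
        v = 0.0                 # hard reset after firing
print("spike train:", "".join(map(str, spikes)))
```

The all-or-nothing spike output and the state held between time steps are what distinguish SNNs from the stateless activations of standard artificial neural networks.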


Contents.
  • 1. Introductions, logistics
  • 2. Neural dynamics and learning in SNNs
  • 3. Learning in SNNs with probabilistic models
  • 4. Silicon and emerging memory based neuromorphic hardware
  • 5. Neuromorphic Systems and Applications
  • 6. Conclusions, future outlook

Biographies.

Bipin Rajendran is a Reader in Engineering at King's College London (KCL). He received a B.Tech. degree from I.I.T. Kharagpur in 2000, and M.S. and Ph.D. degrees in Electrical Engineering from Stanford University in 2003 and 2006, respectively. He was a Master Inventor and Research Staff Member at the IBM T. J. Watson Research Center in New York during 2006-2012, where he was a Lead Investigator of Phases 0 and 1 of the DARPA SyNAPSE program, and he has held faculty positions in India and the US. His research focuses on building algorithms, devices, and systems for brain-inspired computing. He has co-authored 89 papers in peer-reviewed journals and conferences, one monograph, one edited book, and 59 issued U.S. patents, and has delivered 7 tutorials and 42 invited talks at top-tier universities and conferences. He is a recipient of the IBM Faculty Award (2019), the IBM Research Division Award (2012), and the IBM Technical Accomplishment Award (2010), and was elected a senior member of the US National Academy of Inventors in 2019. His research has been supported by prestigious funding agencies, including the U.S. National Science Foundation, the Semiconductor Research Corporation, and the Indo-French Centre for the Promotion of Advanced Scientific Research, as well as by industry partners such as Intel, IBM, and Cisco.


Osvaldo Simeone is a Professor of Information Engineering with the Centre for Telecommunications Research at the Department of Engineering of King's College London, where he directs the King's Communications, Learning and Information Processing lab. He received an M.Sc. degree (with honors) and a Ph.D. degree in information engineering from Politecnico di Milano, Milan, Italy, in 2001 and 2005, respectively. From 2006 to 2017, he was a faculty member of the Electrical and Computer Engineering (ECE) Department at the New Jersey Institute of Technology (NJIT), where he was affiliated with the Center for Wireless Information Processing (CWiP). His research interests include information theory, machine learning, wireless communications, and neuromorphic computing. Dr. Simeone is a co-recipient of the 2019 IEEE Communication Society Best Tutorial Paper Award, the 2018 IEEE Signal Processing Best Paper Award, the 2017 JCN Best Paper Award, the 2015 IEEE Communication Society Best Tutorial Paper Award, and the Best Paper Awards of IEEE SPAWC 2007 and IEEE WRECOM 2007. He was awarded a Consolidator grant by the European Research Council (ERC) in 2016. His research has been supported by the U.S. NSF, the ERC, the Vienna Science and Technology Fund, and a number of industrial collaborations. He currently serves on the editorial board of the IEEE Signal Processing Magazine and is the vice-chair of the Signal Processing for Communications and Networking Technical Committee of the IEEE Signal Processing Society. He was a Distinguished Lecturer of the IEEE Information Theory Society in 2017 and 2018. Dr. Simeone is a co-author of two monographs, two edited books published by Cambridge University Press, and more than one hundred research journal papers. He is a Fellow of the IET and of the IEEE.


Angeliki Pantazi is a Principal Research Staff Member and a Research Manager at IBM Research – Zurich in Switzerland. She received her Diploma and Ph.D. degrees in Electrical Engineering and Computer Technology from the University of Patras, Greece. She has been a Research Staff Member at IBM Research – Zurich since 2006 and currently manages the Neuromorphic Computing and I/O Links group. She was named an IBM Master Inventor in 2014 and became a senior member of the IEEE in 2015 and a Fellow of IFAC in 2019. She was a co-recipient of the 2009 IEEE Control Systems Technology Award for contributions to nanopositioning for MEMS-based storage and other applications, the 2009 IEEE Transactions on Control Systems Technology Outstanding Paper Award, and the 2014 IFAC Industrial Achievement Award for the application of advanced control technologies in the nano-domain to magnetic tape data storage. In 2017, she received an IBM Corporate Award for archival storage for big data and analytics, and the IEEE Control Systems Society Transition to Practice Award for the development of advanced control technologies for magnetic tape data storage and nanopositioning applications. She has contributed to several control-related projects in data storage systems, in particular magnetic tape storage. Recently, her research has focused on neuromorphic computing combined with phase-change memory concepts. She has published more than 100 refereed articles and holds over 40 granted patents.


Yulia Sandamirskaya is a research scientist at Intel. She leads the Applications research team at the Neuromorphic Computing Lab of Intel Labs; her group resides at Intel Germany GmbH in Munich. Before joining Intel, she was a group leader in the Institute of Neuroinformatics at the University of Zurich and ETH Zurich. Her group “Neuromorphic Cognitive Robots” studied movement control, memory formation, and learning in embodied neuronal systems and implemented neuronal architectures in neuromorphic devices interfaced to robotic sensors and motors. She has a degree in Physics from the Belarusian State University in Minsk and a Dr. rer. nat. from the Institute for Neural Computation in Bochum, Germany. She was the chair of EUCOG, the European network for Artificial Cognitive Systems, and coordinator of the NEUROTECH project (neurotechai.eu), which organises a community around neuromorphic computing technology.

9

Embedded Deep Neural Nets for the IoT: Training and Optimization Methods


May 23, 2021 15:00~18:00 (KST)


Andrea Calimera, Politecnico di Torino

Enrico Macii, Politecnico di Torino

Daniele Jahier Pagliari, Politecnico di Torino

Valentino Peluso, Politecnico di Torino


Details
Abstract.

The promise of the Internet-of-Things (IoT) is to improve the quality of services using information inferred from data collected over a large pool of users. To make this feasible and sustainable at large scale, the sense-making process shall be distributed over the largest possible number of end-nodes. This encompasses the deployment of complex data analytics based on Deep Neural Networks (DNNs) on low-power devices with limited computational resources and small storage capacity.

The objective of this tutorial is to provide a comprehensive overview of the most effective training and optimization methods to shrink the complexity of DNNs and make inference engines fit on tiny cores while still preserving enough expressive power to discover useful information from raw data.

The tutorial is structured in four main parts. The first part provides an overview of the research topic, with emphasis on the IoT applications that may benefit most from an efficient deployment of embedded DNNs, the main figures related to their complexity, and the limiting factors that prevent massive use on resource-constrained devices. We report on industrial use-cases taken from our portfolio of industry-driven projects funded by the European Commission and private companies. The remaining parts focus on the technical aspects. Specifically, the second part covers static optimization methods used at design-time to reduce the complexity of DNNs. Traditional techniques, such as quantization and pruning, as well as more recent strategies based on net-/operator-topology restructuring and search, will be introduced. The third part presents dynamic optimization methods, a new branch of design strategies conceived to give DNNs the ability to adapt to the surrounding context at run-time. Finally, the fourth part will touch upon issues related to the deep-learning frameworks available on the market, showing a comparative analysis over several figures-of-merit, such as performance, quality-of-design, variability, and stability. This latter part integrates live demos through which attendees will learn how to flash DNNs onto off-the-shelf embedded cores, e.g. the mobile CPUs and MCUs of the Cortex family by ARM.
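As a preview of such a deployment flow, the sketch below shows one common way to produce a full-integer model suitable for a Cortex-M class MCU using the public TensorFlow Lite converter API. It is a minimal sketch only: the toy network, the random calibration data, and the file name model.tflite are illustrative assumptions, not material from the tutorial itself.

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy model standing in for a trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

def representative_data():
    # Calibration samples used to estimate activation ranges;
    # random tensors here purely for illustration.
    for _ in range(100):
        yield [np.random.rand(1, 32, 32, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # enable quantization
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8               # full-integer I/O
converter.inference_output_type = tf.int8

with open("model.tflite", "wb") as f:
    f.write(converter.convert())
```

On a microcontroller, the resulting flatbuffer would typically be compiled into the firmware and executed with an interpreter such as TensorFlow Lite for Microcontrollers.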



Contents.

1. Introduction

  • The rise of AIoT: research context and challenges. The challenge: big nets for tiny cores. Taxonomy: classification of the optimization techniques discussed in the tutorial

2. Static Optimization Methods

  • Quantization & Pruning: schemes and methods from the old ML school. Topology optimization: operator-level and architectural-level restructuring via automatic search. Joint optimizations (a minimal pruning/quantization sketch follows this list).

3. Dynamic Optimization Methods

  • Introduction to adaptive DNNs. Energy-Accuracy scalable DNNs. Complexity-driven optimization (an early-exit sketch follows this list).

4. Optimization Frameworks with Hands-on Session

  • Neural frameworks and libraries. From training to deployment: tiny applications onto low-power CPUs/MCUs for the IoT. Quality assessment and comparison of different optimization pipelines
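As referenced in item 2 above, the following is a minimal PyTorch sketch of the two classic static optimizations, magnitude pruning and post-training quantization. The two-layer toy model, the 50% sparsity target, and the use of dynamic int8 quantization are illustrative assumptions rather than the tutorial's actual pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical toy network standing in for a real embedded DNN.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))

# Magnitude (L1) pruning: zero out the 50% smallest weights in each layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the sparsity into the tensor

# Post-training dynamic quantization: weights of Linear layers become int8.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

out = quantized(torch.randn(1, 64))
print(out.shape)  # torch.Size([1, 10])
```

In practice both steps would be followed by fine-tuning or calibration on real data to recover the accuracy lost to compression.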
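For the adaptive DNNs of item 3, one well-known form is the early-exit network: an auxiliary classifier attached to an intermediate layer terminates inference when its confidence is high enough, trading accuracy for energy at run-time. The sketch below is a hypothetical minimal example; the layer sizes and the 0.9 confidence threshold are assumptions, not the tutorial's design.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Toy adaptive DNN with one early exit (batch size 1 for simplicity)."""
    def __init__(self, threshold: float = 0.9):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
        self.exit1 = nn.Linear(32, 10)   # cheap auxiliary classifier
        self.stage2 = nn.Sequential(nn.Linear(32, 32), nn.ReLU())
        self.exit2 = nn.Linear(32, 10)   # full-depth classifier
        self.threshold = threshold

    def forward(self, x):
        h = self.stage1(x)
        p1 = torch.softmax(self.exit1(h), dim=-1)
        if p1.max() >= self.threshold:   # confident enough: stop early
            return p1
        return torch.softmax(self.exit2(self.stage2(h)), dim=-1)

net = EarlyExitNet()
print(net(torch.randn(1, 64)).shape)  # torch.Size([1, 10])
```

The threshold becomes a run-time knob: raising it spends more compute for higher accuracy, lowering it does the opposite, which is the energy-accuracy scaling referred to above.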



Biographies.

Andrea Calimera is an Associate Professor of Computer Engineering at Politecnico di Torino, Torino, Italy. Prior to that, he was an Assistant Professor at the same institution. He received an MSc in Electronic Engineering and a PhD in Computer Engineering, both from Politecnico di Torino. His research interests cover the areas of Electronic Design Automation of digital circuits and embedded systems, with emphasis on optimization techniques for low-power and reliable circuits and systems, dynamic energy/quality management, logic synthesis for emerging devices, and design flows and methodologies for the efficient processing of machine learning and deep learning algorithms. He was a visiting professor in Singapore, first at the School of Electrical and Computer Engineering of the National University of Singapore and then at the School of Computer Science and Engineering of the Nanyang Technological University of Singapore, contributing to research projects in the field of design automation for ultra-low-power digital ICs and emerging technologies. Andrea Calimera is a member of the International Federation for Information Processing (IFIP) and has served on the technical program committees of many EDA and VLSI conferences, including the conference on Design, Automation and Test in Europe (DATE) and the International Conference on Computer-Aided Design (ICCAD). He is a member of the IEEE CAS Society, an Associate Editor of the IEEE Transactions on Circuits and Systems II, and an Associate Editor of the MDPI AI Journal.


Enrico Macii is a Full Professor of Computer Engineering at Politecnico di Torino, Torino, Italy. He holds a Laurea Degree in Electrical Engineering from Politecnico di Torino, a Laurea Degree in Computer Science from Università di Torino, and a PhD degree in Computer Engineering from Politecnico di Torino. He was the Vice Rector for Research at Politecnico di Torino; he was also the Rector’s Delegate for European Affairs, for Technology Transfer and for International Affairs. His research interests are in the design of electronic digital circuits and systems. In the last decade, he has extended his research activities to the broad area of data analysis for different applications in the industrial and biomedical domains. More recently, he has been increasingly involved in projects focusing on the development of new technologies, methodologies and policies for achieving energy efficiency in buildings, districts and cities, sustainable urban mobility, clean and intelligent manufacturing, and sustainable urban development. In these fields he was the coordinator of more than 30 research projects funded by the European Commission under the FP5, FP6, FP7 and H2020 framework programmes. Enrico Macii is a Fellow of the IEEE. He was a Member of the Board of Governors of the IEEE Circuits and Systems Society for two terms, a Member of the Board of Governors of the IEEE Council on EDA, the Vice President for Publications of the IEEE Circuits and Systems Society (for two consecutive terms), the Chair of the Distinguished Lecturer Program (DLP) of the IEEE Circuits and Systems Society, the Chair of its Fellow Committee, the Chair of its Long Term Strategy Committee, and the Chair of its Best Paper Award Selection Committee. On January 1, 2020, he started his term as Vice President for Strategy of the IEEE Council on EDA.


Daniele Jahier Pagliari is an Assistant Professor at the Department of Control and Computer Engineering of Politecnico di Torino. He received his Ph.D. in Computer and Control Engineering from Politecnico di Torino. In 2012, Daniele was an intern at Istituto Nazionale di Ricerca Metrologica (INRIM) in Turin (IT), where he worked on the FPGA acceleration of digital synthesis algorithms for a high-precision signal generator. In 2014, he was a visiting researcher at Columbia University in New York City (US), working on the design of hardware accelerators for medical imaging algorithms using High Level Synthesis. In 2016, he was a visiting researcher at the CEA Leti research center in Grenoble (FR), where he worked on the development of CAD and technological solutions for the design of scalable-precision digital arithmetic hardware for DSP and machine/deep learning applications. Daniele’s research interests are in the field of Computer-Aided Design of digital circuits and systems. Recently, his main research focus has been on techniques and tools for the deployment and optimization of deep learning algorithms on embedded devices. Daniele is a member of the IEEE.


Valentino Peluso is a Post-doc Research Fellow in the Computer and Control Engineering Department at Politecnico di Torino. He received the M.S. degree in Electronic Engineering with honors from Politecnico di Torino in 2015. During his studies, he joined the “Alta Scuola Politecnica”, a two-year program restricted to 150 master’s students from Politecnico di Torino and Politecnico di Milano. In February 2016, he joined the EDA group at Politecnico di Torino as a research assistant. In May 2016, he started the Ph.D. program in Computer Engineering in the same group, and in September 2020 he received the Ph.D. degree. His main research interests focus on AI-based applications for low-power systems, specifically on the development of automation tools for the training, optimization and compression of neural networks that can be deployed on embedded microcontrollers and mobile CPUs. Valentino is a member of the IEEE.
