Compression Distortion and Refocussing Analysis in 3D Imaging May 26, 2021 15:00-15:45 (KST)
Vladan Velisavljevic, University of Bedfordshire
Abstract.
Three-dimensional (3D) imaging has significantly contributed to various immersive multimedia technologies, such as free viewpoint television, virtual and augmented reality or 6-degree-of-freedom video. In these applications, virtual view synthesis plays a critical role in enabling a user to navigate the 3D immersive environment. Virtual view synthesis comprises the synthesis of virtual viewpoints from a small subset of captured (camera) viewpoints. However, despite the success in providing a seamless visual experience, real-time delivery of high perceptual quality in 3D imaging remains challenging because of the huge amount of data to be transmitted and processed and the tremendous computational load involved.
In this talk, we present an analysis of the virtual view quality constraints caused by signal compression distortion and refocussing to variable view depths. The first part of the talk addresses signal compression of the large volume of data required to deliver an acceptable virtual view quality. The captured viewpoint signal consists of texture and depth components used by the virtual view synthesis system to create a new viewpoint. Based on the rate-distortion characteristics of the compressed components, we model the virtual view distortion so that the distortion bounds can be estimated without a significant computational cost, in order to optimize the compression system configuration. The second part of the talk tackles the problem of scene refocussing in 3D images captured by plenoptic cameras. This special type of camera allows 3D image signals to be captured at multiple viewpoints in a single shot with a single device, making it more efficient than a calibrated set of standard cameras. The captured signal allows for virtual view synthesis within a range of viewpoint locations and for refocussing to various scene depths. We present an analysis of scene depth estimation based on the captured plenoptic signal, which is used to enable refocussing.
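To make the flavor of such rate-distortion-driven configuration concrete, here is a minimal Python sketch under purely illustrative assumptions: the synthesized-view distortion is approximated as a weighted sum of a texture-distortion term and a depth-induced rendering-error term, each following the classical D(R) = sigma^2 * 2^(-2R) curve, and the texture/depth bit split is grid-searched under a total rate budget. The model, weights and variances are hypothetical and are not the model presented in the talk.

```python
import numpy as np

def rd_distortion(rate_bpp, variance):
    """Classical high-rate R-D model: D(R) = sigma^2 * 2^(-2R)."""
    return variance * 2.0 ** (-2.0 * rate_bpp)

def virtual_view_distortion(r_tex, r_dep,
                            var_tex=100.0, var_dep=40.0,
                            w_tex=1.0, w_dep=0.5):
    """Hypothetical model: synthesized-view distortion as a weighted sum of
    texture distortion and a depth-induced rendering-error term."""
    return w_tex * rd_distortion(r_tex, var_tex) + w_dep * rd_distortion(r_dep, var_dep)

def best_split(total_rate_bpp, steps=200):
    """Grid-search the texture/depth rate split that minimizes the modeled
    virtual-view distortion under a total rate budget."""
    best = (0.0, np.inf)
    for r_tex in np.linspace(0.0, total_rate_bpp, steps):
        d = virtual_view_distortion(r_tex, total_rate_bpp - r_tex)
        if d < best[1]:
            best = (r_tex, d)
    return best

r_tex, d = best_split(total_rate_bpp=1.0)
print(f"texture rate {r_tex:.2f} bpp, depth rate {1.0 - r_tex:.2f} bpp, modeled distortion {d:.2f}")
```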
Biography.
Vladan Velisavljevic received his PhD from EPFL, Switzerland, in 2005. He was a Senior Research Scientist at Deutsche Telekom Labs (University of Technology Berlin), Germany, between 2006 and 2011, and has since been a Senior Lecturer and then Reader (Associate Professor) at the University of Bedfordshire, United Kingdom. Since 2014, Vladan has been the Head of the Research Center for Sensing, Signals and Wireless Technology (SSWT) at the University of Bedfordshire. He has authored and co-authored more than 60 journal and conference papers and book chapters, and he also holds 4 patents granted by the European Patent Office. Vladan has attracted research income of more than $1.1M throughout his career. He was General Co-chair of IEEE ICME 2020 and MMSP 2017. He serves as a member of the IEEE CAS Visual Signal Processing and Communications and Multimedia Systems and Applications Technical Committees, and he was also a co-chair of the IEEE ComSoc Multimedia Communications TC Interest Group on 3D Processing and Communications. Vladan was the Lead Guest Editor for the special issue “Visual Signal Processing for Wireless Networks” of the IEEE Journal of Selected Topics in Signal Processing, February 2015, and he serves as an associate editor for the IEEE Transactions on Circuits and Systems for Video Technology, Elsevier Signal Processing: Image Communication and the IET Journal of Engineering. He also regularly serves as an area chair, TPC member and session chair for a number of IEEE conferences. Vladan won the Outstanding Area Chair Award at IEEE ICME 2019 and the Best Paper Award given by the IEEE SPS Japan Chapter in 2016.
Memristive Devices for Computation-in-Memory Applications May 26, 2021 15:00-15:45 (KST)
Stephan Menzel, Forschungszentrum Juelich
Abstract.
Memristive devices change their resistance state upon appropriate voltage stimuli. The resistance change can have different physical origins, e.g. the configuration of ionic defects, phase changes or the orientation of the magnetization. These devices offer at least two stable (non-volatile) resistance states and could thus be used as memory devices. In recent times, however, further applications have moved into the focus of world-wide research activities. Different computation-in-memory (CIM) concepts have been demonstrated, such as stateful logic operations, vector-matrix multiplication or the solving of linear equation systems. These concepts rely on the non-volatile storage of multi-level/analog resistance states. In addition, other properties of memristive devices have been exploited. The intrinsic switching variability can be exploited for learning algorithms using stochastic synaptic updates, for creating true random number generators, or for security applications such as physical unclonable functions (PUFs).
Moreover, volatile switching properties have been used to build resonator circuits for reservoir computing applications. In this lecture, an overview of different CIM applications of memristive devices will be given. The focus will lie on explaining the dynamic properties of memristive devices which enable these applications.
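As a concrete illustration of the stateful-logic idea mentioned above, the following minimal Python sketch emulates the classic memristive IMPLY primitive (the result of p IMPLY q is written back into the state of the target device) and composes it into a NAND gate via NAND(p, q) = p IMPLY (q IMPLY 0). This is a behavioral toy model, not the device-level circuits discussed in the lecture.

```python
def imply(p, q):
    """Material implication p -> q; in memristive stateful logic the result
    overwrites the state of the target device q."""
    return (not p) or q

def stateful_nand(p, q):
    """NAND built from two IMPLY steps plus an initial FALSE (reset) of a
    working device s: s <- 0; s <- q IMPLY s; s <- p IMPLY s."""
    s = False          # FALSE operation: reset the working memristor to the low state
    s = imply(q, s)    # s becomes NOT q
    s = imply(p, s)    # s becomes NOT p OR NOT q = NAND(p, q)
    return s

for p in (False, True):
    for q in (False, True):
        print(p, q, "->", stateful_nand(p, q))
```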
Biography.
Stephan Menzel received his PhD degree (summa cum laude) from RWTH Aachen University in 2012. Since 2012, he has been with the Peter Grünberg Institut (PGI-7) at Forschungszentrum Juelich GmbH as a senior scientist. He is now the head of the simulation group at PGI-7, Forschungszentrum Juelich. His group has developed publicly available simulation tools for resistive switching devices (www.emrl.de/Jart.html). He is an associate editor of Scientific Reports (since 2015), an editor of MDPI Materials (since 2020), and an editor of the IEEE Journal of the Electron Devices Society (since 2020).
He is listed as a golden reviewer for IEEE EDL and IEEE T-ED. He has been an IEEE member since 2012, a member of the EDS and CAS societies, and a member of the CASS NC-TG. He was a co-organizer of IEEE NANO 2018 in Cork, publication chair of Memrisys 2019 and a member of the technical committee of Memrisys 2020. His research interests include the physics, characterization, modeling, and simulation of resistive switching (memristive) devices as well as computing-in-memory and neuromorphic computing circuits exploiting memristive devices. Stephan Menzel has co-authored more than 100 papers counting more than 2,500 citations and 8 book chapters, and has given 22 invited talks (e.g. at NVMTS, IEEE NANO, ISCAS).
Wearable and Portable Microwave Medical Sensing and Imaging Devices and Systems May 26, 2021 15:00-15:45 (KST)
Tughrul Arslan, Imran Saied, University of Edinburgh
Abstract.
In recent years, there have been considerable developments in smart wearable devices and unobtrusive monitoring systems that can be used to detect and monitor a patient's health. However, these technological advances have not been implemented in clinical practice, where most hospitals still rely on conventional imaging systems, such as MRI or CT scans, which are bulky and expensive. Microwave imaging (MWI) for medical applications is a novel method that has been the focus of extensive research over the past two decades through computer simulations and experimental work.
Since the 1970s, microwave technology for medical applications has been investigated to detect different types of diseases, including breast and lung cancers, traumatic brain injuries, bone fractures, stroke, and most recently, Alzheimer’s disease. Microwave medical imaging can be classified into two major groups: microwave tomography and radar-based techniques. Microwave tomography aims to reconstruct the actual dielectric profile of the human body, while radar-based imaging systems operate by transmitting short pulses to detect the main electromagnetic (EM) signal scatterers inside the body. Both techniques rely on the finding that malignant or abnormal tissues (e.g. cancerous tissue or a haemorrhagic stroke) have different dielectric properties from the surrounding healthy tissues. The scattered waves are collected and analysed to produce an image that determines the location of the cancer or the stroke inside the imaged body part.
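One common radar-based reconstruction, used here purely as an illustration and not necessarily the method covered in the lecture, is delay-and-sum (confocal) imaging: each candidate pixel is scored by summing the received signals at the round-trip delays expected for a scatterer at that location. The Python sketch below assumes a single point scatterer, idealized pulses, a constant propagation speed and illustrative geometry.

```python
import numpy as np

# Minimal 2D delay-and-sum sketch: one point scatterer, a ring of antennas,
# ideal Gaussian echoes, constant propagation speed (illustrative values only).
c = 1.5e8                                   # assumed in-body propagation speed (m/s)
fs = 20e9                                   # sampling rate (Hz)
t = np.arange(0, 6e-9, 1 / fs)              # time axis
antennas = [(0.1 * np.cos(a), 0.1 * np.sin(a))
            for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)]
target = (0.02, 0.01)                       # true scatterer location (m)

def pulse(t, t0, width=0.2e-9):
    return np.exp(-((t - t0) / width) ** 2)  # idealized reflected pulse

# Synthesize monostatic signals: each antenna sees an echo at its round-trip delay.
signals = []
for ax, ay in antennas:
    d = np.hypot(ax - target[0], ay - target[1])
    signals.append(pulse(t, 2 * d / c))

# Delay-and-sum: for each pixel, sum the signal samples at the expected delays.
xs = ys = np.linspace(-0.05, 0.05, 101)
image = np.zeros((len(ys), len(xs)))
for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        acc = 0.0
        for (ax, ay), s in zip(antennas, signals):
            delay = 2 * np.hypot(ax - x, ay - y) / c
            acc += np.interp(delay, t, s)
        image[i, j] = acc

iy, ix = np.unravel_index(np.argmax(image), image.shape)
print("estimated scatterer at", (round(xs[ix], 3), round(ys[iy], 3)), "true:", target)
```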
Rapid development in the field of compact, wearable, and portable electronics has driven innovation in the health sector, where significant focus has been given to implementing remote health monitoring systems. The monitoring devices must be designed to be comfortable, easy to use and low cost. One of the main advantages of wearable devices for medical applications is that they can be comfortably worn and used at home by patients with intermediate risk instead of them being hospitalised. To alert medical professionals in case of emergency, an alerting system that directly connects the patients to rapid response teams can be established over wireless communication channels.
This lecture will focus on the components that make up wearable and portable microwave sensing devices, specifically the antenna sensors, vector network analyser (VNA), and switching networks. In addition, current state-of-the-art trends in the development of more flexible, conformal components will be discussed. Finally, the last part of the lecture will provide a look into future advancements for intelligent and personalised wearable and portable microwave-based biomedical systems, including the use of machine learning, cloud technology, big data techniques, and edge AI to process data from the sensors and provide diagnostics, prediction, and other information for a range of personnel, including carers and medical professionals, with examples from a number of currently running projects targeting neurodegenerative conditions and elderly care.
Biography.
Professor Tughrul Arslan holds the Personal Chair of Integrated Electronic Systems in the School of Engineering, University of Edinburgh, Edinburgh, U.K. He is a member of the Integrated Micro and Nano Systems (IMNS) Institute and leads the Embedded Mobile and Wireless Sensor Systems (Ewireless) Group at the University (ewireless.eng.ed.ac.uk). His research interests include developing low-power radio frequency sensing systems for wearable and portable biomedical applications. He is the author of more than 500 refereed papers and the inventor of more than 20 patents. During the past 10 years his research has focused on radio-frequency-based sensing and on unobtrusive wearable, portable, and IoT-based sensing systems for a range of healthcare conditions. He is a member of a number of national healthcare projects, including the Advanced Care Research Centre (ACRC), where he leads the engineering activities for the New Technologies of Care theme, and the COG-MHEAR project.
Prof. Arslan is currently an Associate Editor for the IEEE Transactions on VLSI Systems and was previously an Associate Editor for the IEEE Transactions on Circuits and Systems I (2005–2006) and the IEEE Transactions on Circuits and Systems II (2008–2009). He is also a member of the IEEE CAS executive committee on VLSI Systems and Applications (1999 to date), and a member of the steering and technical committees of a number of international conferences. He is a co-founder of the NASA/ESA Conference on Adaptive Hardware and Systems (AHS) and currently serves as a member of its steering committee.
Dr. Imran M. Saied obtained the B.Sc. degree in electrical engineering from the Georgia Institute of Technology, Atlanta, GA, USA, in 2009, the M.Sc. degree in electrical engineering from California State University, Fullerton, CA, USA, in 2011, and the Ph.D. degree from the University of Edinburgh, Edinburgh, Scotland, in 2020, where his research focused on the use of RF and microwaves for monitoring and detecting neurodegenerative diseases.
He has extensive global research experience spanning the USA, India, and the U.A.E. Prior to beginning his Ph.D., he worked as a Research Assistant with the Petroleum Institute (now Khalifa University) in Abu Dhabi, U.A.E., from 2013 to 2017, where he developed several tomography and spectroscopy systems for real-time oil and gas pipeline monitoring. In particular, he focused on THz spectroscopy, ECT/ECAT tomography, and the development of sensors and imaging algorithms for these systems. The results of his work have led to several refereed journal and conference papers published in IEEE and SPE venues.
Neuromorphic Technologies for Biomedical Applications May 26, 2021 15:00-15:45 (KST)
Arindam Basu, Nanyang Technological University
Abstract.
Neuromorphic engineering, while originally focused on brain-inspired analog circuits, has now evolved to cover non-von Neumann computer architectures and spiking neural network (SNN) algorithms. The major advantage expected of neuromorphic systems is the low-energy implementation of machine learning and pattern recognition algorithms. The savings arise from event-driven operation, from using the physics of analog circuits for computing, or from a combination of the two. Such low-energy machine learners are essential for the increasingly popular edge-computing paradigm.
An intriguing application of such edge-computing systems is in wearable and implantable devices for biomedical systems. Such devices must operate on an extremely low energy budget because frequent battery replacement is impractical; hence, neuromorphic low-power circuits are highly relevant. Moreover, due to their continuous-time operation, SNNs may be better suited than their artificial neural network (ANN) counterparts to handle continuous-time biomedical signals such as ECG, EMG or EEG.
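As a minimal illustration of the event-driven operation mentioned above, the following Python sketch simulates a leaky integrate-and-fire (LIF) neuron: downstream computation is triggered only when spikes (events) occur, which is one source of the energy savings. All parameters and inputs are illustrative, not taken from the systems presented in the talk.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    """Minimal leaky integrate-and-fire neuron: the membrane voltage leaks toward
    rest and integrates the input; a spike (event) is emitted only on a threshold
    crossing, which is the basis of sparse, event-driven computation."""
    v = 0.0
    spikes = []
    for k, i_in in enumerate(input_current):
        v += dt / tau * (-v + i_in)       # leaky integration
        if v >= v_thresh:                 # threshold crossing -> emit an event
            spikes.append(k * dt)
            v = v_reset
    return spikes

# A slowly varying input produces only a few events, so downstream processing
# is rarely triggered.
t = np.arange(0, 0.5, 1e-3)
current = 1.2 + 0.3 * np.sin(2 * np.pi * 5 * t)   # illustrative input current
print("spike times (s):", [round(s, 3) for s in lif_neuron(current)])
```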
In this talk, I will first review the recent progress in the area of neuromorphic circuits and algorithms for deep neural networks. Then I will provide concrete examples of using such circuits and algorithms in two systems: (i) implantable brain-machine interfaces for intention decoding and (ii) wearable devices to decode gestures using EMG. I will conclude the talk with some directions of future research.
Biography.
Arindam Basu received the B.Tech. and the M.Tech. degrees in electronics and electrical communication engineering from IIT Kharagpur in 2005, and the MS degree in mathematics and the Ph.D. degree in electrical engineering from the Georgia Institute of Technology, Atlanta, in 2009 and 2010, respectively. He joined Nanyang Technological University, Singapore in 2010, where he currently holds a tenured Associate Professor position.
Dr. Basu received the Prime Minister of India Gold Medal in 2005 from IIT Kharagpur. He was a Distinguished Lecturer of the IEEE Circuits and Systems Society for the 2016–2017 term. He received the Best Student Paper Award at the Ultrasonics Symposium in 2006, the best live demonstration award at ISCAS 2010 and a finalist position in the best student paper contest at ISCAS 2008. He also received MIT Technology Review’s inaugural TR35@Singapore Award in 2012 for being among the top 12 innovators under the age of 35 in Southeast Asia, Australia, and New Zealand. He has served as a Guest Editor for several IEEE journals and is currently an Associate Editor of the IEEE Sensors Journal, the IEEE Transactions on Biomedical Circuits and Systems, and Frontiers in Neuroscience.
Dr. Basu has spent more than fifteen years in the area of neuromorphic engineering, with research spanning circuits, architectures and algorithms. His group is one of the pioneers in using neuromorphic circuits for brain-machine interfaces (BMI) and was the first to demonstrate closed-loop control using BMI in primates. He has authored many book chapters and review papers (IEEE JETCAS) on this topic, published in high-impact journals such as Nature Communications, and has delivered tutorials at IEEE BioCAS, IEEE IJCNN and IEEE ISCAS. He is a member of the IEEE CAS technical committees on Biomedical Circuits and Systems and Sensory Systems, and is Chair-Elect of the Neural Systems and Applications TC. He has been regularly involved in the TPCs of many conferences such as ISCAS, BioCAS and SOCC.
Low Power Challenges in IoT and IoE May 27, 2021 09:00-09:45 (KST)
Ricardo Reis, Federal University of Rio Grande do Sul (UFRGS)
Abstract.
The increasing number of devices connected to the internet has given rise to the concept of the Internet of Things (IoT), which, together with the Internet of Health, the Internet of People and the Internet of Something, is building the Internet of Everything (IoE). There is also an overlap between IoT and CPS (Cyber-Physical Systems), whose components are not only electronic but also mechanical, optical, organic, chemical, etc. A keyword in IoT is optimization, mainly power optimization. Power optimization must be done at all levels of design abstraction, and at the physical level it is related to the number of transistors. Moreover, many systems are critical ones, as in the Internet of Health, where reliability is a major issue. Most circuits designed nowadays use many more transistors than needed. The increasing leakage power and routing issues are an important reason to optimize the number of transistors, as leakage power is related to the number of transistors. Also, the replacement of a set of basic gates by a complex gate reduces the number of connections to be implemented using metal layers as well as the number of vias, as illustrated in the sketch below. The reduction of the number of connections to be implemented in the metal layers helps to improve routing and also helps to improve reliability. To achieve this goal, tools are needed that automatically generate the layout of any transistor network.
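As a small numerical illustration of the transistor- and connection-count argument (using textbook static-CMOS transistor counts, not the speaker's layout tools), the Python sketch below compares y = NOT((a AND b) OR c) implemented with discrete AND/NOR cells against a single complex AOI21 cell.

```python
# Textbook static-CMOS transistor counts (illustrative; real cell libraries differ).
TRANSISTORS = {"INV": 2, "NAND2": 4, "NOR2": 4, "AND2": 6, "OR2": 6, "AOI21": 6}

def cost(gates, inter_cell_nets):
    """Total transistor count plus the number of nets that must be routed
    between cells using the metal layers."""
    return sum(TRANSISTORS[g] for g in gates), inter_cell_nets

# Target function: y = NOT((a AND b) OR c)
discrete = cost(["AND2", "NOR2"], inter_cell_nets=1)   # AND output routed to the NOR input
complex_ = cost(["AOI21"], inter_cell_nets=0)          # single complex gate, no inter-cell net

print("discrete gates : %d transistors, %d inter-cell nets" % discrete)
print("complex AOI21  : %d transistors, %d inter-cell nets" % complex_)
```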
Biography.
Ricardo Reis received a Bachelor's degree in Electrical Engineering from the Federal University of Rio Grande do Sul (UFRGS), Porto Alegre, Brazil, in 1978, and a Ph.D. degree in Microelectronics from the National Polytechnic Institute of Grenoble (INPG), France, in 1983. He was awarded a Doctor Honoris Causa by the University of Montpellier in 2016. He is a full professor at the Informatics Institute of the Federal University of Rio Grande do Sul. His main research interests include physical design automation, design methodologies, fault-tolerant systems and microelectronics education.
He has more than 700 publications including books, journals and conference proceedings. He was vice-president of IFIP (International Federation for Information Processing), president of the Brazilian Computer Society (two terms) and vice-president of the Brazilian Microelectronics Society. He is an active member of CASS and he received the 2015 IEEE CASS Meritorious Service Award. He was vice-president of CASS for two terms (2008-2011). He is the founder of the Rio Grande do Sul CAS Chapter, which received the World CASS Chapter of the Year Award in 2011, 2012 and 2018, and the R9 Chapter of the Year Award in 2013, 2014, 2016, 2017 and 2020. He is a founder of several conferences such as SBCCI and LASCAS, the CASS flagship conference in Region 9. He was the General or Program Chair of several conferences including IEEE ISVLSI, SBCCI, IFIP VLSI-SoC, ICECS and PATMOS. Ricardo was the Chair of the IFIP/IEEE VLSI-SoC Steering Committee, vice-chair of IFIP WG 10.5, and he is Chair of IFIP TC10. He also started EMicro, an annual microelectronics school in southern Brazil. In 2002 he received the Researcher of the Year Award in the state of Rio Grande do Sul. He is a founding member of the SBC (Brazilian Computer Society) and of SBMicro (Brazilian Microelectronics Society). He was a member of the CASS DLP Program (2014/2015) and has given more than 70 invited talks at conferences. He is a member of the CASS BoG and the CEDA BoG.
Recent Progresses of Compute-in-Memory for Deep Learning Inference Engine May 27, 2021 09:00-09:45 (KST)
Shimeng Yu, Georgia Institute of Technology
Abstract.
Compute-in-memory (CIM) is a new computing paradigm that addresses the memory-wall problem in deep learning hardware accelerators. SRAM and resistive random access memory (RRAM) are identified as two promising embedded memories to store the weights of deep neural network (DNN) models. In this lecture, we will first review the recent progress of SRAM- and RRAM-based CIM macros that are integrated with peripheral analog-to-digital converters (ADCs). The bit-cell variants (e.g. 6T SRAM, 8T SRAM, 1T1R, 2T2R) and array architectures that allow a parallel weighted sum are discussed. State-of-the-art silicon prototypes are surveyed with normalized metrics such as energy efficiency (TOPS/W) and compute efficiency (TOPS/mm2). Second, we will discuss array-level characterizations of non-ideal device characteristics of RRAM, e.g. the variability and reliability of multilevel states, which may negatively affect the inference accuracy. Third, we will discuss the general challenges in CIM chip design with regard to imperfect device properties, ADC overhead, and chip-to-chip variations. Finally, we will discuss future research directions, including monolithic 3D integration of a memory tier on top of the peripheral logic tier to fully unleash the potential of CIM with RRAM technologies.
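The following Python sketch illustrates, with purely illustrative numbers, how a 1T1R column produces a weighted sum as an analog current and how multilevel conductance variability together with ADC quantization can perturb the digital result, which is the kind of non-ideality the lecture discusses. The conductance levels, variability model and ADC resolution are assumptions for the sake of the example.

```python
import numpy as np
rng = np.random.default_rng(0)

# Map 2-bit weights to illustrative multilevel conductances (microsiemens).
levels_us = np.array([0.0, 10.0, 20.0, 30.0])
weights = rng.integers(0, 4, size=64)               # one column of 64 cells
v_in = rng.integers(0, 2, size=64).astype(float)    # binary input vector (read voltages)

ideal_g = levels_us[weights]
# Device-to-device variability: lognormal spread around each programmed level.
real_g = ideal_g * rng.lognormal(mean=0.0, sigma=0.1, size=64)

# Column current (weighted sum) followed by a 5-bit ADC at the column output.
i_ideal = np.dot(v_in, ideal_g)
i_real = np.dot(v_in, real_g)
full_scale = v_in.size * levels_us[-1]
adc = lambda i, bits=5: np.round(i / full_scale * (2 ** bits - 1))

print("ideal ADC code:", int(adc(i_ideal)), " noisy ADC code:", int(adc(i_real)))
```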
Biography.
Shimeng Yu is currently an associate professor of electrical and computer engineering at Georgia Institute of Technology. He received the B.S. degree in microelectronics from Peking University in 2009, and the M.S. degree and Ph.D. degree in electrical engineering from Stanford University in 2011 and 2013, respectively. From 2013 to 2018, he was an assistant professor at Arizona State University.
Prof. Yu’s research interests are in semiconductor devices and integrated circuits for energy-efficient computing systems. His research expertise is in emerging non-volatile memories for applications such as deep learning accelerators, in-memory computing, 3D integration, and hardware security.
Among Prof. Yu’s honors, he was a recipient of the NSF Faculty Early CAREER Award in 2016, the IEEE Electron Devices Society (EDS) Early Career Award in 2017, the ACM Special Interest Group on Design Automation (SIGDA) Outstanding New Faculty Award in 2018, the Semiconductor Research Corporation (SRC) Young Faculty Award in 2019, and the ACM/IEEE Design Automation Conference (DAC) Under-40 Innovators Award in 2020, and he was named an IEEE Circuits and Systems Society (CASS) Distinguished Lecturer for 2021-2022.
Prof. Yu has served or is serving on the technical program committees of many premier conferences, including the IEEE International Electron Devices Meeting (IEDM), the IEEE Symposium on VLSI Technology, the IEEE International Reliability Physics Symposium (IRPS), the ACM/IEEE Design Automation Conference (DAC), ACM/IEEE Design, Automation & Test in Europe (DATE), and the ACM/IEEE International Conference on Computer-Aided Design (ICCAD). He is a senior member of the IEEE.
In Memory Computing with Emerging Memory Technologies May 27, 2021 09:00-09:45 (KST)
Qiangfei Xia, University of Massachusetts
Abstract.
It is becoming increasingly difficult to improve the speed and energy efficiency of traditional digital processors because of limitations in transistor scaling and the von Neumann architecture. To address this issue, computing systems augmented with emerging devices, in particular resistance switches, offer an attractive solution. Built into large-scale crossbar arrays, resistance switches perform in-memory computing by utilizing physical laws, such as Ohm’s law for multiplication and Kirchhoff’s current law for accumulation. The current readout at all columns is performed simultaneously regardless of the array size, offering massive parallelism and hence superior computing throughput. The ability to directly interface with analog signals from sensors, without analog/digital conversion, could further reduce the processing time and energy overhead.
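A minimal numerical sketch of this mapping, with illustrative conductance values rather than measured device data, is shown below: input voltages on the word lines multiply the programmed conductances (Ohm's law), the bit-line currents accumulate the products of all columns at once (Kirchhoff's current law), and a differential pair of columns represents signed weights.

```python
import numpy as np

# Illustrative crossbar vector-matrix multiplication: I = G^T V in one step.
rng = np.random.default_rng(1)
weights = rng.uniform(-1, 1, size=(4, 3))            # 4 inputs, 3 outputs (signed)

g_max = 50e-6                                         # assumed maximum conductance (S)
g_pos = np.clip(weights, 0, None) * g_max             # positive column of each pair
g_neg = np.clip(-weights, 0, None) * g_max            # negative column of each pair

v = np.array([0.2, 0.0, 0.1, 0.3])                    # input voltages on the word lines
i_pos = v @ g_pos                                     # all positive-column currents, in parallel
i_neg = v @ g_neg                                     # all negative-column currents, in parallel
y_analog = (i_pos - i_neg) / g_max                    # differential sensing recovers V * W

print("crossbar result :", np.round(y_analog, 4))
print("digital V @ W   :", np.round(v @ weights, 4))
```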
In this overview lecture, a number of emerging memory devices will first be introduced and compared, including phase-change memory, magnetoresistive random access memory, resistive random access memory, and ferroelectric memory. Challenges and solutions in the integration of these devices into large-scale arrays, such as scalability and manufacturability, will then be discussed. Finally, the operation of the arrays and the implementation of multilayer neural networks for machine intelligence applications will be demonstrated.
Biography.
Dr. Xia is a professor of Electrical & Computer Engineering at UMass Amherst and head of the Nanodevices and Integrated Systems Lab (http://nano.ecs.umass.edu). Before joining UMass, he spent three years at the Hewlett-Packard Laboratories. He received his Ph.D. in Electrical Engineering in 2007 from Princeton University. Dr. Xia's research interests include beyond-CMOS devices, integrated systems and enabling technologies, with applications in machine intelligence, reconfigurable RF systems and hardware security. He is a recipient of DARPA Young Faculty Award, NSF CAREER Award, and the Barbara H. and Joseph I. Goldstein Outstanding Junior Faculty Award.
Programmable Analog IC Design and Computation May 27, 2021 09:00-09:45 (KST)
Jennifer Hasler, Georgia Institute of Technology
Abstract.
This lecture reviews the growing area of analog computation and its opportunities for energy- and area-efficient computing. The lecture will discuss a short history of analog computation and its potential promises. The discussion will continue with the analog IC design issues around analog computation, particularly considering the importance of programmable analog techniques. These discussions will then turn to the tool aspects, abstraction, numerics, and architectures that arise for analog computing, as well as potential physical platforms for implementing these techniques (e.g. FPAAs).
Biography.
Jennifer Hasler is a full professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. Dr. Hasler received her M.S. and B.S.E. in Electrical Engineering from Arizona State University in 1991, her Ph.D. in Computation and Neural Systems from the California Institute of Technology in 1997, and her Master of Divinity from Emory University in 2020. Dr. Hasler received the NSF CAREER Award in 2001 and the ONR YIP Award in 2002. Dr. Hasler has been involved in multiple startup companies, including GTronix, founded in 2002 and acquired by Texas Instruments in 2010. Dr. Hasler received the Paul Raphorst Best Paper Award from the IEEE Electron Devices Society in 1997, a Best Paper Award at SCI 2001, the Best Paper Award at CICC 2005, the Best Sensor Track Paper Award at ISCAS 2005, a Best Paper Award at the Ultrasound Symposium in 2006, the Best Demonstration Paper Award at ISCAS 2010, and 2nd Place in the Student Paper Award at the IEEE Sensors Conference. Dr. Hasler has been an author on over 350 journal and refereed conference papers.
Lightweight Deep Learning on IoT Devices May 27, 2021 15:00-15:45 (KST)
Wen-Huang Cheng, National Yang Ming Chiao Tung University
Abstract.
Deep learning has dominated the artificial intelligence landscape, but it is not a good fit for the numerous applications with limited computational resources. For example, the Internet of Things (IoT) is among the few key enabling technologies driving digital transformation, and the number of IoT devices worldwide is estimated to roughly triple from 8.74 billion in 2020 to over 25 billion in 2030. IoT devices often run on microcontrollers with very limited resources (i.e. several orders of magnitude less memory and storage compared to smartphones), making it challenging to fulfil the intensive computation and storage demands of deep learning models. Therefore, this overview lecture will present approaches that are being explored to obtain lightweight deep learning models. In particular, popular model reduction techniques, such as pruning and group convolutions, have been shown not to guarantee computational efficiency in terms of the required memory accesses. This overview lecture will focus on establishing hardware-friendly and tiny-scale models of neural network computations.
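The point about model-reduction techniques can be made concrete with a simple cost count. The Python sketch below (illustrative layer sizes, a rough activation-traffic proxy) shows that grouped convolutions cut parameters and MACs by the group count while the activation data that must be moved stays the same, which is why fewer MACs alone do not guarantee efficiency in terms of memory access.

```python
def conv_cost(c_in, c_out, k, h, w, groups=1):
    """Parameter and multiply-accumulate (MAC) counts of a 2D convolution.
    Grouping divides the weights and MACs by `groups`, but the activation
    traffic (a rough proxy for memory accesses) is unchanged."""
    params = (c_in // groups) * k * k * c_out
    macs = params * h * w
    activ = (c_in + c_out) * h * w
    return params, macs, activ

for g in (1, 4, 64):   # standard, grouped, depthwise-style convolution
    p, m, a = conv_cost(c_in=64, c_out=64, k=3, h=56, w=56, groups=g)
    print(f"groups={g:2d}: params={p:6d}, MACs={m:9d}, activations moved={a}")
```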
Biography.
Wen-Huang Cheng is a Distinguished Professor with the Institute of Electronics, National Yang Ming Chiao Tung University (NYCU), Hsinchu, Taiwan. He is also a Jointly Appointed Professor with the Artificial Intelligence and Data Science Program, National Chung Hsing University (NCHU), Taichung, Taiwan. Before joining NYCU, he led the Multimedia Computing Research Group at the Research Center for Information Technology Innovation (CITI), Academia Sinica, Taipei, Taiwan, from 2010 to 2018. His current research interests include multimedia signal processing, deep learning, computer vision, and social media. He has actively participated in international events and played important leading roles in prestigious conferences, journals and professional organizations, such as General Co-chair of IEEE ICME (2022) and ACM ICMR (2021), Associate Editor of the IEEE Transactions on Multimedia and IEEE Multimedia Magazine, Secretary of the IEEE MSA Technical Committee, and Governing Board Member of IAPR. He has received numerous research and service awards, including the 2020 Outstanding Associate Editor Award of the IEEE Transactions on Multimedia, the 2018 MSRA Collaborative Research Award, the 2017 Ta-Yu Wu Memorial Award from Taiwan’s Ministry of Science and Technology (the highest national research honor for young Taiwanese researchers under age 42), the 2017 Significant Research Achievements of Academia Sinica, the Top 10% Paper Award at the 2015 IEEE MMSP, and the K. T. Li Young Researcher Award from the ACM Taipei/Taiwan Chapter in 2014. He is a Fellow of the Institution of Engineering and Technology (IET) and an ACM Distinguished Member.
Posit Arithmetic and Its Applications in Deep Learning Computation May 27, 2021 15:00-15:45 (KST)
Seokbum Ko, Hao Zhang, University of Saskatchewan, Ocean University of China
Abstract.
Posit is designed as an alternative to the IEEE 754 floating-point format for many applications. With the same bit-width, it can provide a much larger dynamic range than the IEEE floating-point format. Therefore, for applications where the dynamic range of numbers is important, posit can provide an efficient solution. In addition, posit has a non-uniform number distribution, which fits well with the data distribution of deep learning and signal processing applications. Therefore, the posit format can provide a more efficient way to represent data in such applications. Due to these advantages, the posit format is nowadays popular in many applications, one of which is deep learning. In recent years, more and more posit-based deep learning hardware accelerators have appeared in the literature. However, due to the dynamic component bit-widths of the posit format, current posit arithmetic units are more expensive than floating-point units. Potentially, there are many optimizations that can be investigated specifically for posit arithmetic to reduce its hardware cost. In addition, although posit is popular for deep learning, much work remains to make the posit environment complete. One example is that pure posit training of deep learning models is still absent from the literature.
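To make the format concrete, here is a small, textbook-style Python decoder for posit values (sign, run-length-encoded regime, exponent and fraction fields); it is a sketch for illustration, not a production posit library, and assumes the common decoding convention with useed = 2^(2^es). The sample bit patterns show how the same 8-bit width, with es = 1, spans values from 2^-12 up to 2^12 depending on the regime length.

```python
def decode_posit(bits, es=1):
    """Decode an n-bit posit given as a bit string, e.g. '01110000'."""
    n = len(bits)
    x = int(bits, 2)
    if x == 0:
        return 0.0
    if x == 1 << (n - 1):
        return float("nan")                      # NaR (not a real)
    sign = -1.0 if bits[0] == "1" else 1.0
    if sign < 0:                                 # decode the two's-complement magnitude
        x = (-x) & ((1 << n) - 1)
        bits = format(x, f"0{n}b")
    body = bits[1:]
    r0 = body[0]
    run = len(body) - len(body.lstrip(r0))       # regime run length
    k = run - 1 if r0 == "1" else -run
    rest = body[run + 1:]                        # skip the regime terminator bit
    exp_bits = rest[:es].ljust(es, "0")
    frac_bits = rest[es:]
    e = int(exp_bits, 2) if es else 0
    f = int(frac_bits, 2) / (1 << len(frac_bits)) if frac_bits else 0.0
    useed = 2 ** (2 ** es)
    return sign * (useed ** k) * (2 ** e) * (1.0 + f)

# Same bit-width, very different magnitudes depending on the regime length:
for b in ("01000000", "01110000", "00000001", "01111111"):
    print(b, "->", decode_posit(b, es=1))
```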
In this overview lecture, the basics of posit arithmetic and the current status of its applications in deep learning computation will be discussed. Then the trends and challenges of posit-based deep learning will be presented to motivate more related research.
We believe this will be a hot and timely topic of interest to ISCAS participants. On the one hand, designing power- and energy-efficient deep learning hardware processors is a hot topic in IEEE CAS; the use of posit will help in the exploration of more power-efficient solutions for deep learning processing. On the other hand, the optimization of posit arithmetic is of interest to many ISCAS participants, and this optimization also needs contributions from the ISCAS community.
Biography.
Hao Zhang received the BEng degree in Communication Engineering from Shandong University, China, in 2012, and the MSc and PhD degrees in Electrical Engineering from the University of Saskatchewan, Canada, in 2015 and 2019, respectively. From 2019 to 2020, he worked as a postdoctoral fellow in the Department of Electrical and Computer Engineering at the University of Saskatchewan, Saskatoon, Canada. He is currently an associate professor in the College of Information Science and Engineering at the Ocean University of China, Qingdao, China. His research interests include computer arithmetic, computer architecture, reconfigurable computing, AI hardware processors, and deep learning applications. He has published multiple research papers on posit arithmetic units and their applications in deep learning.
Dr. Seokbum Ko is currently a Professor in the Department of Electrical and Computer Engineering and the Division of Biomedical Engineering, University of Saskatchewan, Canada. He received his PhD degree from the University of Rhode Island, USA, in 2002.
His research interests include computer architecture/arithmetic, efficient hardware implementation of compute-intensive applications, deep learning processor architecture and biomedical engineering.
He is a senior member of the IEEE Circuits and Systems Society, a member of the IEEE VLSI Systems and Applications Technical Committee, and an associate editor of IEEE Transactions on Circuits and Systems I and IEEE Access.
Energy Efficiency in Video Communications May 27, 2021 15:00-15:45 (KST)
Christian Herglotz, Friedrich-Alexander University Erlangen-Nürnberg
Abstract.
During the past years, we have witnessed impressive growth in the demand for online video streaming applications. Examples are on-demand services, online media centers, teleconferencing services, and social networks. As a global consequence, studies reported that in 2018, 1% of world-wide greenhouse gas emissions were caused by online video applications, with yearly growth rates close to 10%. Hence, research targeting the energy efficiency and sustainable use of online video technology is of growing importance in the fight against climate change.
This lecture provides an overview of the current knowledge and the state of the art in research on the energy consumption of online video applications. First, the major sources of greenhouse gas emissions are discussed: the production of devices and the energy demand of data centers, of transmission, and of end-user devices. Afterwards, we take a detailed look at systems performing streaming tasks such as encoding, decoding, and rendering. In particular, the energy efficiency of end-user devices such as smartphones or tablet PCs will be discussed. Finally, an outlook is presented highlighting open problems and upcoming research directions.
Biography.
Christian Herglotz received his diploma in electrical engineering and information technology in 2011 and his diploma in business administration in 2012, both from RWTH Aachen, Germany. He received the Dr.-Ing. degree from Friedrich-Alexander University Erlangen-Nürnberg (FAU), Germany, in 2017. From August 2018 until March 2019, he worked as a Post-Doc in a cooperation between the École de technologie supérieure (ÉTS) and Summit Tech Multimedia, Montréal, Canada. Since April 2019, he has again been with Friedrich-Alexander University Erlangen-Nürnberg (FAU), Germany, as a senior scientist. In this position, he works on energy-efficient video communications as well as compression for machine learning, and has contributed to the international standardization activities of the JVET group of MPEG and ITU. Furthermore, he has authored and co-authored more than 25 papers in the fields of video coding, energy-efficient video communications, and video coding for machines. He is a member of the IEEE Visual Signal Processing and Communications Technical Committee (VSPC-TC).
Controlling the Chaos: Adaptive Symmetry Approach May 27, 2021 15:00-15:45 (KST)
Denis Butusov, Saint-Petersburg Electrotechnical University “LETI”
Abstract.
The lecture will present recent achievements in the numerical simulation, identification and applications of nonlinear circuits and systems. After briefly outlining the reasons for the noticeable “new wave” of interest in chaos theory and its applications, taking chaos-based multisensory devices as an example, we will discuss the latest developments in the field. First, the most relevant techniques for obtaining mathematical models of real-world nonlinear systems will be summarized. One of the recent improvements of the known phase-space reconstruction technique is the integration embedding procedure. Another development is a novel way to refine the obtained numerical model, called the hybrid synchronization approach: a method of fine-tuning the system parameters using chaotic synchronization between a continuous system (e.g. a nonlinear circuit) and its digital counterpart. Next, the recently discovered adaptive symmetry phenomenon will be reported, considering digital chaotic maps and numerical models of continuous nonlinear systems as examples. The lecture will guide the audience from the known phase-space-preserving symplectic methods to a new approach for controlling the phase space of both conservative and dissipative systems using the adaptive symmetry of the applied discrete operator. Next, we will show multiple applications of the proposed approach, which include but are not limited to chaos-based cryptography, rapid chaotic synchronization, and chaos control. The lecture ends with a brief overview of promising directions for further research.
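As background for the phase-space-preserving methods the lecture starts from (and not an implementation of the adaptive-symmetry approach itself), the Python sketch below contrasts explicit Euler with semi-implicit (symplectic) Euler on a harmonic oscillator: the symplectic variant keeps the energy bounded while the explicit one drifts. Step size and duration are illustrative.

```python
def explicit_euler(x, v, dt):
    """Non-symplectic step for the harmonic oscillator x'' = -x."""
    return x + dt * v, v - dt * x

def symplectic_euler(x, v, dt):
    """Semi-implicit (symplectic) Euler: update v first, then x with the new v."""
    v_new = v - dt * x
    return x + dt * v_new, v_new

def energy(x, v):
    return 0.5 * (x * x + v * v)

dt, steps = 0.1, 1000
xe, ve = 1.0, 0.0          # explicit Euler trajectory
xs, vs = 1.0, 0.0          # symplectic Euler trajectory
for _ in range(steps):
    xe, ve = explicit_euler(xe, ve, dt)
    xs, vs = symplectic_euler(xs, vs, dt)

print("initial energy   :", energy(1.0, 0.0))
print("explicit Euler   :", round(energy(xe, ve), 3))   # grows without bound
print("symplectic Euler :", round(energy(xs, vs), 3))   # stays bounded near 0.5
```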
Biography.
Dr. Denis Butusov is a Professor of Computer-Aided Engineering at the St. Petersburg Electrotechnical University, Russia, where he is also the Head of the Youth Research Institute. He has published extensively on nonlinear systems simulation, specifically numerical methods, geometrical integration, memristive circuits, discrete chaotic maps, and a wide variety of applications, including chaotic sensors, nonlinear system identification, power systems, and chaos-based cryptography. Dr. Butusov and his collaborators are the authors of the controllable symmetry concept in numerical simulations of nonlinear systems. He has been an IEEE Member (IEEE Council on Electronic Design Automation) since 2014.
Generalized and Algorithmic Logic Locking May 28, 2021 09:00-09:45 (KST)
Xinmiao Zhang, The Ohio State University
Abstract.
Due to the high cost of fabrication facilities, integrated circuits (ICs) are nowadays designed and produced in a multi-vendor environment. Netlists of the ICs may be obtained from untrusted foundries or through reverse engineering. Intellectual property (IP) piracy and counterfeiting cause severe economic loss to IC designers. Various logic-locking schemes have been developed to protect IPs and prevent counterfeiting. The basic idea is to insert key-controlled logic components so that the chip does not function correctly without the right key, which is handled and distributed by the IC designer. On the other hand, many techniques have also been proposed to attack logic-locking schemes, including the powerful satisfiability (SAT)-based attack and its variations, removal attacks, and bypass attacks. Existing logic-locking designs that are better at resisting one type of attack are often vulnerable to another.
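A toy example of the basic idea, not any specific published scheme, is sketched below in Python: two key-controlled XOR/XNOR gates are inserted into a three-input circuit, so the locked netlist reproduces the original function only for the correct key, while every wrong key corrupts some input patterns.

```python
from itertools import product

def original(a, b, c):
    """Unlocked reference circuit: y = (a AND b) XOR c."""
    return (a & b) ^ c

def locked(a, b, c, k0, k1):
    """Same function with two key-controlled gates inserted on internal nets;
    only the key (k0, k1) = (0, 1) restores the original behavior."""
    n1 = (a ^ k0) & b          # XOR key gate on input net a
    y = n1 ^ c ^ k1 ^ 1        # XNOR-style key gate on the output net
    return y

for key in product((0, 1), repeat=2):
    mismatches = sum(original(a, b, c) != locked(a, b, c, *key)
                     for a, b, c in product((0, 1), repeat=3))
    print(f"key={key}: {mismatches} of 8 input patterns corrupted")
```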
The first part of this talk discusses the generalization of the constraints needed to resist the SAT attack, as well as the insights it provides into existing designs and the achievable tradeoffs among the resiliencies to various attacks. The generalization unifies many previous logic-locking schemes, such as Anti-SAT, SARLock, and cascaded (CAS)-lock, under the same framework. Without sacrificing SAT-attack resiliency, logic-locking designs based on the generalized constraints can achieve higher corruptibility. They also allow a large variety of functions to be utilized and accordingly thwart attacks based on functional analyses.
Additionally, the constraints can be further relaxed in the stripped-functionality logic locking (SFLL) setting to resist removal attacks with lower complexity.
The second part of this talk focuses on the new direction of logic locking utilizing algorithmic variations. Unlike the logic-level methods used in all previous designs, algorithmic-level schemes are very difficult to attack without detailed knowledge of the algorithms implemented in the ICs. Such schemes are essential to securing the design, given the many available and ever-developing attacks on logic-level approaches. In particular, this talk will show that the large number of variations of finite field construction can be utilized to obfuscate a broad range of en/decryptors and error-correcting en/decoders used in numerous digital communication and storage systems, such as those for the Advanced Encryption Standard (AES), various lightweight cryptography, Reed-Solomon codes, and BCH codes. Additionally, existing logic-locking schemes cause only negligible loss in the performance of systems that are self-correcting or fault-tolerant, even if a wrong key is used. This talk will also discuss new algorithmic obfuscation schemes that significantly degrade the performance of self-correcting low-density parity-check decoders, which are used in many digital communication systems and in post-quantum cryptography.
Biography.
Xinmiao Zhang received her Ph.D. degree in Electrical Engineering from the University of Minnesota, Twin Cities. She joined The Ohio State University as an Associate Professor in 2017.
Prior to that, she was a Timothy E. and Allison L. Schroeder Assistant Professor from 2005 to 2010 and an Associate Professor from 2010 to 2013 at Case Western Reserve University. Between her academic positions, she was a Senior Technologist at Western Digital/SanDisk Corporation. She also held visiting positions at the University of Washington, Seattle, in 2011-2013 and at Qualcomm in 2008.
Prof. Zhang’s research spans the areas of VLSI architecture design, hardware security, cryptography, digital communications and storage, and signal processing. She authored the book “VLSI Architectures for Modern Error-Correcting Codes” (CRC 2016), co-edited “Wireless Security and Cryptography: Specifications and Implementations” (CRC 2007), and was a guest editor of the Springer Mobile Networks and Applications (MONET) Journal special issue on “Next Generation Hardware Architectures for Secure Mobile Computing” (2007). Her work on the Advanced Encryption Standard (AES) has received close to 1000 citations. Prof. Zhang received an NSF CAREER Award in 2009. She is also the recipient of the Best Paper Award at the 2004 ACM Great Lakes Symposium on VLSI and the 2016 International SanDisk Technology Conference.
Prof. Zhang was elected to serve on the Board of Governors of the IEEE Circuits and Systems Society for the 2019-2021 term and as a Vice-Chair of the Data Storage Technical Committee (DSTC) of the IEEE Communications Society for 2017-2020. She was the Chair of the Seasonal Schools Program of the IEEE Signal Processing Society 2013-2015. She has been a member of the CASCOM, VSA and DISPS Technical Committees of the IEEE Circuits and Systems Society and Signal Processing Society. Prof. Zhang has served on the organization and technical program committees of many conferences, including the IEEE International Symposium on Circuits and Systems (ISCAS), the IEEE Workshop on Signal Processing Systems (SiPS), the IEEE International Conference on Communications (ICC), and the IEEE Global Communications Conference (GLOBECOM). She was an associate editor for the IEEE Transactions on Circuits and Systems I from 2010 to 2019 and has been an associate editor for the IEEE Open Journal of Circuits and Systems since 2019. She was a recipient of the Best Associate Editor Award in 2013.
Low Power Convolutional Neural Network (CNN) Accelerator Design Techniques for both Inference and Training May 28, 2021 09:00-09:45 (KST)
Jongsun Park, Korea University
Abstract.
Deep neural networks (DNNs) have been showing very impressive performance in a variety of tasks, including image classification, object detection and speech recognition. As recent DNNs adopt deeper and larger network architectures for improved accuracy, larger computation resources and memory capacities are needed for both inference and training of such large DNNs. Although low-power deep learning accelerator design has been of interest in cloud computing, it is becoming even more critical as many applications try to run large DNNs on resource-constrained edge computing devices. This lecture covers various low-power design techniques for DNN, especially convolutional neural network (CNN), inference/training accelerator design. It first introduces the basic operations of CNN inference/training. Then, an error-resilient technique for CNN inference that enables aggressive voltage scaling is presented. Aggressive voltage scaling in a CNN accelerator is made possible by exploiting the asymmetric error resilience (sensitivity) of CNN layers, filters, and channels. The last part of this lecture introduces an input-dependent approximation of the weight gradient for improving the energy efficiency of CNN training. Considering that the output prediction (confidence) of the network changes with the training inputs, the relation between the confidence and the magnitude of the weight gradients is exploited to skip gradient computations without accuracy loss, especially for high-confidence inputs.
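A minimal sketch of the input-dependent gradient-skipping idea is given below, under illustrative assumptions (a toy softmax classifier, confidence measured as the maximum softmax probability, and a hypothetical threshold): the backward pass and weight update are simply skipped for inputs the model already classifies confidently and correctly.

```python
import numpy as np
rng = np.random.default_rng(0)

# Toy softmax classifier with a confidence-gated backward pass: the weight
# gradient is not computed for inputs that are already predicted confidently
# and correctly. Threshold, data and model are illustrative only.
n_class, n_feat, thresh, lr = 3, 8, 0.95, 0.1
means = rng.normal(0.0, 2.0, size=(n_class, n_feat))   # separable toy classes
W = np.zeros((n_feat, n_class))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

skipped = 0
for step in range(3000):
    y = rng.integers(0, n_class)
    x = means[y] + rng.normal(0.0, 0.5, size=n_feat)
    p = softmax(W.T @ x)                          # forward pass only
    if p.max() > thresh and p.argmax() == y:      # high-confidence, correct prediction
        skipped += 1                              # -> skip the weight-gradient computation
        continue
    g = p.copy()
    g[y] -= 1.0                                   # d(cross-entropy)/d(logits)
    W -= lr * np.outer(x, g)                      # backward pass + weight update
print(f"backward passes skipped: {skipped} / 3000")
```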
Biography.
Jongsun Park received the B.S. degree in electronics engineering from Korea University, Seoul, Korea, in 1998, and the M.S. and Ph.D. degrees in electrical and computer engineering from Purdue University, West Lafayette, IN, in 2000 and 2005, respectively. He joined the electrical engineering faculty of Korea University, Seoul, Korea, in 2008. He was with the digital radio processor system design group, Texas Instruments, Dallas, TX. From 2005 to 2008, he was with the signal processing technology group, Marvell Semiconductor, Santa Clara, CA, where he developed efficient hardware architectures for sophisticated digital signal processing and digital communication algorithms. His research interests focus on variation-tolerant, low-power, high-performance VLSI architectures and circuit designs for deep learning and digital signal/image processing.
He is a member of the Circuits and Systems for Communications Technical Committee of the IEEE Circuits and Systems Society. He served as an Associate Editor of the IEEE Transactions on Circuits and Systems II: Express Briefs and as a Guest Editor of the IEEE Transactions on Multi-Scale Computing Systems. He has also served on the technical program committees of various IEEE/ACM conferences, including DAC, ICCAD, ISLPED, ISCAS, ASP-DAC, HOST, and VLSI-SoC. He is an Associate Editor of the IEEE Open Journal of Circuits and Systems and the IEEE Circuits and Systems Magazine.
Learning Based Visual Data Compression – Technologies and Standards May 28, 2021 09:00-09:45 (KST)
Shan Liu, Tencent Media Lab
Abstract.
Machine learning has demonstrated its superior capability in solving computer vision and image processing problems over the last decade. Witnessing such success, researchers and engineers have been motivated to investigate learning-based technologies for visual data (e.g., image and video) compression, and encouraging progress has been demonstrated in recent years. These research works can be classified into two categories: end-to-end learning-based compression schemes, and learning-based coding tools that are embedded into conventional compression schemes, such as block-based hybrid coding schemes. In this lecture, recent technology advances in both of these categories will be introduced, together with updates from relevant standard development activities, such as JPEG AI, JVET NNVC and IEEE FVC. Challenges that current learning-based compression solutions are facing, as well as some related research directions and standards such as MPEG NNR and video coding for machines (VCM), will also be discussed.
Biography.
Shan Liu received the B.Eng. degree in Electronic Engineering from Tsinghua University and the M.S. and Ph.D. degrees in Electrical Engineering from the University of Southern California. She is currently a Tencent Distinguished Scientist and General Manager of Tencent Media Lab. She was formerly Director of the Media Technology Division at MediaTek USA. She was also formerly with MERL and Sony, among others. Dr. Liu has been actively contributing to international standards for over a decade and has chaired or co-chaired many ad hoc groups (AHGs), including the AHG on neural network-based video coding. She has had numerous technical proposals adopted into various standards, such as VVC, HEVC, OMAF, DASH, MMT and PCC, and served as co-editor of H.265/HEVC SCC and H.266/VVC. She has also directly contributed to and led the development of technologies and products that have served a few hundred million users. Dr. Liu holds more than 200 granted US and global patents and has published more than 100 journal and conference papers. She was a member of the Industrial Relationship Committee of the IEEE Signal Processing Society (2014-2015). She served as the VP of Industrial Relations and Development of the Asia-Pacific Signal and Information Processing Association (2016-2017) and was named an APSIPA Industrial Distinguished Leader in 2018. She is on the Editorial Board of the IEEE Transactions on Circuits and Systems for Video Technology (2018-present) and received its Best AE Award in 2019 and 2020. She has also served as a guest editor for several T-CSVT special issues and special sections, including the T-CSVT special section on Learning-based Image and Video Compression. She has been a Vice Chair of the IEEE Data Compression Standards Committee since 2019. Her research interests include audio-visual, high-volume, immersive and emerging media compression, intelligence, transport and systems.
Interacting Dynamics Revaluation for High Renewable Penetrated Power System May 28, 2021 09:00-09:45 (KST)
Zhen Li, Beijing Institute of Technology
Abstract.
Renewable energy sources (RES), particularly wind power and solar photovoltaics (PV), have experienced high growth worldwide over the last decades and now account for a third of global power capacity. With the continuous reduction in the cost of RES and strong environmental protection pressure, many countries and regions are planning to integrate a high penetration of wind power and solar PV into the traditional power grid. In this effort, large-scale RES are technically pooled together across geographically dispersed wide-area grids and their power is then transmitted to load centers located far away onshore. Such a transition from the traditional power system to a highly renewable-penetrated grid challenges the state of the art in circuits and systems research.
The dynamic characteristics of the traditional power grid are governed by large synchronous generators with substantial inertia. However, the growing integration of RES alters the system characteristics through the dominant adoption of electronic power converters, which respond fast but contribute little inertia. Power converters are commonly equipped with control systems operating over a wide timescale for regulating the current and exchanging power with the power grid. The wide-timescale control dynamics of converters can result in cross-couplings with both the electromechanical dynamics of electrical generators and the electromagnetic transients of power networks, leading to oscillatory stability issues across a wide frequency range. A number of incidents have been reported due to the grid integration of renewable energy, e.g., a 250 Hz oscillation at BARD Offshore 1, Germany, in 2013, and sub-/super-synchronous oscillations over a wide timescale in Xinjiang wind farms, China, in 2015, where the undesired harmonics, inter-harmonics, or resonances caused disruption to power generation and transmission. All of this urges us to re-examine the dynamic behavior of highly renewable-penetrated power systems that has been overlooked from the conventional viewpoint.
Biography.
Dr. Zhen Li is currently an Associate Professor at the Beijing Institute of Technology (BIT), China, and Chair-Elect of the IEEE CASS PECAS TC.
Dr. Zhen Li received the B.Eng. and M.Eng. degrees in measurement science and control technology from the Harbin Institute of Technology, Harbin, China, in 2006 and 2008, respectively, and the Ph.D. degree from The Hong Kong Polytechnic University, Hong Kong, in 2012. In 2012, he joined the School of Automation, Beijing Institute of Technology, as a Lecturer, where he is currently an Associate Professor. He was a Research Assistant/Associate with the Applied Nonlinear Circuits and Systems Research Group, Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, from 2011 to 2012 and from 2013 to 2014.
He has served as an Associate Editor of the IEEE Transactions on Circuits and Systems II: Express Briefs, an Associate Editor of IEEE Access, a Guest Editor of the IEEE Transactions on Intelligent Transportation Systems (2019), and a Guest Associate Editor of the International Journal of Bifurcation and Chaos. He is an active committee member of the Electrical Automation Professional Committee of the Chinese Association of Automation and of the China Information Association for Traditional Chinese Medicine and Pharmacy Council. His main research interests include the stability analysis and control of renewable energy systems and state estimation for power systems.