Akinori Konno
Graduate School of Science and Technology,
Shizuoka University, 3-5-1 Johoku, Hamamatsu, Japan

Title: Development of Pre-dyed Dye-sensitized and Perovskite Solar Cells


Developing highly efficient, low-cost solar cells as an alternative to conventional silicon solar cells has attracted much interest. In this regard, the dye-sensitized solar cell (DSC) has been intensively studied as a promising system owing to its reasonably high energy conversion efficiency and much lower fabrication cost. We have reported a pre-dye method for DSCs in which a ZnO compact layer is fabricated at low temperature on transparent conducting oxide glass and plastic polymer substrates. Because of their low heat resistance, plastic substrates cannot undergo high-temperature calcination; to improve the resulting low porosity and poor interparticle connectivity, a hot-press method was applied to the plastic-substrate photoelectrodes. The results demonstrate that the compact layer effectively suppresses the short circuit between the transparent conducting oxide and the electrolyte in dye-sensitized solar cells, leading to an increase in open-circuit photovoltage and power conversion efficiency. To further increase the conversion efficiency, one important point is to enhance photon absorption over a wide wavelength region; our research showed that mixing two different kinds of pigments broadens the absorption band.1 In addition to DSCs, perovskite solar cells (PSCs) have been developed intensively because of the rapid increase in their efficiency.2 We have also developed PSCs using CuI as a hole transport material. In early studies, applying CuI to PSCs gave only poor efficiencies (1-2 %). Recently, remarkable improvement has been achieved by introducing carbon nanotubes into the CuI layer; the contact between the perovskite and CuI appears to play an essential role in fabricating efficient PSCs.



Pui-In (Elvis) Mak
University of Macau, China

Title: Towards Energy-Autonomous Bluetooth-Low-Energy Radios for IoT Applications


This talk overviews the motivation, challenges and solutions of realizing an energy-harvesting Bluetooth Low-Energy (BLE) receiver and transmitter in 28-nm CMOS; both are key enablers of energy-autonomous IoT devices. They feature a fully integrated micropower manager that customizes the internal supply and bias voltages for both active and sleep modes. Circuit techniques enabling sub-0.3-V operation are introduced for the low-noise amplifier (LNA), voltage-controlled oscillator (VCO), phase-locked loop (PLL) and power amplifier (PA).



Massimo Alioto
National University of Singapore

Title: Intelligent Systems with Ultra-Wide Power-Performance Adaptation – Going Well beyond the Diminishing Returns of Voltage Scaling


Wide power-performance adaptation has become crucial in always-on, nearly real-time and energy-autonomous SoCs that are subject to wide variability in power availability and performance targets, as required by applications such as AI, vision and audio cognition. Wide adaptation is indeed a prerequisite to assure sustained operation in spite of widely fluctuating energy/power sources, and to grant swift response upon the occurrence of events of interest (e.g., on-chip data analytics), while maintaining extremely low consumption in the common case. These requirements have led to a strong demand for SoCs with extremely wide performance-power scalability and adaptation, vastly exceeding the capabilities of conventional wide voltage scaling. In this talk, recent directions to drastically extend the performance-power scalability of digital, analog and power management circuits and architectures are presented. Silicon demonstrations of better-than-voltage-scaling adaptation to the workload are illustrated for the data path, the clock path and the memory sub-system. Several silicon demonstrations are presented for accelerators, processors and SRAMs with peak performance above what is traditionally allowed at nominal voltage, yet at reduced minimum energy. Energy-quality scaling is explored as an additional dimension to break the conventional performance-energy tradeoff in error-resilient applications such as AI and vision, from networks on chip to memories and accelerators. Further performance and energy improvements are discussed through uncommonly flexible in-memory broad-purpose computing frameworks for true data locality, from buffering to signal conditioning and neural network workloads. Finally, challenges and opportunities for the decade ahead are discussed to enable a new generation of always-on intelligent systems with both high peak performance and low minimum power.



Shimeng Yu
Associate Professor,
School of Electrical and Computer Engineering,
Georgia Institute of Technology

Title: RRAM for Compute-in-Memory: From Inference to Training


To efficiently deploy machine learning applications to the edge, compute-in-memory (CIM) based hardware accelerators are a promising solution offering improved throughput and energy efficiency. Instant-on inference is further enabled by emerging non-volatile memory technologies such as resistive random access memory (RRAM). This presentation reviews recent progress in RRAM-based CIM accelerator design. First, multilevel-state RRAM characteristics are measured from a test vehicle to examine the key device properties for inference. Second, a benchmark is performed to study the scalability of the RRAM CIM inference engine and the feasibility of monolithic 3D integration that stacks RRAM arrays on top of an advanced logic process node. Third, grand challenges associated with in-situ training are presented. To support accurate and fast in-situ training and enable subsequent inference on an integrated platform, a hybrid-precision synapse that combines RRAM with volatile memory (e.g. a capacitor) is designed and evaluated at the system level. Prospects and future research needs are discussed.



Chip Hong Chang
Associate Professor,
Nanyang Technological University (NTU), Singapore

Title: Security of Edge AI – A new challenge to deep learning accelerators


The flourishing of the Internet of Things (IoT) has rekindled on-premise computing, allowing data to be analyzed closer to its source. To support edge Artificial Intelligence (AI), hardware accelerators, open-source AI model compilers and commercially available toolkits have evolved to facilitate the development and deployment of applications with AI at their core. This “model once, run optimized anywhere” paradigm shift in deep learning computation introduces new attack surfaces and threat models that are methodologically different from existing software-based attack and defense mechanisms. Existing adversarial examples modify the input samples presented to an AI application, either digitally or physically, to cause a misclassification. Nevertheless, such input-based perturbations are neither robust nor stealthy on multi-view targets. To generate a good adversarial example that misclassifies a real-world target under varying viewing angle, lighting and distance, a considerable number of pristine samples of the target object are required, and the feasible perturbations are substantial and visually perceptible. A new glitch injection attack on an edge DNN accelerator, capable of misclassifying a target under varying viewpoints, will be presented. The attack pattern for each target of interest consists of sparse instantaneous glitches, which can be derived from just one sample of the target. I will also address the limitations of existing detectors of input-based adversarial examples, which are mostly designed based on sophisticated offline analyses. A new hardware-oriented and lightweight countermeasure will be introduced for in-situ detection of adversarial inputs fed through a spatial DNN accelerator architecture or a third-party DNN Intellectual Property (IP) implemented on the edge.



Harikrishnan Ramiah
Associate Professor,
University of Malaya, Malaysia
Title: Radio Frequency Energy Harvesting for Healthcare Wearables


The use of wearable devices on humans as well as domestic and wild animals for continuous health monitoring will facilitate the involvement of patients in the prevention and management of chronic diseases, even in pandemic and post-pandemic situations. Wearable devices therefore have huge market potential in the coming years, but their usage has been limited because frequently powering up such devices remains a challenge. Batteries are the most widespread solution, but they add size and weight that are unattractive to humans, and periodic recharging is not possible for wild animals. The widespread adoption of these devices thus depends very much on their ability to operate for long periods without the need to frequently change, recharge, or even use batteries. In this context, energy harvesting (EH) is a disruptive technology that can pave the road towards the massive utilization of wireless wearable sensors for patient self-monitoring and daily healthcare. Radio-frequency (RF) transmissions from commercial telecommunication networks represent a reliable ambient energy source, as they are ubiquitous in urban and suburban areas. The state of the art in RF EH for wearable devices, specifically targeting GSM 900/1800 cellular networks, 700 MHz digital terrestrial television networks, and Wi-Fi coverage as ambient RF energy sources, is showcased. Furthermore, limitations, challenges, and design considerations of fully integrated RF energy harvesters are presented, which will be useful to other researchers working in the same area. The author will present recent advances towards the development of an efficient RF energy harvester and ideas for the future development of RF EH.



Mr. Masami Ikura
Former Toyota Tsusho Nexty Electronics, Thailand

Research Fellow
Faculty of Electronics Engineering, Prince of Songkla University, Thailand

Title: New Issues and Proposals for New Technology Development, as Seen from Application Examples of AI + CONNECTED Technology for Automotive


Innovative new technologies such as AR, 5G, and artificial intelligence are being applied to the consumer, industrial, automotive, and medical markets without restriction. Looking at the automotive industry, these technologies have just started to be applied to private cars one after another, and they are also being applied to commercial vehicles through on-time logistics, multimodal transport, real-time asset monitoring, and so on, expanding more and more. In the automotive market, which supports people’s lives with the keywords of safety and security, “AI + CONNECTED” has begun to be put into practical use. The optimal route-search algorithms using artificial intelligence, the technologies for improving processing performance, the remote monitoring and remote updating in cooperation with communication networks, and the test methods that previously required a huge amount of time are now increasingly being adopted in other fields as well. Conversely, it is expected that development efficiency and quality assurance will be improved by incorporating semiconductor technologies from advanced information-processing systems, the security field, and the consumer/industrial sectors into in-vehicle systems. In this lecture, based on the above situation, we will introduce examples of applying automotive communication and artificial intelligence technologies to industry, and, from the standpoint of an ASIC/FPGA semiconductor designer, describe the limits of technological development that may arise in such cases as well as the future prospects for new technology development in terms of both hardware and software.



Mr. Suresh Kumar
Design Engineering Vice President, Chipset Silicon Group, Intel

Title: Technology as an Enabler For Sustainable Growth – The Opportunity Ahead


As communities globally emerge from the experience of the pandemic, expectations and aspirations have been fundamentally reset. In many ways, technology in its various forms has been at the core of sustaining life for the last 18 months. The adoption of digitalization has accelerated and transformed every aspect of life, perhaps by a decade. This has created opportunities for us, electrical and electronics engineers, to contribute to reshaping society and to help build a new norm that is fairer, more inclusive and sustainable for the long term. Technology, when developed responsibly, will allow us to touch and transform the lives of every person on the planet in ways never before imagined. This keynote discusses the facets of life that have changed through the pandemic as well as the huge opportunities that lie ahead for the IEEE community.


Dr. Bo Chen

Title: The State-of-the-Art and Next Generation Simulation Technologies for Analog, Mixed-Signal and Memory Design


With the most advanced modern semiconductor process technologies, circuit design complexity and the device count on a single chip have increased tremendously, far beyond the reach of traditional SPICE technology. Over the years, Cadence has developed state-of-the-art technologies ranging from distributed simulation, including multi-threading and cloud infrastructure, to parasitic handling optimized for large post-layout designs, partition-based FastSPICE technology for memory simulation, and an IEEE 1801-based mixed-signal simulation methodology for low-power design. In this session, an overview of how current Cadence simulation technologies address these design challenges will be introduced, and best practices for verifying mixed-signal designs will be briefed.