
Tutorials

 
Tutorial A 1300-1500

AI-Focused Tools and Hardware Supporting the Future of Autonomy presented by Intel (Ballroom)
Presented By:
Greg Nash, Intel PSG

Greg Nash (Intel PSG MAG BU) is a System Architect for HPC, AI, and Government Analytics in the Military, Aerospace, and Government business group at Intel Programmable Solutions Group. He is responsible for assembling solutions in these areas from IP, tools, and devices. Prior to this role, he was a tools specialist for signal processing applications, developing proofs of concept and writing papers on FFTs and radar processing, and he designed radio heads at Motorola. He holds an MSEE from the University of Michigan, Ann Arbor.

ABSTRACT: Develop applications and solutions that emulate human vision with the Intel® Distribution of OpenVINO™ toolkit. Based on convolutional neural networks (CNN), the toolkit extends workloads across Intel® hardware (including accelerators) and maximizes performance.

  • Enables CNN-based deep learning inference at the edge
  • Supports heterogeneous execution across computer vision accelerators—CPU, GPU, Intel® Movidius™ Neural Compute Stick, and FPGA—using a common API
  • Speeds up time to market via a library of functions and pre-optimized kernels
  • Includes optimized calls for OpenCV and OpenVX

Intel CPUs offer the most universal option for computer vision tasks. With multiple product lines to choose from, you can find a range of price and performance options to meet your application and budget needs.
Many Intel processors contain integrated graphics, including Intel HD Graphics and Intel® UHD Graphics. The GPUs have a range of general-use and fixed-function capabilities (including Intel® Quick Sync Video) that can be used to accelerate media, inference, and general computer vision operations.
Intel FPGAs allow users to gain cost savings and revenue growth from integrated circuits that retrieve and classify data in real time. Use these accelerators for AI inferencing as a low-latency solution for safer, more interactive experiences in autonomous vehicles, robotics, IoT, and data centers.
The Intel® Movidius™ Vision Processing Unit (VPU) enables visual intelligence at high compute per watt. It supports camera processing, computer vision, and deep learning inference.
We will walk through the flow from model import, through model optimization and compression, to the inference engine, demonstrating the multiple hardware targets that can be used.
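
As a flavor of that flow, here is a minimal sketch using the toolkit's Python inference API (openvino.inference_engine); exact module and method names vary between OpenVINO releases, and the model file names below are placeholders:

    import numpy as np
    from openvino.inference_engine import IECore

    # Offline step: the Model Optimizer converts a trained model into the
    # toolkit's Intermediate Representation (IR), optionally compressing
    # weights to FP16, e.g.:
    #   python mo.py --input_model model.onnx --data_type FP16

    ie = IECore()
    net = ie.read_network(model="model.xml", weights="model.bin")
    input_name = next(iter(net.input_info))

    # The same IR and API run on different accelerators; only the device
    # name changes ("MYRIAD" targets the Movidius Neural Compute Stick,
    # and "HETERO:FPGA,CPU" splits layers between an FPGA and the CPU).
    for device in ("CPU", "GPU", "MYRIAD"):
        exec_net = ie.load_network(network=net, device_name=device)
        shape = net.input_info[input_name].input_data.shape
        frame = np.zeros(shape, dtype=np.float32)   # stand-in for a real image
        result = exec_net.infer(inputs={input_name: frame})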

INTENDED AUDIENCE: Engineers, researchers, and scientists interested in AI problems including sensing for autonomous platforms.



Tutorial B 1515-1715

Xilinx AI Edge Tutorial and Versal Portfolio presented by Xilinx (Ballroom)
Presented By:
Terry O’Neal, Xilinx Machine Learning Specialist FAE
Jason Vidmar, Xilinx System Architect, Military & Satellite Communications

Terry O’Neal is a Machine Learning Specialist FAE at Xilinx, based in Dallas, TX. For the past 12 years, he has been helping Xilinx customers across multiple market segments design and deploy embedded systems on various Xilinx platforms, including Zynq-7000 and Zynq UltraScale+ MPSoC. Prior to Xilinx, Terry designed and implemented FPGAs, SoCs, and embedded software for the wired communications market while working at Cisco Systems and Alcatel.
Jason Vidmar is a System Architect at Xilinx focusing on MILCOM, SATCOM, and Machine Learning for the Aerospace & Defense market. His professional background includes tactical and commercial communications, as well as military and commercial aerospace applications, having designed FPGA and SoC-based systems for Motorola, General Electric, Northrop Grumman, and others. Mr. Vidmar holds a Master's in Electrical and Computer Engineering from the Illinois Institute of Technology.

ABSTRACT:

  • Xilinx DNNDK and DPU IP Overview Presentation
    • The Xilinx Deep Neural Network Development Kit (DNNDK) is an integrated framework that aims to simplify and accelerate the development and deployment of deep learning applications on the Deep Learning Processor Unit (DPU); a deployment sketch follows this outline.
    • The Xilinx® Deep Learning Processor Unit (DPU) is a programmable engine dedicated to convolutional neural networks. The unit contains a register configuration module, a data controller module, and a convolution computing module.
    • The DPU IP can be integrated as a block in the programmable logic (PL) of the selected Zynq®-7000 SoC and Zynq UltraScale™+ MPSoC devices with direct connections to the processing system (PS).
  • Xilinx DPU Integration Tutorial
    • This tutorial demonstrates how to build a custom system that uses version 1.3.0 of the Xilinx® Deep Learning Processor Unit (DPU) IP to accelerate machine learning algorithms, using the following development flow:
      • Build the hardware platform in the Vivado® Design Suite
      • Generate the Linux platform in PetaLinux
      • Use Xilinx SDK to build two machine learning applications that take advantage of the DPU
  • Versal ACAP Overview Presentation
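
As a flavor of the deployment side of this flow, below is a minimal sketch assuming the Python bindings shipped with later DNNDK releases (from dnndk import n2cube); the C API in n2cube.h is analogous. The kernel name and tensor node names are placeholders for the names the DNNDK compiler (dnnc) reports when a model is compiled:

    import numpy as np
    from dnndk import n2cube

    n2cube.dpuOpen()                            # attach to the DPU driver
    kernel = n2cube.dpuLoadKernel("resnet50")   # compiled DPU kernel (placeholder name)
    task = n2cube.dpuCreateTask(kernel, 0)

    # Feed a preprocessed image into the kernel's input node and run
    # inference on the DPU; dnnc reports the real node names and sizes.
    image = np.zeros(224 * 224 * 3, dtype=np.float32)
    n2cube.dpuSetInputTensorInHWCFP32(task, "input_node", image, image.size)
    n2cube.dpuRunTask(task)
    scores = n2cube.dpuGetOutputTensorInHWCFP32(task, "output_node", 1000)

    n2cube.dpuDestroyTask(task)
    n2cube.dpuDestroyKernel(kernel)
    n2cube.dpuClose()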

Versal is the first product family from Xilinx to implement the new Adaptive Compute Acceleration Platform (ACAP) architecture, built on TSMC’s 7-nanometer FinFET process. Versal ACAPs combine the latest Xilinx programmable logic and flexible memory hierarchy with Scalar Processing Engines and AI Engines (an array of mixed-precision vector processors for real-time DSP and machine learning workloads), delivering powerful heterogeneous acceleration for a variety of Aerospace & Defense applications. In this overview, we will introduce the fundamental architecture and highlight how Versal can address multi-function and multi-sensor mission requirements from Edge to Data Center.

INTENDED AUDIENCE: Hardware developers, software developers, system architects, and data scientists who are interested in implementing Machine Learning on Xilinx FPGAs & SoCs.



Tutorial C 1300-1600 (SLIDES AVAILABLE HERE)

Hidden outlier noise and its mitigation (Kittyhawk)
Presented By:
Alexei V. Nikitin, Nonlinear LLC, Wamego, Kansas 66547, US

Alexei V. Nikitin is a co-founder and Chief Science Officer of the Kansas-based Nonlinear LLC. He began his undergraduate and graduate studies in physics, chemistry, and engineering in the former USSR, at Novosibirsk State University in Novosibirsk and the Karpov Institute of Physical Chemistry in Moscow. After receiving a PhD in physics from the University of Kansas in 1998, Dr. Nikitin led R&D work on methods and tools in nonlinear signal processing at several startup companies (some of which were subsequently acquired), with applications in communications, power electronics, navigation, geophysical sciences, neurology, and biometrics. He currently holds 27 U.S. patents.

Ruslan L. Davidchack is Professor of Mathematical Modelling in the Department of Mathematics at the University of Leicester, United Kingdom. His research interests are in developing computational methods and tools for applications in molecular simulations, nonlinear dynamics, and signal processing.

ABSTRACT:

In addition to ever-present thermal noise, various communication and sensor systems can be affected by interfering signals that originate from a multitude of other natural and technogenic (man-made) phenomena. Such interfering signals often have intrinsic “outlier” temporal and/or amplitude structures, which are different from the Gaussian structure of the thermal noise. The presence of different types of such outlier noise is widely acknowledged in multiple applications, under various general and application-specific names, most commonly as impulsive, transient, burst, or crackling noise. For example, outlier electromagnetic interference (EMI) is inherent in digital electronics and communication systems, and is transmitted into a system in various ways, including electrostatic coupling, electromagnetic induction, and RF radiation. However, while the detrimental effects of EMI are broadly acknowledged in the industry, its outlier nature often remains indistinct, and its omnipresence and impact, and thus the potential for its mitigation, remain greatly underappreciated.

There are two fundamental reasons why the outlier nature of many technogenic interference sources is often dismissed as irrelevant. The first is a simple lack of motivation: without nonlinear filtering techniques, the resulting signal quality is largely invariant to the particular time-amplitude makeup of the interfering signal and depends mainly on the total power and spectral composition of the interference in the passband of interest. Thus, unless the interference results in obvious, clearly identifiable outliers in the signal’s band, the “hidden” outlier noise does not attract attention. The second reason is the highly elusive nature of outlier noise and the inadequacy of the tools used for its consistent observation and/or quantification. More importantly, the amplitude distribution of a non-Gaussian signal is generally modifiable by linear filtering, and such filtering can often convert the signal from sub-Gaussian into super-Gaussian, and vice versa. Thus apparent outliers in a signal can disappear and reappear due to various filtering effects as the signal propagates through media and/or the signal processing chain.
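
The last point is easy to demonstrate numerically. The short NumPy/SciPy sketch below (all parameters are arbitrary choices for the illustration) generates impulsive, strongly super-Gaussian noise and passes it through a narrow linear low-pass filter; the excess kurtosis collapses toward the Gaussian value of 0 and the visible outliers disappear:

    import numpy as np
    from scipy.signal import butter, lfilter
    from scipy.stats import kurtosis

    rng = np.random.default_rng(0)
    n = 200_000

    # Bernoulli-Gaussian noise: rare (1%), strong impulses between silences.
    impulsive = 10.0 * rng.standard_normal(n) * (rng.random(n) < 0.01)

    b, a = butter(4, 0.02)            # narrow linear low-pass filter
    filtered = lfilter(b, a, impulsive)

    print(kurtosis(impulsive))        # large excess kurtosis (~300): outlier-rich
    print(kurtosis(filtered))         # orders of magnitude smaller: near-Gaussian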

This tutorial provides a concise overview of the methodology and tools, including their analog and digital implementations, for real-time mitigation of outlier interference in general and “hidden” wideband outlier noise in particular. Such mitigation is performed as a “first line of defense” against interference ahead of, or in the process of, reducing the bandwidth to that of the signal of interest. Either used by itself or in combination with subsequent interference mitigation techniques, this approach provides interference mitigation levels otherwise unattainable, with effects that, depending on the particular interference scenario, range from “no harm” to spectacular. While the main focus of this filtering technique is mitigation of wideband outlier noise affecting a band-limited signal of interest, it can also be used, given some a priori knowledge of the signal of interest’s structure, to reduce outlier interference that is confined to the signal’s band.
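
The presenters’ analog and digital filter structures are the subject of the tutorial itself; as a generic illustration of the underlying principle (remove outliers with a nonlinear operation before reducing the bandwidth, rather than after), here is a simple Hampel-style sliding-window limiter in Python, with arbitrary window length and threshold:

    import numpy as np
    from scipy.ndimage import median_filter

    def hampel_limit(x, window=21, nsigma=4.0):
        """Replace samples that deviate from the running median by more than
        nsigma robust standard deviations with the median itself."""
        med = median_filter(x, size=window)
        mad = median_filter(np.abs(x - med), size=window)   # running MAD
        sigma = 1.4826 * mad          # MAD-to-sigma factor for Gaussian data
        outliers = np.abs(x - med) > nsigma * sigma
        out = x.copy()
        out[outliers] = med[outliers]
        return out

Applied ahead of the band-limiting filter, such a nonlinearity removes most of the outlier energy that a purely linear chain would otherwise spread into the band of the signal of interest.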

INTENDED AUDIENCE: This tutorial is intended for researchers and practitioners concerned with intentional and/or unintentional interference in communications, radar, and sensor systems.



Tutorial D 1300-1715
AI, Visual Perception and Deep Learning With Examples (Auditorium)
Presented By:
Robert Williams, Discovery Lab Global
Vijayan Asari, University of Dayton
Zahangir Alom, University of Dayton

TUTORIAL DESCRIPTION: This tutorial is designed to provide attendees a general overview of key AI fundamentals. It will be presented in three parts:

1. Fundamentals of Artificial Intelligence (AI). AI has become widely adopted in a large variety of areas and is creating exciting new applications.
2. Computational intelligence associated with autonomous systems. Various concepts will be presented at the tutorial, like data preprocessing, feature extraction, neural network-based classification, and high-level decision making. Participants will see demonstrations of autonomous systems for a variety of applications and will be able to utilize the concepts learned for application in other areas.
3. Deep Learning fundamentals. Deep learning is a set of neural-network-based machine learning algorithms and has become one of the most popular AI approaches; a minimal worked example follows this list.
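
As a minimal worked example of part 3, the NumPy sketch below trains a tiny one-hidden-layer network by gradient descent on a toy task (classifying points as inside or outside a circle); all sizes, rates, and the task itself are arbitrary choices for the illustration:

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy data: label points in the square by whether they fall inside a circle.
    X = rng.uniform(-1.0, 1.0, size=(1000, 2))
    y = (np.sum(X**2, axis=1) < 0.5).astype(float).reshape(-1, 1)

    # One hidden layer of tanh units, a sigmoid output, squared-error loss.
    W1, b1 = 0.5 * rng.standard_normal((2, 16)), np.zeros(16)
    W2, b2 = 0.5 * rng.standard_normal((16, 1)), np.zeros(1)
    lr = 1.0

    for step in range(3000):
        h = np.tanh(X @ W1 + b1)                      # forward pass
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
        d_p = (p - y) * p * (1.0 - p) / len(X)        # backpropagation
        d_h = (d_p @ W2.T) * (1.0 - h**2)
        W2 -= lr * (h.T @ d_p); b2 -= lr * d_p.sum(axis=0)
        W1 -= lr * (X.T @ d_h); b1 -= lr * d_h.sum(axis=0)

    print("training accuracy:", ((p > 0.5) == (y > 0.5)).mean())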

TUTORIAL OUTLINE:
1. History of Artificial Intelligence (AI) with Examples
2. Basic Taxonomy and Math of AI and Machine Learning
3. Introduction to Autonomous Systems and Methodologies in Visual Perception
4. Deep Learning Fundamentals – Terminology, Tools, and Techniques
5. Example DLG AI Project – Video Game That Teaches Itself to Play and Win

OBJECTIVES: Attendees will walk away from the tutorial with a better understanding of key AI technologies, tools, and techniques. An invitation will be extended to interested attendees to pursue a hands-on AI project to continue their learning beyond the tutorial.

REQUIREMENTS AND TARGET AUDIENCE: This tutorial is for uninitiated individuals interested in learning more about the fundamentals of the broad field of Artificial Intelligence, with a specific focus on Deep Learning. No prior knowledge or experience in Artificial Intelligence is assumed. Some programming background in Python, Java, or similar languages will help attendees better visualize how they might pursue AI studies on their own following the tutorial.


Tutorials are included with conference registration and will be held on the first day of the program, July 15, 2019.