MICRO'19 Tutorial - ONNC Compiler Porting and Optimization for NVDLA-Based Neural Network Inference Engines

Dr. Wei-Fen Lin, Dr. Cheng-Tao Hsieh

Date: Saturday, October 12 Morning

Location: Columbus, Ohio, USA

Speaker BIO

Dr. Wei-Fen Lin is the VP of Engineering at Skymizer Taiwan Inc., where she leads the R&D teams and oversees the development of Skymizer products. Prior to joining Skymizer, she was a computer architect at high-tech companies in Silicon Valley and Taiwan. Her research interests include high-performance computing, software/hardware co-design, and performance modeling and optimization. In her spare time, she founded Mijotech Inc. and Play Lab to promote STEAM education.

Lab Speaker

Dr. Cheng-Tao Hsieh, Software Engineering Manager

Abstract

The NVIDIA Deep Learning Accelerator (NVDLA) provides free intellectual property licensing to anyone wanting to build a chip that uses deep neural networks for inference applications. With its extensive documentation and tools, many business proposals and research projects choose NVDLA as their inference engine design. However, the lack of an extensible compiler has become the major bottleneck to supporting more AI models and optimizations. This tutorial presents the first open-source compiler that supports NVDLA-based designs. The ONNC compiler supports a wider range of models than the official NVDLA compiler and relieves programmers from manually specifying the low-level details of models that the official compiler does not support. It also opens up opportunities for hardware customization and proprietary optimization. We will cover the overview, porting, and optimization in three subsections. In each subsection, hands-on labs demonstrate how to run and customize the NVDLA backend in ONNC for product development and research projects.

Intended Audience

Researchers and practitioners in academia or industry looking for an open-source AI compiler for NVDLA-based neural network inference engines.

Download

Prerequisite

Since this tutorial involves hands-on exercises and labs, please pre-install Docker, follow the instructions in Lab 1. ONNC Working Environment Setup to set up your working environment, and bring your laptop to the session.
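As a rough sketch of the environment setup, the commands below pull and start the ONNC community Docker image. The image name `onnc/onnc-community` is taken from the ONNC GitHub README; treat these commands as illustrative and follow Lab 1 for the authoritative steps.

```shell
# Pull the prebuilt ONNC community image (name per the ONNC GitHub README;
# verify against Lab 1 before the session).
docker pull onnc/onnc-community

# Start an interactive container; mount your working directory so lab
# files persist outside the container (path is an example).
docker run -ti --rm -v "$(pwd)":/onnc/work onnc/onnc-community /bin/bash
```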

Tutorial Outline

  • ONNC Overview (55 mins)

    • presentation
      • Lab 1. ONNC Working Environment Setup
    • ONNC live demo
      • Lab 2. Handwriting Recognition with ONNC & Arm Cortex-M Development Board
  • When ONNC meets NVDLA (55 mins)

    • NVDLA Overview

    • NVDLA backend in ONNC

      • Porting ONNC to NVDLA-based design
        • Hardware Primitives and Hybrid Layer
        • Transforming a Graph into a Loadable
      • Lab 3. How to add a new backend
    • Operator Support

      • Operator support list
      • Lab 4. How to add a new operator
    • Fall-back to CPU

      • NVDLA Runtime
      • Lab 5. How to fall-back to CPU for execution
  • Compiler Optimizations (55 mins)

    • Pass and Pass Manager
      • Lab 6. How to add a pass and manipulate the compute graph
    • Model-Layer optimization
      • Lab 7. ONNC IR Extension
    • NVDLA-Dependent optimization
      • Lab 8. Hardware-Specific Optimization