MICRO 2019 Tutorial - ONNC Compiler Porting and Optimization for NVDLA-Based Neural Network Inference Engines

Dr. Wei-Fen Lin, Dr. Cheng-Tao Hsieh

Speaker Bio

Dr. Wei-Fen Lin is the VP of Engineering at Skymizer Taiwan Inc., where she leads the R&D teams and oversees the development of Skymizer products. Prior to joining Skymizer, she was a computer architect at high-tech companies in Silicon Valley and Taiwan. Her research interests include high-performance computing, software/hardware co-design, and performance modeling and optimization. In her spare time, she founded Mijotech Inc. and Play Lab to promote STEAM education.

Lab Speaker

Dr. Cheng-Tao Hsieh, Software Engineering Manager

Abstract

The NVIDIA Deep Learning Accelerator (NVDLA) provides free intellectual property licensing to anyone wanting to build a chip that uses deep neural networks for inference applications. With extensive documentation and tools, many business proposals and research projects choose NVDLA as their inference engine design. However, the lack of extensible compiler support has become the major bottleneck for supporting more AI models and optimizations. This tutorial presents the first open-source compiler that supports NVDLA-based designs. The ONNC compiler offers broader support than the official NVDLA compiler and relieves programmers from manually specifying the low-level details of models that the official compiler does not handle. It also opens up opportunities for hardware customization and proprietary optimization. We will cover an overview, porting, and optimization in three subsections. In each subsection, hands-on labs demonstrate how to run and customize the NVDLA backend in ONNC for product development and research projects.

Intended Audience

Researchers and practitioners in academia or industry looking for an open-source AI compiler for NVDLA-based neural network inference engines.

Preliminary Outline

    ONNC Overview (55 mins)

    • presentation (30 mins)
      • Lab 1. ONNC working environment setup
    • ONNC live demo (25 mins)
      • Lab 2. Handwriting recognition with ONNC & an Arm Cortex-M development board

    When ONNC meets NVDLA (55 mins)

    • NVDLA Overview

    • NVDLA backend in ONNC

      • Porting ONNC to NVDLA-Based Designs
        • Hardware Primitives and Hybrid Layer
        • Transforming a Graph into a Loadable
      • Lab 3. How to add a new backend (see the skeleton sketch after this outline block)
    • Operator Support

      • Operator support list
      • Lab 4. How to add a new operator
    • Fallback to CPU

      • NVDLA Runtime
      • Lab 5. How to fall back to the CPU for execution
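
To give a flavor of Lab 3 ahead of the tutorial: adding a backend to ONNC essentially means subclassing its target-backend base class and scheduling compiler passes for four stages, tensor selection (mapping operators onto hardware primitives), operator scheduling, memory allocation, and code emission, where the NVDLA backend's code-emission stage writes out the Loadable that the NVDLA runtime executes. The skeleton below is a simplified sketch: the header paths, class names, and method signatures are illustrative and depend on the ONNC release used in the labs, and MyNvDlaBackend is a placeholder name.

    // A simplified backend skeleton, as covered in Lab 3. Header paths, class
    // names, and signatures are illustrative and depend on the ONNC release;
    // "MyNvDlaBackend" is a placeholder name.
    #include <onnc/Core/PassManager.h>      // assumed header locations
    #include <onnc/Support/Path.h>
    #include <onnc/Target/TargetBackend.h>

    namespace onnc {

    class MyNvDlaBackend : public TargetBackend
    {
    public:
      explicit MyNvDlaBackend(const TargetOptions& pOptions)
        : TargetBackend(pOptions) {}

      // Lower the ONNX compute graph onto the target's hardware primitives.
      void addTensorSel(PassManager& pPM) override;

      // Decide the execution order of the selected operators.
      void addTensorSched(PassManager& pPM) override;

      // Assign buffers in the accelerator's memory space.
      void addMemAlloc(PassManager& pPM) override;

      // Emit the final artifact; for NVDLA this is the Loadable file that
      // the NVDLA runtime loads and executes.
      void addCodeEmit(PassManager& pPM, const Path& pOutput) override;
    };

    } // namespace onnc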

    Compiler Optimizations (55 mins)

    • Pass and Pass Manager
      • Lab 6. How to add a pass and manipulate the compute graph
    • Model-Layer optimization
      • Lab 7. CONV+RELU Layer Fusion
    • NVDLA-Dependent optimization
      • Lab 8. Mul + Add -> Add + Mul Reordering & Fusion
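
To give a flavor of Labs 6 through 8 ahead of the tutorial: an ONNC optimization is packaged as a pass that walks the compute graph and rewrites it. The toy program below is not ONNC code; every name in it is invented for illustration. It sketches the two rewrites on a simplified operator chain: fusing a Conv with the Relu that follows it (Lab 7), and reordering y = x*s + b (Mul then Add) into y = (x + b/s)*s (Add then Mul, valid when s != 0) so that the pair can later be fused into a single post-processing operation on NVDLA's SDP unit (Lab 8).

    // A self-contained toy program (not ONNC's API; all names here are
    // invented for illustration) showing the two rewrites behind Labs 7 and 8
    // on a simplified linear chain of operators.
    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    struct Node {
      std::string op;            // "Conv", "Relu", "Mul", "Add", "ConvRelu", ...
      std::vector<double> attrs; // toy payload, e.g. the scalar for Mul/Add
    };

    using Graph = std::vector<Node>; // a linear chain of operators, for simplicity

    // Lab 7 idea: fuse a Conv immediately followed by a Relu into one ConvRelu
    // node, so the activation runs in the same hardware pass as the convolution.
    void fuseConvRelu(Graph& g) {
      Graph out;
      for (std::size_t i = 0; i < g.size(); ++i) {
        if (i + 1 < g.size() && g[i].op == "Conv" && g[i + 1].op == "Relu") {
          out.push_back({"ConvRelu", g[i].attrs});
          ++i; // skip the Relu we just absorbed
        } else {
          out.push_back(g[i]);
        }
      }
      g = std::move(out);
    }

    // Lab 8 idea: rewrite y = x*s + b (Mul then Add) as y = (x + b/s)*s
    // (Add then Mul), valid when s != 0, so the pair can later be fused into
    // a single post-processing operation on the accelerator.
    void reorderMulAdd(Graph& g) {
      Graph out;
      for (std::size_t i = 0; i < g.size(); ++i) {
        if (i + 1 < g.size() && g[i].op == "Mul" && g[i + 1].op == "Add" &&
            !g[i].attrs.empty() && !g[i + 1].attrs.empty() &&
            g[i].attrs[0] != 0.0) {
          const double s = g[i].attrs[0];
          const double b = g[i + 1].attrs[0];
          out.push_back({"Add", {b / s}});
          out.push_back({"Mul", {s}});
          ++i; // the original Add is consumed by the rewrite
        } else {
          out.push_back(g[i]);
        }
      }
      g = std::move(out);
    }

    int main() {
      Graph g = {{"Conv", {}}, {"Relu", {}}, {"Mul", {2.0}}, {"Add", {3.0}}};
      fuseConvRelu(g);
      reorderMulAdd(g);
      for (const Node& n : g) std::cout << n.op << ' ';
      std::cout << '\n'; // prints: ConvRelu Add Mul
      return 0;
    }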

More Information

We welcome you to join us in October. We will post more details as the tutorial approaches. Stay tuned.
