Course Outline

Performance Concepts and Metrics

  • Analyzing latency, throughput, power consumption, and resource utilization
  • Distinguishing between system-level and model-level bottlenecks
  • Profiling strategies for inference versus training workloads
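The latency/throughput distinction above can be made concrete with a minimal, framework-agnostic timing harness. This is a sketch only: `infer_fn` is a hypothetical stand-in for any inference entry point, and warmup iterations are discarded so that steady-state numbers are reported.

```python
import time
import statistics

def profile_inference(infer_fn, inputs, warmup=10, runs=100):
    """Measure per-call latency and derive throughput for a callable.

    `infer_fn` is a placeholder for any model call; warmup runs are
    discarded so only steady-state behavior is measured.
    """
    for _ in range(warmup):
        infer_fn(inputs)

    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        infer_fn(inputs)
        latencies.append(time.perf_counter() - start)

    latencies.sort()
    return {
        # Median (p50) and tail (p99) latency in milliseconds.
        "p50_ms": statistics.median(latencies) * 1e3,
        "p99_ms": latencies[int(0.99 * (len(latencies) - 1))] * 1e3,
        # Sequential throughput: completed calls per second of busy time.
        "throughput_qps": len(latencies) / sum(latencies),
    }
```

Reporting p99 alongside p50 matters because accelerator workloads often show long latency tails (e.g. from host-device transfers) that averages hide.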

Profiling on Huawei Ascend

  • Leveraging CANN Profiler and MindInsight
  • Kernel- and operator-level diagnostics
  • Understanding offload patterns and memory mapping

Profiling on Biren GPU

  • Utilizing Biren SDK performance monitoring features
  • Kernel fusion, memory alignment, and execution queue management
  • Power- and temperature-aware profiling

Profiling on Cambricon MLU

  • Employing BANGPy and Neuware performance tools
  • Gaining kernel-level visibility and interpreting logs
  • Integrating MLU profiler with deployment frameworks

Graph and Model-Level Optimization

  • Strategies for graph pruning and quantization
  • Operator fusion and restructuring of computational graphs
  • Standardizing input sizes and tuning batch parameters
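The numerics behind quantization can be sketched without any vendor toolchain. Below is a minimal symmetric per-tensor int8 scheme in plain Python, an illustration of the idea rather than any particular framework's implementation; the `weights` values are made up for the example.

```python
def quantize_int8(values):
    """Symmetric per-tensor int8 quantization: map floats to [-127, 127]."""
    # One scale for the whole tensor, chosen so the max magnitude maps to 127.
    scale = max(abs(v) for v in values) / 127.0 or 1.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from int8 codes."""
    return [v * scale for v in q]

# Illustrative weights (not from any real model).
weights = [0.02, -1.5, 0.73, 1.5]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Rounding error is bounded by half the quantization step.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Production flows on any of the covered chipsets would add calibration data, per-channel scales, and operator support checks, but the scale/round/clamp core is the same.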

Memory and Kernel Optimization

  • Optimizing memory layouts and enabling buffer reuse
  • Managing buffers efficiently across different chipsets
  • Platform-specific kernel-level tuning techniques
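One common reuse pattern across chipsets is a free-list buffer pool: released buffers are kept and handed back on the next same-sized request instead of reallocating, since device allocation is typically expensive. This is a minimal host-side sketch in which `bytearray` stands in for device memory.

```python
class BufferPool:
    """Minimal free-list allocator keyed by buffer size."""

    def __init__(self):
        self._free = {}  # size -> list of released buffers

    def acquire(self, size):
        """Return a recycled buffer of `size` bytes, or allocate a new one."""
        bucket = self._free.get(size)
        if bucket:
            return bucket.pop()
        return bytearray(size)

    def release(self, buf):
        """Return a buffer to the pool for later reuse."""
        self._free.setdefault(len(buf), []).append(buf)

pool = BufferPool()
a = pool.acquire(1024)
pool.release(a)
b = pool.acquire(1024)  # the released buffer is reused, not reallocated
```

A real device-memory pool would also handle alignment requirements and size-bucket rounding, both of which are chipset-specific.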

Cross-Platform Best Practices

  • Ensuring performance portability through abstraction strategies
  • Developing shared tuning pipelines for multi-chip environments
  • Case Study: Optimizing an object detection model across Ascend, Biren, and MLU
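One abstraction strategy for performance portability is to profile through a common interface and plug in a backend per chipset. The sketch below is hypothetical: `ProfilerBackend` and `NullBackend` are names invented for this example, and real subclasses would wrap the Ascend, Biren, or Cambricon tooling covered above.

```python
import time
from abc import ABC, abstractmethod

class ProfilerBackend(ABC):
    """Hypothetical common interface for per-chipset profilers."""

    @abstractmethod
    def start(self):
        ...

    @abstractmethod
    def stop(self):
        """Return a dict of collected metrics."""
        ...

class NullBackend(ProfilerBackend):
    """Fallback backend: wall-clock timing only, no vendor tools."""

    def start(self):
        self._t0 = time.perf_counter()

    def stop(self):
        return {"wall_time_s": time.perf_counter() - self._t0}

def profile_with(backend, fn):
    """Run `fn` under any backend and return its metrics."""
    backend.start()
    fn()
    return backend.stop()
```

The shared tuning pipeline then depends only on `profile_with` and a metrics dict, so the same measurement and comparison code runs unchanged across all three platforms.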

Summary and Next Steps

Requirements

  • Professional experience with AI model training or deployment pipelines
  • Solid understanding of GPU/MLU compute principles and model optimization techniques
  • Familiarity with basic performance profiling tools and metrics

Target Audience

  • Performance engineers
  • Machine learning infrastructure teams
  • AI system architects

Duration

  • 21 Hours
