Course Outline

Overview of Chinese AI GPU Ecosystem

  • Comparison of Huawei Ascend, Biren, and Cambricon MLU
  • CUDA vs CANN, Biren SDK, and BANGPy programming models (see the baseline kernel sketch after this list)
  • Industry trends and vendor ecosystems
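
A minimal CUDA vector-add of the kind shown below is a useful reference point for the programming-model comparison: CANN, the Biren SDK, and BANGPy each supply their own replacements for the kernel qualifier, the <<<grid, block>>> launch, and the memory-management calls. The sketch is CUDA-only; no vendor API appears in it.

    // Baseline CUDA kernel: the constructs each target SDK must replace are
    // the __global__ qualifier, the <<<grid, block>>> launch syntax, and the
    // cudaMallocManaged/cudaFree memory calls.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        float *a, *b, *c;
        cudaMallocManaged(&a, n * sizeof(float));
        cudaMallocManaged(&b, n * sizeof(float));
        cudaMallocManaged(&c, n * sizeof(float));
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        int block = 256;
        int grid = (n + block - 1) / block;              // cover all n elements
        vecAdd<<<grid, block>>>(a, b, c, n);
        cudaDeviceSynchronize();

        printf("c[0] = %f\n", c[0]);                     // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }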

Preparing for Migration

  • Assessing your CUDA codebase (see the device-inventory sketch after this list)
  • Identifying target platforms and SDK versions
  • Toolchain installation and environment setup
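
As a first step of the assessment, a short CUDA-only sketch like the one below records the runtime version and the properties of every visible device, giving a concrete baseline to match against the SDK versions chosen for the target platforms.

    // Inventory of the CUDA environment being migrated from.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        int runtime = 0, count = 0;
        cudaRuntimeGetVersion(&runtime);   // e.g. 12020 for CUDA 12.2
        cudaGetDeviceCount(&count);
        printf("CUDA runtime %d, %d device(s)\n", runtime, count);

        for (int d = 0; d < count; ++d) {
            cudaDeviceProp p;
            cudaGetDeviceProperties(&p, d);
            printf("  [%d] %s  cc %d.%d  %zu MiB  %d SMs\n",
                   d, p.name, p.major, p.minor,
                   p.totalGlobalMem >> 20, p.multiProcessorCount);
        }
        return 0;
    }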

Code Translation Techniques

  • Porting CUDA memory access and kernel logic
  • Mapping compute grid/thread models (see the grid-stride sketch after this list)
  • Automated vs manual translation options
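
The sketch below (plain CUDA, no vendor API) keeps the blockIdx/threadIdx index arithmetic in one helper and uses a grid-stride loop, so kernel correctness does not depend on the launch shape; in a port, this isolated index logic is what gets re-expressed in the target platform's own task or core indexing scheme.

    #include <cuda_runtime.h>

    // The index arithmetic is the part that changes most when re-targeting,
    // so it is factored into a single helper.
    __device__ inline int globalIndex() {
        return blockIdx.x * blockDim.x + threadIdx.x;
    }

    __global__ void scale(float* data, float factor, int n) {
        // Grid-stride loop: the kernel covers all n elements regardless of the
        // launch shape, which makes re-mapping to another device simpler.
        int stride = gridDim.x * blockDim.x;
        for (int i = globalIndex(); i < n; i += stride)
            data[i] *= factor;
    }

    int main() {
        const int n = 1 << 16;
        float* d;
        cudaMallocManaged(&d, n * sizeof(float));
        for (int i = 0; i < n; ++i) d[i] = 1.0f;

        scale<<<64, 256>>>(d, 2.0f, n);   // launch shape is a tuning choice only
        cudaDeviceSynchronize();
        cudaFree(d);
        return 0;
    }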

Platform-Specific Implementations

  • Using Huawei CANN operators and custom kernels
  • Biren SDK conversion pipeline
  • Rebuilding models with BANGPy (Cambricon)

Cross-Platform Testing and Optimization

  • Profiling execution on each target platform (see the timing sketch after this list)
  • Memory tuning and parallel execution comparisons
  • Performance tracking and iteration
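
A CUDA event-based timing harness like the sketch below is one way to produce the baseline figures each target platform's profile is compared against; the bandwidth number is a rough effective-throughput estimate, not a vendor-profiler measurement.

    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void touch(float* x, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] = x[i] * 1.0001f + 0.5f;   // one read and one write per element
    }

    int main() {
        const int n = 1 << 22;
        float* d;
        cudaMalloc((void**)&d, n * sizeof(float));  // contents are irrelevant for timing

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        touch<<<(n + 255) / 256, 256>>>(d, n);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);     // GPU-side elapsed time
        printf("kernel: %.3f ms, %.2f GB/s effective\n",
               ms, (2.0 * n * sizeof(float)) / (ms * 1e6));

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(d);
        return 0;
    }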

Managing Mixed GPU Environments

  • Hybrid deployments with multiple architectures
  • Fallback strategies and device detection (see the detection sketch after this list)
  • Abstraction layers for code maintainability
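
The sketch below shows runtime device detection with a CPU fallback on the CUDA side only; the Ascend, Biren, and MLU branches are deliberately left as placeholder enum values, since each would probe its own runtime behind the same interface.

    #include <cuda_runtime.h>
    #include <cstdio>
    #include <vector>
    #include <numeric>

    enum class Backend { Cuda, Cpu };   // Ascend/Biren/MLU slots would go here

    Backend detectBackend() {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err == cudaSuccess && count > 0) return Backend::Cuda;
        return Backend::Cpu;            // safe fallback when no device is usable
    }

    __global__ void sumKernel(const float* x, float* out, int n) {
        // naive single-thread reduction, just enough to exercise the dispatch path
        if (threadIdx.x == 0) {
            float s = 0.0f;
            for (int i = 0; i < n; ++i) s += x[i];
            *out = s;
        }
    }

    float sumOnBackend(const std::vector<float>& host, Backend b) {
        if (b == Backend::Cuda) {
            float *x, *out, result;
            cudaMalloc((void**)&x, host.size() * sizeof(float));
            cudaMalloc((void**)&out, sizeof(float));
            cudaMemcpy(x, host.data(), host.size() * sizeof(float),
                       cudaMemcpyHostToDevice);
            sumKernel<<<1, 32>>>(x, out, (int)host.size());
            cudaMemcpy(&result, out, sizeof(float), cudaMemcpyDeviceToHost);
            cudaFree(x); cudaFree(out);
            return result;
        }
        return std::accumulate(host.begin(), host.end(), 0.0f);   // CPU path
    }

    int main() {
        std::vector<float> v(1000, 1.0f);
        Backend b = detectBackend();
        printf("backend=%s sum=%.1f\n",
               b == Backend::Cuda ? "cuda" : "cpu", sumOnBackend(v, b));
        return 0;
    }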

Case Studies and Best Practices

  • Porting vision/NLP models to Ascend or Cambricon
  • Retrofitting inference pipelines on Biren clusters
  • Handling version mismatches and API gaps (see the version-guard sketch after this list)
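
For version mismatches, one common pattern is a compile-time guard keyed on the runtime's version macro. The CUDA-side example below guards on CUDART_VERSION; the same idiom applies to a target SDK's own version macro when an operator or call only exists in newer releases.

    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        float* p = nullptr;
    #if CUDART_VERSION >= 11020
        // Stream-ordered allocation only exists from CUDA 11.2 onward.
        cudaMallocAsync((void**)&p, 1024 * sizeof(float), 0);
        cudaFreeAsync(p, 0);
        printf("using stream-ordered allocation (CUDART %d)\n", CUDART_VERSION);
    #else
        cudaMalloc((void**)&p, 1024 * sizeof(float));
        cudaFree(p);
        printf("falling back to cudaMalloc (CUDART %d)\n", CUDART_VERSION);
    #endif
        return 0;
    }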

Summary and Next Steps

Requirements

  • Experience with CUDA or other GPU programming
  • Understanding of GPU memory models and compute kernels
  • Familiarity with AI model deployment or acceleration workflows

Audience

  • GPU programmers
  • System architects
  • Porting specialists

21 hours
