Model Compression based on Quantization, Pruning, Knowledge Distillation, and AutoML
Opt-AI’s optimization solution compresses AI models using state-of-the-art techniques,
including quantization, pruning, knowledge distillation, and AutoML.
In addition, Opt-AI’s hardware-aware automated profiling solution accelerates AI models
on any target AI chipset, including CPUs, GPUs, ASICs, and FPGAs.
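To make two of the techniques above concrete, the sketch below shows magnitude-based pruning (zeroing the smallest weights) and symmetric 8-bit quantization on a weight matrix. This is a minimal, generic illustration in NumPy, not Opt-AI's actual implementation; the function names and the choice of a single per-tensor scale are assumptions for the example.

```python
import numpy as np


def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights so ~`sparsity` fraction becomes zero."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value over the flattened tensor is the cut-off
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned


def quantize_int8(weights: np.ndarray):
    """Symmetric quantization: map floats to int8 with one per-tensor scale."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale


def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale


rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)

w_pruned = magnitude_prune(w, sparsity=0.5)
q, scale = quantize_int8(w_pruned)
w_hat = dequantize(q, scale)

print("sparsity:", float((w_pruned == 0).mean()))
print("max abs quantization error:", float(np.abs(w_hat - w_pruned).max()))
```

Composing the two steps in this order (prune, then quantize) is common because pruning changes the weight distribution and therefore the quantization scale; a production pipeline would typically add a fine-tuning pass after each compression step to recover accuracy.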