DNN models developed in standard frameworks such as PyTorch, Caffe, and TensorFlow are implemented and deployed on the Volga Analog Matrix Processor (Volga AMP™) using Volga’s AI software workflow. Models are optimized, quantized from FP32 to INT8, and retrained for the Volga Analog Compute Engine (Volga ACE™) before being processed by Volga’s graph compiler. The resulting binaries and model weights are then programmed into the Volga AMP for inference. Pre-qualified models are also available so developers can quickly evaluate the Volga AMP solution.
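
The quantize-then-retrain step described above follows the general pattern of quantization-aware training. The sketch below illustrates that pattern with stock PyTorch APIs only; it is a generic, minimal example, not the Volga toolchain itself, and the TinyNet model, layer sizes, and random training data are placeholders standing in for a developer’s own FP32 model and dataset.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import (QuantStub, DeQuantStub,
                                   get_default_qat_qconfig, prepare_qat, convert)

class TinyNet(nn.Module):
    """Placeholder FP32 model standing in for a developer's own network."""
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()       # tensors enter the quantized region here
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.relu = nn.ReLU()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(8, 10)
        self.dequant = DeQuantStub()   # tensors leave the quantized region here

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        x = self.pool(x).flatten(1)
        x = self.fc(x)
        return self.dequant(x)

model = TinyNet().train()
model.qconfig = get_default_qat_qconfig("fbgemm")   # INT8 fake-quant observers
prepare_qat(model, inplace=True)

# Brief retraining (fine-tuning) with fake-quantized weights and activations,
# so accuracy recovers from the FP32 -> INT8 conversion.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
for _ in range(3):                                   # placeholder for a real training loop
    images = torch.randn(4, 3, 32, 32)
    labels = torch.randint(0, 10, (4,))
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    optimizer.step()

model.eval()
int8_model = convert(model)   # INT8 weights, ready to hand off to a compiler flow
```

In an actual deployment, the retrained INT8 model would then be passed to the vendor’s graph compiler, which emits the binaries and weight images that are programmed into the target device, as described above.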