
ML/AI Frameworks: Bridging the Gap Between High-Performance Hardware and High-Level Languages

Reference Framework

Gyrus benchmarks new hardware architectures, measuring performance, power, and memory for CNN, RNN, LSTM, MLP, and other networks, and compares the results against well-recognized hardware platforms. The models are optionally tuned to the framework of choice, such as TensorFlow, TensorRT, or PyTorch.
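
The measurement loop itself can be quite small; the sketch below is a minimal, hypothetical example in PyTorch, with ResNet-18 and a fixed batch size standing in for the networks above rather than Gyrus's actual harness. It times warm inference and reads peak device memory from the framework; power is read separately from platform tools such as nvidia-smi.

# Minimal latency/memory measurement sketch in PyTorch (illustrative only).
import time
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18().eval().to(device)      # stand-in CNN
x = torch.randn(8, 3, 224, 224, device=device)   # illustrative batch of 8

with torch.no_grad():
    for _ in range(5):                           # warm-up runs
        model(x)
    if device == "cuda":
        torch.cuda.reset_peak_memory_stats()
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(50):                          # timed runs
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"avg latency: {1000 * elapsed / 50:.2f} ms per batch")
if device == "cuda":
    print(f"peak memory: {torch.cuda.max_memory_allocated() / 1e6:.1f} MB")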

Advancements in algorithm research, high-performance compute infrastructure, and large datasets have driven the resurgence of AI. With Moore's law coming to an end, general-purpose compute is no longer growing as expected, so the latest breed of neural network processors relies on non-generic architectures with many cores or engines. Even small microcontrollers now tout 1 tera-operation per second with these custom engines. However, high-level languages cannot exploit such hardware to its full potential. At the other end of the spectrum, ML/AI frameworks have emerged to ease the development of models.
A reference framework and the layers in the software tool chain are shown below.

[Figure: reference framework and the layers in the software tool chain]

There is a large gap between the frameworks and this new breed of hardware architectures when it comes to exploiting their full potential in utilization and power. Gyrus has worked with several architectures and developed middleware, compiler optimizations, and microcode to help improve the utilization of hardware engines.
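
As a back-of-the-envelope illustration, utilization here means the fraction of the advertised peak throughput that a workload actually achieves; the sketch below uses placeholder numbers, not measured results.

# Utilization estimate sketch (all numbers are illustrative placeholders).
peak_tops = 1.0                    # advertised peak, tera-operations per second
ops_per_inference = 3.6e9          # operations in one forward pass of the model
measured_latency_s = 0.012         # measured time per inference

achieved_tops = ops_per_inference / measured_latency_s / 1e12
print(f"utilization: {100 * achieved_tops / peak_tops:.1f}% of peak")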


Gyrus develops middleware components that realize the full potential of the hardware engines, bridging the gap between the hardware and the frameworks. Gyrus also performs competitive benchmarking based on EEMBC, MLPerf, and different network architectures, comparing against known hardware platforms such as the Nvidia Titan, Nvidia Jetson, AWS instances, and Xilinx FPGAs for power, performance, and efficiency.
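
When comparing platforms, raw throughput and power are usually reduced to a common efficiency figure such as inferences per second per watt; the sketch below shows that normalization with placeholder platform names and numbers, not measured Gyrus results.

# Efficiency comparison sketch (placeholder data, illustrative only).
measurements = {
    # platform name: (inferences per second, average power in watts)
    "Platform A": (1200.0, 250.0),
    "Platform B": (310.0, 15.0),
    "Platform C": (95.0, 4.5),
}

print(f"{'platform':<12} {'inf/s':>10} {'watts':>8} {'inf/s/W':>10}")
for name, (throughput, power) in measurements.items():
    print(f"{name:<12} {throughput:>10.1f} {power:>8.1f} {throughput / power:>10.2f}")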