Technology
Optimally mapping a neural network onto an AI processor is a data-management problem. Kinara’s Polymorphic Dataflow Architecture is built on the premise that breakthrough performance comes from minimizing data movement between compute and the various levels of the memory hierarchy.
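To ground that premise, here is a minimal sketch of the arithmetic behind it. All energy figures, layer sizes, and access counts are illustrative assumptions (order-of-magnitude values of the kind reported in public accelerator-energy surveys), not Kinara specifications:

```python
# Illustrative sketch: data movement, not arithmetic, tends to dominate the
# energy budget of a neural-network layer. All figures below are rough,
# assumed order-of-magnitude estimates -- NOT Kinara specifications.

# Assumed energy per event, in picojoules.
ENERGY_PJ = {
    "mac": 0.2,          # one multiply-accumulate in the datapath
    "local_sram": 1.0,   # one 16-bit read from a small local buffer
    "onchip_sram": 5.0,  # one 16-bit read from a larger on-chip SRAM
    "dram": 200.0,       # one 16-bit read from external DRAM
}

def layer_energy(macs, reads_by_level):
    """Total energy (pJ) split into compute vs. data-movement energy."""
    compute = macs * ENERGY_PJ["mac"]
    movement = sum(ENERGY_PJ[lvl] * n for lvl, n in reads_by_level.items())
    return compute, movement

# Hypothetical conv layer with 100M MACs. A naive schedule re-reads operands
# from DRAM; a tiled schedule keeps most reuse in local buffers.
naive = {"dram": 30_000_000, "onchip_sram": 50_000_000}
tiled = {"dram": 2_000_000, "onchip_sram": 10_000_000, "local_sram": 80_000_000}

for name, reads in [("naive", naive), ("tiled", tiled)]:
    compute, movement = layer_energy(100_000_000, reads)
    print(f"{name}: compute = {compute / 1e6:.0f} uJ, movement = {movement / 1e6:.0f} uJ")
```

Under these assumed numbers, compute costs the same 20 uJ in both schedules, while movement drops from roughly 6,250 uJ to about 530 uJ, which is why minimizing data movement is the lever that matters.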
Polymorphic Dataflow Architecture
All compute resources and the compiler toolchain work in concert toward two objectives, efficient silicon utilization and minimal data movement, to realize the optimal deployment of AI workloads. Unlike architectures that restrict the mapping of AI models to a single dataflow, Kinara’s polymorphic dataflow architecture maps any dataflow optimally onto the processor.
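As a rough illustration of what a polymorphic approach buys over a single fixed dataflow, the sketch below picks a per-layer mapping under a toy cost model. The dataflow names (weight-, output-, and input-stationary) are standard terms from the accelerator literature, and the cost model and layer sizes are invented for this example; nothing here describes Kinara’s actual mappings:

```python
# Sketch of the "polymorphic" idea: rather than fixing one dataflow for the
# whole chip, choose per layer the mapping that minimizes operand traffic.
# The dataflow taxonomy is from the accelerator literature; the cost model
# is a toy invented for illustration.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    weights: int      # number of weight elements
    activations: int  # number of input activation elements
    outputs: int      # number of output elements

DATAFLOWS = ["weight_stationary", "output_stationary", "input_stationary"]

def traffic(layer, dataflow):
    """Toy cost model: the 'stationary' operand is fetched once; the other
    two are streamed repeatedly (modeled here as 3x re-fetches)."""
    w, a, o = layer.weights, layer.activations, layer.outputs
    if dataflow == "weight_stationary":
        return w + 3 * (a + o)
    if dataflow == "output_stationary":
        return o + 3 * (w + a)
    return a + 3 * (w + o)  # input_stationary

layers = [
    Layer("conv1", weights=9_000, activations=1_200_000, outputs=1_200_000),
    Layer("fc", weights=4_000_000, activations=4_096, outputs=1_000),
]
for layer in layers:
    best = min(DATAFLOWS, key=lambda df: traffic(layer, df))
    print(f"{layer.name}: best dataflow = {best}")
```

The point is only that different layers favor keeping different operands stationary, which is why committing the whole chip to one dataflow leaves efficiency on the table.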
Kinara Compiler
The dataflow that best serves an AI workload varies from one neural network to another, and even from one layer to another within the same network. The Kinara Compiler determines the best dataflow for each layer of a given neural network to minimize data movement within the Kinara Ara Edge AI Processor. It then schedules the model across the architecture’s various compute units to streamline execution and deliver deterministic, best-in-class performance.
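A hedged sketch of the shape of such a compiler pass follows: a fully static, and therefore deterministic, schedule that assigns each operator a chosen dataflow and a time slot on a compute unit. The greedy scheduler, latencies, and unit count are assumptions made up for illustration, not the Kinara Compiler’s internals:

```python
# Sketch of a dataflow-aware scheduling pass: produce a static schedule
# (deterministic by construction) assigning each operator a dataflow and a
# time slot on a compute unit. For brevity the entries are treated as
# independent tasks; a real compiler would honor the dependence graph.
# All numbers are invented for illustration.

def compile_schedule(ops, num_units=4):
    """Greedy list scheduler: place each op on the earliest-free unit."""
    unit_free_at = [0] * num_units  # cycle at which each unit becomes free
    schedule = []
    for name, dataflow, latency in ops:
        unit = min(range(num_units), key=lambda u: unit_free_at[u])
        start = unit_free_at[unit]
        unit_free_at[unit] = start + latency
        schedule.append((name, dataflow, unit, start, start + latency))
    return schedule

# (operator, chosen dataflow, estimated cycles) -- illustrative values only.
ops = [
    ("conv1", "output_stationary", 120),
    ("conv2", "output_stationary", 90),
    ("fc", "weight_stationary", 40),
]
for name, df, unit, start, end in compile_schedule(ops):
    print(f"{name:6s} dataflow={df:18s} unit={unit} cycles=[{start},{end})")
```

Because every placement and time slot is fixed at compile time, the same model produces the same execution trace on every run, which is what deterministic performance means in practice.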
Neural ISA Core
Kinara Edge AI Processors carry the Polymorphic Dataflow approach down to the core instruction set architecture. Each SIMD-like instruction encodes a distinct micro-dataflow pattern, keeping data expansion and reduction close to the compute units and minimizing data movement between compute and memory. These efficient micro-dataflow instructions let the compiler build an energy-efficient schedule for any neural network operator and extend support to new models and operators.
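The sketch below emulates the intent of such an instruction in software: a single fused operation that multiplies and reduces inside the datapath, contrasted with an unfused sequence that materializes the intermediate product vector. The instruction name vmac_reduce and its semantics are hypothetical; Kinara’s Neural ISA is not public at this level of detail:

```python
# Emulation of the micro-dataflow idea: one fused SIMD-like op performs the
# expansion (elementwise multiply) and reduction (sum) next to the compute
# units, so the products never travel back to memory. The instruction name
# and semantics are invented for illustration.
import numpy as np

def vmac_reduce(acc, weights, activations):
    """Hypothetical fused instruction: acc += dot(weights, activations).
    The elementwise products exist only transiently in the datapath."""
    return acc + np.dot(weights, activations)

def unfused(acc, weights, activations):
    """Unfused equivalent: the product vector is materialized (written out)
    and then read back by a separate reduction pass."""
    products = weights * activations   # extra vector of intermediates
    return acc + products.sum()        # second pass over that vector

w = np.random.randn(64).astype(np.float32)
x = np.random.randn(64).astype(np.float32)
assert np.isclose(vmac_reduce(0.0, w, x), unfused(0.0, w, x), atol=1e-4)
```

The unfused path pays for a full vector of intermediates per instruction; the fused path keeps the reduction next to the multipliers, which is the per-instruction analogue of the data-movement savings described above.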