Intel® oneAPI AI Analytics Toolkit
The Intel® oneAPI AI Analytics Toolkit gives data scientists, AI developers, and researchers familiar Python* tools and frameworks to accelerate end-to-end data science and analytics pipelines on Intel® architectures. The components are built using oneAPI libraries for low-level compute optimizations. This toolkit maximizes performance from preprocessing through machine learning, and provides interoperability for efficient model development.
Using this toolkit, you can:
- Deliver high-performance deep learning (DL) training on Intel® XPUs and integrate fast inference into your AI development workflow with Intel-optimized DL frameworks for TensorFlow* and PyTorch*, pretrained models, and low-precision tools.
- Achieve drop-in acceleration for data preprocessing and machine learning workflows with compute-intensive Python* packages (Modin*, scikit-learn*, and XGBoost*) optimized for Intel, as illustrated in the scikit-learn sketch after this list.
- Gain direct access to Intel analytics and AI optimizations to ensure that your software works together seamlessly.
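As a concrete illustration of the drop-in acceleration mentioned above, the following minimal sketch patches scikit-learn with the Intel® Extension for Scikit-learn*. It assumes the scikit-learn-intelex package is installed (for example, via pip); the KMeans workload and data shapes are purely illustrative.

```python
from sklearnex import patch_sklearn

# Re-route supported scikit-learn estimators to Intel-optimized implementations.
# Patching must happen before the scikit-learn estimators are imported.
patch_sklearn()

import numpy as np
from sklearn.cluster import KMeans

# Illustrative synthetic data; existing scikit-learn code runs unchanged after patching.
X = np.random.rand(10_000, 16)
model = KMeans(n_clusters=8, random_state=0).fit(X)
print(model.inertia_)
```

Because the patch is applied before scikit-learn is imported, existing estimator code continues to run unchanged while supported algorithms dispatch to the optimized implementations.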
Features
Optimized Deep Learning
- Leverage popular, Intel-optimized frameworks, including TensorFlow and PyTorch, to use the full power of Intel® architecture and yield high performance for training and inference; a PyTorch sketch follows this list.
- Expedite development by using open source, pretrained machine learning models that Intel has optimized for best performance.
- Apply low-precision optimizations with automatic, accuracy-driven tuning strategies that can also target additional objectives such as performance, model size, or memory footprint.
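A minimal inference sketch of the framework optimizations described above, assuming the Intel® Extension for PyTorch* (intel_extension_for_pytorch) that accompanies the Intel-optimized PyTorch stack is installed; the ResNet-50 model and input shape are placeholders.

```python
import torch
from torchvision import models
import intel_extension_for_pytorch as ipex  # assumption: IPEX is installed with the toolkit

# Build a stock model; any eval-mode torch.nn.Module works here.
model = models.resnet50().eval()

# Apply Intel-specific operator and graph optimizations to the model.
model = ipex.optimize(model)

with torch.no_grad():
    x = torch.rand(1, 3, 224, 224)  # illustrative input batch
    output = model(x)
print(output.shape)
```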
Data Analytics & Machine Learning Acceleration
- Increase machine learning model accuracy and performance with algorithms in scikit-learn and XGBoost, optimized for Intel® architectures; see the gradient-boosting sketch after this list.
- Scale out efficiently to clusters and perform distributed machine learning by using Intel® Extension for Scikit-learn*.
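The sketch below trains a gradient-boosted classifier with XGBoost's "hist" tree method, the CPU code path carrying Intel's optimizations, and then converts the booster with daal4py for faster prediction. The daal4py conversion step is an assumption about the toolkit's oneDAL-based components; the synthetic data and hyperparameters are illustrative.

```python
import numpy as np
import xgboost as xgb
import daal4py as d4p  # assumption: daal4py is available alongside the toolkit's ML components

# Illustrative synthetic training data.
X = np.random.rand(10_000, 20)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

# Train with the 'hist' tree method, the CPU path optimized for Intel architectures.
booster = xgb.train(
    {"objective": "binary:logistic", "tree_method": "hist", "max_depth": 6},
    xgb.DMatrix(X, label=y),
    num_boost_round=50,
)

# Optionally convert the trained booster to a oneDAL model for faster inference.
d4p_model = d4p.get_gbt_model_from_xgboost(booster)
result = d4p.gbt_classification_prediction(nClasses=2).compute(X, d4p_model)
print(result.prediction[:5].ravel())
```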
High-Performance Python*
- Take advantage of the most popular and fastest-growing programming language for AI and data analytics, with underlying instruction sets optimized for Intel® architectures.
- Process larger scientific data sets more quickly using drop-in performance enhancements to existing Python code, as in the NumPy sketch after this list.
- Achieve highly efficient multithreading, vectorization, and memory management, and scale scientific computations efficiently across a cluster.
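As a small sketch of the drop-in Python acceleration above, assuming the Intel® Distribution for Python* provides a NumPy build linked against Intel® oneMKL, the snippet below reports the linked backend and runs ordinary NumPy code that dispatches to the optimized, multithreaded kernels without any source changes. The matrix sizes are illustrative.

```python
import numpy as np

# Report which BLAS/LAPACK backend this NumPy build is linked against
# (oneMKL when using the Intel Distribution for Python).
np.show_config()

# Unchanged NumPy code; matrix products and FFTs run on the optimized kernels.
a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)
c = a @ b
spectrum = np.fft.rfft(a, axis=0)
print(c.shape, spectrum.shape)
```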
Simplified Scaling of DataFrames across Multiple Nodes
- Seamlessly scale Pandas workflows across multiple cores and nodes with a single line of code change using the Intel® Distribution of Modin*, an extremely lightweight parallel DataFrame library; see the sketch after this list.
- Accelerate data analytics with high-performance backends, such as OmniSci.
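The one-line change described above amounts to swapping the Pandas import, as in the sketch below. It assumes the Intel® Distribution of Modin* is installed with a Dask execution engine available; the file name and column names are hypothetical.

```python
import modin.config as cfg
cfg.Engine.put("dask")  # assumption: the Dask backend is installed; Ray is another common choice

# The only change to existing Pandas code is this import.
import modin.pandas as pd

df = pd.read_csv("large_dataset.csv")                    # hypothetical input file
summary = df.groupby("category").agg({"value": "mean"})  # hypothetical column names
print(summary.head())
```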
What’s Included
- Intel® Optimization for TensorFlow*
- Intel® Optimization for PyTorch*
- Model Zoo for Intel® Architecture
- Intel® Low Precision Optimization Tool
- Intel® Extension for Scikit-learn*
- Intel Optimized XGBoost*
- Intel® Distribution of Modin*
- Intel® Distribution for Python*