Intel® oneAPI Base Toolkit
The Intel® oneAPI Base Toolkit (Base Kit) is a core set of tools and libraries for developing high-performance, data-centric applications across diverse architectures. It features an industry-leading C++ compiler and the Data Parallel C++ (DPC++) language, an evolution of C++ for heterogeneous computing.
Domain-specific libraries and the Intel® Distribution for Python* provide drop-in acceleration across relevant architectures. Enhanced profiling, design assistance, and debug tools complete the kit.
Features
Future-Ready Programming Model Provides Freedom of Choice
Apply your skills to the next innovation, not to rewriting software for the next hardware platform.
Top Performance for Accelerated Architectures
Take full advantage of accelerated compute by maximizing performance across Intel CPUs, GPUs, and FPGAs.
Fast and Efficient Development
Use a complete set of cross-architecture libraries and advanced tools.
Easy Integration with Legacy Code
The Intel® DPC++ Compatibility Tool lets you migrate CUDA code to DPC++ code.
What’s Included
Intel® oneAPI Collective Communications Library
Implement optimized communication patterns to distribute deep learning model training across multiple nodes.
Intel® oneAPI Data Analytics Library
Boost machine learning and data analytics performance.
Intel® oneAPI Deep Neural Network Library
Develop fast neural networks on Intel® CPUs and GPUs with performance-optimized building blocks.
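As a rough idea of how the library is used, the sketch below builds and runs a single ReLU primitive on the CPU through the oneDNN C++ API. It follows the v2.x-era descriptor style, so exact class names and the header layout may differ in other releases; treat it as illustrative only.

    #include <dnnl.hpp>
    #include <vector>
    #include <cstring>

    int main() {
        dnnl::engine eng(dnnl::engine::kind::cpu, 0);   // run on the CPU engine
        dnnl::stream s(eng);

        // A 2x3 f32 tensor filled on the host.
        std::vector<float> host = {-1.f, 2.f, -3.f, 4.f, -5.f, 6.f};
        dnnl::memory::desc md({2, 3}, dnnl::memory::data_type::f32,
                              dnnl::memory::format_tag::nc);
        dnnl::memory mem(md, eng);
        std::memcpy(mem.get_data_handle(), host.data(), host.size() * sizeof(float));

        // Describe, create, and execute an in-place ReLU primitive.
        dnnl::eltwise_forward::desc relu_d(dnnl::prop_kind::forward_inference,
                                           dnnl::algorithm::eltwise_relu, md, 0.f);
        dnnl::eltwise_forward relu(dnnl::eltwise_forward::primitive_desc(relu_d, eng));
        relu.execute(s, {{DNNL_ARG_SRC, mem}, {DNNL_ARG_DST, mem}});
        s.wait();
    }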
Intel® oneAPI DPC++/C++ Compiler
Compile and optimize DPC++ code for CPU, GPU, and FPGA target architectures.
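For orientation, here is a minimal sketch of the kind of DPC++ (SYCL) source the compiler builds for CPU, GPU, or FPGA targets; the device actually chosen at run time depends on your system.

    #include <CL/sycl.hpp>
    #include <iostream>
    #include <vector>

    int main() {
        constexpr size_t N = 1024;
        std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

        sycl::queue q;                      // default selector picks an available device
        {
            sycl::buffer<float> A(a.data(), sycl::range<1>(N));
            sycl::buffer<float> B(b.data(), sycl::range<1>(N));
            sycl::buffer<float> C(c.data(), sycl::range<1>(N));
            q.submit([&](sycl::handler& h) {
                auto ra = A.get_access<sycl::access::mode::read>(h);
                auto rb = B.get_access<sycl::access::mode::read>(h);
                auto wc = C.get_access<sycl::access::mode::write>(h);
                h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                    wc[i] = ra[i] + rb[i];  // element-wise vector add on the device
                });
            });
        }                                   // buffers copy results back on destruction
        std::cout << "c[0] = " << c[0] << std::endl;  // expect 3
    }

With the toolkit installed, a source file like this typically builds with the dpcpp driver (or icpx -fsycl in later releases).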
Intel® oneAPI DPC++ Library
Speed up data parallel workloads with these key productivity algorithms and functions.
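A hedged sketch of the library's parallel STL usage, following the pattern in Intel's oneDPL getting-started material: a standard algorithm runs on a device through the dpcpp_default execution policy and the oneapi::dpl::begin/end buffer iterators.

    #include <oneapi/dpl/execution>
    #include <oneapi/dpl/algorithm>
    #include <oneapi/dpl/iterator>
    #include <CL/sycl.hpp>

    int main() {
        sycl::buffer<int> buf(sycl::range<1>(1000));

        // Fill the buffer on the default device through a oneDPL execution policy.
        std::fill(oneapi::dpl::execution::dpcpp_default,
                  oneapi::dpl::begin(buf), oneapi::dpl::end(buf), 42);
    }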
Intel® oneAPI Math Kernel Library
Accelerate math processing routines, including matrix algebra, fast Fourier transforms (FFT), and vector math.
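To illustrate, a hedged sketch of a single-precision matrix multiply through oneMKL's DPC++ BLAS interface with unified shared memory; the header and namespace paths follow the oneMKL 2021-era layout and may differ in other releases.

    #include <oneapi/mkl.hpp>
    #include <CL/sycl.hpp>
    #include <cstdint>

    int main() {
        constexpr std::int64_t n = 64;                 // small square matrices for brevity
        sycl::queue q;
        float* a = sycl::malloc_shared<float>(n * n, q);
        float* b = sycl::malloc_shared<float>(n * n, q);
        float* c = sycl::malloc_shared<float>(n * n, q);
        for (std::int64_t i = 0; i < n * n; ++i) { a[i] = 1.0f; b[i] = 2.0f; c[i] = 0.0f; }

        // C = 1.0 * A * B + 0.0 * C, column-major layout
        oneapi::mkl::blas::column_major::gemm(
            q, oneapi::mkl::transpose::nontrans, oneapi::mkl::transpose::nontrans,
            n, n, n, 1.0f, a, n, b, n, 0.0f, c, n).wait();

        sycl::free(a, q);
        sycl::free(b, q);
        sycl::free(c, q);
    }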
Intel® oneAPI Threading Building Blocks
Simplify parallelism with this advanced threading and memory-management template library.
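For example, a minimal sketch of task-based parallelism with tbb::parallel_for, which splits the index range across worker threads automatically.

    #include <tbb/parallel_for.h>
    #include <tbb/blocked_range.h>
    #include <vector>

    int main() {
        std::vector<float> data(1'000'000, 1.0f);

        // Scale every element in parallel; the scheduler chooses the chunking.
        tbb::parallel_for(tbb::blocked_range<size_t>(0, data.size()),
            [&](const tbb::blocked_range<size_t>& r) {
                for (size_t i = r.begin(); i != r.end(); ++i)
                    data[i] *= 2.0f;
            });
    }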
Intel® oneAPI Video Processing Library
Deliver fast, high-quality, real-time video decoding, encoding, transcoding, and processing for broadcasting, live streaming and VOD, cloud gaming, and more.
Intel® Advisor
Design code for efficient vectorization, threading, and offloading to accelerators.
Intel® Distribution for GDB*
Enable deep, system-wide debugging of DPC++, C, C++, and Fortran code.
Intel® Distribution for Python*
Achieve fast performance for math-intensive workloads in data science and machine learning without code changes.
Intel® DPC++ Compatibility Tool
Migrate legacy CUDA code to a multi-platform DPC++ program with this assistant, as illustrated in the sketch below.
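As a rough illustration of what a migration produces (hand-written here, not actual tool output), the DPC++ function below corresponds to the trivial CUDA kernel shown in the comment; real migrations preserve far more detail.

    // Original CUDA (for comparison):
    //   __global__ void scale(float* x, float s, int n) {
    //       int i = blockIdx.x * blockDim.x + threadIdx.x;
    //       if (i < n) x[i] *= s;
    //   }
    //   scale<<<blocks, threads>>>(x, 2.0f, n);
    #include <CL/sycl.hpp>

    // DPC++ equivalent: x points to device- or shared-allocated memory
    // (for example from sycl::malloc_shared).
    void scale(sycl::queue& q, float* x, float s, int n) {
        q.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
            x[i] *= s;
        }).wait();
    }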
Intel® FPGA Add-on for oneAPI Base Toolkit (Optional)
Program these reconfigurable hardware accelerators to speed specialized, data-centric workloads. Requires installation of the Intel oneAPI Base Toolkit.
Intel® Integrated Performance Primitives
Speed up imaging, signal processing, data compression, cryptography, and more; see the small example below.
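As a small taste of the C-style API, the hedged sketch below adds two float vectors with the signal-processing domain; ippInit() dispatches to the best code path for the running CPU.

    #include <ippcore.h>
    #include <ipps.h>
    #include <iostream>

    int main() {
        ippInit();                                   // select the CPU-specific code path
        const int len = 8;
        Ipp32f a[len]   = {1, 2, 3, 4, 5, 6, 7, 8};
        Ipp32f b[len]   = {8, 7, 6, 5, 4, 3, 2, 1};
        Ipp32f sum[len] = {0};

        IppStatus st = ippsAdd_32f(a, b, sum, len);  // element-wise add
        std::cout << (st == ippStsNoErr ? "ok" : ippGetStatusString(st))
                  << ", sum[0] = " << sum[0] << std::endl;  // expect 9
    }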
Intel® VTune™ Profiler
Find and optimize performance bottlenecks across CPU, GPU, and FPGA systems.