oneAPI (compute acceleration)




oneAPI
Repository: github.com/oneapi-src
Operating system: Cross-platform
Platform: Cross-platform
Type: Open-source software specification for parallel programming
Website: www.oneapi.io

oneAPI is an open standard, adopted by Intel,[1] for a unified application programming interface (API) intended to be used across different computing accelerator (coprocessor) architectures, including GPUs, AI accelerators and field-programmable gate arrays. It is intended to eliminate the need for developers to maintain separate code bases, multiple programming languages, tools, and workflows for each architecture.[2][3][4][5]

oneAPI competes with other GPU computing stacks: CUDA, Nvidia's proprietary parallel computing platform and API for general-purpose computing on its GPUs, and ROCm, AMD's open-source software stack for GPU programming. Whereas CUDA is closed-source, both ROCm and oneAPI are open source.

Specification


The oneAPI specification extends existing developer programming models to enable multiple hardware architectures through a data-parallel language, a set of library APIs, and a low-level hardware interface to support cross-architecture programming. It builds upon industry standards and provides an open, cross-platform developer stack.[6][7]

Data Parallel C++


DPC++ (Data Parallel C++)[8][9] is the programming language implementation of oneAPI, built upon the ISO C++ and Khronos Group SYCL standards.[10] It is an implementation of SYCL with extensions proposed for inclusion in future revisions of the SYCL standard, including unified shared memory, group algorithms, and sub-groups.[11][12][13]
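
A minimal, illustrative DPC++/SYCL sketch of this model is shown below: it performs an element-wise vector addition using the unified shared memory extension, relies only on standard SYCL 2020 APIs, and assumes a DPC++-compatible compiler (for example, the Intel oneAPI DPC++/C++ Compiler). It is an example, not text from the oneAPI specification.

// Minimal DPC++/SYCL sketch: vector addition with unified shared memory (USM).
// Illustrative only; assumes a SYCL 2020 toolchain.
#include <sycl/sycl.hpp>
#include <cstddef>
#include <cstdio>

int main() {
  constexpr std::size_t n = 1024;
  sycl::queue q;                                // default device: could be a GPU, CPU, or FPGA
  float *a = sycl::malloc_shared<float>(n, q);  // USM allocation visible to host and device
  float *b = sycl::malloc_shared<float>(n, q);
  for (std::size_t i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

  // Submit a data-parallel kernel written as a C++ lambda and wait for completion.
  q.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) { a[i] += b[i]; }).wait();

  std::printf("a[0] = %.1f\n", a[0]);           // expected: 3.0
  sycl::free(a, q);
  sycl::free(b, q);
  return 0;
}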

Libraries


The set of APIs[6] spans several domains, including libraries for linear algebra, deep learning, machine learning, video processing, and others.

Library Name | Short Name | Description
oneAPI DPC++ Library | oneDPL | Algorithms and functions to speed DPC++ kernel programming
oneAPI Math Kernel Library | oneMKL | Math routines including matrix algebra, FFT, and vector math
oneAPI Data Analytics Library | oneDAL | Machine learning and data analytics functions
oneAPI Deep Neural Network Library | oneDNN | Neural network functions for deep learning training and inference
oneAPI Collective Communications Library | oneCCL | Communication patterns for distributed deep learning
oneAPI Threading Building Blocks | oneTBB | Threading and memory management template library
oneAPI Video Processing Library | oneVPL | Real-time video encode, decode, transcode, and processing

The source code of parts of the above libraries is available on GitHub.[14]
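
As an illustration of one of the libraries listed above, the following minimal sketch sums a vector with oneTBB's parallel_reduce. It assumes oneTBB 2021 or later (the oneapi/tbb headers and the oneapi::tbb namespace) and is not drawn from the oneAPI specification.

// Minimal oneTBB sketch: parallel sum of a vector with oneapi::tbb::parallel_reduce.
// Illustrative only; assumes oneTBB 2021 or later.
#include <oneapi/tbb/parallel_reduce.h>
#include <oneapi/tbb/blocked_range.h>
#include <cstddef>
#include <cstdio>
#include <functional>
#include <vector>

int main() {
  std::vector<double> data(1'000'000, 0.5);
  double sum = oneapi::tbb::parallel_reduce(
      oneapi::tbb::blocked_range<std::size_t>(0, data.size()),
      0.0,                                                      // identity value for the reduction
      [&](const oneapi::tbb::blocked_range<std::size_t>& r, double local) {
        for (std::size_t i = r.begin(); i != r.end(); ++i) local += data[i];
        return local;                                           // partial sum for this sub-range
      },
      std::plus<double>());                                     // combines partial sums
  std::printf("sum = %.1f\n", sum);                             // expected: 500000.0
  return 0;
}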

The oneAPI documentation also lists the "Level Zero" API, which defines the low-level direct-to-metal interfaces, and a set of ray-tracing components with their own APIs.[6]

Hardware abstraction layer


oneAPI Level Zero,[15][16][17] the low-level hardware interface, defines a set of capabilities and services that a hardware accelerator needs to interface with compiler runtimes and other developer tools.
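
The sketch below illustrates, in rough outline, how an application might use the Level Zero C API to initialize the loader and enumerate the available devices. It assumes the level_zero/ze_api.h header from the oneAPI Level Zero loader, omits error handling for brevity, and is illustrative rather than normative.

// Minimal Level Zero sketch: initialize the loader and list the visible devices.
// Illustrative only; error handling omitted.
#include <level_zero/ze_api.h>
#include <cstdio>
#include <vector>

int main() {
  zeInit(ZE_INIT_FLAG_GPU_ONLY);                        // initialize the driver(s)

  uint32_t driverCount = 0;
  zeDriverGet(&driverCount, nullptr);                   // query how many drivers exist
  std::vector<ze_driver_handle_t> drivers(driverCount);
  zeDriverGet(&driverCount, drivers.data());

  for (ze_driver_handle_t driver : drivers) {
    uint32_t deviceCount = 0;
    zeDeviceGet(driver, &deviceCount, nullptr);         // query how many devices this driver exposes
    std::vector<ze_device_handle_t> devices(deviceCount);
    zeDeviceGet(driver, &deviceCount, devices.data());

    for (ze_device_handle_t device : devices) {
      ze_device_properties_t props{};
      props.stype = ZE_STRUCTURE_TYPE_DEVICE_PROPERTIES;
      zeDeviceGetProperties(device, &props);
      std::printf("Level Zero device: %s\n", props.name);
    }
  }
  return 0;
}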

Implementations


Intel has released oneAPI production toolkits that implement the specification and add CUDA code migration, analysis, and debug tools.[18][19][20] These include the Intel oneAPI DPC++/C++ Compiler,[21] Intel Fortran Compiler, Intel VTune Profiler[22] and multiple performance libraries.

Codeplay has released an open-source layer[23][24][25] to allow oneAPI and SYCL/DPC++ to run atop Nvidia GPUs via CUDA.

University of Heidelberg has developed a SYCL/DPC++ implementation for both AMD and Nvidia GPUs.[26]

Huawei has released a DPC++ compiler for its Ascend AI chipset.[27]

Fujitsu has created an open-source ARM version of the oneAPI Deep Neural Network Library (oneDNN)[28] for the A64FX CPU used in its Fugaku supercomputer.
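
With any of these toolchains installed, a SYCL/DPC++ program can query which platforms (backends) and devices the runtime exposes. The following minimal sketch uses only standard SYCL 2020 APIs and is not specific to any particular implementation; its output depends on which runtimes are present.

// Minimal SYCL 2020 sketch: list the platforms and devices visible to the runtime.
// Illustrative only; the entries printed depend on the installed implementation(s).
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
  for (const sycl::platform& platform : sycl::platform::get_platforms()) {
    std::cout << "Platform: "
              << platform.get_info<sycl::info::platform::name>() << "\n";
    for (const sycl::device& device : platform.get_devices()) {
      std::cout << "  Device: "
                << device.get_info<sycl::info::device::name>() << "\n";
    }
  }
  return 0;
}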

Unified Acceleration Foundation (UXL) and the future of oneAPI


The Unified Acceleration Foundation (UXL) is a technology consortium working on the continuation of the oneAPI initiative. Its goal is to create an open-standard accelerator software ecosystem, with related open standards and specification projects developed through working groups and special interest groups (SIGs), as an alternative to Nvidia's CUDA. The main companies behind it are Intel, Google, ARM, Qualcomm, Samsung, Imagination, and VMware.[29]

References

  1. ^ Fortenberry & Tomov 2022, p. 22.
  2. ^ "Intel Expands its Silicon Portfolio, and oneAPI Software Initiative for Next-Generation HPC". HPCwire. 2019-12-09. Retrieved 2020-02-11.
  3. ^ "Intel Debuts New GPU – Ponte Vecchio – and Outlines Aspirations for oneAPI". HPCwire. 2019-11-18. Retrieved 2020-02-11.
  4. ^ "SC19: Intel Unveils New GPU Stack, oneAPI Development Effort - ExtremeTech". www.extremetech.com. Retrieved 2020-02-11.
  5. ^ Kennedy, Patrick (2018-12-24). "Intel One API to Rule Them All Is Much Needed to Expand TAM". ServeTheHome. Retrieved 2020-02-11.
  6. ^ a b c "oneAPI Specification". oneAPI.
  7. ^ "Preparing for the Arrival of Intel's Discrete High-Performance GPUs". HPCwire. 2021-03-23. Retrieved 2021-03-29.
  8. ^ "Data Parallel C++: Mastering DPC++ for Programming of Heterogeneous Systems Using C++ and SYCL". Apress.
  9. ^ Team, Editorial (2019-12-16). "Heterogeneous Computing Programming: oneAPI and Data Parallel C++". insideBIGDATA. Retrieved 2020-02-11.
  10. ^ "The Khronos Group". The Khronos Group. 2020-02-11. Retrieved 2020-02-11.
  11. ^ "Khronos Steps Towards Widespread Deployment of SYCL with Release of SYCL 2020 Provisional Specification". The Khronos Group. 2020-06-30. Retrieved 2020-07-06.
  12. ^ staff (2020-06-30). "New, Open DPC++ Extensions Complement SYCL and C++". insideHPC. Retrieved 2020-07-06.
  13. ^ "SYCL 2020 Launches with New Name, New Features, and High Ambition". HPCwire. 2021-02-09. Retrieved 2021-02-16.
  14. ^ "oneAPI-SRC". GitHub.
  15. ^ Verheyde, Arne (8 December 2019). "Intel Releases Bare-Metal oneAPI Level Zero Specification". Tom's Hardware. Retrieved 2020-02-11.
  16. ^ "Intel's Compute Runtime Adds oneAPI Level Zero Support - Phoronix". www.phoronix.com. Retrieved 2020-03-10.
  17. ^ "Initial Benchmarks With Intel oneAPI Level Zero Performance - Phoronix". www.phoronix.com. Retrieved 2020-04-13.
  18. ^ "Intel Champions XPU Vision With oneAPI, Data Center GPUs - SDxCentral". SDxCentral. 2020-11-11. Retrieved 2020-11-11.
  19. ^ "Intel Debuts oneAPI Gold and Provides More Details on GPU Roadmap". HPCwire. 2020-11-11. Retrieved 2020-11-11.
  20. ^ Moorhead, Patrick. "Intel Announces Gold Release Of OneAPI Toolkits And New Intel Server GPU". Forbes. Retrieved 2020-12-08.
  21. ^ "Data Parallel C++ for Cross-Architecture Applications". Intel. Retrieved 2021-10-07.
  22. ^ "Fix Performance Bottlenecks with Intel® VTune™ Profiler". Intel. Retrieved 2021-10-07.
  23. ^ "Codeplay Open Sources a Version of DPC++ for Nvidia GPUs". HPCwire. 2020-02-05. Retrieved 2020-02-12.
  24. ^ "Intel's oneAPI / DPC++ / SYCL Will Run Atop NVIDIA GPUs With Open-Source Layer - Phoronix". www.phoronix.com. Retrieved 2019-12-06.
  25. ^ "Codeplay - Codeplay contribution to DPC++ brings SYCL support for NVIDIA GPUs". www.codeplay.com. Retrieved 2020-02-11.
  26. ^ Salter, Jim (2020-09-30). "Intel, Heidelberg University team up to bring Radeon GPU support to AI". Ars Technica. Retrieved 2021-10-07.
  27. ^ "Extending DPC++ with Support for Huawei Ascend AI Chipset". 27 April 2021. Retrieved 2021-10-07.
  28. ^ fltech (19 November 2020). "A Deep Dive into a Deep Learning Library for the A64FX Fugaku CPU - The Development Story in the Developer's Own Words". fltech - 富士通研究所の技術ブログ (in Japanese). Retrieved 2021-02-10.
  29. ^ "Exclusive: Behind the plot to break Nvidia's grip on AI by targeting software". Reuters. Retrieved 2024-04-05.

Sources
