
Unified Acceleration Foundation Wants To Create an Open Standard for Accelerator Programming (techcrunch.com)

At the Open Source Summit Europe in Bilbao, Spain, the Linux Foundation this week announced the launch of the Unified Acceleration (UXL) Foundation. The group's mission is to deliver "an open standard accelerator programming model that simplifies development of performant, cross-platform applications." From a report: The foundation's founding members include the likes of Arm, Fujitsu, Google Cloud, Imagination Technologies, Intel, Qualcomm and Samsung. The company most conspicuously missing from this list is Nvidia, which offers its own CUDA programming model for working with its GPUs. At its core, the new foundation is an evolution of the oneAPI initiative, which likewise aimed to create a programming model that makes it easier for developers to support a wide range of accelerators, whether they are GPUs, FPGAs or other specialized hardware. As with the oneAPI spec, the aim of the new foundation is to ensure that developers can use these technologies without having to delve deep into the specifics of the underlying accelerators and the infrastructure they run on.
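The oneAPI specification that UXL inherits is centered on SYCL, the Khronos C++ abstraction for heterogeneous devices. As a rough illustration of what "device-agnostic" code means in that model, here is a minimal sketch, assuming a SYCL 2020 compiler such as Intel's DPC++ (illustrative only, not code from the UXL announcement):

    #include <sycl/sycl.hpp>
    #include <iostream>
    #include <vector>

    int main() {
        // The default selector picks whatever device the runtime finds:
        // a GPU, a CPU device, or an FPGA emulator. The kernel stays the same.
        sycl::queue q{sycl::default_selector_v};

        std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024, 0.0f);
        {
            sycl::buffer<float> A(a.data(), sycl::range<1>{a.size()});
            sycl::buffer<float> B(b.data(), sycl::range<1>{b.size()});
            sycl::buffer<float> C(c.data(), sycl::range<1>{c.size()});

            q.submit([&](sycl::handler& h) {
                sycl::accessor ra{A, h, sycl::read_only};
                sycl::accessor rb{B, h, sycl::read_only};
                sycl::accessor wc{C, h, sycl::write_only};
                h.parallel_for(sycl::range<1>{1024}, [=](sycl::id<1> i) {
                    wc[i] = ra[i] + rb[i];   // element-wise add, no vendor-specific code
                });
            });
        }   // buffers go out of scope here and copy results back to the host

        std::cout << "c[0] = " << c[0] << std::endl;   // prints 3
        return 0;
    }

In current DPC++ implementations the runtime then dispatches to whichever back end is installed (Level Zero, OpenCL, or vendor plug-ins), which is the kind of portability the foundation says it wants to standardize.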
  • by wakeboarder ( 2695839 ) on Wednesday September 20, 2023 @04:38PM (#63864264)
    They [Nvidia] want to be a gatekeeper with CUDA and keep devs and companies in their ecosystem.
    • by ceoyoyo ( 59147 )

      Nvidia supports OpenCL and various cross-platform shader languages. If you want to write cross-platform stuff you can, no problem. That's why you can buy any game and it will run on your Nvidia card, or AMD, or even Intel.

      CUDA is more than just a programming API, framework, model, whatever. It's a big library of useful stuff, plus some attention paid to things like debugging. That's why people use it. OpenCL, oneAPI and now whatever they call this are just specifications for a low-level API, with the hope that
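      To put the "big library of useful stuff" point in concrete terms, here is a rough C++ host-side sketch using cuBLAS, one of the vendor libraries that ships alongside CUDA, for a single-precision matrix multiply (error checking omitted; a sketch added for illustration, not code from the comment above):

          #include <cublas_v2.h>
          #include <cuda_runtime.h>
          #include <iostream>
          #include <vector>

          int main() {
              const int n = 4;                                  // tiny n x n matrices
              std::vector<float> hA(n * n, 1.0f), hB(n * n, 2.0f), hC(n * n, 0.0f);

              float *dA, *dB, *dC;
              cudaMalloc(&dA, n * n * sizeof(float));
              cudaMalloc(&dB, n * n * sizeof(float));
              cudaMalloc(&dC, n * n * sizeof(float));
              cudaMemcpy(dA, hA.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);
              cudaMemcpy(dB, hB.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);

              cublasHandle_t handle;
              cublasCreate(&handle);

              // C = alpha * A * B + beta * C, executed by a tuned vendor kernel;
              // the application never writes any GPU code of its own.
              const float alpha = 1.0f, beta = 0.0f;
              cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                          n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);

              cudaMemcpy(hC.data(), dC, n * n * sizeof(float), cudaMemcpyDeviceToHost);
              std::cout << "C[0][0] = " << hC[0] << std::endl;  // 8 for these inputs

              cublasDestroy(handle);
              cudaFree(dA); cudaFree(dB); cudaFree(dC);
              return 0;
          }

      The open stacks have counterparts (clBLAS, oneMKL and so on), but the breadth and polish of the CUDA library ecosystem is a large part of what keeps people on it.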

      • OpenCL (and Vulkan compute) are for the programmable units on a GPU. This is possibly not what they're referring to when they talk about accelerators (especially when comparing to CUDA).

        Aside from typical arbitrary-computation stream processors (still can't believe the industry called them shaders; even in the graphical context, fragment programs made more sense, I blame Microsoft), Nvidia have fixed-function blocks. Being fixed function, they are a lot more efficient per watt and per unit of silicon area, so you might have a f

        • What's preventing someone else from supporting CUDA, anyway? If you can't beat 'em, etc. etc. Think about 3dfx and Glide; they had to support Direct3D just like everyone else. Then they went away, because... >>>NVIDIA entered the chat. Like it or not, we got them because they are good, occasionally terrible drivers notwithstanding. Way back in olden times I had most of the GPUs, and Nvidia's were always the best, starting with the TNT. Do I wish they had OSS drivers? Yes. But will I buy an AMD GPU? No, de

          • Fixed function is great until you don't need that particular function.

            The AMD 6800 XTs are/were pretty great. For gaming they just don't have the fixed-function 'AI' hardware Nvidia uses for DLSS and the other bits. Honestly I prefer it without. Real-time inter-frame generation is cool and all, but it adds significantly to input latency, so no thanks.

            Don't get me wrong, I ran Nvidia too until maybe 2014 or so. That's when the better drivers won me over. That some years later they are neck and neck on performance

        • by ceoyoyo ( 59147 )

          The idea with OpenCL was exactly the same as this. You'd be able to write your code and it would run on a GPU, CPU, FPGA, whatever, taking full advantage of whatever processor you were running it on, or splitting the work in the best way when multiple different types of processor were available.

          Yes, it's very hard to support all the special purpose ops in all the different devices, not to mention radically different things like a CPU and an FPGA. This isn't going to be any different.
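          For readers who haven't touched it, the host side of the OpenCL model the thread is describing looks roughly like this: the kernel ships as a source string and is compiled at run time for whichever device the platform reports, CPU, GPU or other accelerator (a minimal sketch, error handling omitted):

              #include <CL/cl.h>
              #include <cstdio>

              // A trivial kernel, shipped as source and JIT-compiled for
              // whatever device gets selected below.
              static const char* kSource =
                  "__kernel void scale(__global float* x) { x[get_global_id(0)] *= 2.0f; }";

              int main() {
                  cl_platform_id platform;
                  clGetPlatformIDs(1, &platform, nullptr);

                  // CL_DEVICE_TYPE_DEFAULT could just as well be _CPU, _GPU or
                  // _ACCELERATOR; the kernel source above does not change.
                  cl_device_id device;
                  clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, nullptr);

                  cl_int err;
                  cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
                  cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, &err);
                  clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);

                  char name[256];
                  clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, nullptr);
                  std::printf("built the same kernel for: %s\n", name);

                  clReleaseProgram(prog);
                  clReleaseContext(ctx);
                  return 0;
              }

          Whether that kernel is then actually fast on each of those devices is the part that, as the parent comment says, never really worked out.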

      • CUDA is also a thing. UXL, whatever it's going to be, seems to be just some press releases telling us how wonderful everything is going to be whenever whatever it turns out to be finally arrives.
        • by ceoyoyo ( 59147 )

          It sounds like it's just oneAPI but with more than just Intel behind it. OneAPI is kind of a thing.

          It would be great if it's actually useful, and it could be, but it's pretty silly to think that Nvidia is sitting it out because of CUDA. AMD is also "notably missing from the list." I doubt that's because ROCm is such a raging success. It's more likely they'd rather work on their CUDA clone than on a rebranded OpenCL.

  • Obligatory XKCD (Score:4, Insightful)

    by sconeu ( 64226 ) on Wednesday September 20, 2023 @05:17PM (#63864336) Homepage Journal
  • There is no good way to have a language for programming accelerators that is both generic and gives you performance.
    You may need to build your programming interface to match the architecture of your accelerators. Think about programming FPGAs: you essentially need to design your own processor at the gate level.
    Sure, you CAN program that in C through high-level synthesis, but it's a very restrictive way to work (see the sketch after this comment).

    Don't get me wrong, high level abstraction and common programming interfaces are useful, even if
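    To make the FPGA point above concrete: a rough sketch of what C-through-HLS looks like, assuming a Vitis-HLS-style toolchain (pragma spellings vary by vendor, so treat this as illustrative). The C code describes a datapath rather than a sequence of instructions, which is exactly why it feels restrictive.

        #define TAPS 8

        // Hypothetical HLS-style FIR filter: the tool turns the loop into a
        // pipelined datapath and the static array into registers, not memory.
        void fir(const int coeff[TAPS], int sample, int* result) {
        #pragma HLS PIPELINE II=1            // ask for one new sample per clock cycle
            static int shift_reg[TAPS];
        #pragma HLS ARRAY_PARTITION variable=shift_reg complete dim=1

            int acc = 0;
            for (int i = TAPS - 1; i > 0; --i) {
                shift_reg[i] = shift_reg[i - 1];   // the shift register, in "hardware"
                acc += shift_reg[i] * coeff[i];
            }
            shift_reg[0] = sample;
            acc += sample * coeff[0];
            *result = acc;
        }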

  • Still another one (Score:4, Informative)

    by SoftwareArtist ( 1472499 ) on Wednesday September 20, 2023 @06:31PM (#63864466)

    We have an open standard for programming accelerators. It's called OpenCL. That's what everyone should use.

    Wait, no. We have a new open standard for programming accelerators. It's called SYCL. That's what everyone should use.

    No, ignore that. We have an even newer open standard for programming accelerators. It's called UXL. That's what we're all going to get behind and commit to and really stick with this time. Trust us!

    Why do you think people keep using CUDA?

    • 100% ^^ THIS. Perfect summary.

      Just re-implement CUDA and call it a day.

    • by jay age ( 757446 )

      SYCL is just an abstraction layer that makes programming with OpenCL a lot easier.

      It's the same with CUDA - you can use it directly, but most people use some library to save themselves hard work [and typing].

      If they want to improve the programming model further, no complaints here.

    • by WDot ( 1286728 )
      In AI/ML, people use CUDA instead of OpenCL because CUDA works out of the box. *Maybe* you can find an experimental fork of the popular ML libraries that implements some of the functions in OpenCL, if you want to spend the time to get it working. Or you can just use CUDA, which works right now. AMD not working around the clock to make sure OpenCL has first-class support in Torch and TensorFlow is the reason NVIDIA is dominating, and it really seems like an unforced error on AMD's part. (A minimal illustration follows at the end of this thread.)
    • Why do you think people keep using CUDA?

      Not because of changing standards, but because it's significantly better for the programmer's purposes than the rest. We have open operating systems as well; there's a reason people keep using Windows too.
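      Picking up WDot's point above about CUDA working out of the box in the ML frameworks, here is a minimal LibTorch (PyTorch C++ API) sketch; the CUDA path is a one-line run-time check, while there is no comparable OpenCL path in the stock builds to fall back on:

          #include <torch/torch.h>
          #include <iostream>

          int main() {
              // Stock PyTorch/LibTorch builds ship with CUDA support compiled in;
              // detecting and using it is a run-time check, nothing more.
              torch::Device device(torch::cuda::is_available() ? torch::kCUDA : torch::kCPU);

              auto x = torch::randn({2, 3}, device);   // lands on the GPU if one is present
              auto y = torch::mm(x, x.t());            // dispatches to the CUDA backend when available

              std::cout << y << std::endl;
              return 0;
          }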

  • Is this to do with the right pedal in the car, or the LHC?

  • Yeah, it really looks like Intel is just punting oneAPI away somewhere to rot and die, after all that noise and trying to shove it everywhere.
  • The launch of the Unified Acceleration (UXL) Foundation is a significant step toward simplifying accelerator programming for cross-platform applications. With key industry players on board, it promises to make accelerator technology more accessible and developer-friendly, though the notable absence of Nvidia raises questions about broader industry adoption.
