Rust-CUDA is being rebooted! #130

See https://rust-gpu.github.io/blog/2025/01/27/rust-cuda-reboot.

@RDambrosio016 has made me a new maintainer (I'm also a maintainer of rust-gpu).

Please comment here if you would like to be involved, or better yet put up some PRs or direct me to what needs to be done from your perspective!

Comments
This is exciting! I'd love to contribute (if I can). Any areas to dig deep into? That would probably help with figuring out starting points and whether I'd be capable enough to contribute at all! In any case .. cheers .. will closely follow how this evolves! |
I'm still orienting myself as to the current state. I'd start with just trying to get the examples running on your machine and see if you hit anything I do not! Thank you so much for (potentially) helping. 🍻 |
The big thing is to make it work. Try it on a few different machines (OS, GPU, CUDA version, etc.) and make it work on modern rustc and CUDA versions without errors. I switched to Cudarc because that is in a working state, and this isn't. Dropping support for older versions of CUDA is fine if that makes it easier. |
That will be quite some work. rustc has changed significantly, and so has libNVVM. @LegNeato As you are a maintainer of rust-gpu, I would be curious to know what in the end led you to rust-cuda. AFAIK rust-gpu did not enter the compute kernel area too much. |
@apriori Actually, Rust-GPU does have pretty good support for Vulkan compute! It's just that Embark and most current contributors are focused on graphics use-cases. I personally care more about GPGPU. What led me here is that I see a lot of opportunities and overlap between the two projects. As an end user writing GPU code in Rust, what I really want is to not care about Vulkan vs CUDA as the output target at all, similar to how I don't care about Linux vs Windows when writing CPU Rust (or arm vs x86_64, for that matter). Of course, we also need to expose platform-specific stuff for those wanting to get the most out of their hardware or ecosystem (similar to how Rust on the CPU exposes platform-specific APIs or ISA-specific escape hatches), but the progressive disclosure of complexity is key. This wasn't going to happen as two completely separate projects that only peek over the fence occasionally, or with rust-cuda no longer being developed. So I am involved in both and can hopefully bring them together where they are different only for difference's sake. |
Will contribute |
Would definitely love to help out, I think this is a really cool project |
I'd definitely like to contribute and get involved if I can. I'm currently a Master's student at Georgia Tech taking a parallel algorithms course this semester. I have a few different machines, cards, and operating systems I can try the current iteration on and see what issues pop up |
@Schmiedium Awesome! I think one thing everyone is going to hit is that we are on a super old version of Rust, and Cargo automatically upgrading dependency versions will hit issues. I'm trying to untangle that a bit currently. |
I'm noticing this too @LegNeato - do you have a branch going or no? |
I'm a Rust beginner without any GPU programming experience, but I'd love to learn and help out where I can. |
So I had some time to play around with it. I'm running into two main issues, and they seem to be Windows-specific. This is on Windows 10 with a 2080 Ti, CUDA 12.8, and OptiX 8.1.

The first issue, I think, is on me and has to do with NVIDIA OptiX. I'm just having trouble getting it set up correctly, but the OptiX examples fail to compile with the error that the OPTIX_ROOT_DIR or OPTIX_ROOT env variable isn't found. This points to the FindCUDAHelper looking for environment variables, but even with those set it still fails.

The second is ntapi. It looks like ntapi 0.3.7 includes code that is no longer valid Rust. This issue seems to have first cropped up in 2022 and was fixed; you can see the issue here. I guess one of the dependencies somewhere in this project's dependency tree may be using that version, causing the build error. I haven't yet been able to look into where that's being brought in, so I'm not sure how difficult it would be to fix.

I should be able to try this out on NixOS tomorrow with the same hardware, so I'll check in if I find anything there.

One more comment, not an issue per se: as of right now this project still requires nightly Rust to build due to its use of #![feature(...)] attributes, so be aware of that as well. I'd be interested to know what you guys find |
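(For anyone hitting the same OptiX error: below is a minimal sketch of the kind of environment-variable lookup a build script such as FindCUDAHelper typically performs. The variable names come from the error message above; everything else is illustrative, not the actual Rust-CUDA build code.)

```rust
// Hypothetical build.rs sketch -- not the actual FindCUDAHelper code.
use std::env;
use std::path::PathBuf;

fn find_optix_root() -> Option<PathBuf> {
    // Check the variables named in the error message, in order.
    ["OPTIX_ROOT_DIR", "OPTIX_ROOT"]
        .iter()
        .find_map(|var| env::var_os(var))
        .map(PathBuf::from)
}

fn main() {
    // Ask Cargo to re-run this script when either variable changes, so
    // setting the variable after a failed build actually takes effect
    // instead of reusing a stale cached result.
    println!("cargo:rerun-if-env-changed=OPTIX_ROOT_DIR");
    println!("cargo:rerun-if-env-changed=OPTIX_ROOT");

    match find_optix_root() {
        Some(root) => {
            println!("cargo:rustc-link-search=native={}", root.join("lib64").display());
        }
        None => panic!("set OPTIX_ROOT_DIR or OPTIX_ROOT to your OptiX SDK path"),
    }
}
```

If the variables are set and it still fails, two things worth checking are whether the shell that set them is the same one running cargo, and whether a previously failed result is cached (`cargo clean` on the offending crate forces the build script to re-run).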
Glad to hear Rust CUDA is making a comeback! I went through the |
Hello, I've been trying to use cust for the past few weeks and I have some *ideas* for how the library could be improved. I think now, if ever, would be a good time to break compatibility to polish the existing API. In particular: some of the flags are useless and none of them implement the Default trait.
Edit: another potential compatibility break is issue #110. I'm looking forward to seeing where this project will go! P.P.S. Here's what I think you should do with the cust library (current version: 0.3.2).
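(To make the Default suggestion above concrete, here is a sketch of the shape being proposed. `StreamFlags` here is a stand-in modeled on cust's bitflags-style flag types, not its exact definition.)

```rust
use bitflags::bitflags;

bitflags! {
    // Stand-in for a cust flag type; the real definitions differ.
    pub struct StreamFlags: u32 {
        const NON_BLOCKING = 0b0001;
    }
}

// With a Default impl, callers who don't care about flags can write
// StreamFlags::default() or use struct-update syntax in config structs,
// instead of having to discover the right "empty" constant.
impl Default for StreamFlags {
    fn default() -> Self {
        StreamFlags::empty()
    }
}

fn main() {
    // Nothing set by default.
    assert!(StreamFlags::default().is_empty());
}
```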
@LegNeato, You should make a tracking issue to discuss what will be included in each release that you plan to do. |
I got the rest of the issues with my environment resolved. The main thing not building right now is the nvvm_codegen. It looks like there was another issue in this repo for resolving that, so I can play around with that and see if I can get it to build. I also agree with ctrl-z: I think if we want to do some redesign or break compatibility, now would be the best time |
Yep, open to breaking whatever; let's get to latest. The plan was to switch off NVVM and onto PTX directly, but after talking with NVIDIA I am not so sure that is the best way forward. |
@Schmiedium you might want to look at rust-gpu's forward porting as it has to deal with similar issues. I plan to take a look later this week as I largely did the other forward port, but if you get time go for it (just comment or start a draft or issue so we don't duplicate) 😁 |
@LegNeato I'd like to share some observations on potential challenges with direct PTX usage and offer concrete ways I can help.

Key Challenges with PTX
|
Great info! The topic came up because it was mentioned by @RDambrosio016 in #98 (comment), and @kjetilkjeka is actively working on / using the nvptx backend in rustc, so it is worth exploring the tradeoffs |
This is being pulled in through the sysinfo crate. |
I would love to contribute. I will first try and get the existing examples working on my setup. |
@jorge-ortega Thanks! I found the package, looks like it was an old version of sysinfo. I'm going to publish a branch for the forward port of the project to try and get all the dependencies updated. And @LegNeato, thanks for the info on the rust-gpu forward port, I'll check that out for how they went about it |
Hey, great to see this crate being rebooted. I am interested in contributing as well. I have some experience using the nvptx backend from Rust, and I think it really could be a viable alternative to the NVVM codegen that Rust-CUDA uses at the moment. My observations so far are:
|
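(For readers who haven't seen the rustc nvptx path: below is a minimal nightly-only kernel sketch, built as a `cdylib` with `cargo +nightly build --target nvptx64-nvidia-cuda`. The function name and the single-threaded body are just for illustration.)

```rust
// Nightly-only: extern "ptx-kernel" is gated behind this feature.
#![feature(abi_ptx)]
#![no_std]

use core::panic::PanicInfo;

// Kernels can't unwind; a no_std target needs an explicit panic handler.
#[panic_handler]
fn panic(_: &PanicInfo) -> ! {
    loop {}
}

// Shows up as a kernel entry point in the emitted PTX.
#[no_mangle]
pub unsafe extern "ptx-kernel" fn saxpy(n: usize, a: f32, x: *const f32, y: *mut f32) {
    // Thread/block index intrinsics are left out to keep the sketch
    // dependency-free; a single thread walks the whole buffer.
    for i in 0..n {
        *y.add(i) += a * *x.add(i);
    }
}
```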
Hello there, I've been developing GPU kernels in Nim, CUDA, and LLVM IR → NVPTX for a while, including an LLVM-based JIT compiler with both NVVM and NVPTX backends (see my Nim hello world with both backends: https://github.com/mratsim/constantine/blob/v0.2.0/tests/gpu/hello_world_nvidia.nim#L107-L152)

The issue with NVVM is that it uses LLVM IR v7.0.1 from December 2018, and the version just after, 7.1.0, was a breaking change. Quoting myself:

There is a way to downgrade LLVM IR, which is what Julia is doing through https://github.com/JuliaGPU/GPUCompiler.jl in the following package https://github.com/JuliaLLVM/llvm-downgrade, but they have to maintain a branch per LLVM release and it seems quite cumbersome. |
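(For readers who haven't touched libNVVM directly, this is roughly the host-side flow being discussed. The extern signatures mirror libNVVM's documented C API in nvvm.h, but verify them against the header shipped with your CUDA version; error handling is reduced to asserts and cleanup via nvvmDestroyProgram is omitted.)

```rust
// Sketch of the libNVVM compile flow; signatures follow nvvm.h but
// double-check them for your CUDA version before relying on this.
use std::ffi::{c_void, CStr};
use std::os::raw::{c_char, c_int};

type NvvmResult = c_int; // NVVM_SUCCESS == 0
type NvvmProgram = *mut c_void;

#[link(name = "nvvm")]
extern "C" {
    fn nvvmCreateProgram(prog: *mut NvvmProgram) -> NvvmResult;
    fn nvvmAddModuleToProgram(
        prog: NvvmProgram,
        buffer: *const c_char,
        size: usize,
        name: *const c_char,
    ) -> NvvmResult;
    fn nvvmCompileProgram(prog: NvvmProgram, n_opts: c_int, opts: *const *const c_char) -> NvvmResult;
    fn nvvmGetCompiledResultSize(prog: NvvmProgram, size: *mut usize) -> NvvmResult;
    fn nvvmGetCompiledResult(prog: NvvmProgram, buffer: *mut c_char) -> NvvmResult;
}

/// Lower one NVVM IR module (the LLVM 7-era dialect discussed above) to PTX.
unsafe fn nvvm_ir_to_ptx(ir: &CStr) -> Vec<u8> {
    let mut prog: NvvmProgram = std::ptr::null_mut();
    assert_eq!(nvvmCreateProgram(&mut prog), 0);
    assert_eq!(
        nvvmAddModuleToProgram(prog, ir.as_ptr(), ir.to_bytes().len(), std::ptr::null()),
        0
    );
    // No options passed: NVVM picks defaults (arch, opt level) for the PTX.
    assert_eq!(nvvmCompileProgram(prog, 0, std::ptr::null()), 0);

    let mut size = 0usize;
    assert_eq!(nvvmGetCompiledResultSize(prog, &mut size), 0);
    let mut ptx = vec![0u8; size];
    assert_eq!(nvvmGetCompiledResult(prog, ptx.as_mut_ptr() as *mut c_char), 0);
    ptx
}
```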
FWIW, the latest CUDA 12.8 introduced a second dialect of NVVM IR based on LLVM 18.1.8 (see NVVM IR docs)
|
Yeah, that's what NVIDIA pointed out to me that made me reassess! It sounds like they are treating NVVM as the stable, suggested layer and PTX as the discouraged hard mode. I'm also not sure if there is more interop or optimization potential with NVVM, and whether it is worth the problems hit previously. We'd certainly get work "for free" from NVIDIA's tools, but it is not clear if we'd be fighting upstream on the Rust or DX side. I know the MS shader compilers are notoriously annoying to work with, for example.

There are also considerations like how the autodiff support in nightly operates at the LLVM layer and might be easier to interface with if we are at the NVVM layer? On the flip side, there is the nvptx rustc backend, and perhaps targeting PTX will let us all better reuse work.

If anyone thinks they have insights or thoughts, please chime in. Lots to figure out! |
I have access to A100 and H100 GPUs so I can help with testing. If I have some free time, I could also try to help with development and porting to newer versions of rustc/libNVVM. |
NVVM has better optimization passes, and the driver is optimized to lower PTX coming from NVVM to binary code. But I think using LLVM will significantly ease development and also deployment. And requiring Blackwell for NVVM 2nd gen is meh.

Ultimately, if someone has a perf bottleneck, I believe they would use inline PTX (or, if crazy enough, go the Nervana way and reverse engineer the GPU SASS: https://github.com/NervanaSystems/maxas/wiki/Introduction). I think NVPTX would go 90% of the way, and if even higher perf is needed, it's for a commercial product and they would dedicate dev time (or buy faster hardware, or distribute compute).

Which autodiff are you talking about? Is it Enzyme? (https://www.nvidia.com/en-us/on-demand/session/gtcspring21-s32466/) |
Hi, I just saw that someone referenced autodiff support! Enzyme supports CPU and GPU parallelism and has a PreserveNVVM pass that I could schedule; if someone starts running some experiments with it, I can probably enable it for nightly: https://github.com/EnzymeAD/Enzyme/blob/main/enzyme/Enzyme/PreserveNVVM.cpp

I'm about to finish upstreaming autodiff (see here), so I hope to get back soon to my offload project goal, which also intends to run pure/safe Rust code on the GPU: rust-lang/rust#131513. It's not vendor-specific, so writing unsafe and NVIDIA-specific kernels is probably going to be faster in some cases, but I hope to offer good-enough performance purely based on LLVM for most cases, such that all vendors and Enzyme can be supported.

That's not too different from the Julia world (since someone already mentioned KA.jl), where KernelAbstractions.jl is the generic frontend which supports all vendors, and then some people decide to instead or additionally write code based on CUDA.jl, for better performance that KA.jl can't provide at the moment. |
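(For anyone curious what the nightly autodiff surface looks like: the sketch below follows the attribute form used during upstreaming, but the syntax has been in flux, so treat it purely as illustration rather than a stable API.)

```rust
#![feature(autodiff)]
use std::autodiff::autodiff;

// Asks Enzyme to generate `d_square`, the reverse-mode derivative of
// `square`. `Duplicated` pairs `x` with a gradient accumulator and
// `Active` marks the return value as the seeded output.
#[autodiff(d_square, Reverse, Duplicated, Active)]
fn square(x: &f64) -> f64 {
    x * x
}

fn main() {
    let x = 3.0;
    let mut dx = 0.0;
    // Generated signature: primal args, their gradient slots, then the
    // seed for the active return value; the primal result is returned.
    let y = d_square(&x, &mut dx, 1.0);
    println!("square({x}) = {y}, d/dx = {dx}"); // 9 and 6
}
```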
Happy to see this project being rebooted! I know this project is called Rust-CUDA. But you also mentioned that Rust-GPU is like the sibling project and you may look into backend-agnostic integration in the future. What about other backends? I personally work for a Chinese corp that is building our own GPGPU ecosystem and I know there are several other new players that are trying to break the CUDA monopoly. I am interested in contributing to the backend-agnostic features and possibly adding new backend support if it's planned. |
I'm very excited to see this project getting rebooted; I would love to contribute if I can. I have a couple of 3070 Tis and some experience writing kernels with CUDA C/C++, PyCUDA, and Julia's CUDA.jl. I'd be happy to do testing, docs or examples, or development if I'm able. I've been using Rust as a hobby for a couple of years; the community has been really great, and I'd like to get involved to the extent that I'm able |
I am excited too. |
So I made some progress: I was able to update the dependencies so that a much newer version of Rust is supported. I played around with CI and got it working, except that nvvm_codegen doesn't build. I also updated gpu_rand to be consistent with the newest rand_core API, and that crate builds successfully now as well. The Windows part of CI also seems to take forever; it was over 40 minutes and still hadn't gotten past installing CUDA, so that's something to dig into later.

I think the next thing to work on is getting nvvm_codegen to build, and the rest of the project as well. Once that's done and the remaining CI kinks are worked out, I think we'll have an updated, functioning project on our hands.

Thanks to @juntyr for reviewing my pull request earlier |
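(As a reference point for the gpu_rand change: under rand_core 0.9, RngCore is down to three required methods. The generator below is a throwaway xorshift, not one of gpu_rand's actual algorithms; it just shows the trait surface that had to be matched.)

```rust
use rand_core::RngCore;

// Throwaway xorshift64 state; the seed must be nonzero or the output
// stream is all zeros, hence the clamp in `new`.
pub struct XorShift64 {
    state: u64,
}

impl XorShift64 {
    pub fn new(seed: u64) -> Self {
        Self { state: seed.max(1) }
    }
}

impl RngCore for XorShift64 {
    fn next_u64(&mut self) -> u64 {
        // Standard xorshift64 step.
        let mut x = self.state;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.state = x;
        x
    }

    fn next_u32(&mut self) -> u32 {
        (self.next_u64() >> 32) as u32
    }

    fn fill_bytes(&mut self, dst: &mut [u8]) {
        // Fill 8 bytes at a time, truncating the final chunk.
        for chunk in dst.chunks_mut(8) {
            let bytes = self.next_u64().to_le_bytes();
            chunk.copy_from_slice(&bytes[..chunk.len()]);
        }
    }
}
```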
Hi all, looking into building a rust-based simulator with gpu support--needless to say, would love to help contribute to this project. |
@LegNeato would love this project to run within the GPUMODE community. Interested? |
@msharmavikram not sure what that means. |
@LegNeato what kinda CI machines do you need? |