Not at this time. I tried a few times, but OpenCL, Microsoft AMP, and CUDA are all contrived pseudo-languages at the mercy of whatever NVidia and AMD decide; it is not worth spending any time on them.
You would think that of the three, OpenCL would be a general language, but the Khronos Group is a corrupt organization controlled by members who have their heads so far up AMD's and Nvidia's asses that you cannot tell where the head ends and the ass begins. They just do whatever AMD tells them to do instead of the other way around. The result is that OpenCL is a copycat of CUDA, but much worse. What good does it do to have a GPU that can do a gazillion float operations if you cannot use it? The hardware has to be designed to support the language, not the other way around.
Intel has the best solution with the multicore Phi accelerator, which on paper may not be as fast as a GPU, but in practice is orders of magnitude faster on average for general applications like physics simulation and sequential general computing; plus it can be programmed in common languages like C++, C, and others.
Unfortunately it has not caught on, because it is expensive and Intel is not doing itself any favors by marketing it to the supercomputer market.
Remember that GPU physics is fake physics: in GPU physics, all you have to do is touch one object and the entire structure collapses. That can make for demos that get cheers from uneducated viewers, but it cannot be used for anything practical.
But like I said, if your needs are for massive GPU simulation, then perhaps you will be better off with one of the other libraries that specialize in that. Nothing wrong with that.
Nah, I'm not doing anything special. I was just looking for a physics engine and I thought GPU acceleration was needed for efficiency. Turns out it's not that important. Alright, excuse my stupidity.
Remember, the hardware is not what makes a library good; what makes a library good are good algorithms and good programming. GPU physics in particular is the worst kind of physics, because the hardware imposes severe and insurmountable barriers on sequential algorithms. The result is that companies like Nvidia and AMD have spent over half a billion dollars over a decade trying to invent brand new laws of physics that can be parallelized on trivial GPU architectures. The only thing they have to show for it are little GDC demos that no one can really use in any practical application.
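To illustrate the sequential barrier being described: CPU constraint solvers commonly use Gauss-Seidel-style sweeps, where each update reads the values just written in the same pass, so the iterations cannot be naively spread across GPU threads. A Jacobi-style solver parallelizes trivially but typically converges slower. This is a minimal toy sketch in Python, not Newton's actual solver; the 3x3 system and iteration counts are made up purely for illustration.

```python
import numpy as np

# Toy diagonally dominant system A x = b, standing in for a constraint solve.
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 4.0, 1.0],
              [1.0, 1.0, 4.0]])
b = np.array([6.0, 6.0, 6.0])  # exact solution is x = (1, 1, 1)

def gauss_seidel(A, b, iters):
    # Each unknown update uses the values written earlier in the SAME sweep:
    # a loop-carried dependency that resists naive GPU parallelization.
    x = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (b[i] - s) / A[i, i]
    return x

def jacobi(A, b, iters):
    # Each update reads only the PREVIOUS iteration's values, so all rows
    # can run in parallel -- but convergence is slower per iteration.
    x = np.zeros_like(b)
    for _ in range(iters):
        x_new = np.empty_like(x)
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]
            x_new[i] = (b[i] - s) / A[i, i]
        x = x_new
    return x

x_true = np.linalg.solve(A, b)
for k in (2, 5, 10):
    gs_err = np.max(np.abs(gauss_seidel(A, b, k) - x_true))
    ja_err = np.max(np.abs(jacobi(A, b, k) - x_true))
    print(f"iters={k}  gauss-seidel err={gs_err:.2e}  jacobi err={ja_err:.2e}")
```

Running this shows Gauss-Seidel closing in on the solution in fewer sweeps than Jacobi; the trade-off between convergence quality and parallelizability is the core of the argument above.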
They haven't even figured out how to make a simple joint stable on a real CPU, yet they try to make the bad physics they have even worse by bringing it to the GPU; the result is faster bad physics. Just browse forums like Unreal's and Unity's to see the dissatisfaction and frustration people go through trying to get anything done with these libraries.
Newton is slower than other libraries, but what I care about is that what is in Newton works with some degree of reliability. If you want to see effects that some players have traditionally misled the public into believing can only be done using a GPU, look at this demo.