programming, and that is only when the algorithm used is not too sequential and has no branches that cannot be predicated.
When the algorithm is even partially to highly sequential, or has many branches, or is object oriented, then even a slow CPU is better than the best GPU.
The myth that GPUs are 10, 100, or even 1000 times faster than CPUs started with the same dishonest people who run these so-called benchmarks between GPU and CPU.
What they do is take an extremely data-parallel, simplistic algorithm, convert it to SOA, and run it at its best on a GPU; then they run the same kernel compiled as C code on the CPU, and of course
the CPU comes up short, so they declare GPU victory.
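For readers unfamiliar with the SOA conversion those benchmarks rely on, here is a minimal sketch of the two layouts (the struct names are just for illustration). The structure-of-arrays form gives every lane a unit-stride, coalesced memory access, which is what makes the GPU numbers look so good:

```cpp
#include <cstddef>
#include <vector>
#include <cassert>

// Array-of-structures: the natural object-oriented layout.
struct ParticleAoS { float x, y, z, mass; };

// Structure-of-arrays: the layout those benchmark kernels are rewritten
// into, so consecutive lanes touch consecutive memory.
struct ParticlesSoA {
    std::vector<float> x, y, z, mass;
};

// The same trivial operation on both layouts: scale every x by a constant.
void scaleAoS(std::vector<ParticleAoS>& p, float s) {
    for (std::size_t i = 0; i < p.size(); ++i) p[i].x *= s;  // strided access
}
void scaleSoA(ParticlesSoA& p, float s) {
    for (std::size_t i = 0; i < p.x.size(); ++i) p.x[i] *= s; // unit stride, trivially vectorizable
}
```

Note that the comparison is rigged in the conversion itself: the GPU gets the hand-tuned SOA kernel, while the CPU gets the naive layout compiled as plain C.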
Believe me, the dishonesty among the people running these tests has no bounds. There is a reason why the articles claiming GPU victory come from Nvidia, AMD, and the self-appointed cheerleader experts from GPGPU, the Khronos Group, and GameDev.net.
They have a vested interest in showing the GPU at its best and the CPU at its worst, because they are in one way or another paid by those companies to show what they want.
The idea is to flood the communication lines with misinformation. They know that 99.99% of people do not read those articles; they simply
use them as a reference, as truth, just because they carry the stamp of the IEEE, the ACM, or some other label.
By the time a paper is demonstrated to be false, it is too late, because it has already had the desired effect, which is to attract the attention of the masses.
The ACM and IEEE are flooded with what I call "sandwich papers". The people who engage in this are immoral and dishonest.
This does not apply only to GPU versus CPU; it also applies to competing software products, and an example of this is this man:
Kenneth Bodin Holmlund
VRlab/HPC2N, Umeå University, Sweden
That man uses his position as a professor at Umeå University to promote his company every year.
The way he does that is by writing a pseudo-evaluation of some technology they consider a strong competitor of the Virtual Reality Lab.
Then he takes some group of graduate students that year and has them write a term paper designed to discredit that technology.
He knows that the paper will make it onto the internet and will be promoted.
In reality the papers are hit jobs that the Virtual Reality Lab uses as a weapon to smear competitors and plant doubt in the mind of the end reader,
all under the pretense of a "research study by graduate students".
It is not the graduate students' fault; by and large they do not know any better, they simply want to graduate, go out, and find a job.
As for how all this relates to Newton: I am more concerned with accuracy and stability than I am with performance.
The trend among physics engine developers is toward algorithms that are parallel by nature. It is not an accident that you see Nvidia and Havok
investing in pseudo cloth, pseudo soft bodies, debris, pseudo hydrodynamics, and particles. Those things are cheap to calculate in parallel with a small amount of predication, and they convey a fairly convincing result for video games.
Newton moves toward more accurate algorithms, and along that line Newton 1.0 was extremely accurate but too slow; Newton 1.0 even had a singular value decomposition pass to remove problematic contacts.
With Newton 2, I am relaxing the accuracy a little and going with an iterative solver, but not to the point that the physics simulation becomes useless.
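For context on what an iterative solver trades away: the standard iterative scheme in rigid-body engines is projected Gauss-Seidel, which sweeps the constraint equations repeatedly instead of solving them exactly in one shot. A minimal sketch of the generic scheme (this is an assumption for illustration, not Newton's actual solver):

```cpp
#include <vector>
#include <cmath>
#include <cassert>

// Minimal projected Gauss-Seidel for A*lambda = b with lambda >= 0
// (the unilateral condition of contact impulses). Each sweep relaxes one
// constraint at a time and clamps the result, so accuracy depends on the
// iteration count rather than being exact like a direct solve.
std::vector<float> solvePGS(const std::vector<std::vector<float>>& A,
                            const std::vector<float>& b, int iterations) {
    const std::size_t n = b.size();
    std::vector<float> lambda(n, 0.0f);
    for (int it = 0; it < iterations; ++it) {
        for (std::size_t i = 0; i < n; ++i) {
            float r = b[i];
            for (std::size_t j = 0; j < n; ++j)
                if (j != i) r -= A[i][j] * lambda[j];   // residual from the others
            float x = r / A[i][i];
            lambda[i] = (x > 0.0f) ? x : 0.0f;          // project onto lambda >= 0
        }
    }
    return lambda;
}
```

Capping the iteration count is exactly the "relaxing the accuracy a little" trade: fewer sweeps run faster but leave residual constraint error.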
With Newton 3, I am improving parallel execution, but not to the point that it becomes so bad that no one can use it. I am also doing the parallel stuff like cloth and soft bodies, and it is for that aspect that I may end up using OpenCL.
But I will never compromise accuracy and stability.