The target here seems to be path-finding algorithms, which is unsurprising. And, of course, each effort would use its company's GPGPU tech: NVIDIA's CUDA and ATI Stream. The article touches on the competing efforts and the need for standards in the space, which I have commented on previously.
Now, call me a pessimist, but as much as I am looking forward to this, I'm not inclined to think it will revolutionize gaming AI. Your average triple-A video game already squeezes everything it can out of your PC hardware; this is not a new thing. It isn't unusual for the owner of the AI component to be told that it can only use a small percentage of the CPU at any given time. I don't think moving that load to the GPU changes the picture meaningfully: the AI component will still be allotted only a small slice of the available horsepower, except now the AI folks have to bargain with the graphics guys for their cut of the machine cycles. And I'm not sure many game studios would be willing to dial down the slick anti-aliased vertex-shaded blah-blah-buzzword graphics of their game simply to give more cycles to the AI. But I could be wrong.
However, this re-opens a recent favorite topic of mine: using video cards to throw large numbers of cores at parallel AI problems. There is certainly potential here, and maybe a game will come along that really exploits it and surprises us all.
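To give a flavor of what "throwing cores at it" might look like, here is a minimal CUDA sketch of grid-based path-cost propagation: each GPU thread owns one cell of a grid and repeatedly relaxes its distance from its four neighbors until a flow field settles, which every agent can then follow downhill toward the goal. To be clear, this is my own illustrative assumption of the technique, not anything from the article; the grid size, the `relax` kernel, and the cost conventions are all made up for the example.

```cuda
// Sketch: iterative wavefront relaxation over a 2D grid (one thread per cell).
// Repeated launches converge to a distance/flow field from the goal cell.
#include <cstdio>
#include <cuda_runtime.h>

constexpr int W = 64, H = 64;   // hypothetical grid dimensions
constexpr int INF = 1 << 20;    // "unreached" marker

__global__ void relax(const int* cost, int* dist, int* changed) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= W || y >= H) return;

    int idx = y * W + x;
    if (cost[idx] < 0) return;  // negative cost marks an impassable wall

    // Pull the cheapest neighbor distance plus the cost to enter this cell.
    // Neighbor reads may be stale (other threads are writing), but the
    // outer loop re-runs until a fixed point, so convergence still holds.
    int best = dist[idx];
    if (x > 0     && dist[idx - 1] + cost[idx] < best) best = dist[idx - 1] + cost[idx];
    if (x < W - 1 && dist[idx + 1] + cost[idx] < best) best = dist[idx + 1] + cost[idx];
    if (y > 0     && dist[idx - W] + cost[idx] < best) best = dist[idx - W] + cost[idx];
    if (y < H - 1 && dist[idx + W] + cost[idx] < best) best = dist[idx + W] + cost[idx];

    if (best < dist[idx]) {
        dist[idx] = best;
        atomicExch(changed, 1); // flag that another relaxation pass is needed
    }
}

int main() {
    int *cost, *dist, *changed;
    cudaMallocManaged(&cost, W * H * sizeof(int));
    cudaMallocManaged(&dist, W * H * sizeof(int));
    cudaMallocManaged(&changed, sizeof(int));

    for (int i = 0; i < W * H; ++i) { cost[i] = 1; dist[i] = INF; }
    dist[0] = 0;  // goal cell at (0, 0)

    dim3 block(16, 16), grid((W + 15) / 16, (H + 15) / 16);
    do {
        *changed = 0;
        relax<<<grid, block>>>(cost, dist, changed);
        cudaDeviceSynchronize();
    } while (*changed);

    printf("distance to far corner: %d\n", dist[W * H - 1]);
    cudaFree(cost); cudaFree(dist); cudaFree(changed);
    return 0;
}
```

Per step this is wasteful next to A* on a CPU, but it maps cleanly onto hundreds of cores, and the resulting flow field amortizes across every agent sharing the same goal, which is exactly the kind of trade that makes GPU path-finding interesting.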
All that said, I'm still looking forward to seeing the technology and to watching how all this plays out.
(Tip of the hat to Kotaku for the link.)