City of Villains To Support PPU
AGEIA today announced that Cryptic Studios will update City of Villains to make use of the PhysX physics accelerator cards that are coming out soon. The PPU support will mainly help with the game's particle effects, with the press release listing a few examples.
-
Someone tell me what to think about this. I'm too uninitiated to research it myself.
Honestly, I don't want to spend an extra $50-$100 on something like this.
-
I think the nVidia GPU-accelerated physics route is way more likely to succeed because it's free; there's just no way I'm plonking the cash down for yet another add-on card without a great reason (and CoH sure isn't it). As others have said, this AGEIA PhysX card still lacks its GLQuake, the killer app the Voodoo got.
-
So as I understand it, GPU accelerated physics provides effects only, i.e. non-gameplay-changing stuff. The game world can affect the physically-modelled effects, but not vice versa.
My impression was that the AGEIA product was an engine that could do gameplay physics. (Although it's not clear that City of Villains is going to be using it for that right off.) Anyone know if I've got the wrong idea here?
Anyway, sure it's too early for us mere mortals to be shelling out money for a physics accelerator card, but in the abstract the idea of an accelerator for _gameplay_ physics interests me more than the idea of better ways to model visual effects.
It's cool that AGEIA is providing a library that can be used for non-accelerated physics modelling, just like OpenGL could be used with or without hardware acceleration. Makes it easier to adopt. However, unless they've got some real tricks up their sleeves, it sure seems like they need to get a market leader like Havok to add backend support for their card, if they want to go places.
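(A rough illustration of that OpenGL comparison: a hypothetical C++ sketch of how an engine could code against one physics interface and use hardware acceleration only when a PPU is actually present. None of these names come from AGEIA's SDK; they're made up for the example.)

```cpp
#include <iostream>
#include <memory>

// Hypothetical physics-backend selection, loosely analogous to the way OpenGL
// apps issue the same calls whether a hardware or software rasterizer sits
// behind them. The interface and class names are illustrative, not AGEIA's API.
struct PhysicsBackend {
    virtual ~PhysicsBackend() = default;
    virtual void step(float dt) = 0;            // advance the simulation by dt seconds
};

struct SoftwareSolver : PhysicsBackend {
    void step(float /*dt*/) override {}         // plain CPU/SIMD integration
};

struct PpuSolver : PhysicsBackend {
    void step(float /*dt*/) override {}         // dispatch the scene to the accelerator card
};

bool ppuPresent() { return false; }             // stub: assume some way to probe for the card

std::unique_ptr<PhysicsBackend> makeBackend() {
    if (ppuPresent()) return std::make_unique<PpuSolver>();
    return std::make_unique<SoftwareSolver>();  // same game code, no card required
}

int main() {
    auto physics = makeBackend();
    physics->step(1.0f / 60.0f);                // the game calls the same interface either way
}
```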
-
-
-
A bit of surfing provides the following quote about Havok FX (the "GPU accelerated physics" solution):
Games will continue to use Havok Physics to provide scalable game-play physics with the typical "twitch" response times required to make physics fun and well-integrated with other core game components on the CPU. But Havok FX will be able to layer on top of that many 1000's of objects (or organic effects based on clouds of objects and particles) that can be affected "downstream" by the game-play physics. There will be some limited feedback from the GPU to the CPU, but this will be lower priority and in general this is what allows the effects to be done extremely quickly on the GPU and in parallel to the game physics.
http://www.firingsquad.com/features/havok_fx_interview/page2.asp
So it sounds like Havok Physics (running on the CPU) handles the gameplay-affecting physics, and Havok FX (the "GPU accelerated physics") adds some physically modelled special effects.
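In frame-loop terms, the split described in that quote might look something like this hypothetical C++ sketch (the structure and names are mine, not Havok's API): gameplay physics resolves on the CPU, its results seed the GPU effects simulation, and nothing gameplay-critical flows back.

```cpp
#include <vector>

// Hypothetical sketch of the Havok Physics / Havok FX layering described above.
// All names are made up for illustration; this is not Havok's actual API.
struct ContactEvent { float pos[3]; float impulse; };

struct GameplayPhysics {                           // CPU: affects game state
    std::vector<ContactEvent> contacts;            // impacts that should spawn debris, etc.
    void step(float /*dt*/) {}                     // rigid bodies, collision response
};

struct EffectsPhysics {                            // GPU: eye candy only
    void seed(const std::vector<ContactEvent>&) {} // upload emitters derived from gameplay
    void step(float /*dt*/) {}                     // thousands of particles, simulated in shaders
    // Feedback to the CPU is limited and low priority, so gameplay never depends on it.
};

void frame(GameplayPhysics& game, EffectsPhysics& fx, float dt) {
    game.step(dt);                                 // gameplay-affecting physics first (CPU)
    fx.seed(game.contacts);                        // "downstream": effects react to gameplay...
    fx.step(dt);                                   // ...but gameplay does not wait on the effects
}
```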
-
Well, what, do you think that the physics processor is doing all the computations and then placing the information... where? It's obviously being stored in main memory, which is accessible by the CPU. It certainly wouldn't be shipped directly to the video card; what is the video card going to do with it?
-
what is the video card going to do with it?
Well, if you're physically modelling special effects, I would assume that the video card is going to render it.
I'm not an expert about any of this, but do you see some other interpretation for the quote I posted above?
Also, to avoid confusion, let me reiterate that this is talking about Havok FX, not about AGEIA.
-
-
-
-
The problem is that the reason graphics cards can be so fast is that they're a fire-and-forget system. They don't return any data to the game at all (and when they do, there are generally performance hits, since the nature of the beast is that it's designed as a one-way communications process).
With a physics card you have to wait for the results to come back so your game logic can handle collisions. Is the benefit of having dedicated circuits greater than the cost of shifting all that data back and forth across the bus, instead of having some SIMD code work in cache memory? For now, possibly, but how long until that nice physics card becomes a bottleneck again as CPU speeds increase?
Your CPU doesn't have to wait for the graphics card to finish, but it would have to for physics processing.
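To put that dependency in code: a hypothetical frame step (stub names, not any real API) showing where the CPU can fire and forget versus where it has to block on the card:

```cpp
// Hypothetical frame step showing why a physics card, unlike the GPU, sits on
// the critical path of game logic. Every function here is an illustrative stub.
struct PhysicsResults { /* contact points, updated transforms, ... */ };

void submitDrawCalls() {}                              // fire and forget: driver queues it, CPU moves on
void dispatchPhysicsToCard(float /*dt*/) {}            // ship scene data across the bus to the PPU
PhysicsResults waitForPhysicsResults() { return {}; }  // CPU blocks until the results come back
void resolveCollisions(const PhysicsResults&) {}       // game logic needs those answers to continue

void frame(float dt) {
    submitDrawCalls();                                 // rendering: no readback required
    dispatchPhysicsToCard(dt);
    PhysicsResults r = waitForPhysicsResults();        // the wait this post is talking about
    resolveCollisions(r);                              // only now can gameplay move on
}
```
-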
Yar. The implied problem there is not in the specific latency numbers for shuffling the data around, but rather that once you go down this road at all, you need to embrace multithreaded programming to efficiently do things other than thumb-twiddling while the physics coprocessor does its thing.
Of course, everyone is on board now with multicore as the wave of the future -- since they can't figure out anything else productive to do with all those darn transistors -- so multithreaded programming is going to become de rigueur eventually. (Although, by that point, will it still be a win to have a specialized physics coprocessor rather than just dumping the same work on one of your general-purpose cores? Hmm.)
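For what it's worth, the "stop thumb-twiddling" version amounts to something like this hypothetical sketch: kick the physics step off asynchronously (whether it lands on a coprocessor or just another core) and overlap it with work that doesn't depend on the results. The names are made up, and std::async simply stands in for whatever threading machinery an engine would actually use.

```cpp
#include <future>

// Hypothetical sketch of overlapping the physics step with other frame work.
// All names are illustrative; std::async is a stand-in for the engine's own
// job system or worker threads.
struct PhysicsResults { /* contacts, transforms, ... */ };

PhysicsResults stepPhysics(float /*dt*/) { return {}; } // PPU dispatch, or a solver on another core
void updateAI() {}                                      // frame work with no physics dependency
void streamAssets() {}
void resolveCollisions(const PhysicsResults&) {}

void frame(float dt) {
    auto physics = std::async(std::launch::async, stepPhysics, dt);
    updateAI();                           // overlap: runs while the physics step is in flight
    streamAssets();
    resolveCollisions(physics.get());     // synchronize only when the results are needed
}
```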
-
-
-
-
-
Research this ;)
http://personal.inet.fi/atk/kjh2348fs/ageia_physx.html
-
What to think about it: if you buy it, it's just another line to add to forum e-penis sigs. You know, like the douches who buy Raptors so their OS loads one second faster, since that's apparently the longest second ever.
Enhanced physics in a game = less thought put into actual game development of a plot/storyline. It's something for review sites to gush over while skimming past the fact that the game totally sucks nuts, but hey, it has the best (of the week's releases) throwable _insert object_, which runs so much better if you have the card in your system.
-