How does the PS4 differ from a high-end gaming PC?
Sony described its upcoming PlayStation 4 as a "supercharged" PC. Powered by familiar x86 architecture manufactured by AMD, PS4 is more like a gaming PC than any previous Sony console. However, while it may use many parts found in high-end gaming PCs, PS4 system architect Mark Cerny argues that PS4 has many unique features that separate it from today's PCs.
"The 'supercharged' part, a lot of that comes from the use of the single unified pool of high-speed memory," Cerny said, pointing to the 8GB of GDDR5 RAM that's fully addressable by both the CPU and GPU. "If [a PC] had 8 gigabytes of memory on it, the CPU or GPU could only share about 1 percent of that memory on any given frame. That's simply a limit imposed by the speed of the PCIe. So, yes, there is substantial benefit to having a unified architecture on PS4, and it's a very straightforward benefit that you get even on your first day of coding with the system."
According to Cerny, PS4 addresses the hiccups that can come from the communication between CPU, GPU, and RAM in a traditional PC. "A typical PC GPU has two buses," Cerny told Gamasutra in a very detailed technical write-up. "There's a bus the GPU uses to access VRAM, and there is a second bus that goes over the PCI Express that the GPU uses to access system memory. But whichever bus is used, the internal caches of the GPU become a significant barrier to CPU/GPU communication--any time the GPU wants to read information the CPU wrote, or the GPU wants to write information so that the CPU can see it, time-consuming flushes of the GPU internal caches are required."
PS4 addresses these concerns by adding another bus to the GPU "that allows it to read directly from system memory or write directly to system memory, bypassing its own L1 and L2 caches." The end result is that it removes synchronization issues between the CPU and GPU. "We can pass almost 20 gigabytes a second down that bus," Cerny said, pointing out that it's "larger than the PCIe on most PCs!"
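Cerny's two bandwidth figures can be sanity-checked with quick arithmetic. This is a back-of-the-envelope sketch: the PCIe 2.0 x16 per-direction rate and the 60 fps frame budget are assumptions on my part, not figures from the article.

```python
# Sanity-checking the quoted bandwidth numbers per frame.
FPS = 60                 # assumed frame budget
POOL_GB = 8.0            # PS4's unified 8 GB pool
ONION_GBPS = 20.0        # the extra cache-bypassing bus Cerny describes
PCIE2_GBPS = 8.0         # PCIe 2.0 x16, one direction (approximate, assumed)

per_frame_pcie_mb = PCIE2_GBPS * 1024 / FPS    # MB movable in one frame
per_frame_onion_mb = ONION_GBPS * 1024 / FPS
share_of_pool = (PCIE2_GBPS / FPS) / POOL_GB   # fraction of 8 GB reachable

print(f"PCIe 2.0: ~{per_frame_pcie_mb:.0f} MB/frame "
      f"({share_of_pool:.1%} of an 8 GB pool)")
print(f"bypass bus: ~{per_frame_onion_mb:.0f} MB/frame")
```

The ~1.7% result is in the same ballpark as Cerny's "about 1 percent" claim, and 20 GB/s does exceed a PCIe 2.0 x16 link's per-direction rate.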
"The original AMD GCN architecture allowed for one source of graphics commands, and two sources of compute commands. For PS4, we've worked with AMD to increase the limit to 64 sources of compute commands," Cerny said. According to Cerny, the reason for the increase is that middleware will have a need to use compute as well. "Middleware requests for work on the GPU will need to be properly blended with game requests, and then finally properly prioritized relative to the graphics on a moment-by-moment basis."
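The 64-source figure is about merging many independent streams of compute work. As a toy illustration of that blending idea (this is not the PS4's actual scheduler, and all names below are invented), a priority queue can merge game and middleware compute requests into one ordered stream:

```python
import heapq
import itertools

class ComputeScheduler:
    """Toy model: merges jobs from many sources into one prioritized stream."""

    def __init__(self):
        self._heap = []
        self._tie = itertools.count()  # keeps FIFO order among equal priorities

    def submit(self, source, priority, job):
        # Lower number = higher priority.
        heapq.heappush(self._heap, (priority, next(self._tie), source, job))

    def drain(self):
        while self._heap:
            _priority, _, source, job = heapq.heappop(self._heap)
            yield source, job

sched = ComputeScheduler()
sched.submit("game",       priority=1, job="physics pass")
sched.submit("middleware", priority=2, job="audio raycasts")
sched.submit("game",       priority=0, job="skinning")       # most urgent
sched.submit("middleware", priority=1, job="particle sim")

order = [job for _, job in sched.drain()]
print(order)
# ['skinning', 'physics pass', 'particle sim', 'audio raycasts']
```

The point mirrors Cerny's description: middleware and game work come from separate sources but end up in a single priority-ordered stream.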
-
Andrew Yoon posted a new article, How does the PS4 differ from a high-end gaming PC?.
For one, a CPU right now won't benefit very much, if at all, from GDDR5. Second, most GPUs barely saturate PCIe 2.0 (bandwidth 16 GB/s, "almost 20 GB/s"), and if you're talking about a current-gen gaming PC it more than likely has PCIe 3.0 (bandwidth 32 GB/s). Most PC GPUs have dedicated RAM, unless you're talking about an APU, and if you're using an APU it's not a real gaming PC, sorry. Sony had better price this right or PC gaming's going to make one hell of a comeback.
-
Except that there is, you can't use the GPU on system memory or the CPU on GPU memory without a whole lot of slow copying over PCI-E and coherency issues.
Integrating everything into one die that has a shared memory subsystem means you can do a lot of very very cool things.
GPUs don't saturate PCI-E because the engineers that design them do everything in their power to use it as little as possible, because PCI-E is, on the scale of a GPU's bandwidth, really slow and high latency.
-
I have a PS3 and a PC. I only bought three games on the PS3 in five years. Journey, Battlefield 1943, and MLB 11: The Show. Two of those purchases I regret.
I'm not really interested in any of the console exclusive titles being released. I would buy The Last Guardian, but who knows if that will ever come out. I'm certainly not going to spend $400 on a PS4 just for that game. I would get Forza if I had an Xbox.
I really don't think I'm missing out on all that much.
-
The second one was worth a rental. I played them all and wouldn't suggest anyone buy them (especially for $60), but there's some good Indiana Jonesy action in there. They're all more entertaining than Indy 4 as far as the story goes, anyway. Just too much terrible shooting, and the climbing and traversal feels like a long QTE that's impossible to fail.
-
There's nothing stupid about wanting to play exclusively on whatever system you prefer. It's personal choice.
I choose to own both a PS3 and a PC, and that works very well for my tastes and interests. I still find that the PC offers way more benefits to me, gaming-wise, than the PS3, but again, personal preference is what it's all about.
-
Your comment here presupposes that they're not going on facts that matter to them, and that their opinion is just a quaint "notion." If a particular factual aspect of a product isn't pleasant or welcome to that particular gamer, then I see nothing illogical or unusual (at all) about them avoiding a particular platform. Sure, they'll likely miss out on some games, but if playing those games came at the cost of dealing with something annoying or even hated about the platform that supposedly great game is being played on, it may simply not feel worth it to that player.
-
We're talking about people who refuse to play any game on a console merely because it's on a console. Not even a specific console but any console. These are not people making rational decisions. Furthermore the idea that dealing with "annoying" things on consoles is so overwhelming that it justifies this attitude is so incredibly laughable that the value of your opinion to me on this subject has plummeted below worthless.
-
You know it's gone bad when it's accepted wisdom at Reddit. One of the top comments in a games thread the other day said that consoles allowing for more power via fixed, directly addressable hardware had been "thoroughly debunked" and you could compare consoles to PC specs 1:1.
I suspect the reason is most of the posters are barely old enough to remember the last console launch.
-
I'm still an amateur at realtime stuff, but AFAIK this is why we still have so much static level geometry despite physics being available - writing updates like kicking a keyboard off a desk just isn't worth the time to write from CPU to GPU memory when you can just say to the GPU "fuckit, use the same data from last frame" and it's hundreds to thousands of times faster.
It's actually crazy how fast you can run into limits trying to update stuff between CPU and GPU; it makes it feel like it's 2003 again.
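The "use the same data from last frame" trick the poster describes is essentially a dirty flag on the GPU-side copy of a buffer: only pay the CPU-to-GPU transfer cost when the data actually changed. A minimal sketch, where the upload counter stands in for a real copy such as a `glBufferSubData` call (all names here are hypothetical):

```python
class MeshBuffer:
    """Toy vertex buffer with a dirty flag guarding simulated GPU uploads."""

    def __init__(self, vertices):
        self.vertices = list(vertices)
        self.dirty = True          # needs an initial upload
        self.uploads = 0           # count of simulated CPU->GPU copies

    def move(self, dx):
        self.vertices = [v + dx for v in self.vertices]
        self.dirty = True          # GPU copy is now stale

    def draw(self):
        if self.dirty:             # only transfer when something changed
            self.uploads += 1      # stand-in for the actual bus traffic
            self.dirty = False
        # ...would issue a draw call against the GPU-resident copy here...

static_wall = MeshBuffer([0.0, 1.0, 2.0])   # never moves after load
keyboard = MeshBuffer([5.0, 6.0])

for frame in range(100):
    if frame == 40:
        keyboard.move(0.5)         # kicked off the desk, once
    static_wall.draw()
    keyboard.draw()

print(static_wall.uploads, keyboard.uploads)
```

Over 100 frames the static geometry pays for one upload and the kicked prop for two, which is why static scenery is so much cheaper than constantly-simulated geometry when CPU-to-GPU bandwidth is the bottleneck.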
-
Addendum: and being able to have the CPU or GPU work on it while it's in the cache without thrashing it around is a massive bonus as well.
I'm all for absurdly powerful PCs, TheBlackThunder. I'm sat at a 4.6GHz i7 with 16GB of RAM and (when it's back from an RMA) a Radeon 7970... But that's a brute-force approach, and there are still things that the PS4 platform is going to be able to do faster because it's just a more efficient layout. The PC will pull ahead in some places (and some of them will be important places, make no mistake), but well-optimised game vs. well-optimised game, it's going to run a lot better on the PS4 than on a PC that on paper has an equivalent GPU and, well, more CPU grunt, because it's just going to be MUCH more efficient.
And that's before getting to things that CAN'T be done in a realtime timeframe on the PC at all, because you'd have to shunt data to/from system/video memory over PCI-Express... It's bad enough when there's surplus texture data in a game (off the top of my head, the easiest way to see this without having underperforming hardware is Skyrim with HD texture mods) that only needs to be transferred one way. Stuttering is not a sign of an efficient piece of hardware.
-
I call FUD on what Cerny said about the PC only using 1% of the available memory per frame. He seems to conveniently forget that the GPU itself has a large amount of GDDR5 memory and has been that way for years. The GDDR5 amount on the GPU may not be 8GB, but it doesn't need to be. The CPU has enough memory to do the tasks it needs to do as well. Also, the data is loaded to the GPU in batches, not necessarily on every frame.
-
From reading the article, I got the impression that Cerny was talking about taking out the batching/context switching normally required for CPU to GPU communication, essentially allowing large memory transfers within any frame by removing the need for caching on both the GPU and CPU sides.
It's my understanding (which I admit may be completely wrong) that any given frame, which essentially corresponds to a function call and may be nested, is really only dealing with a very small bit of data at a given time. Since writes to memory essentially require that a chunk of memory is locked, and the locking context switching is expensive, CPUs and GPUs alike resort to caching, rather than trying to write out to memory at the exit of every frame. This is compounded by the fact that system memory and graphics memory are physically separate in general-purpose computers, so on top of locking and caching, communication is required between processors to transfer any data between them.
Cerny seems to be stating that the shared GPU/CPU memory and a dedicated non-locking memory access bus between the processors removes the overhead of locking and buffering, which makes sense to me. Basically, the entire bandwidth of the memory pipeline can be utilized constantly for certain operations, and no extra synchronizing needs to be done between processors.
I think the real benefit of the massive amount of memory is that there is much more room for creating these unbuffered chunks of the memory pipeline. If the operating system can set up a given 2GB chunk of memory only for CPU writes and GPU reads, and vice versa, that's an incredible amount of data that can simply be pushed through at full speed. Or multiple cores can each have their own write chunks, optimized however to take full advantage of the bandwidth. Any core may not need that much memory to do its job, but when you're talking about allowing the processors to write out as fast as physically possible, the worst thing you can do is have your speeds bound by how much RAM is available.
I don't know if what I just wrote makes any sense.
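The "one side only writes, the other only reads" arrangement described above resembles a single-producer/single-consumer ring buffer, a structure real drivers commonly use for command submission. Here is a toy sketch under that assumption (pure Python, so it only models the access pattern, not actual lock-free hardware behavior):

```python
class SPSCRing:
    """Single-producer/single-consumer ring: no lock needed because each
    index is written by exactly one side."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0   # advanced only by the consumer ("GPU")
        self.tail = 0   # advanced only by the producer ("CPU")

    def push(self, item):
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:
            return False            # full; producer must back off
        self.buf[self.tail] = item
        self.tail = nxt             # publish only after the write lands
        return True

    def pop(self):
        if self.head == self.tail:
            return None             # empty
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item

ring = SPSCRing(4)
for cmd in ["clear", "draw", "present"]:
    ring.push(cmd)

drained = []
while (c := ring.pop()) is not None:
    drained.append(c)
print(drained)
# ['clear', 'draw', 'present']
```

Because each index has a single writer, the two sides never need to lock against each other, which is the property the post is getting at with its dedicated read/write chunks.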
*SNIP* any given frame, which essentially corresponds to a function call *SNIP*
Actually, modern game engines that use an object-oriented programming approach will usually have a FrameUpdate() function/method attached to each object. That function in turn is called every single frame as long as that object is currently loaded. (Objects can be anything: literal objects, cameras, AI routines, etc.)
Of course it gets more complicated, as you generally do not want code running every single frame on many different types of objects. (For example, anything that would modify what shaders are applied to an object would be pointless to call/update unless it were actually viewable by a player.) In that case a similar function like an OnViewFrameUpdate() method is used instead, so that it's only called if a raycast from the camera is able to hit/view the object, depending on the type of camera.
AI, however, you generally want in the standard FrameUpdate(), as you wouldn't want the AI to stop and wait for the player to look at them. (In simpler games you could probably get away with that.)
The trick is to cut down on as much code as possible being called every frame; however, generally all game objects, and all the sub-objects that make up or are part of other objects, have co-routines to handle things on every frame.
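The per-object update pattern described above can be sketched in miniature. FrameUpdate and OnViewFrameUpdate follow the poster's naming; the visibility flag is a stand-in for a real camera raycast:

```python
class GameObject:
    """Toy engine object with an every-frame hook and a visibility-gated hook."""

    def __init__(self, name, visible=False):
        self.name = name
        self.visible = visible
        self.updates = 0
        self.view_updates = 0

    def FrameUpdate(self):
        # Runs every frame the object is loaded (AI, physics, timers...).
        self.updates += 1

    def OnViewFrameUpdate(self):
        # Runs only when the object passes the visibility test (shader work...).
        self.view_updates += 1

def run_frames(objects, frames):
    for _ in range(frames):
        for obj in objects:
            obj.FrameUpdate()
            if obj.visible:            # stand-in for a camera raycast
                obj.OnViewFrameUpdate()

enemy = GameObject("enemy_ai", visible=False)   # AI keeps running unseen
crate = GameObject("crate", visible=True)
run_frames([enemy, crate], frames=10)
print(enemy.updates, enemy.view_updates, crate.view_updates)
```

After ten frames the off-screen AI has still ticked ten times but never paid the view-dependent cost, which is exactly the split between the two hooks the comment describes.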
Perhaps I misunderstood the context of "frame." I've been talking about stack frames (see here, https://en.wikipedia.org/wiki/Stack_frame#Structure), basic units of execution for pretty much any program, game or otherwise. If I misunderstood the meaning of "frame," then pretty much everything I wrote doesn't apply.
That being said, from a general programming standpoint, the PS4 looks pretty hot.
-
The x86 architecture may actually give you your wish, though.
Since the impedance mismatch between PS4 and PC will be much lower, it may be a no-brainer for developers to just throw in PC peripheral support during development and just leave it there. Theoretically, the same interface will be available for both platforms (taken advantage of by game engines, rather than individual games themselves), so they may actually be making more work for themselves by removing support rather than keeping it in place for all versions.