Visual Computing Still Decades from Computational Apex

Sabre
DCAWD Founding Member
Posts: 21432
Joined: Wed Aug 11, 2004 8:00 pm
Location: Springfield, VA

Visual Computing Still Decades from Computational Apex

Post by Sabre »

PCPer
The human eye has been studied quite extensively, and the amount of information we know about it would likely surprise you. With 120 million monochrome (rod) receptors and 5 million color (cone) receptors, the eye and brain are able to do what even our most advanced cameras cannot.
With an effective resolution of about 30 megapixels, the human eye gathers information at roughly 72 frames per second, which explains why many gamers debate whether frame rates higher than 70 matter in games at all. One area Sweeney did not touch on that I feel is worth mentioning is the brain's ability to recognize patterns, or more precisely, changes in them. When you hear the terms "stuttering" or "microstutter" on forums, this is what gamers are perceiving. A game running at a consistent 80 FPS can feel smooth, yet if the frame rate suddenly swings between 90 FPS and 80 FPS, a gamer may "feel" that variation even though it doesn't show up in traditional average frame rate measurements.
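As a rough illustration of that last point (my own, not from the article): the usual way to see microstutter numerically is to look at per-frame times rather than the average FPS. A toy sketch with made-up numbers and a hypothetical spike threshold:

```python
# Toy illustration: a healthy average FPS can hide frame-time spikes ("microstutter").
# All numbers and the 1.5x spike threshold are invented for this example.

def frame_times_ms(fps_samples):
    """Convert instantaneous FPS samples to per-frame times in milliseconds."""
    return [1000.0 / fps for fps in fps_samples]

def stutter_frames(times_ms, spike_ratio=1.5):
    """Flag frames that take much longer than the running average so far."""
    flagged = []
    for i in range(1, len(times_ms)):
        avg_so_far = sum(times_ms[:i]) / i
        if times_ms[i] > spike_ratio * avg_so_far:
            flagged.append(i)
    return flagged

steady = [80] * 60              # constant 80 FPS
jittery = [90, 90, 45] * 20     # averages 75 FPS but swings wildly

print(stutter_frames(frame_times_ms(steady)))   # [] - nothing to "feel"
print(stutter_frames(frame_times_ms(jittery)))  # spikes show up despite the decent average
```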

In terms of raw resolution, Sweeney posits that the maximum resolution required for the human eye to reach its apex in visual fidelity is 2560x1600 with a 30 degree field of view, or 8000x4000 with a 90 degree FOV. That 2560x1600 resolution is what we see today on modern 30-inch LCD panels, but 8000x4000 is about 16x the pixel count of a current 1080p HDTV.
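A quick back-of-the-envelope check of that 16x figure (my own arithmetic, using the resolutions quoted above):

```python
# Pixel counts for the resolutions mentioned above, compared to a 1080p HDTV.
panels = {
    "30-inch panel (2560x1600)": 2560 * 1600,
    "90-degree FOV target (8000x4000)": 8000 * 4000,
    "1080p HDTV (1920x1080)": 1920 * 1080,
}

hdtv = panels["1080p HDTV (1920x1080)"]
for name, pixels in panels.items():
    print(f"{name}: {pixels:,} pixels ({pixels / hdtv:.1f}x a 1080p HDTV)")
# 8000x4000 is 32,000,000 pixels, roughly 15.4x a 1080p set - call it 16x.
```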
Doom was the first game to really attempt to simulate reality, and it used the most basic first-order approximation to generate its graphics: a single bounce of light from the texture or sprite to the 3D camera.

Sweeney's own Unreal was an early adopter of the second-order approximation, allowing light rays to bounce off two surfaces before converging on the virtual camera. According to Sweeney, 99% of today's games on consoles and PC still use engines based on this type of simulation.

Third-order approximations are much more complex but allow light to bounce across many objects, and as a result you see more realistic reflections, skin coloring and highlights. The Samaritan demo Epic Games showed last year is the company's investment in this type of computing, and it requires several of today's fastest GPUs to render with even minimal interaction and lower-than-desired frame rates.
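For a concrete (and deliberately toy) picture of what "order of approximation" means here, think of it as how many surfaces a simulated light path is allowed to touch on its way to the camera. This sketch is my own simplification, not how Doom, Unreal or the Samaritan demo are actually implemented:

```python
# Toy model: a light path visits a chain of surfaces before reaching the camera.
# Each surface contributes its own directly lit color and dims whatever lies
# beyond it (reflectance < 1). The "order" is how many surfaces we account for.

def perceived_brightness(surfaces, order):
    """surfaces: list of (direct_light, reflectance) pairs, nearest-to-camera first."""
    brightness = 0.0
    attenuation = 1.0
    for direct_light, reflectance in surfaces[:order]:
        brightness += attenuation * direct_light  # what this surface sends to the camera
        attenuation *= reflectance                # how visible the next surface still is
    return brightness

# Invented scene: a shiny floor reflecting a character, who reflects a bright window.
scene = [(0.2, 0.8), (0.4, 0.5), (0.6, 0.0)]

for order in (1, 2, 3):
    print(f"{order}-order approximation: {perceived_brightness(scene, order):.2f}")
# Each extra order adds one more bounce's worth of light - that is the extra realism
# (reflections of reflections, skin highlights) and the extra FLOPS being discussed.
```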
As an interesting demonstration, Sweeney gives the approximate computing power these titles required. The original Doom needed 10 MFLOPS (millions of floating point operations per second) to run at 320x200 in 1993, while Unreal required 1 GFLOPS (billions) to run at 1024x768 in 1998; both at 30 Hz.
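Those figures work out to a surprisingly small per-pixel budget; here is the arithmetic (my own, using the numbers quoted above):

```python
# Floating point operations available per pixel per frame for the titles above.
titles = [
    ("Doom (1993)",   10e6, 320,  200, 30),   # 10 MFLOPS at 320x200, 30 Hz
    ("Unreal (1998)", 1e9,  1024, 768, 30),   # 1 GFLOPS at 1024x768, 30 Hz
]
for name, flops, width, height, hz in titles:
    per_pixel = flops / (width * height * hz)
    print(f"{name}: ~{per_pixel:.0f} ops per pixel per frame")
# Doom:   ~5 ops per pixel per frame
# Unreal: ~42 ops per pixel per frame
```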

The Samaritan demo that Epic unveiled in 2011 was running at 1920x1080, still at only 30 frames per second, with more than 40,000 operations per pixel. The total required GPU computing power was 2.5 TFLOPS, and it was running on three GTX 580 cards if my memory serves. Of note, the latest Radeon HD 7970 GPU is capable of a theoretical 3.79 TFLOPS, though how much of that an engine like Samaritan's could actually utilize has yet to be determined.

Also interesting for those "consoles are good enough" users: take a look at the compute power of the Xbox 360 - only 250 GFLOPS, a tenth of what is required to run Samaritan.
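The same per-pixel arithmetic lines up with the 2.5 TFLOPS figure and puts the GPU comparisons in context (again, my own back-of-the-envelope math):

```python
# Samaritan: 40,000 ops per pixel at 1920x1080, 30 Hz.
samaritan_flops = 40_000 * 1920 * 1080 * 30
print(f"Samaritan requirement: {samaritan_flops / 1e12:.2f} TFLOPS")   # ~2.49 TFLOPS

hd7970_tflops = 3.79      # theoretical peak quoted above
xbox360_gflops = 250
print(f"HD 7970: {hd7970_tflops / (samaritan_flops / 1e12):.1f}x the demo's requirement")  # ~1.5x
print(f"Xbox 360: {samaritan_flops / (xbox360_gflops * 1e9):.0f}x short of it")            # ~10x
```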

And while the visual quality we are seeing in games like Battlefield 3 and the Samaritan demo is impressive, Sweeney was quick to point out that to truly reach movie-quality lighting we will need to progress beyond the third-order approximations that are the limit today. It is likely we will need another order of magnitude increase in computing power to reach that fourth level - PetaFLOPS are in our future.

Because we know (well, someone knows) and completely understand how lighting, shadows, skin, smoke and other complex visual phenomena work in the scientific sense, they can be approximated accurately. And with enough orders of approximation, as we have seen in the slides above, we can get very close to perfection. Sweeney estimates that we will need around 5,000 TFLOPS of performance, or 5 PFLOPS, to reach that goal. That is roughly 2,000x what the Samaritan demo requires and well over 1,000x today's best GPU hardware, which leaves a lot of room for development from NVIDIA, AMD and even Intel before we reach it.
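For the record, here is how those multipliers fall out of the numbers quoted earlier (my arithmetic, not the article's):

```python
# Distance from today's hardware to Sweeney's 5 PFLOPS estimate.
target_tflops = 5000        # 5 PFLOPS
samaritan_tflops = 2.5      # the Samaritan demo's requirement
hd7970_tflops = 3.79        # fastest single GPU mentioned above

print(f"{target_tflops / samaritan_tflops:.0f}x the Samaritan demo")   # 2000x
print(f"{target_tflops / hd7970_tflops:.0f}x a Radeon HD 7970")        # ~1300x
```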

Even Sweeney wouldn't put a time frame on hitting that 5,000 TFLOPS mark, but depending on the advancement of both process technology and the drive of designers at these technology firms, it could be as soon as 5 years or as long as 10. If we go over that, I feel we will have wasted the potential of the hardware available to us.
Pretty good article, I wish it was longer though!
Sabre (Julian)
92.5% Stock 04 STI
Good choice putting $4,000 rims on your 1990 Honda Civic. That's like Betty White going out and getting her tits done.
Raven
Mr. Underpowered or something
Posts: 1221
Joined: Thu Feb 18, 2010 12:46 pm
Location: Manasty

Re: Visual Computing Still Decades from Computational Apex

Post by Raven »

I've definitely seen that stuttering effect they're talking about, and it drives me nuts. Despite having a pretty powerful computer, I run BF3 on low settings to avoid it.
All my cars have drum brakes and are sub 200 hp, what am I doing with my life?
2013 Mazda 2
1994 Chevy S10 pickup
1985 Chevy Caprice (no fuel system)