  • Nvidia OptiX Ray Tracing engine

    Nvidia OptiX Ray Tracing engine

    Let’s start with a quote from Nvidia’s senior developer Jeff Brown:
    “Thousands of applications are being created today that harness the phenomenal power of GPUs, a clear sign that GPU computing has reached a tipping point. The world of computing is shifting from host-bound processing on CPUs to balanced co-processing on GPUs and CPUs. NVIDIA application acceleration engines arm developers with the tools they need to further revolutionize both real-time graphics and advanced data analysis.”

    Nvidia OptiX Ray Tracing engine in action

    In short, this new engine invites you into the new world of ray tracing technology. Ray tracing is a state-of-the-art technique for creating an image by tracing the paths of light rays through the pixels of an image plane. The NVIDIA OptiX Ray Tracing engine is designed and developed around this technique, which can produce a very high degree of photorealism. Of course, this realism comes at a higher computational cost. Ray tracing is therefore ideal for applications where time is not a critical factor, like still images and film, and less suited to applications where time is critical, like computer games. The most important advantages of ray tracing come in optical effects: it can simulate all kinds of them, like scattering, reflection, refraction and chromatic aberration.
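
    To make the technique concrete, here is a minimal, self-contained sketch of the classic recursive ray tracing loop (plain C++, one hard-coded sphere, mirror reflection only; this is purely illustrative and is not the OptiX API): one ray is fired through every pixel of the image plane, and reflections are handled by tracing bounced rays recursively.

        #include <cmath>
        #include <cstdio>

        struct Vec3 {
            double x, y, z;
            Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
            Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
            Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
        };
        static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
        static Vec3 normalize(const Vec3& v) { return v * (1.0 / std::sqrt(dot(v, v))); }

        struct Ray { Vec3 origin, dir; };

        // One hard-coded reflective sphere stands in for a full scene.
        static const Vec3 kCenter{0, 0, -3};
        static const double kRadius = 1.0;

        // Ray/sphere intersection: solve |o + t*d - c|^2 = r^2 for the nearest t > 0.
        static bool hitSphere(const Ray& ray, double& t, Vec3& normal) {
            Vec3 oc = ray.origin - kCenter;
            double b = 2.0 * dot(oc, ray.dir);
            double c = dot(oc, oc) - kRadius * kRadius;
            double disc = b * b - 4.0 * c;          // dir is unit length, so a == 1
            if (disc < 0) return false;
            t = (-b - std::sqrt(disc)) / 2.0;
            if (t <= 1e-4) return false;
            normal = normalize(ray.origin + ray.dir * t - kCenter);
            return true;
        }

        // The heart of ray tracing: shade the hit point and recurse for reflections.
        static Vec3 trace(const Ray& ray, int depth) {
            double t; Vec3 n;
            if (depth == 0 || !hitSphere(ray, t, n))
                return {0.2, 0.3, 0.5};                      // sky color
            Vec3 p = ray.origin + ray.dir * t;
            Vec3 local{0.8, 0.1, 0.1};                       // sphere's own color
            // Mirror reflection: r = d - 2(d.n)n, then trace the bounced ray.
            Vec3 r = ray.dir - n * (2.0 * dot(ray.dir, n));
            Vec3 bounce = trace(Ray{p + r * 1e-4, r}, depth - 1);
            return local * 0.6 + bounce * 0.4;               // mix local + reflected
        }

        int main() {
            const int W = 64, H = 48;
            std::printf("P3\n%d %d\n255\n", W, H);           // tiny PPM image
            for (int y = 0; y < H; ++y)
                for (int x = 0; x < W; ++x) {
                    // Fire one ray through each pixel of the image plane.
                    Vec3 dir = normalize({(x - W / 2.0) / H, -(y - H / 2.0) / H, -1.0});
                    Vec3 c = trace(Ray{{0, 0, 0}, dir}, 4);
                    std::printf("%d %d %d\n", int(c.x * 255), int(c.y * 255), int(c.z * 255));
                }
        }

    Real engines add shadow rays, refraction and acceleration structures on top of exactly this recursion; the engine's job is to run millions of such rays in parallel on the GPU.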

    NVIDIA glasses render - OptiX ray tracing engine example

    The NVIDIA OptiX engine, a programmable ray tracing pipeline, lets software developers bring high levels of realism to their applications through straightforward programming in the traditional C language. The OptiX engine makes ray tracing faster by tapping the amazing potential of NVIDIA Quadro parallel processors. The NVIDIA OptiX Ray Tracing engine is used across a wide range of disciplines: optics simulation, photorealistic rendering, radiation research, automotive styling, acoustical design and volume calculations.

  • NVIDIA GeForce 3D Vision

    As everybody knows, people have been fascinated by illusions for ages. Like everyone else, we like to be surprised, amazed and impressed.

    NVIDIA GeForce 3D Vision

    A fantastic way to fool your brain: a well-known old technology in a totally new design. Do you remember VFX helmets, which could display games in stereo 3D? They were amazing, but the technology was pretty prehistoric, running games like Doom 2 or Quake 1. :) And now I have the great pleasure of introducing a fantastic new product with the nVidia logo - brand-new 3D glasses. nVidia has teamed up with Samsung and created a 3D kit pairing a high-quality 120 Hz LCD with all-new stereo shutter glasses technology. The 120 Hz figure is very important: the display alternates left-eye and right-eye frames, so each eye still gets a full 60 Hz and your eyes won't tire. NVIDIA has also polished its driver support to the point where it is really good, and alongside this it redesigned its whole approach to the gaming experience. These remarkable 3D shutter glasses are wireless and rechargeable, games running on the new drivers automatically kick into 3D mode, and on top of that NVIDIA wanted to find a really cool game to show it all off.
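
    The trick is easy to see in a sketch. Below is an illustrative loop (plain C++; the names are hypothetical and this is not the 3D Vision driver API) showing frame-sequential stereo: the renderer alternates left-eye and right-eye viewpoints every vsync, and the shutter glasses black out the opposite eye in sync.

        #include <cstdio>

        const double kRefreshHz = 120.0;   // panel refresh rate
        const double kEyeSep    = 0.065;   // interpupillary distance in metres

        struct Camera { double x; };       // just the horizontal eye position

        void renderScene(const Camera& cam, int frame) {
            // A real renderer would draw here; we just log what would happen.
            std::printf("frame %3d: eye at x=%+0.4f m\n", frame, cam.x);
        }

        int main() {
            std::printf("each eye sees %.0f Hz\n", kRefreshHz / 2.0);  // 60 Hz per eye
            for (int frame = 0; frame < 6; ++frame) {
                bool leftEye = (frame % 2 == 0);      // alternate eyes every vsync
                Camera cam{leftEye ? -kEyeSep / 2 : +kEyeSep / 2};
                renderScene(cam, frame);              // draw from that eye's viewpoint
                // present(); the glasses shutter the other eye in sync with vsync
            }
        }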

    GeForce 3D Vision

    It is now 2009; a lot has changed in the stereoscopic market segment and, sure, a lot hasn't. With the new GeForce 3D Vision kits slowly becoming available here in Europe, it seemed a good time to take a look at them. We asked NVIDIA to send the complete package: a 120 Hz Samsung LCD monitor together with the shutter glasses kit. I have nothing more to say - they're amazing!

  • Microsoft DirectX 10

    Although DirectX 10 was released quite a while ago, there are still very few games today that really take full advantage of its features. Hugely popular first-person shooter titles and role-playing games all barely push the sheer graphical power of the newest version. DirectX 10, introduced with Windows Vista, still supports the same 3D effects as DirectX 9, guaranteeing compatibility across the board. This may sound very good, allowing for a more standardised games-development environment, but the reality is not so colorful, because Microsoft will now be the one regulating the introduction of new 3D features, leaving companies such as NVIDIA to follow its lead. This application programming interface (API) was officially named “DirectX 10.”

    DirectX 10

    DirectX 10 was available only to Windows Vista users at the time of its introduction, and unfortunately you will not find DirectX 10 being released for the Windows XP operating system. DirectX 10 is deeply tied into the Windows Vista OS, and we currently know of no plans by Microsoft to let Windows XP officially support the new DX10. In general, DX10 offers a much more generic graphics-processing model with a lot of flexibility and reliability. This will be very crucial going forward, but for now developers still need to put some limits on shader length and complexity based on the performance of existing hardware. Another feature, related to the Geometry Shader, is the Stream Out functionality, which allows the GPU to recycle geometry data without computing on the CPU. Not only is this a performance win, it also makes things like particle systems completely independent of the CPU (a sketch of the idea follows after the performance graph). Take a look at DirectX 10 performance tested on two nVidia cards:

    DirectX 10 performance
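
    To make the Stream Out point concrete, here is a hedged sketch (plain C++ with illustrative names, not the Direct3D API) of the per-frame particle update a game traditionally runs on the CPU and re-uploads every frame. With DX10 Stream Out, a geometry shader can perform this same update on the GPU and write the results into a buffer that is simply rebound as the next frame's input, so the data never touches the CPU.

        #include <algorithm>
        #include <vector>

        struct Particle {
            float px, py, pz;   // position
            float vx, vy, vz;   // velocity
            float life;         // seconds remaining
        };

        // The classic pre-DX10 path: simulate on the CPU, then copy the whole
        // buffer to the GPU every frame just to draw it.
        void updateParticles(std::vector<Particle>& ps, float dt) {
            const float gravity = -9.81f;
            for (Particle& p : ps) {
                p.vy += gravity * dt;   // integrate velocity
                p.px += p.vx * dt;      // integrate position
                p.py += p.vy * dt;
                p.pz += p.vz * dt;
                p.life -= dt;
            }
            // Drop dead particles. On the GPU, a stream-out shader just emits
            // nothing for them, shrinking the output buffer for free.
            ps.erase(std::remove_if(ps.begin(), ps.end(),
                                    [](const Particle& p) { return p.life <= 0.0f; }),
                     ps.end());
        }

        int main() {
            std::vector<Particle> ps{{0, 0, 0, 1, 5, 0, 2.0f}};
            for (int i = 0; i < 180; ++i)        // three seconds at 60 Hz
                updateParticles(ps, 1.0f / 60.0f);
            return ps.empty() ? 0 : 1;           // the particle expires after 2 s
        }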

    If you want DirectX 10 on your computer you will have to go with Windows Vista as your OS; there is no possibility of launching DX10 on MS Windows XP systems. Because of this, we will see an expensive upgrade path associated with the DirectX 10 experience. You will need Windows Vista, DirectX 10 hardware and of course some DirectX 10 coded games, which is frankly ridiculous; that's why plenty of gamers are turning to the PS3 or Xbox. The question gamers all over the world are asking is: “Will this very expensive upgrade improve my gaming experience enough to justify the cost?” That has yet to be seen and can only be answered by the games we have yet to play. We can, however, talk about some of the capabilities of DirectX 10 independently of any particular hardware architecture and ask how they can potentially benefit gamers. In upcoming posts we'll be reviewing later versions of DirectX.

  • What is nVidia PhysX?

    nVidia PhysX logo

    What is nVidia PhysX?

    Generally speaking, PhysX is a real-time middleware physics engine SDK. The name also refers to the PPU (physics processing unit) card, which can accelerate processing in applications where the PhysX feature is enabled. It is used mostly to enhance the environments of computer games; PhysX was designed strictly to improve their performance and realism. When PhysX was first released, it was used by graphics engineers and designers to produce professional physics simulations and to build the 3D environments used in games and movies, but the hardware was simply too expensive for regular gamers. Applications supporting hardware acceleration by PhysX can be accelerated by a PhysX PPU or by a CUDA-enabled GeForce GPU; CUDA is nVidia's general-purpose GPU computing architecture, onto which the PhysX engine has been ported. When a graphics application such as a computer game offloads its physics calculations from the CPU, the CPU is free to perform other tasks instead, potentially resulting in smoother and faster processing overall. Middleware physics engines give game designers another benefit: they can avoid writing their own code to handle the complex physics interactions possible in modern games, because PhysX comes with ready-to-use physics algorithms (a sketch of the kind of computation involved follows below).
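
    To give a feel for what such a middleware engine computes each frame, here is a minimal, self-contained sketch (plain C++, not the PhysX API; all names are illustrative) of a single simulation step: semi-implicit Euler gravity integration plus a bounce against a ground plane with restitution.

        #include <cstdio>

        // A drastically simplified "rigid body": just a point mass.
        struct Body {
            double y;            // height above the ground plane (m)
            double vy;           // vertical velocity (m/s)
            double restitution;  // 0 = dead stop, 1 = perfectly elastic bounce
        };

        // One fixed simulation step, the kind of routine a physics middleware
        // runs for thousands of bodies per frame so the game code doesn't have to.
        void step(Body& b, double dt) {
            const double g = -9.81;            // gravity (m/s^2)
            b.vy += g * dt;                    // semi-implicit Euler: velocity first
            b.y  += b.vy * dt;                 // ...then position
            if (b.y < 0.0) {                   // penetrated the ground plane?
                b.y = 0.0;                     // project back onto the surface
                b.vy = -b.vy * b.restitution;  // reflect velocity, losing energy
            }
        }

        int main() {
            Body ball{5.0, 0.0, 0.6};          // dropped from 5 m, fairly bouncy
            for (int frame = 0; frame < 300; ++frame) {   // 5 s at 60 Hz
                step(ball, 1.0 / 60.0);
                if (frame % 30 == 0)
                    std::printf("t=%4.2fs  y=%5.2fm\n", frame / 60.0, ball.y);
            }
        }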

    A bit of PhysX history

    PhysX was originally developed by Ageia as the NovodeX SDK. Ageia was a company that positioned itself in the 3D graphics market with the fantastic idea of bringing physics computing into computer games. Ageia's engineers knew that physics calculations allow for a more extreme and realistic visual experience. Ageia's way of thinking was really interesting but, like many pioneering ideas, it had plenty of disadvantages: unfortunately, the cards were put on the market at far too high a price and received far too little industry support.

    Ageia PhysX logo

    Ageia's financial results were way below average and the company was nearly bankrupt. All of management's eyes and hopes turned to nVidia, which was interested in Ageia's processing technology. And it happened: in February 2008, Nvidia bought Ageia for 30 million dollars and hired its leading staff to take over Ageia's PhysX API. After that, the PhysX engine began its transformation onto nVidia's CUDA technology. In August 2008, Nvidia released software that allows GeForce 8 series and newer cards to run PhysX processing.

    PhysX features and performance:

    PhysX processing is widely used to deliver physical environments inside games. Its main features allow very spectacular real-time effects and highly detailed environments: cloth, liquid drops and hair. PhysX also improves dust and flying debris during in-game explosions. On top of that, it can move objects through very dense smoke and fog without a loss of performance. With PhysX on, game characters get more complex geometry for better movement and interaction. In general it increases the performance of graphics applications by moving some tasks to the PhysX hardware. Nowadays any CUDA-ready GeForce graphics card, GeForce 8 series and newer, can take advantage of PhysX without the need to install a dedicated PhysX card. Take a look at the graph showing PhysX performance rates at various resolutions; it compares platforms with and without PhysX processing:

    nVidia PhysX graph

    As you can see, PhysX roughly doubles the performance, so the technology nVidia invested in was a sure shot. NVIDIA now claims that the fact GPU solutions are cheaper will push better GPUs into more machines, making more PCs effectively available for gaming. When PhysX is disabled in software but the effects are left enabled, they are calculated on the CPU instead, and it's incredible how much CPU overhead that takes. Normally your FPS would be much higher with the physics effects turned off entirely, but I figured it's a nice example of how well physics can be done on a GPU: it's just much more efficient.

    nVidia/EVGA GeForce GTX 285 with PhysX

    PhysX P1 (PPU) hardware specifications:

    • Multi-core MIPS-based architecture with integrated physics acceleration hardware
    • Interface: 32-bit PCI 3.0
    • 125 million transistors
    • Fabrication process: 130 nm
    • Die size: 182 mm²
    • Memory: 128 MB GDDR3 RAM on a 128-bit interface
    • Peak instruction bandwidth: 20 billion instructions per second
    • Sphere collision tests: 530 million per second (maximum capability)
    • Convex collision tests: 530,000 per second (maximum capability)
    • Peak power consumption: 30 W
    • Price: $100–$250

    NVIDIA PhysX

    So, to finish: NVIDIA has done a lot of fantastic work here. The capabilities PhysX provides are very good. PhysX is a great choice if you want the best available gaming experience, and the performance is great. PhysX is a fantastic technology, and the future looks good; I want more, but we still need to see some bigger and newer titles supporting it.

  • What is the nVidia SLI?

    SLI stands for Scalable Link Interface; it is NVIDIA's brand name for a multi-GPU solution that links multiple video cards together to generate a single output. SLI is a parallel-processing system for graphics, designed to increase the processing power available for rendering (a small sketch of the scheduling idea appears below).
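
    The most common way SLI splits the work is alternate frame rendering (AFR), where successive frames are handed to alternating GPUs. Here is a tiny illustrative sketch of that scheduling idea (plain C++; this is not NVIDIA's driver code):

        #include <cstdio>

        const int kGpuCount = 2;   // a two-card SLI setup

        // Round-robin frame assignment: 0, 1, 0, 1, ...
        int gpuForFrame(int frame) {
            return frame % kGpuCount;
        }

        int main() {
            for (int frame = 0; frame < 8; ++frame)
                std::printf("frame %d -> GPU %d\n", frame, gpuForFrame(frame));
        }

    With two cards, each GPU gets two frame-times to finish its own frame while the system still presents one frame per interval, which is where the potential near-doubling of throughput comes from.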

    NVIDIA SLI Ready logo

    The SLI system wasn't originally designed by nVidia engineers. The name SLI was first used by a company called 3dfx, a former leader in graphics cards, famous for creating a very effective graphics accelerator based on a chip called “Voodoo”. Voodoo ruled the whole market. I remember my first Diamond Monster graphics card with the Voodoo chipset: it was the world's first graphics card that worked as a separate add-on to the primary graphics card, and it was called a graphics accelerator. Its performance for those days was magnificent, and it was the beginning of this technology. 3dfx released SLI under the full name Scan-Line Interleave, introduced to the consumer market in 1998 and used in the Voodoo2 line of video cards. After buying out 3dfx, Nvidia acquired the technology but did not use it at first. Nvidia re-released the SLI name in 2004, intending it for modern computer systems based on the PCI Express (PCIe) bus, currently the fastest graphics interface. However, the technology behind the name SLI has changed dramatically. This is how two GeForce 9800 GT cards connected with the nVidia SLI system look:

    NVIDIA SLI

    Soon I'll write about SLI performance and try to answer these questions: does SLI really double performance with two cards connected? Is it worth buying a single better, more expensive card, or two weaker cards set up in SLI mode?