  • nVidia GTX 260 SLI

    Posted on January 14th, 2010 admin No comments

    nVidia has been steadily improving its products through research and development ever since the GeForce 6000 series of video cards in 2004, working to optimize the applications and capabilities of multi-GPU processing configurations. The ultimate result is the nVidia GTX 260, and in an optimized SLI configuration it delivers a genuine visual feast.

    nVidia GTX 260 SLI

    The nVidia GTX 260 SLI platform came to market with many advanced features that make it ideal for graphics processing while reducing power consumption. Hybrid SLI was introduced after the launch of nVidia's 700- and 8000-series motherboard chipsets. It adds graphics processing power through GeForce Boost in the chipset: an onboard graphics chip works together with a supported discrete video card, giving the system greater capabilities compared with a traditional multi-GPU configuration.

    nVidia GTX 260 SLI

    With this more modern take on the traditional multi-GPU setup, nVidia has proved its capabilities and shown it can enhance a system's graphical horsepower while being less hungry for power. The GTX 260 is a newer chip competing with its predecessors and is relatively expensive, but weighed against the visual comfort and great features the GTX 260 offers, the somewhat higher cost is worth shouldering for such a grand video card.

    nVidia GTX 260 SLI

    The nVidia GTX 260 features a 576MHz processing core with 192 stream processors and 896MB of 1000MHz GDDR3 connected through a 448-bit interface. This SLI bundle comes as a white-box, refurbished unit. The box's exterior is not eye-catching, but it contains the praiseworthy important stuff: the hardware needed to link the two cards via nVidia's Scalable Link Interface, or SLI.
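    As a quick sanity check, those quoted figures imply the card's theoretical memory bandwidth. Here is a minimal sketch in C++, assuming the quoted 1000MHz is the physical memory clock and that GDDR3, being double data rate, transfers twice per clock:

        #include <cstdio>

        int main() {
            // GTX 260 memory figures as quoted above (assumed physical clock).
            const double bus_width_bits = 448.0;   // memory interface width
            const double clock_hz       = 1000e6;  // 1000 MHz GDDR3
            const double transfers      = 2.0;     // GDDR3 is double data rate

            // bytes/s = (bits / 8) * transfers per clock * clock rate
            double bandwidth = bus_width_bits / 8.0 * transfers * clock_hz;
            std::printf("Theoretical bandwidth: %.1f GB/s\n", bandwidth / 1e9);
            // Prints ~112 GB/s, in line with the card's published spec.
            return 0;
        }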

    nVidia GTX 260 SLI

    If you already own a well-performing single card and find yourself wanting more graphical horsepower, the best option in front of you is the nVidia GTX 260. Comparing the three factors of performance, efficiency and cost, the GTX 260 stands at the front for any user who wants to enjoy the great features of this modern video card. This high-end card supports all of your requirements for a multi-GPU solution like SLI. An SLI setup gives you the advantages of two mid-range cards, making it more cost effective both to install and to operate.

    nVidia GTX 260 SLI

    As a user, you should weigh your options judiciously before deciding which route is best. It is not advisable simply to run two 9600 GTs in SLI; it is better to upgrade to the relatively low-priced nVidia GTX 260. This will improve speed and efficiency, and you will feel the difference. Your gaming experience is going to change with nVidia GTX 260 SLI.

  • EVGA GeForce GTX 285 Mac Edition

    Posted on December 7th, 2009 admin No comments

    EVGA’s GeForce GTX 285 Mac Edition is a wonderful piece of hardware, ideal for anyone looking to get the most enjoyment out of a video card upgrade. Upgrading anything in a Mac other than hard drives or memory has always been a ridiculous job, because most Mac machines use mobile CPUs soldered firmly onto the motherboard, making it impossible to swap chips. The same has long been true of Mac video cards: upgrading them was an unenviable task.

    evga geforce gtx 285 mac edition

    EVGA’s GeForce GTX 285 Mac Edition has given Mac users an amazing and handy GPU option, available from online vendors including Apple.com. Prior to its release, video card upgrades came only from NVIDIA or ATI, and Apple did not support the upgrade market. This is a genuinely worthwhile move from Apple.

    evga geforce gtx 285 for mac

    Apple has lately come out with GPU releases that make the upgrade process easier to handle. The Radeon HD 4870 and the 8800 GT are available as much-wanted build-to-order GPU options directly from Apple. Just as importantly, conventional PC video cards can now work too: users dump the ROM from a Mac card and flash the PC cards so they become, in effect, Mac editions.

    evga geforce gtx 285 apple mac edition

    The GeForce GTX 285 Mac Edition is compact hardware, virtually identical to what you get when you buy the PC card; the prime differences are the packaging and the firmware. You can use these cards in the Mac Pro. The standard video card in any Mac Pro typically does not require additional PCIe power connectors, but when you upgrade to a more powerful video card, you may need one or two cables to run from the small PCIe power connectors on the Mac Pro's motherboard to the connectors on the video card. All the required cables are in the GTX 285 Mac Edition box.

    nvidia geforce gtx 285  mac edition graph

    The major change in the GTX 285 Mac Edition is its firmware, which includes the EFI hooks needed to make the card work under OS X; PC-specific cards do not have these hooks. A bit of Googling (or Binging) will turn up ROM dumps suitable for other PC cards that can enable Mac operation, but the GeForce GTX 285 Mac Edition's firmware needs no such modifications: it will POST and function under OS X out of the box.

    nvidia evga geforce gtx 285  mac edition image nvidia

    The price is fixed at $449.99, a bit high compared to other video upgrade cards. But considering the cost of the hardware plus the two extra cables, the Mac Edition box is moderately priced. It comes with a lone DVI-to-VGA adapter, a pair of power cables and a driver CD. The card has two dual-link DVI outputs and no Mini DisplayPort support. In brief, EVGA's GeForce GTX 285 Mac Edition is worth buying and ideal for Mac owners.

    geforce gtx 285 bundle nvidia


  • NVidia Quadro FX 4500 Video Card review

    Posted on November 10th, 2009 admin No comments

    As we all know, a video card, also known as a video adapter or video controller, is the hardware component that lets a computer monitor display video and pictures. It contains the circuitry and memory needed to generate graphics, photos and pictures on the display screen. The software that operates this hardware is the video card driver, which drives the card to show the videos and visuals the way you want, converting graphics into digital images suited to the computer's screen.

    There are many varieties of video cards on the market, and providing clear pictures, graphics and visuals is their main job. The HP nVidia Quadro FX 4500 has been a market leader, capable of providing a top-performance graphics solution for scientific applications, CAD and DCC. It is designed and engineered with state-of-the-art hardware and digital techniques, and its modern architecture makes the FX 4500 well suited to today's challenges in video and graphics display on computer terminals.

    NVidia Quadro FX 4500

    The HP nVidia Quadro FX 4500 is one of the best 3D graphics cards, well suited to the casual gamer and also to those who want traditional graphics visualization. It can support the demands of modern digital work: it has plenty of processing power, supports the latest graphics standards, and delivers smooth frame rates with special effects enabled. The card supports DirectX 9.0c, the current standard visual technology used in games and other advanced graphics programs.

    The HP nVidia Quadro FX 4500 provides unparalleled power and performance for professionals looking for high-standard graphics display. It can also be used with Windows Vista, although not all of Vista's new features are supported.

    The HP nVidia Quadro FX 4500 has taken application performance to new levels. This feat is achieved through a high-speed graphics DRAM bus together with fully programmable pixel pipelines and an array of parallel vertex engines. Its crossbar memory architecture improves pipeline efficiency and enables occlusion culling, and its depth Z-buffer and color compression produce clean visuals without any loss of pixel data.

    With the HP nVidia Quadro FX 4500 in your computer, an eye-pleasing experience of high-end video and graphics on your screen is assured.

  • Nvidia Optix Ray Tracing engine

    Posted on October 16th, 2009 admin No comments

    Nvidia Optix Ray Tracing engine

    Let’s start with a quote from Nvidia’s senior developer Jeff Brown:
    “Thousands of applications are being created today that harness the phenomenal power of GPUs, a clear sign that GPU computing has reached a tipping point. The world of computing is shifting from host-bound processing on CPUs to balanced co-processing on GPUs and CPUs. NVIDIA application acceleration engines arm developers with the tools they need to further revolutionize both real-time graphics and advanced data analysis.”

    Nvidia Optix Ray Tracing engine in action

    In short, this new engine invites you into the world of ray tracing. Ray tracing is a state-of-the-art technique that forms an image by tracing the paths of light rays through the pixels of an image plane. The Nvidia OptiX engine is designed and developed around this technology, which can produce a very high degree of photorealism, though that perfection comes at a higher computational cost. Ray tracing is ideal for applications where time is not a critical factor, such as still images and film rendering, and less suited to applications where time is critical, such as computer games. Its great advantage lies in optical effects: it can simulate scattering, reflection, refraction and chromatic aberration.
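    To make the idea concrete, here is a minimal sketch of ray tracing's core step in plain C++ (not the OptiX API): one ray is built per pixel, pushed through the image plane, and tested against a single sphere, printing an ASCII rendering of the hit/miss pattern.

        #include <cstdio>

        struct Vec3 { double x, y, z; };
        static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
        static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

        // Does a ray from `orig` along `dir` hit a unit-radius sphere at `center`?
        static bool hitSphere(Vec3 orig, Vec3 dir, Vec3 center) {
            Vec3 oc = sub(orig, center);
            double b = 2.0 * dot(oc, dir);
            double c = dot(oc, oc) - 1.0;            // radius^2 = 1
            return b*b - 4.0*dot(dir,dir)*c >= 0.0;  // discriminant test
        }

        int main() {
            const int W = 40, H = 20;
            Vec3 eye = {0, 0, 0}, sphere = {0, 0, -3};
            for (int y = 0; y < H; ++y) {            // one ray per pixel
                for (int x = 0; x < W; ++x) {
                    Vec3 dir = { (x - W/2) / double(W),   // through the image plane
                                 (y - H/2) / double(H), -1.0 };
                    std::putchar(hitSphere(eye, dir, sphere) ? '#' : '.');
                }
                std::putchar('\n');
            }
            return 0;
        }

    A real renderer would go on to bounce each ray off what it hits, which is exactly where the photorealism and the computational cost come from.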

    nvidiaglasses-optix ray tracing engine example

    The NVIDIA OptiX engine is a programmable ray tracing pipeline that lets software developers bring high levels of realism to their applications through ordinary programming in the traditional C language. OptiX speeds up ray tracing by tapping the amazing parallel-processing power of NVIDIA Quadro GPUs, and it serves a wide range of disciplines: optics simulation, photorealistic rendering, radiation research, automotive styling, acoustical design and volume calculations.

  • NVIDIA GeForce 3D Vision

    Posted on June 30th, 2009 admin No comments

    As everybody knows, people have been fascinated by illusions throughout the ages. Like everyone else, we like to be surprised, amazed and impressed.

    NVIDIA GeForce 3D Vision

    A fantastic way to fool your brain is to take an old, well-known technology and design it in a totally new way. Do you remember VFX helmets, which let you play games in stereo 3D? They were amazing, but the technology was pretty prehistoric, running games like Doom 2 or Quake 1 :) Now I have the great pleasure of introducing a fantastic new product bearing the nVidia logo: brand-new 3D glasses. nVidia has teamed up with Samsung and paired high-quality 120Hz LCDs with all-new 3D stereo shutter glasses technology. The 120Hz refresh rate is very important so that your eyes don't get tired. NVIDIA has also polished its driver support to a state where it is really good, and alongside this it has redesigned the whole approach to the gaming experience. These remarkable shutter glasses are wireless and rechargeable, games that rely on the new drivers automatically kick into 3D mode, and on top of that NVIDIA has been hunting for really cool games to show it all off.

    GeForce 3D Vision

    It is now 2009; a lot has changed in the stereoscopic market segment and, sure, a lot hasn't. With the new GeForce 3D Vision kits now slowly becoming available here in Europe, it seemed a good time to look at them. We asked NVIDIA to send the complete package, with a 120Hz Samsung LCD monitor and the shutter glasses kit. I've got nothing more to say: they're amazing!

  • Graphic Effects: MIPmapping

    Posted on June 9th, 2009 admin No comments

    What is MIPmapping?

    MIP mapping is a graphics filtering technique used to create a convincing sense of depth in a computer-generated environment; it helps present a three-dimensional (3D) scene on a two-dimensional (2D) screen. MIP mapping is used together with texture mapping: it keeps multiple versions of each texture map at different resolutions, so that surfaces can show the illusion of distance from the user's point of view. The renderer places the largest available image on nearby surfaces and dynamically switches to smaller ones for the zones reaching farther toward the horizon; how finely this happens depends on what level of MIP mapping your software or game provides. Each pre-scaled image is a MIP map level, and a higher level means more generated images building the 3D environment. The higher the MIP mapping level, the more your card's performance decreases. Besides helping to build 3D graphics, MIP mapping is useful for avoiding unwanted ragged edges, known as "jaggies". Generally speaking, mipmapping improves the quality of the rendered scene by using versions of a texture that are reduced further as the 3D depth increases; the gradation between mipmap levels is controlled by the Texture LOD (Level Of Detail).
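    A small sketch shows how a MIP chain is built (plain C++, assuming a square, power-of-two grayscale texture): each level is a 2x2 box-filtered, half-size copy of the previous one, so a 256-texel-wide texture produces nine levels down to 1x1.

        #include <cstdio>
        #include <vector>

        using Image = std::vector<unsigned char>; // square grayscale texture, side*side

        // Halve an image with a 2x2 box filter: one output texel per four inputs.
        static Image downsample(const Image& src, int side) {
            int half = side / 2;
            Image dst(half * half);
            for (int y = 0; y < half; ++y)
                for (int x = 0; x < half; ++x) {
                    int sum = src[(2*y)*side + 2*x]   + src[(2*y)*side + 2*x+1]
                            + src[(2*y+1)*side + 2*x] + src[(2*y+1)*side + 2*x+1];
                    dst[y*half + x] = static_cast<unsigned char>(sum / 4);
                }
            return dst;
        }

        int main() {
            int side = 256;
            Image level(side * side, 128);        // level 0: the full-size texture
            std::vector<Image> chain{level};
            while (side > 1) {                    // keep halving down to 1x1
                chain.push_back(downsample(chain.back(), side));
                side /= 2;
            }
            std::printf("MIP levels generated: %zu\n", chain.size()); // 9 for 256
            return 0;
        }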

    MIPmapping example image

    MIPmapping and 3d environment

    But a convincing artificial 3D environment takes more than just MIP mapping, because perspective compresses a texture as it approaches the horizon; the most reduced textures are therefore the ones farthest from the viewer. The truth is that without any filtering of the texture you get only a very pixelated image; this is called point sampling. As the distance from the viewer increases, fewer and fewer pixels near the horizon remain available to represent the texture, because the 3D scene recedes from the lower zones of the screen toward its center.

    mip mapping in-game example

    Take a look at the MIP mapping example above. The left part is rendered without mipmapping, and you might think it looks sharper in the screenshot. It is, but without mipmaps every texture flickers with a large amount of noise, which looks awful once the scene is set in motion. To simplify: mipmaps are different versions of one and the same texture, available in various sizes so the right one can be fitted at the right depth inside the 3D scene. Imagine standing on a long highway, following the road texture from your feet all the way to the horizon. To look as realistic as possible, the lane markings closest to you must be rendered in high detail at high resolution; as your gaze moves toward the horizon, the textures get smaller and smaller. Sometimes nearby detail is lost to a scaling problem, because the graphics driver does not know which details in the texture are most significant; this can often be helped by installing a newer driver or lowering the MIP mapping level.
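    Which MIP level the renderer actually picks follows from how many texels one screen pixel covers; a hedged sketch of that usual rule (the footprint value itself is assumed to be given) looks like this:

        #include <algorithm>
        #include <cmath>
        #include <cstdio>

        // Pick a MIP level from the texture footprint of one screen pixel:
        // if a pixel spans `texelsPerPixel` texels, the level is log2 of that,
        // clamped to the available chain. Farther surfaces -> bigger footprint
        // -> higher (smaller) MIP level.
        static int mipLevel(double texelsPerPixel, int maxLevel) {
            double lod = std::log2(std::max(texelsPerPixel, 1.0));
            return std::min(static_cast<int>(lod), maxLevel);
        }

        int main() {
            // A texture sampled at increasing distances from the viewer:
            for (double footprint : {1.0, 2.0, 4.5, 16.0, 300.0})
                std::printf("%.1f texels/pixel -> level %d\n",
                            footprint, mipLevel(footprint, 8));
            return 0;
        }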

  • Microsoft DirectX 10

    Posted on May 24th, 2009 admin No comments

    Microsoft's newest application programming interface (API) is officially named DirectX 10. Although it was released quite a while ago, there are still very few games today that take full advantage of its features; even hugely popular first-person shooter titles and role-playing games barely push the sheer graphical power of the newest version. DirectX 10, introduced with Windows Vista, supports the same 3D effects as DirectX 9, guaranteeing compatibility across the board. That may sound very good, since it allows for a more standardised game development environment, but the reality is not so colorful: Microsoft will now be regulating the introduction of new 3D features, leaving companies such as NVIDIA with less room of their own.

    DirectX 10

    DirectX 10 was available only to Windows Vista users at the time of its introduction, and unfortunately you will not see DirectX 10 released for the Windows XP operating system. It is deeply tied into the Vista OS, and we currently know of no plans by Microsoft to let Windows XP officially support the new DX10. In general, DX10 offers a much more generic graphics processing model with lots of flexibility and reliability. This will be very crucial going forward, but right now developers still need to limit shader length and complexity based on the performance of existing hardware. Another feature, related to the new Geometry Shader, is Stream Out, which lets the GPU recycle geometry data without computing on the CPU. Not only is this a performance win, it also makes things like particle systems completely independent of the CPU (a sketch of how that is wired up follows the graph below). Take a look at DirectX 10 performance tested on two nVidia cards:

    DirectX 10 performance
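    For the curious, here is a hedged sketch of how Stream Out is typically set up in Direct3D 10; it assumes a device and compiled geometry-shader bytecode already exist, and omits error handling. The geometry shader's output is declared as stream-out data and captured into a GPU buffer that can later be bound as a vertex buffer, with no CPU round trip.

        #include <d3d10.h>

        // Assumes `device` and compiled GS bytecode (`gsBlob`, `gsSize`) exist.
        ID3D10GeometryShader* MakeStreamOutGS(ID3D10Device* device,
                                              const void* gsBlob, SIZE_T gsSize,
                                              ID3D10Buffer** soBuffer) {
            // Declare which GS outputs get captured: here one float4 position.
            D3D10_SO_DECLARATION_ENTRY soDecl[] = {
                { "SV_POSITION", 0, 0, 4, 0 }, // semantic, index, start, count, slot
            };
            ID3D10GeometryShader* gs = nullptr;
            device->CreateGeometryShaderWithStreamOutput(
                gsBlob, gsSize, soDecl, 1, sizeof(float) * 4, &gs);

            // A GPU buffer the shader streams into; later bindable as a VB.
            D3D10_BUFFER_DESC desc = {};
            desc.ByteWidth = 1024 * sizeof(float) * 4;
            desc.Usage     = D3D10_USAGE_DEFAULT;
            desc.BindFlags = D3D10_BIND_STREAM_OUTPUT | D3D10_BIND_VERTEX_BUFFER;
            device->CreateBuffer(&desc, nullptr, soBuffer);

            UINT offset = 0;
            device->SOSetTargets(1, soBuffer, &offset); // capture GS output here
            return gs;
        }

    On the next frame the same buffer feeds the input assembler, which is exactly the "recycling without the CPU" the paragraph above describes.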

    If you want DirectX 10 on your computer, you will have to go with Windows Vista as your OS; there is no possibility of launching DX10 on Windows XP systems. Because of this, the DirectX 10 experience comes with an expensive upgrade path: you need Windows Vista, DirectX 10 hardware and, of course, some games coded for DirectX 10, which is totally ridiculous and is why plenty of gamers are turning to the PS3 or Xbox instead. The question gamers all over the world are asking is: "Will this very expensive upgrade improve my gaming experience enough to justify the cost?" That has yet to be seen and can only be answered by the games we have yet to play. We can, however, talk about some of the capabilities of DirectX 10 and how they can potentially benefit gamers. In coming reviews we'll look at later versions of DirectX.

  • What is nVidia PhysX?

    Posted on May 13th, 2009 admin No comments

    nVidia physX logo

    What is nVidia PhysX?

    Generally speaking, PhysX is a real-time middleware physics engine SDK. The name also refers to the PPU (physics processing unit) card, which can accelerate processing in applications where the PhysX feature is enabled. It is used mostly to enhance the environments in computer games; PhysX was designed specifically to improve that kind of performance. When PhysX was first released, it was used by graphics engineers and designers to produce professional physics simulations and 3D environments for games and films, but the technology was too expensive for regular gamers. Hardware-accelerated PhysX processing can run either on a PhysX PPU or on a CUDA-enabled GeForce GPU (CUDA being nVidia's GPU computing platform, on which the current PhysX engine runs). When a game offloads physics calculations from the CPU, the CPU is free to perform other tasks, potentially resulting in smoother and faster processing overall. Middleware physics engines also give game designers another benefit: they avoid having to write their own code to handle the complex physics interactions possible in modern games, because PhysX comes with ready-to-use physics algorithms.
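    What those "ready-to-use physics algorithms" do is easiest to see in miniature. The sketch below (plain C++, not the PhysX API) advances falling particles under gravity with a ground-plane bounce; a middleware engine runs this kind of fixed-timestep loop for thousands of bodies per frame, which is exactly the work a PPU or GPU can take off the CPU.

        #include <cstdio>
        #include <vector>

        struct Particle { float y, vy; };   // height and vertical velocity

        // One fixed timestep of semi-implicit Euler with a ground bounce,
        // the core loop a physics middleware runs for every simulated body.
        static void step(std::vector<Particle>& ps, float dt) {
            const float g = -9.81f, restitution = 0.5f;
            for (auto& p : ps) {
                p.vy += g * dt;             // integrate acceleration
                p.y  += p.vy * dt;          // integrate velocity
                if (p.y < 0.0f) {           // collision with the ground plane
                    p.y  = 0.0f;
                    p.vy = -p.vy * restitution;
                }
            }
        }

        int main() {
            std::vector<Particle> ps(3, {10.0f, 0.0f});  // drop from 10 m
            for (int frame = 0; frame < 120; ++frame)    // two seconds at 60 fps
                step(ps, 1.0f / 60.0f);
            std::printf("height after 2 s: %.2f m\n", ps[0].y);
            return 0;
        }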

    A bit of PhysX history

    PhysX was originally developed by Ageia as the NovodeX SDK. Ageia positioned itself in the 3D graphics market with the fantastic idea of bringing dedicated physics computing to computer games; its engineers knew that physics calculations allow for a more extreme and realistic visual experience. Ageia's way of thinking was really interesting, but as pioneering ideas often do, it had plenty of disadvantages: unfortunately, the cards came to market far too expensive and received far too little industry support.

    ageia physx logo

    Ageia's financial results were well below average and the company was nearly bankrupt. All of management's eyes and hopes turned to nVidia, which was interested in Ageia's processing technology. And it happened: in February 2008, Nvidia bought Ageia for 30 million dollars and hired its leading staff to get Ageia's PhysX API. After that, the PhysX engine began its transformation onto nVidia's CUDA technology. In August 2008, Nvidia released software that allows GeForce 8 series and newer cards to run PhysX processing.

    PhysX features and performance:

    PhysX processing is widely used to deliver physical environments within games. Its main features include spectacular real-time effects and highly detailed simulation of things like cloth, water drops and hair. PhysX also improves dust and collateral debris during in-game explosions, and it can move objects through very dense smoke and fog without a loss of performance. With PhysX on, game characters get complex geometry for better movement and interaction. It generally increases the performance of graphics applications by moving some tasks to the PhysX hardware. Nowadays any CUDA-ready GeForce graphics card, series 8 and newer, can take advantage of PhysX without the need to install a dedicated PhysX card. Take a look at the graph showing PhysX performance at various resolutions; it compares platforms with and without PhysX processing:

    nVidia PhysX graph

    As you can see, PhysX roughly doubles processing performance, so the technology nVidia invested in was a sure shot. NVIDIA claims that cheaper GPU solutions will push better GPUs into more powerful machines, effectively making more PCs available for gaming. For this test, hardware PhysX was disabled in software while the effects were left enabled, so they had to be calculated on the CPU; it's incredible how much CPU overhead that takes. Normally your FPS would be much higher with the physics effects turned off entirely, but it makes a nice example of how well physics can be done on a GPU. It's simply much more efficient.

    nVidia/EVGA GeForce gtx285 with PhysX

    PhysX P1 (PPU) hardware specifications:

    • Multi-core MIPS architecture based device with integrated physics acceleration hardware
    • Interface: 32-bit PCI 3.0
    • 125 million transistors
    • Fabrication Process: 130 nm
    • Peak Instruction Bandwidth: 20 billion per second
    • Memory: 128 MB GDDR3 RAM on 128-bit interface
    • 182 mm2 die size
    • Sphere collision tests: 530 million per second (maximum capability)
    • Peak Power Consumption: 30 W
    • Convex collision tests: 530,000 per second (maximum capability)
    • Price: Between $100-$250

    nvidia physx

    In closing, NVIDIA has done a lot of fantastic work here. The capabilities PhysX provides are very good, and it is a great choice if you want the best available gaming experience; the performance is great. PhysX is a fantastic processing technology with a promising future. I want more, but we still need to see some bigger, newer titles supporting it.

  • Review of Gigabyte GeForce GTX 285

    Posted on April 2nd, 2009 admin No comments

    Today I want to present Gigabyte's graphics card, built on the newest nVidia 200-series processor manufactured on 55nm technology: nVidia's GeForce GTX 285.

    Gigabyte GTX 285

    General

    The GeForce GTX 200-series GPU is nVidia's newest chip, and inside the GTX 285 the story is all about the move to the 55nm fabrication process. The shrink makes the chip (GT200b) smaller, lets it run on less voltage and perform better, and makes the card more affordable. The GTX 285 offers excellent performance while being merely expensive rather than extremely expensive. The card reviewed here comes from Gigabyte, one of the most decorated nVidia partners. I'll test its performance and you'll see how it compares to older cards like the GTX 280 and the ATI HD 4870 X2. So let's have a quick look at the package before reviewing the card and its specifications. There isn't a lot inside: a CD with the basic driver and some applications, plus the standard GIGABYTE cable set, which includes a TV-out breakout box, a DVI-to-VGA adapter, a DVI-to-HDMI adapter and two dual-Molex-to-6-pin PCI-E power connectors.

    GeForce GTX 285

    Specifications

    The largest difference between the GTX 285 and previous models is the move to 55nm fabrication for the GPU; this move is the main reason the 8-pin PCI-E connector could be dropped. The smaller process also brings lower power draw and much less heat, which is why this card can be overclocked higher than the GTX 280. The GTX 285 core runs at 648MHz versus the GTX 280's 602MHz; the shader clock in the 285 runs at 1476MHz versus 1296MHz on the GTX 280; and its 1GB of GDDR3 carries a 2480MHz clock versus the GTX 280's 2212MHz. NVIDIA's goal was to squeeze extra performance out of the card.

    Specifications GeForce GTX 285

    3DMArk

    3DMark Vantage is a PC gaming performance benchmark created by Futuremark. The latest version was designed for Windows Vista and DirectX 10. The tool provides four new graphics and CPU tests, supports the latest hardware, and is based on a new engine built for DirectX 10.

    GeForce GTX 285 mark 3d vantage

    As you can see on the graph, there's a nice performance bump from the GTX 280 to the GTX 285. The only card the new GTX 285 struggles against is the dual-GPU Radeon HD 4870 X2, and given the price of two Radeon HD 4870s, I think the GTX 285's performance is quite good.

    Temperature

    Temperature is measured with a TES 1326 infrared thermometer, which gives the bare facts about temperatures at maximum clock speeds. Readings are taken at two places: the back of the card directly behind the core, and an exhaust point. The temperature looks higher than the GTX 280's, but readings on a 280 without its shroud are closer to the 70°C mark. This test was performed by the specialists at tweaktown.com.

    GeForce GTX 285 temperature graph

    Other manufacturers also make cards based on the nVidia GTX 285 chip. Some do not offer overclocked editions, which means their cards were never pre-tested and certified for higher clock frequencies; they sell only a stock-clocked GTX 285. So watch out when buying a new card: check which process technology it uses, and make sure it is 55nm. Looking closer at the GIGABYTE card, there isn't much more to say, generally. The package is moderately equipped and the cooler is the standard one, generating an average noise of 67dB at maximum performance. Prices run from roughly $340 to $410 depending on the manufacturer; the Gigabyte version costs $399. So if you are a computer graphics enthusiast with enough cash to buy it, drop whatever you're doing and run to the nearest computer store, or visit eBay and buy one immediately. That's all folks, see you next time :)

  • Graphic effects: What Is Anisotropic Filtering?

    Posted on March 22nd, 2009 admin No comments

    General

    Anisotropic filtering (AF) is a graphics algorithm for improving the surface texture of an object. Where anti-aliasing is a method of making an object's edges smoother, anisotropic filtering enhances how the object looks inside them, across all the space between the edges. Every 3D object used to build a game environment is textured; a texture is no more than a "coat of paint" covering flat polygons to make them look like skin, wood, metal or bricks in a wall. Anisotropic filtering has been a standard feature of graphics cards for years and is now supported by virtually all graphics hardware.

    Types of anisotropic filtering(AF)

    Three methods of texture filtering can be distinguished: bilinear, trilinear and full anisotropic. Anisotropic filtering is a very powerful method, but it also uses a lot of GPU performance. The available settings range from 2x to 16x; a higher level of anisotropic filtering gives clearer, sharper texture detail at the cost of more GPU usage. A modern nVidia card such as the GeForce 285, or even an older model like the GeForce 8800, handles 16x anisotropic filtering at 1280 x 900 with no problem, and for the GeForce 285 even 1680 x 1050 is no challenge, thanks to its higher-quality AF algorithm. You can really feel the difference between anisotropic filtering on and off; just take a look at it in action. The examples below are screenshots from 3DMark06, the graphics benchmark tool. Open them in a new window or tab and switch between them to see the true difference.

    3DMark06 Anisotropic Filtering (AF) OFF:

    nvidia anisotropic filtering
    (Click this image to open it in new tab to see the difference)

    3DMark06 Anisotropic Filtering (AF) ON:

    nvidia anisotropic filtering on
    (Click this image to open it in new tab to see the difference)

    In short, anisotropic filtering matters most for textures seen at a distance from the viewer. It gives a smoother transition between the high-resolution textures close to the viewer and the lower-resolution ones used farther from "your eyes", which makes it very useful in games with a far horizon, where texture resolution naturally decreases with distance.
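    As a concrete example, here is a hedged sketch of how an application typically switches AF on, using the long-standing OpenGL extension GL_EXT_texture_filter_anisotropic (a GL context and a bound, mipmapped texture are assumed):

        #include <GL/gl.h>
        #include <GL/glext.h> // defines the anisotropy tokens

        // Enable anisotropic filtering on the currently bound 2D texture,
        // asking for 16x but clamping to whatever the hardware supports.
        void enableAnisotropy(float requested = 16.0f) {
            GLfloat maxSupported = 1.0f;
            glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxSupported);
            GLfloat level = requested < maxSupported ? requested : maxSupported;
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, level);
            // AF works together with mipmapped minification filtering.
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                            GL_LINEAR_MIPMAP_LINEAR);
        }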

    Anisotropic filtering vs. graphic cards performance

    Anisotropic filtering can affect game performance, even more so when used together with anti-aliasing (AA). Graphics card companies such as nVidia keep developing the filtering method to get the best performance. To speed up AF, Nvidia's specialists introduced special algorithms that make the anisotropic calculations faster; unfortunately, the amount of AF applied then varies depending on the angle of the surface. The newest nVidia cards use a higher-quality AF algorithm that has no problem with this and produces a more uniformly round filtering pattern.
