The start of the Intel Shopping Spree: Intel acquires physics engine maker Havok

The more time goes by, the more I think Intel wants back into the graphics market. Graphics are the most obvious pathway to beefier consumer machines: the easier it is to create cool graphics that consumers find compelling (like in games), the better the marketplace for Intel to upgrade everyone. So what does this have to do with Intel buying the Irish company Havok (whose physics engine is used in Half-Life 2, BioShock, Stranglehold, The Elder Scrolls IV: Oblivion, Crackdown, Lost Planet: Extreme Condition, MotorStorm, Halo 3, etc.), you might ask? According to Renee James, VP of Intel's Software and Solutions Group: "Havok is a proven leader in physics technology for gaming and digital content, and will become a key element of Intel's visual computing and graphics efforts. Havok will operate its business as usual, which will allow them to continue developing products that are offered across all platforms in the industry."

Intel has been pushing developers to make more use of multi-threaded architectures. Just as game developers now target eye candy to the capability of the graphics card, Intel is pushing them to do more with beefier processors to enhance the user experience without affecting gameplay. This means things like adding more detailed models or more particles in a particle system if the user's machine can handle it. That's a tall order, but one of the things that'll make it easier is providing a cutting-edge physics engine that's highly tuned to take advantage of multi-core systems. Since the deal is worth an estimated US$110 million, I'd guess Intel is really serious about it. That's cool for game developers and for the eventual consumer apps that will be physics-enabled. But a physics engine is the second thing I would have bought if I were Intel. I expect to see some other acquisitions in the near future.
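As a sketch of what "scale the eye candy, not the gameplay" might look like in practice, here's a minimal example. The helper function and the budget numbers are my own invention (not Intel's or Havok's); the idea is just that cosmetic detail scales with the machine:

```cpp
#include <algorithm>
#include <thread>

// Hypothetical helper (illustrative only): choose a purely cosmetic
// particle budget from the number of hardware threads, so extra cores
// buy extra eye candy without changing gameplay.
unsigned particle_budget(unsigned hw_threads) {
    const unsigned base = 1000;                      // single-core budget
    unsigned scale = std::clamp(hw_threads, 1u, 8u); // cap for sanity
    return base * scale;
}

// At startup you'd query the machine, remembering that
// std::thread::hardware_concurrency() may legally return 0 if unknown.
unsigned current_budget() {
    return particle_budget(std::max(1u, std::thread::hardware_concurrency()));
}
```

The gameplay-affecting simulation (collision, rigid bodies) stays fixed; only the decorative layer grows with the hardware.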


Programming Vista: The Desktop Window Manager API

In April 2007 I wrote an article for Microsoft Developer Network (MSDN) Magazine, "Aero Glass: Create Special Effects With The Desktop Window Manager," about programming the frosted-glass effect and using thumbnails on Vista via the Desktop Window Manager (DWM) interface that's part of Vista. You can read the online edition of the article here. I've condensed an updated version of the DWM interfaces and posted it in the Graphics Tutorials section here.


DirectX 10.1 makes current hardware obsolete? Not!

Having just returned from Siggraph this year, I was fielding questions from co-workers. "Didja hear that Microsoft announced DirectX 10.1?" "Yeah." "And that it makes all DX10.0 hardware obsolete?" To which my witty reply was "Huh?". I was there when the DX10.1 features were described, and I'm sure I would have noticed if Microsoft had made such an announcement.

What I do remember was the description of the architecture for DX10 and why it breaks from the DX9 interface. Basically, the API has just gotten bigger and bigger, to the point where 1) the underlying hardware doesn't work the way the API was originally laid out, and 2) the drivers are now huge things that force the legacy API calls to talk to the current hardware. DX10 (and OpenGL 3, for that matter) is where we make a clean break and get back to a thin driver layer over the GPU. Gone is the fixed-function lighting pipeline of yore. In fact, the whole BeginScene/render/EndScene architecture is gone. Caps bits are finally going away, and DX is adopting (waaaay too late) the OpenGL conformance test model. In other words, DX10.0 is an API specification: if you want your hardware to be certified as DX10.0 compatible, it has to support all the features in the DX10.0 spec (and, I assume, generate conforming output when tested against the API). Programmers now get to code to one API, not various flavors, and consumers get to know that a DX10 card will run all DX10 games.

OK, so what's the difference between DX10.0 and DX10.1? Basically, what I heard was that DX10.1 is what Microsoft wanted for the Vista ship, but not all the major hardware vendors could get all the features into the current hardware generation. So what shipped was most of the features, minus some of the more esoteric things (like how 4-sample full-screen antialiasing is implemented). The reason DX10.1 is coming out so quickly is that Microsoft wants the spec out there so developers can see what's going to be available in the near future, as well as to put a stake in the ground that hardware vendors have to meet. The new features in DX10.1 are:

  • TextureCube arrays that are dynamically indexable in shader code.
  • An updated shader model (Shader Model 4.1).
  • Full 32-bit floating-point filtering.
  • The ability to select the MSAA sample pattern for a resource from a palette of patterns, and to retrieve the corresponding sample positions.
  • The ability to render to block-compressed textures.
  • More flexibility with respect to copying of resources.
  • Support for blending on all unorm and snorm formats.

So, as the Microsoft guy said, it's all about rendering quality. I doubt that when DX10.1 comes out your DX10.0 game will suddenly stop working. These are just enhancements to the API that don't reflect the current state of the hardware, only where the hardware will be forced to go in the near future. The DX9 API is getting a final revision and will then be frozen, so any non-Vista OS will be able to run a DX9 (or 8, or 7) game, as will Vista, since the DX9 DLL will coexist with the DX10.x DLLs on Vista. If you want to try it out you'll need the Vista SP1 beta and the D3D10.1 Tech Preview; both will be downloadable from Microsoft.
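To see how the two versions coexist in code: the 10.1 headers let you ask for the new feature level and fall back to 10.0 on current cards. This is a Windows-only sketch written to illustrate the point, not taken from Microsoft sample code:

```cpp
// Windows-only sketch (needs the D3D10.1 headers from the SDK preview);
// it won't build on other platforms.
#include <d3d10_1.h>
#pragma comment(lib, "d3d10_1.lib")

// One API, two feature levels: ask for 10.1 first, then settle for 10.0.
// This is exactly why 10.1 doesn't make 10.0 cards obsolete.
HRESULT create_dx10_device(ID3D10Device1** out) {
    const D3D10_FEATURE_LEVEL1 levels[] = {
        D3D10_FEATURE_LEVEL_10_1,  // next-generation hardware
        D3D10_FEATURE_LEVEL_10_0,  // everything shipping today
    };
    HRESULT hr = E_FAIL;
    for (int i = 0; i < 2; ++i) {
        hr = D3D10CreateDevice1(NULL, D3D10_DRIVER_TYPE_HARDWARE,
                                NULL, 0, levels[i],
                                D3D10_1_SDK_VERSION, out);
        if (SUCCEEDED(hr))
            break;
    }
    return hr;
}
```

A game can then check which level it actually got and enable the 10.1-only quality features conditionally.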


Nival closes LA office, Kevin Bachus not seen...

I've always been fascinated by some of the folks who used to work in the DirectX/Xbox groups at Microsoft and how they manage to pursue (seemingly) lucrative, somewhat high-profile careers with nothing but a string of empty promises behind them. Kevin is one of my favorites. After leaving Microsoft, he and a few other famous game-industry names formed the Capital Entertainment Group, which made grandiose plans and folded after a year without really doing much. He then was hired as CEO of a company called Infinium, with its never-to-be-released gaming console/service called the Phantom. He was often seen singing praises of how great the Phantom would be, how great the service, and how, no! it wasn't vaporware. After a while he managed to move the company offices from Florida to LA (where he lived), then left the company five months later. And then sued them for back pay. He was hired by Russian RTS developer Nival (Heroes of Might and Magic) (which was owned by Florida-based Ener1 Group) as CEO in March 2006. Apparently Ener1 wasn't happy with the low profits coming out of the LA office and quietly closed it in December 2006. No word on what Kevin is up to now...

Update: There's a nice, albeit short, reflective guest piece by Kevin on the Xbox development effort you can read here.


Is Intel just getting back into the graphics business, or are they going to change it?

It's no great secret that Intel has been eyeing the discrete graphics market. Intel typically owns about 30-40% of the desktop graphics market, but that's strictly integrated (and hence usually considered underpowered) graphics. ATI and Nvidia own most of the rest of the market, and anyone who's interested in playing 3D games wouldn't consider using an integrated graphics solution if they could help it. Apparently the acquisition of ATI by rival AMD has prompted Intel to start talking aggressively about discrete graphics. In fact, Intel has recently started hiring engineers (both software and hardware) in earnest for its Visual Computing Group. From the recruitment copy:

Join us as we focus on strengthening our leadership in integrated and high-throughput graphics and gaming experiences by developing innovative processing products based on a many-core architecture. We’re looking for engineers, developers, and architects who share our vision and understand what can happen when serious skills and vast resources join forces.

It would seem that Intel is pushing a multi-core architecture for the 2008-2009 timeframe. Given Intel's manufacturing chops, they could, if they set their mind to it, make a pretty deep impression on the discrete graphics market. By that timeframe you're looking at a 10x to 20x performance boost over the current top GPU, Nvidia's G80. Even if Intel makes a few missteps and produces something underpowered compared to the best from ATI or Nvidia, they would still probably be competitive on price alone.

Or perhaps they're going to surprise us all. Intel recently did something pretty smart: they hired Daniel Pohl, a recent Erlangen University graduate. Daniel coauthored a paper, "Realtime Ray Tracing for Current and Future Games," which makes the case that the traditional hardware rendering pipeline (objects built up independently of each other, depth created through the use of a z-buffer, and all object-object visual interactions such as shadows and reflections having to be added on) has just about reached the end of its lifetime. Daniel is pushing ray tracing instead. Ray tracing lets you build up complex scenes in which all the lighting, shadowing, transparency, reflection, caustics, etc. are handled by the ray tracer itself. Granted, the framerates Daniel gets on the custom ray-tracing board were about 10% of current consumer-level boards, but ray tracing will be the next big step in graphics architecture, not a kludge like the doomed Talisman architecture. Ray tracing really is the way to render scenes. Maybe Intel will be the first?
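To make the contrast concrete, the heart of any ray tracer is an intersection query: per pixel you cast a ray and ask the scene directly "what do I hit, and how far away?", instead of rasterizing triangles and resolving depth with a z-buffer afterwards. This is an illustrative sketch of that query (not Pohl's code), for the simplest primitive, a sphere:

```cpp
#include <cmath>

struct Vec { double x, y, z; };

double dot(const Vec& a, const Vec& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec sub(const Vec& a, const Vec& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }

// Distance along the ray to the nearest hit, or -1.0 on a miss, via the
// quadratic formula. Shadows, reflections, etc. all reduce to firing more
// rays through this same query, which is why those effects come "for free"
// in a ray tracer instead of being bolted on as separate passes.
double hit_sphere(Vec center, double radius, Vec origin, Vec dir) {
    Vec oc = sub(origin, center);
    double a = dot(dir, dir);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * a * c;
    return disc < 0.0 ? -1.0 : (-b - std::sqrt(disc)) / (2.0 * a);
}
```

The catch, of course, is doing this for millions of rays against millions of primitives per frame, which is where the many-core hardware argument comes in.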


Using Aero Glass on Vista: my Microsoft Developer Network Article is up!

One of the nice things about Vista is that they rewrote the display architecture to be a composition engine. Every window gets some off-screen memory to render itself into, and then all these windows are composited together onto the desktop. This means that windows are totally independent of whatever they're rendered over, and that it's possible to stick effects into the composition pipeline. Vista's Aero Glass interface is a simple demonstration of the power of this new architecture. The "glass" effect lets you tag regions of a window as "glass"; these areas are composited with whatever parts of the desktop lie underneath and then blurred with a hard-coded pixel shader to give the impression of a frosted-glass edge on Aero Glass-enabled windows. Sadly, the Basic version of Vista can't run Aero Glass. But if you're running the Premium, Business, or Ultimate version and you have some recent (i.e., DX9) hardware, you're all set. Take a look! You can find the article here.
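For the curious, the API surface involved is small. This Windows-only fragment (a condensed illustration of the calls the article covers, not the article's code, with error handling trimmed) checks that composition is running and then extends the glass frame into a window's client area:

```cpp
// Windows-only sketch; won't build on other platforms.
#include <windows.h>
#include <dwmapi.h>
#pragma comment(lib, "dwmapi.lib")

HRESULT make_glass(HWND hwnd) {
    // Composition can be off (e.g. Vista Basic), so check first.
    BOOL enabled = FALSE;
    HRESULT hr = DwmIsCompositionEnabled(&enabled);
    if (FAILED(hr) || !enabled)
        return hr;

    // Negative margins mean "sheet of glass": the frosted effect covers
    // the entire client area instead of just the window frame.
    MARGINS margins = { -1, -1, -1, -1 };
    return DwmExtendFrameIntoClientArea(hwnd, &margins);
}
```

For glass over just the frame edges, you'd set the MARGINS fields to the pixel widths you want instead of -1.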


NVIDIA Releases Developer Tools

At the GDC, graphics chip maker NVIDIA announced they are releasing a bunch of updated and new tools. The tool upgrades are FX Composer 2, PerfHUD 5, and ShaderPerf 2. They are also releasing a new GPU-accelerated texture tool, plus a shader library. The most interesting toolkit is the DX10 SDK. Targeted at the GeForce 8 series of GPUs (the only GPUs that can run D3D10 so far), it's a collection of samples for both OpenGL and DirectX, executables and source, that demonstrate and showcase DX10 features. The installer looks and acts like the Microsoft DX Sampler.

The DirectX collection makes up the bulk of the samples, while the OpenGL side is a bit thin. The samples include rain, smoke, fur, shadows, cloth simulation, render-target usage, etc. The Direct3D SDK is 256MB, while the OpenGL SDK is 45MB.

To compile the source you'll need Microsoft Visual Studio 2005, plus the February 2007 DirectX SDK (which you can get here) for the DirectX samples.

If you actually want to run the code you'll need a DirectX 10 video card, which currently means an NVIDIA GeForce 8. There are videos of the programs, so even if you don't have a DX10 video card you can still see them running.

You can get the SDK 10 at developer.nvidia.com

NVidia SDK 10 Image

Enhancing Vista Performance for under 10 bucks

I’m going to pass along one cool thing I found out about Windows Vista. I normally try to put a lot of my working files on a RAM disk; I spend most of my day writing code, and a lot of that time is spent writing files to the hard disk. The Vista folks revisited this idea by enhancing the caching mechanism in Vista, allowing you to add an inexpensive USB 2.0 memory device to your PC to boost performance by up to 100% in some situations. I was able to add a cheap 2GB SD memory card to my PC for under $10 (from Buy.com, after rebate) and boost my PC’s performance. First, let’s go over the technologies that make this possible:

SuperFetch: SuperFetch is an enhanced version of XP’s PreFetch feature; it examines which apps you frequently load and intelligently preloads them into memory. It is pretty dependent on how predictable you are, but if you tend to use a few apps frequently, SuperFetch will attempt to preload their data into memory and can significantly speed up load times.

ReadyBoost: SuperFetch will preload data to your hard drive’s virtual-memory page file so the data is in the right format to be read into memory; it’s just sitting on your hard disk. ReadyBoost is the next step. If you have a flash memory device on your machine (a USB drive, SD card, etc.) you can designate all or part of it as a ReadyBoost device, and the SuperFetch data will be loaded onto it. When you insert a memory device you'll see the "Speed up my system" option appear.

ReadyDrive: A ReadyDrive is a new piece of hardware, essentially a hard disk with ReadyBoost-style flash memory built into it. These hard disks are just starting to be built now, but in the meantime you can get most of the benefits of one by designating some USB memory for use as ReadyBoost.

readyboost image

 

After Windows tests and passes the device, you'll get the ReadyBoost properties dialog.

Here are some facts that you should know before you start.

How fast?: The memory should be pretty fast: 2.5MB/sec throughput for reads, 1.75MB/sec for writes. Vista will test the memory and fail the device if it’s too slow. I picked up a cheap 2GB USB drive only to find it didn’t pass muster. Pick a good one. (Sometimes reformatting as NTFS will allow it to pass.)

What size?: The range is 250MB to 4GB, at a ratio of 1:1 (low) to 1:2.5 (high) of system memory to flash memory. The current limitation is one ReadyBoost drive per machine.
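The sizing rule above is easy to misremember, so here's the arithmetic spelled out as a small helper. The function is my own illustration (Vista does its own math); it just encodes the quoted 1:1 to 1:2.5 ratio and the 250MB-4GB device range:

```cpp
#include <utility>

// Returns the (low, high) recommended flash size in MB for a given amount
// of system RAM: between 1x and 2.5x your RAM, clamped to the 250MB-4GB
// range ReadyBoost will actually use. Illustrative helper only.
std::pair<unsigned, unsigned> readyboost_range_mb(unsigned ram_mb) {
    auto clamp_mb = [](unsigned v) -> unsigned {
        if (v < 250)  return 250;   // minimum usable device size
        if (v > 4096) return 4096;  // maximum Vista will use
        return v;
    };
    unsigned low  = clamp_mb(ram_mb);          // 1:1 ratio
    unsigned high = clamp_mb(ram_mb * 5 / 2);  // 1:2.5 ratio
    return { low, high };
}
```

So with 1GB of RAM you'd want a 1GB-2.5GB device; with 2GB of RAM the high end is already capped at 4GB.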

Is the data secure?: All cache data is encrypted.

What happens when the drive is pulled?: Performance falls back to the hard disk. The device holds a cache of what's already on the hard disk, so the USB memory just speeds access to it. SD cards have an advantage here: they don't have an "in-use" LED like most USB drives, which matters with my combined floppy/USB/19-in-one device-reader drive.

ExtremeTech did a test of some USB drives in their article: USB Flash Memory for Windows Vista ReadyBoost

Here’s a list of ReadyBoost tested devices.

For more info on these technologies see the Microsoft Vista Performance page

See Tom Archer’s Blog for a ReadyBoost Q&A.
