Rules Research Archaeology

January 7, 2010

Most readers should be familiar with George Santayana’s quote about remembering the past – “Those who cannot remember the past are condemned to repeat it” – also called Santayana’s Law of Repetitive Consequences.

The rise of the BRMS over the last few years has brought lots of enthusiastic new members to our little rules world. These people are eager to contribute to the field and make their own mark. It is the job of the “old guard” to make sure the newcomers are properly aware of the prior research in our field. (Having only participated in the space since 1995, I consider myself to still be a newcomer.)

For example, the rise of the multi-core processor means that a great deal of older research on parallel rule engines is newly relevant. For another example, the classic work on conflict resolution strategies doesn’t appear to be online and sits in a long out-of-print, 30-year-old book. (At least the prices for “Pattern-Directed Inference Systems” are somewhat affordable – as of this writing, “Human Problem Solving” starts at $190 and goes up to $800 on Amazon.) A third example: the Wikipedia article on the Rete algorithm cites only papers that are not online for one reason or another. (I personally haven’t even seen the “A Network Match Routine for Production Systems” working paper.)

Thus, I would like to highlight a few useful resources:

We need to work together as a group to improve the online availability of our history.


First Larrabee Chip Canceled

December 7, 2009

Looks like the first stand-alone Larrabee chip has been canceled.


Apple open-sourced libdispatch

September 17, 2009

A quick note to point out that Apple recently open-sourced libdispatch, the library that implements Grand Central Dispatch (GCD).

Here are Apple’s introduction to Blocks and Grand Central Dispatch, and the Ars Technica writeup on GCD.

Note that this release doesn’t include the kernel or compiler support code. However, it will be interesting to see where it goes from here.


Direct3D 11 Compute Shaders

August 4, 2009

In other news, the upcoming Microsoft Direct3D 11 will feature compute shaders. If I read correctly, this will ship with Windows 7. NVIDIA is already out promoting its compatibility with CUDA. Apparently, this technology is also sometimes called DX Compute.


A Survey Of Programming Video Cards For Other Purposes

August 3, 2009

The August 2009 issue of ;login: from USENIX has a nice article on programming video cards by Tim Kaldewey, entitled “Programming Video Cards For Database Applications”. Sadly, the article is available only to USENIX members until August 2010.

Kaldewey surveys the past and present of programming video cards for non-graphics purposes – from the early days of using the graphics APIs to fool the GPU into thinking it is rendering graphics when it is really performing a general-purpose calculation, to the present era of general-purpose APIs such as CUDA.

He also shows a back-of-the-envelope calculation for building out a 100 teraflop data center using 100 GPUs versus 1400 CPUs, including power consumption differences.
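The shape of that calculation is easy to reproduce. The per-device throughput and power figures below are my own rough assumptions for 2009-era hardware, chosen only to illustrate the method – they are not Kaldewey’s numbers.

```python
import math

# Back-of-the-envelope: devices (and power) needed for a 100-teraflop
# data center. All per-device figures are illustrative assumptions.

TARGET_TFLOPS = 100.0

GPU_TFLOPS = 1.0      # assumed peak throughput per GPU (~1 TFLOPS)
CPU_TFLOPS = 0.072    # assumed peak throughput per CPU (~72 GFLOPS)

GPU_WATTS = 200.0     # assumed board power per GPU
CPU_WATTS = 100.0     # assumed socket power per CPU

gpus = math.ceil(TARGET_TFLOPS / GPU_TFLOPS)
cpus = math.ceil(TARGET_TFLOPS / CPU_TFLOPS)

print(f"GPUs needed: {gpus}  (~{gpus * GPU_WATTS / 1000:.0f} kW)")
print(f"CPUs needed: {cpus}  (~{cpus * CPU_WATTS / 1000:.0f} kW)")
```

With these assumed figures the script lands near the article’s 100-GPU-versus-roughly-1400-CPU comparison, and makes the power gap explicit as well.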

If you are a USENIX member, the article is a good read. Unfortunately, it will no longer be current by the time it finally becomes freely available.

[The same issue of ;login: also has a nice article by Leo Meyerovich: “Rethinking Browser Performance”.]


Moving AI Onto GPUs

February 16, 2009

And what do we have here? It seems that NVIDIA and AMD are already on top of the idea of offloading AI onto GPUs.



Multicore Video Cards (Again)

February 16, 2009

I’ve previously posted on the topics of CUDA and Larrabee. I continue to be intrigued by the possibilities that open up as multi-core GPU programming becomes available. For applications that need many threads, this should present interesting opportunities. Why struggle to run your parallel application on the meager 4 or 8 cores of your CPU when you can offload the work to 32 cores?



Parallel Rule Engines: What About Your Video Card?

November 15, 2008

While I’m on the subject of Dr. Charles Forgy’s talk at ORF 2008…

Has anybody tried to compile CLIPS under CUDA?

We know from ORF that Dr. Forgy is running his parallel version of OPS/J on a 4-core machine. I’m curious to see the same ideas applied to 32+ core video cards, such as NVIDIA’s CUDA-capable GPUs and Intel’s upcoming Larrabee.