Friday, June 29, 2012

Minimizing Horn formulas using domain knowledge

Assume you have several implications

(x1 & x2) -> x3
(x1 & !x2) -> x3
...

Since you will often want to query whether x3 needs to be true under different truth assignments to your variables, you might want to optimize your formulas with respect to the number of literals.

For a given right-hand-side variable, this is equivalent to a DNF logic minimization task that can be done, e.g., with the Quine-McCluskey algorithm: the disjunction of all left-hand sides with that right-hand side is a formula in DNF, and the terms (implicants) of this formula are exactly those left-hand sides.

The implications above can be optimized by minimizing (x1 & x2) || (x1 & !x2), which yields the formula x1, and thus the optimized implication x1 -> x3.
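
To illustrate the merging step, here is a rough Python sketch (only the merging phase, not a full Quine-McCluskey implementation with a prime implicant cover; the names lhs_terms, merge and minimize are made up for this post):

from itertools import combinations

# Each left-hand side is a dict mapping variable name -> required truth value.
# These are the two left-hand sides from the example that share the RHS x3.
lhs_terms = [
    {"x1": True, "x2": True},
    {"x1": True, "x2": False},
]

def merge(t1, t2):
    # Basic Quine-McCluskey merge: two terms over the same variables that
    # differ in exactly one variable's value combine into one term without
    # that variable, e.g. (x1 & x2) and (x1 & !x2) merge to x1.
    if set(t1) != set(t2):
        return None
    diff = [v for v in t1 if t1[v] != t2[v]]
    if len(diff) != 1:
        return None
    merged = dict(t1)
    del merged[diff[0]]
    return merged

def minimize(terms):
    # Repeatedly merge pairs of terms until no merge applies any more.
    changed = True
    while changed:
        changed = False
        for a, b in combinations(terms, 2):
            m = merge(a, b)
            if m is not None:
                terms = [t for t in terms if t not in (a, b)] + [m]
                changed = True
                break
    return terms

print(minimize(lhs_terms))   # [{'x1': True}], i.e. the implication x1 -> x3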

What does this have to do with HornSAT? As you may have noticed, my examples are not Horn formulas. However, in practice you will sometimes model a problem as HornSAT that has additional, implicit constraints. For example, if some variable v potentially has n states, you may model this as n logical variables in your formulas.

These variables have properties that you normally don't explicitly model in your formulas, because they would only blow up the logical representation without helping the original problem. The property I am referring to is the assumption that v takes exactly one of its n states, so exactly one of the n logical variables will be true at any time.

This "exactly one" property plays the same role that the principle of bivalence (a logical variable is either true or false, never both and never neither) plays in ordinary logic minimization! Thus, using domain knowledge, one can derive domain-specific logic minimization algorithms with a small alteration of "normal" logic minimization algorithms. I did this once for extracting logical rules that were implicitly contained in Bayesian networks (it didn't help for inference, but it gave me the idea).
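
Here is a rough sketch of such a domain-specific merge, again in Python and again with made-up names (states, merge_states); it assumes a one-hot encoding where exactly one state literal of v is true at a time, and it ignores negated literals:

# Domain knowledge: v is a multi-state variable encoded by three logical
# variables, exactly one of which is true at any time.
states = {"v": ["v_a", "v_b", "v_c"]}

# Left-hand sides of (v_a & y) -> x3, (v_b & y) -> x3 and (v_c & y) -> x3.
lhs_terms = [
    frozenset({"v_a", "y"}),
    frozenset({"v_b", "y"}),
    frozenset({"v_c", "y"}),
]

def merge_states(terms, states):
    # Domain-specific merge: if the left-hand sides cover *all* states of a
    # multi-state variable and agree on everything else, that variable can
    # be dropped, because exactly one of its states holds anyway.
    terms = set(terms)
    for lits in states.values():
        lits = set(lits)
        groups = {}
        for t in terms:
            # Group each term by its literals that do not belong to v.
            groups.setdefault(frozenset(t - lits), set()).update(t & lits)
        for rest, covered in groups.items():
            if covered == lits:
                terms = {t for t in terms
                         if not (t - lits == rest and t & lits)}
                terms.add(rest)
    return terms

print(merge_states(lhs_terms, states))   # {frozenset({'y'})}, i.e. y -> x3

A "normal" minimization algorithm could not perform this last step, because without the "exactly one" assumption the three implications do not cover all truth assignments of v_a, v_b and v_c.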

Note that, while HornSAT is an easy problem (satisfiability of Horn formulas can be decided in linear time), logic minimization is hard (two-level minimization is NP-hard in general).