Here is an idea: The most important factor for backtracking performance is the search strategy, so what about using machine learning for it? A solution would still be guaranteed to be found if one exists, but the AI could speed up the search.
The problem is: How does the AI learn from examples?
In backtracking, whether a solution can still be found typically depends on previous choices. Say you plant a tree, and a house can only be built at least 4m away from all trees. Later on, a 5m wide road must be built. If you put the house too close to a tree, there is no room for the road.
By anticipating choices that might fail, the AI could restrict the search space and prioritize moves that are more likely to lead to a solution. One way to do this is to reason about the problem geometrically: internalize the constraints and keep a separate map that visualizes the relations between objects. The goal is to model consequences that are not immediately apparent from the constraints alone, so the AI must represent the constraints in a different form, one that predicts upcoming problems before they happen.
Shared language between puzzle constraints and AI
One technique is to use a shared language between the puzzle constraints and the AI: the AI copies the constraints from the puzzle and makes them stronger. For example:
distance(tree, house) > 4m
This could be modified to:
distance(tree, house) > 5m
This would solve the road-between-house-and-tree problem. Even if the AI doesn't "know" that putting the house too close to a tree usually causes problems, it can rely on the heuristic that constraints may never be relaxed, only tightened, to make the search more efficient.
Another way to interpret this: The AI tries to be more careful than the hard constraints set by the user.
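The idea above can be sketched in code. This is a minimal illustration, not a full solver; the class name, field names, and the 1m margin are assumptions for the example:

```python
# A hard constraint paired with a learned safety margin that only
# ever tightens it. hard_ok is the puzzle's rule; soft_ok is the
# AI's stricter copy of the same rule.

from dataclasses import dataclass

@dataclass
class DistanceConstraint:
    a: str               # first object, e.g. "tree"
    b: str               # second object, e.g. "house"
    min_dist: float      # hard lower bound set by the puzzle (metres)
    margin: float = 0.0  # learned tightening; must stay >= 0

    def hard_ok(self, dist: float) -> bool:
        """The puzzle's own rule: can never be relaxed."""
        return dist > self.min_dist

    def soft_ok(self, dist: float) -> bool:
        """The AI's stricter version of the same rule."""
        return dist > self.min_dist + self.margin

# The tree/house example: hard bound 4m, learned margin 1m
c = DistanceConstraint("tree", "house", 4.0, margin=1.0)
print(c.hard_ok(4.5))  # True: a legal placement
print(c.soft_ok(4.5))  # False: the AI would avoid it anyway
```

Because `soft_ok` implies `hard_ok`, the AI can skip placements that its tightened constraints reject without ever ruling out a region the puzzle actually allows it to need.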
Modified soft constraints that give penalty points when broken
The modified constraints could be treated as soft constraints, i.e. rules of thumb. The AI gives itself penalty points when breaking them. Even the penalty points could be learned, so the AI becomes a self-enforcing decision maker.
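A sketch of how soft-constraint penalties might be scored; the data layout and the penalty weight here are illustrative assumptions:

```python
# Each broken rule of thumb costs penalty points, and the search
# prefers partial solutions with a lower total penalty.

def penalty(distances, soft_constraints):
    """Sum penalty points over violated soft constraints.

    distances: dict mapping (a, b) object pairs to their distance
    soft_constraints: list of (pair, min_dist, weight) tuples
    """
    total = 0.0
    for pair, min_dist, weight in soft_constraints:
        if distances.get(pair, float("inf")) <= min_dist:
            total += weight  # broken rule of thumb: add its penalty
    return total

# Tightened 4m -> 5m rule for the tree/house pair, worth 10 points
soft = [(("tree", "house"), 5.0, 10.0)]
print(penalty({("tree", "house"): 4.5}, soft))  # 10.0: too close
print(penalty({("tree", "house"): 6.0}, soft))  # 0.0: safe placement
```

Unlike a hard constraint, a violated soft constraint does not prune the branch; it only makes the branch less attractive, so the search degrades gracefully when the rules of thumb are wrong.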
Training
Training can be done using the following method:
- Pick a random constraint
- Look at how a small change to it affects the number of iterations needed to find a solution
- Update in the direction that gives fewer iterations, using a learning rate
- Repeat for randomly generated puzzles, making a small change for each
Over time the AI will get better at searching for solutions. It will anticipate problems that are not immediately obvious by looking at the constraints alone.
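The training loop above can be sketched as a finite-difference update on one margin. The real measurement would come from running the backtracking solver on a random puzzle and counting iterations; `solver_iterations` below is a stand-in with a known best margin near 1.0, so the update rule is easy to check:

```python
import random

def solver_iterations(margin, puzzle_seed):
    # Stand-in cost: too small a margin means many dead ends, too
    # large over-restricts the search. The per-puzzle noise depends
    # only on the seed, so it cancels in the finite difference.
    rng = random.Random(puzzle_seed)
    return (margin - 1.0) ** 2 * 100 + rng.uniform(0, 5)

def train(margin, steps=200, delta=0.05, lr=0.001):
    for seed in range(steps):
        # Finite difference: how does a small change to the
        # constraint affect iteration count on this puzzle?
        up = solver_iterations(margin + delta, seed)
        down = solver_iterations(margin - delta, seed)
        grad = (up - down) / (2 * delta)
        margin -= lr * grad        # move toward fewer iterations
        margin = max(0.0, margin)  # margins may only tighten
    return margin

print(round(train(0.0), 2))  # converges near the best margin, 1.0
```

In practice each step would pick one constraint at random from the full set, as the bullet list describes, rather than updating a single fixed margin.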
Genetic algorithms
A set of soft constraints can be used as "genes". Changes to the constraints are mutations, and different sets can be mixed from two parents to create an offspring. The most successful genes will spread throughout the population, increasing the chance of survival.
Training with this method might require a large number of randomly generated puzzles per generation. One could also mutate and reproduce at random, guided by a smoothed average of performance. The AIs whose performance converges highest could be selected for more offspring, mixing genes with others in the same generation.
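A toy sketch of the genetic approach. Each genome is a vector of margins, one per soft constraint; the fitness function is a stand-in for average iteration count over a batch of puzzles (the `BEST` vector and all parameters are assumptions for the example):

```python
import random

BEST = [1.0, 0.5, 2.0]  # toy "ideal" margins the population should find

def fitness(genome):
    # Stand-in for average solver iterations: lower is better.
    return sum((g - b) ** 2 for g, b in zip(genome, BEST))

def mutate(genome, rng, scale=0.1):
    i = rng.randrange(len(genome))  # small change to one gene
    child = list(genome)
    child[i] = max(0.0, child[i] + rng.gauss(0, scale))
    return child

def crossover(a, b, rng):
    # Mix genes from two parents, one coin flip per constraint
    return [x if rng.random() < 0.5 else y for x, y in zip(a, b)]

def evolve(generations=150, pop_size=30, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 3) for _ in BEST] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)           # best genomes first
        parents = pop[: pop_size // 2]  # survivors reproduce
        children = [
            mutate(crossover(rng.choice(parents),
                             rng.choice(parents), rng), rng)
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return min(pop, key=fitness)

print(fitness(evolve()))  # low: the population has drifted toward BEST
```

Keeping the best half of each generation unchanged (elitism) guarantees the best fitness never gets worse, which matters when the real fitness signal is a noisy iteration count.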