An optimal solution sounds like the obvious goal for any scheduling operation. But is optimal always practical? Not necessarily. If you have an environment with frequently changing requirements, as most operational situations do, you may need to shoot for “excellent” rather than “optimal.”

Optimizing a schedule involves a search for alternative resources to use and the times to use them. But there are many constraints that must be satisfied. The resources have limited availability, and when they are available, it is often in limited quantities. Start times for the activities must sometimes be sequenced so that predecessor and successor constraints (e.g., routings or project sequences) are satisfied. So the optimal schedule must satisfy all these constraints *and* be the best alternative. (If you are interested in the mathematical formulation of the general scheduling problem, contact me and I will send you the mathematics!)

Optimization of a scheduling solution involves some type of search that is really just an organized trial-and-error process. All optimization algorithms select complete schedule alternatives and evaluate each according to whether it satisfies all the constraints and is better than any alternative that has already been tried. The search continues until no better alternative can be found and all possibilities have been considered.

The way constraint satisfaction is handled differs among the various optimization techniques. One approach is to apply a numeric penalty to alternatives that do not satisfy the constraints—a ranking of sorts. An alternative that is far from satisfying the constraints is given a big penalty, and one that is close gets a small penalty. Not surprisingly, this approach is called the “penalty function” method. After exhaustive searching, the optimization technique should find an alternative with no penalty and the best score based on the scheduling objective. But notice that because of the unfriendly nature of the mathematics, at any point in the search the current alternative may or may not satisfy the constraints. Therefore, the search must proceed to its conclusion before any useful result is guaranteed. (Examples of this approach include simulated annealing, genetic algorithms, tabu search, branch and bound, linear programming, and neural networks.) In a practical setting, when a schedule needs a modification or a new activity is added, the optimization has to start over, and it will likely change everything from the previous solution. The solutions might be optimal, but they are also very unrealistic in real-world working environments.
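To make the idea concrete, here is a minimal sketch of a penalty-function search in Python. The job data, the penalty weight, and the random-sampling search are all illustrative assumptions, not any particular product's algorithm. The point to notice: infeasibility only *costs* points, it is never excluded, so the best-so-far candidate may still violate the constraints until the search runs to completion.

```python
import random

# Toy problem (illustrative data): pick start times for two jobs on one machine.
DURATIONS = [3, 2]
HORIZON = 10  # latest allowed start time (an assumed bound)

def overlap(starts):
    """How much the two jobs overlap on the shared machine (0 = feasible)."""
    a, b = starts
    return max(0, min(a + DURATIONS[0], b + DURATIONS[1]) - max(a, b))

def penalty(starts):
    # Constraint violation is only *priced*, not excluded (weight 10 is arbitrary).
    return 10 * overlap(starts)

def objective(starts):
    # Makespan: when the last job finishes (lower is better).
    return max(s + d for s, d in zip(starts, DURATIONS))

def penalized_search(iterations=5000, seed=0):
    """Organized trial and error: sample alternatives, keep the best score."""
    rng = random.Random(seed)
    best, best_score = None, float("inf")
    for _ in range(iterations):
        cand = [rng.randint(0, HORIZON) for _ in DURATIONS]
        score = objective(cand) + penalty(cand)
        if score < best_score:
            best, best_score = cand, score
    return best, best_score

starts, score = penalized_search()
```

Cutting this loop off early could return an overlapping, unusable schedule, which is exactly the practical weakness of the penalty approach.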

Another strategy is to search only those alternatives that satisfy the constraints. This approach is called “constraint-space” searching. It relies on being able to lay out all the possibilities that satisfy the constraints before scoring any one of them. The benefit is that at any point in the search process a feasible schedule exists, which means you can end the search as quickly as you want or need and still have a working schedule as a result. This is a much more practical approach: if every iteration of the schedule is feasible, I can search as long or as briefly as I want to find better alternatives.
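Here is a hedged sketch of the contrast, using a toy single-machine problem (the job durations and the total-completion-time objective are assumptions for illustration). Only feasible alternatives are ever generated, so the loop can be stopped after any iteration and the best-so-far is still a working schedule.

```python
from itertools import permutations

# Toy single-machine problem (illustrative data): three jobs, any order is legal.
DURATIONS = [3, 2, 4]

def feasible_schedules():
    """Enumerate only alternatives that satisfy the constraints: each job
    order, packed left with no overlap, is feasible by construction."""
    for order in permutations(range(len(DURATIONS))):
        starts, t = {}, 0
        for job in order:
            starts[job] = t
            t += DURATIONS[job]
        yield starts

def total_completion_time(starts):
    # Objective (assumed for illustration): sum of job completion times.
    return sum(start + DURATIONS[job] for job, start in starts.items())

# Every iterate is usable, so this loop can be cut off at any point
# and best_schedule is still a feasible, working schedule.
best_schedule, best_cost = None, float("inf")
for schedule in feasible_schedules():
    cost = total_completion_time(schedule)
    if cost < best_cost:
        best_schedule, best_cost = schedule, cost
```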

Two questions must be answered:

- Can all alternatives that satisfy the constraints be enumerated?
- Can the processing run long enough and be smart enough to produce a good schedule from the alternatives being evaluated? That is, does the process tend to converge as the possibilities are evaluated?

There is some good news here! In scheduling, the alternatives come from two sources:

- The combination of discrete alternatives that have their origin in resource selection (e.g., which machines? which people? which routings?); and
- The selection of start times.

It turns out that with a little algebra, we can screen a combination of resources for feasibility *and* determine all its feasible start times. Therefore, we can score only the feasible alternatives. Voila! We are able to do a constraint-space search! And in most cases, the number of feasible combinations is small enough to get *excellent* solutions. Better yet, we are guaranteed that the solutions satisfy all of the constraints. Admittedly, we cannot claim optimality as a certainty, because we had to process the alternatives in some logical order, which may or may not produce the provable optimum. But experience has shown that the solutions are as good as, or better than, what a human scheduler can produce, and they are always usable.
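As a sketch of that algebra (the availability windows, duration, and earliest-finish objective below are all hypothetical): intersecting each resource's open intervals with the activity's duration yields exactly the feasible start-time ranges, so only feasible (resource, start) pairs are ever scored.

```python
# Hypothetical availability windows per resource alternative (assumed data).
WINDOWS = {
    "machine_A": [(0, 4), (6, 12)],  # (open, close) intervals
    "machine_B": [(2, 9)],
}
DURATION = 3  # activity processing time (assumed)

def feasible_start_ranges(resource):
    """The 'little algebra': start t fits window (lo, hi) exactly when
    lo <= t and t + DURATION <= hi, i.e. t lies in [lo, hi - DURATION]."""
    return [(lo, hi - DURATION) for lo, hi in WINDOWS[resource]
            if hi - lo >= DURATION]  # screen out windows too short to fit

# Score only feasible (resource, start) pairs; here we take the earliest
# start of each range and prefer the earliest finish (an assumed objective).
candidates = [(resource, lo)
              for resource in WINDOWS
              for lo, _ in feasible_start_ranges(resource)]
best_resource, best_start = min(candidates, key=lambda c: c[1] + DURATION)
```

Infeasible combinations (windows too short, resources never available) are eliminated before scoring, which is what keeps the candidate count small.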

So what is the takeaway? If you have a one-time scheduling problem and can afford to run a logical search engine for as long as necessary, then get an optimizer and go for it.

If you have an environment with frequently changing requirements, as most operational situations do, pick a constraint-space algorithm with a good alternative-generation capability and an intelligent processing sequence for your scheduling application.

Need more information? Check out my posting about **the difficulties of optimization as it applies to scheduling**. Have a comment? Let me know what you think.