A decision tree is a rooted tree with three types of nodes: decision nodes, event nodes (chance nodes), and consequence nodes. In a decision tree, squares represent decisions to be made (decision nodes), and circles represent chance events (event nodes). The edges emanating from a square represent the identified alternatives, or choices, available to the decision maker, and the edges from an event node represent the possible outcomes of a chance event together with an associated probability distribution. The third decision element, the consequence, is specified at the leaves as consequence nodes. Each of these is associated with a real-valued number representing the utility of the corresponding consequence. An example of a decision tree is presented in the screenshot from DecideIT:
Figure 1: A decision tree in DecideIT (note that this screenshot is a zoomed-out view and shows fewer details than the default view).
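The evaluation of such a tree under the classical principle of maximising the expected utility can be sketched as a simple recursion. The dict-based node representation and the `evaluate` helper below are illustrative assumptions, not DecideIT's internal data structures.

```python
# A minimal sketch of decision-tree evaluation under the standard
# expected-utility rule. Node names and structure are illustrative.

def evaluate(node):
    """Return the expected utility of a (sub)tree."""
    kind = node["kind"]
    if kind == "consequence":          # leaf: a real-valued utility
        return node["value"]
    if kind == "chance":               # circle: probability-weighted sum
        return sum(p * evaluate(child)
                   for p, child in node["outcomes"])
    if kind == "decision":             # square: best available alternative
        return max(evaluate(child) for child in node["alternatives"])
    raise ValueError(f"unknown node kind: {kind}")

# A small example tree: one decision with two alternatives.
tree = {
    "kind": "decision",
    "alternatives": [
        {"kind": "chance", "outcomes": [
            (0.6, {"kind": "consequence", "value": 100}),
            (0.4, {"kind": "consequence", "value": -50}),
        ]},
        {"kind": "consequence", "value": 30},
    ],
}
# chance branch: 0.6*100 + 0.4*(-50) = 40, which beats the certain 30
print(evaluate(tree))
```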
Figure 2: Entering imprecise probabilities, using a probability template for the outcome leading to E6. For the outcome C12, we explicitly set the contraction point to 0.55.
Influence diagrams are, when evaluated, transformed into a corresponding symmetric decision tree by a conversion algorithm that creates a total ordering of all connected nodes in the diagram, with barren nodes discarded. The algorithm traverses the directed arcs and orders the nodes according to a set of rules. When the topology of the graph alone is not enough to order the nodes, a node placed to the left is converted before a node to the right. It is also possible to convert an influence diagram into an instance of a decision tree and continue the modelling work on this tree.
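The ordering step can be illustrated as a topological sort over the directed arcs, with ties broken by horizontal position so that a leftmost node comes first. This is a simplified sketch under that assumption; DecideIT's full rule set is richer, and the `conversion_order` helper and its inputs are hypothetical.

```python
# A hedged sketch of the node-ordering step: a topological sort over
# the directed arcs, breaking ties by horizontal screen position
# (a node placed to the left is converted before a node to the right).
import heapq

def conversion_order(nodes, arcs):
    """nodes: {name: x_position}; arcs: list of (from, to) pairs."""
    indegree = {n: 0 for n in nodes}
    succ = {n: [] for n in nodes}
    for a, b in arcs:
        succ[a].append(b)
        indegree[b] += 1
    # ready nodes are keyed by x-position so the leftmost is popped first
    ready = [(x, n) for n, x in nodes.items() if indegree[n] == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, n = heapq.heappop(ready)
        order.append(n)
        for m in succ[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                heapq.heappush(ready, (nodes[m], m))
    return order

# The decision node D at x=0 precedes the chance nodes E1 and E2,
# and E1 (x=100) is converted before E2 (x=200).
print(conversion_order({"D": 0, "E1": 100, "E2": 200},
                       [("D", "E1"), ("D", "E2")]))
# → ['D', 'E1', 'E2']
```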
Editing the properties of a node in an influence diagram is analogous to the same procedure for a decision tree. There are, however, some differences between the node property frames of the two models. In an influence diagram, the user is given an overview of the conditional expansion order when editing the properties of a conditionally dependent chance node.
Figure 3: Entering conditional probabilities for a conditionally dependent chance node in an influence diagram.
Reversal of arcs is possible between two chance nodes in an influence diagram that share a common information state and have no other directed path between them. At present, arc reversal in DecideIT simply employs the intuitive concept of conditional probability, and re-flipping an arc will not restore the original values of interval probabilities as it does in the precise case. The user of DecideIT may, however, choose not to let the software automatically suggest new conditional probabilities when flipping an arc.
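For precise probabilities, a single arc reversal is just Bayes' rule: from P(A) and P(B|A) one obtains P(B) and P(A|B). The sketch below illustrates this precise case only; as noted above, interval probabilities do not survive a flip and re-flip unchanged. The `reverse_arc` helper and its dict representation are illustrative assumptions.

```python
# Arc reversal between two chance nodes with precise probabilities,
# using Bayes' rule: from P(A) and P(B|A), compute P(B) and P(A|B).

def reverse_arc(p_a, p_b_given_a):
    """p_a: {a: P(A=a)}; p_b_given_a: {a: {b: P(B=b|A=a)}}."""
    # marginalise out A to get the new unconditional node B
    p_b = {}
    for a, pa in p_a.items():
        for b, pba in p_b_given_a[a].items():
            p_b[b] = p_b.get(b, 0.0) + pa * pba
    # condition the other way to obtain the reversed arc B -> A
    p_a_given_b = {
        b: {a: p_a[a] * p_b_given_a[a][b] / p_b[b] for a in p_a}
        for b in p_b
    }
    return p_b, p_a_given_b

p_b, p_a_given_b = reverse_arc(
    {"a1": 0.3, "a2": 0.7},
    {"a1": {"b1": 0.9, "b2": 0.1},
     "a2": {"b1": 0.2, "b2": 0.8}})
print(p_b)                       # ≈ {'b1': 0.41, 'b2': 0.59}
print(p_a_given_b["b1"]["a1"])   # ≈ 0.27 / 0.41 ≈ 0.659
```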
Probability and Value Statements
In a chance node in a tree or influence diagram, it is possible to state comparative relations between the probabilities of different outcomes. These statements are then added to the constraint sets. Value statements are set in an analogous fashion.
Figure 4: Setting a comparative probability statement: the probability of the outcome leading to C5 is at least 0.05 higher than the probability of ending up with C3.
Note that by using this feature, it is possible to handle qualitative probabilities and utilities in a common framework together with the interval approach, so both decision trees and influence diagrams can accommodate quantitative as well as qualitative information.
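A comparative statement of the kind shown in Figure 4 can be viewed as a linear constraint on the set of admissible probability distributions. Below is a minimal sketch of this reading, with hypothetical names and a deliberately simple feasibility check; it is not DecideIT's constraint machinery.

```python
# A minimal sketch of a constraint set for comparative probability
# statements. A statement like "P(C5) >= P(C3) + 0.05" becomes a
# linear constraint that candidate distributions must satisfy.

def satisfies(dist, constraints, tol=1e-9):
    """dist: {outcome: prob}; constraints: (bigger, smaller, margin)."""
    if abs(sum(dist.values()) - 1.0) > tol:   # must be a distribution
        return False
    return all(dist[big] >= dist[small] + margin - tol
               for big, small, margin in constraints)

stmts = [("C5", "C3", 0.05)]   # P(C5) at least 0.05 higher than P(C3)
print(satisfies({"C3": 0.30, "C5": 0.40, "C7": 0.30}, stmts))  # True
print(satisfies({"C3": 0.40, "C5": 0.30, "C7": 0.30}, stmts))  # False
```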
Presentation of Evaluation Results
Results are presented as a graph. The x-axis shows the cut in per cent, ranging from 0% to 100%, and the y-axis shows the possible differences in expected value between a pair of alternatives. It is also possible to compare one alternative against an average of a set of alternatives. The shrinking area in the graph depicts the expected value difference under increasing degrees of cutting. As can be seen, the higher the cut level used, the more equal the alternatives appear according to the principle of maximising the expected utility. At a 100% cut, where the results of the algorithms coincide with the ordinary expected value, the result implies that A3 is the better alternative. Taking the impreciseness into account, however, it may not be that simple.
Figure 5: Pairwise comparison of two alternatives, using the DELTA method. After about 75% cut, we see that PV[0.5]mid(δ13) < 0.
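The cutting itself can be sketched as a linear contraction of each interval toward its contraction point, by default the midpoint (Figure 2 shows an explicitly chosen point of 0.55), so that at a 100% cut only the point estimates remain. The `contract` helper below is a minimal illustration of this idea, not DecideIT's implementation.

```python
# A sketch of the "cut": at cut level k in [0, 1] every interval
# shrinks linearly toward its contraction point, so at k = 1 (100%)
# the evaluation coincides with the ordinary expected value.

def contract(lo, hi, cut, point=None):
    """Shrink [lo, hi] toward the contraction point by factor `cut`."""
    if point is None:
        point = (lo + hi) / 2           # default: the interval midpoint
    return (lo + cut * (point - lo), hi - cut * (hi - point))

print(contract(0.25, 0.75, 0.0))   # (0.25, 0.75)  no cut
print(contract(0.25, 0.75, 0.5))   # (0.375, 0.625)  halfway contracted
print(contract(0.25, 0.75, 1.0))   # (0.5, 0.5)  point estimate only
print(contract(0.2, 0.9, 1.0, point=0.55))  # collapses to the chosen point
```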
In Figure 6, we investigate at which cut level a given security level holds in the worst case. From this perspective, an all-green alternative can be considered completely safe.
Figure 6: A security analysis with a security level of -100 as the lowest acceptable value and 0.02 as the highest acceptable probability.
In complex decision situations with large sets of consequences, it can be time-consuming to identify the preference ordering of the consequences, and DecideIT therefore offers a graphical overview of this relation on a set of consequences. The ordering is determined by checking whether vij − vkl > 0 is consistent with the value base. If it is not, vij is before vkl in the partial ordering. Thereafter, obvious transitive relationships are removed.
Figure 7: Preference order among consequences, where C1 is the most preferred consequence.
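Removing the obvious transitive relationships amounts to computing the covering relation of the partial order, i.e. a transitive reduction. A small sketch with hypothetical consequence names; the edge representation is illustrative.

```python
# A sketch of the pruning step for the graphical preference overview:
# edges implied by transitivity are removed so that only the covering
# relation of the partial order is drawn.

def transitive_reduction(edges):
    """Remove (a, c) whenever another path from a to c exists."""
    succ = {}
    for a, b in edges:
        succ.setdefault(a, set()).add(b)

    def reachable(a, c, skip):   # is there a path a -> c avoiding `skip`?
        stack, seen = [a], set()
        while stack:
            n = stack.pop()
            for m in succ.get(n, ()):
                if (n, m) == skip or m in seen:
                    continue
                if m == c:
                    return True
                seen.add(m)
                stack.append(m)
        return False

    return {e for e in edges if not reachable(e[0], e[1], skip=e)}

edges = {("C1", "C2"), ("C2", "C3"), ("C1", "C3")}
print(sorted(transitive_reduction(edges)))
# → [('C1', 'C2'), ('C2', 'C3')]; the edge C1 -> C3 is implied
```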
Even though the concept of degree of cutting is a general form of sensitivity analysis, a model may be further investigated by identifying the most critical elements of a decision problem. By varying each event's probability and utility values within their intervals, it is possible to identify the elements with the highest impact on the expected value. This feature lets a decision maker see where to concentrate the information-gathering effort in order to make safer decisions.
Figure 8: Identifying the critical elements of a decision problem, illustrated as a tornado diagram.
For probability variation, the event E6 has the highest impact on the expected value; varying the probabilities of this uncertain event can change the expected value by 397.9 value units. For value variation, the impreciseness in the value of consequence C6 affects the expected value the most.
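The one-at-a-time variation behind such a tornado diagram can be sketched as follows: each uncertain quantity is swept over its interval while the others are held at their interval midpoints, and the resulting swing in expected value is recorded. The toy model and the `tornado` helper are illustrative assumptions, not DecideIT's algorithm.

```python
# A sketch of the sensitivity ("tornado") idea: vary one uncertain
# parameter at a time over its interval, holding the rest at their
# midpoints, and record the swing in the expected value.

def tornado(ev, intervals):
    """ev: function of a {name: value} dict; intervals: {name: (lo, hi)}."""
    mid = {n: (lo + hi) / 2 for n, (lo, hi) in intervals.items()}
    swings = {}
    for n, (lo, hi) in intervals.items():
        evs = [ev({**mid, n: v}) for v in (lo, hi)]
        swings[n] = max(evs) - min(evs)
    # widest bar first, as in a tornado diagram
    return sorted(swings.items(), key=lambda kv: -kv[1])

# Toy model: EV = p * v1 + (1 - p) * v2 with imprecise p and v1.
model = lambda x: x["p"] * x["v1"] + (1 - x["p"]) * x["v2"]
print(tornado(model, {"p": (0.2, 0.6), "v1": (50, 150), "v2": (10, 10)}))
# v1 has the widest swing (≈ 40), then p (≈ 36), then v2 (0)
```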