Exam

The exam consists of five topic questions, one from each topic discussed in this course, and a set of true/false questions that can be drawn from any of the topics.

Keep an eye on the time, and remember to submit all of your answers before the time runs out. If the time runs out before you submit an answer, the grader will not give you points for that answer.

Also keep an eye on the number of submissions: for some of the questions, you can submit multiple answers.

In the exercises where you return a PDF file, there is no automatic grader. In these exercises, the grader will give you 1/25 points upon submission; these points are corrected when the teacher grades your submission.

Topic 1

In this exercise, there are five short questions about different terms related to the subject of the module, classical search and beyond. This exercise is graded automatically. Don’t worry if a typo in your answer causes it to receive zero points; the teachers will go through the answers and check for such cases.

What do you call a search algorithm that is guaranteed to find a solution (when one exists)?
What term is used to refer to any state that is reachable from a given state by a single action? (neighboring states)
When a search strategy has no additional information about states beyond that provided in the problem definition, it is called what?
There are many variants of the hill-climbing algorithm. When hill climbing chooses at random from among the uphill moves, it is called what?
What search algorithm tries to always expand the node that is closest to the goal node? Tip: it evaluates nodes by using just the heuristic function f(n) = h(n).
Which search algorithm, if any, computes the estimated cost of the cheapest solution through node n?
A heuristic that never overestimates the cost to reach the goal is termed what?
Search strategy in which the root node is expanded first, then all the successors of the root node are expanded, then their successors, and so on. What is the name of this search strategy?
Which search strategy always expands the deepest node in the current frontier of the search tree?
Recall that parameter b stands for the branching factor of a search tree. Parameter d is the depth of the shallowest goal node, and m is the length of the longest path in the search tree. What is the worst-case asymptotic (notation O( )) time complexity of breadth-first search?
Recall that parameter b stands for the branching factor of a search tree. Parameter d is the depth of the shallowest goal node, and m is the length of the longest path in the search tree. What is the worst-case asymptotic (notation O( )) time complexity of depth-first search?

Topic 2

Consider the two-player game described below.

../_images/topic2.png

The starting position of a simple game. Player A moves first. The two players take turns moving, and each player must move his token to an open adjacent space in either direction. If the opponent occupies an adjacent space, then a player may jump over the opponent to the next open space if any. (For example, if A is on 3 and B is on 2, then A may move back to 1.) The game ends when one player reaches the opposite end of the board. If player A reaches space 4 first, then the value of the game to A is +1; if player B reaches space 1 first, then the value of the game to A is −1.

  1. Draw the complete game tree, using the following conventions:
  • Write each state as (s_A, s_B), where s_A and s_B denote the token locations.
  • Put each terminal state in a square box and write its game value in a circle.
  • Put loop states (states that already appear on the path to the root) in double square boxes. Since their value is unclear, annotate each with a “?” in a circle.
  2. Now mark each node with its backed-up minimax value (also in a circle). Explain how you handled the “?” values and why.
  3. Explain why the standard minimax algorithm would fail on this game tree.

Return your answers in a single PDF file.
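
For orientation only, here is a minimal Python sketch of the standard minimax backup on an explicitly given, finite game tree. The tree, node names, and leaf values below are hypothetical and are not the game above; the sketch only illustrates the recursion the exercise refers to.

# Hypothetical finite game tree: MAX moves at the root, MIN at its children.
TREE = {
    "root": ["L", "R"],
    "L": ["LL", "LR"],
    "R": ["RL", "RR"],
}
LEAF_VALUES = {"LL": 3, "LR": 5, "RL": 2, "RR": 9}

def minimax(node, max_to_move):
    """Return the minimax value of `node`, taking terminal values from LEAF_VALUES."""
    if node in LEAF_VALUES:
        return LEAF_VALUES[node]
    child_values = [minimax(child, not max_to_move) for child in TREE[node]]
    return max(child_values) if max_to_move else min(child_values)

print(minimax("root", True))  # -> 3 for this example tree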

A+ presents the exercise submission form here.

An oil well may be drilled on a farm.

Based on what has happened on similar farms, we judge the probability of oil being present to be 0.5, the probability of only natural gas being present to be 0.2, and the probability of neither being present to be 0.3.

If oil is present, a geological test will give a positive result with probability 0.9; if only natural gas is present, it will give a positive result with probability 0.3; and if neither is present, the test will be positive with probability 0.1.

Suppose the test comes back positive. Use Bayes’ theorem to compute the probability that oil is present. (Round your answer to 3 decimal places and use a period as the decimal separator.)

An oil well may be drilled on a farm.

Based on what has happened on similar farms, we judge the probability of oil being present to be 0.5, the probability of only natural gas being present to be 0.2, and the probability of neither being present to be 0.3.

If oil is present, a geological test will give a positive result with probability 0.9; if only natural gas is present, it will give a positive result with probability 0.3; and if neither is present, the test will be positive with probability 0.1.

Suppose the test comes back positive. Use Bayes’ theorem to compute the probability that gas is present. (Round your answer to 3 decimal places and use a period as the decimal separator.)

An oil well may be drilled on a farm.

Based on what has happened on similar farms, we judge the probability of oil being present to be 0.5, the probability of only natural gas being present to be 0.2, and the probability of neither being present to be 0.3.

If oil is present, a geological test will give a positive result with probability 0.9; if only natural gas is present, it will give a positive result with probability 0.3; and if neither is present, the test will be positive with probability 0.1.

Suppose the test comes back positive. Use Bayes’ theorem to compute the probability that neither is present. (Round your answer to 3 decimal places and use a period as the decimal separator.)
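
As a reminder for all three variants above (this is just Bayes’ theorem combined with the law of total probability, written with the events named in the question; the numerical answers are not given here, and the gas and neither cases differ only in the numerator):

P(\text{oil} \mid {+}) = \frac{P({+} \mid \text{oil})\,P(\text{oil})}{P({+} \mid \text{oil})\,P(\text{oil}) + P({+} \mid \text{gas})\,P(\text{gas}) + P({+} \mid \text{neither})\,P(\text{neither})}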

../_images/direct_evaluation.png

What are the estimates for the following quantities as obtained by direct evaluation:

v̂_π(A) =
v̂_π(B) =
v̂_π(C) =
v̂_π(D) =
v̂_π(E) =
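
The figure above provides the actual data. As a reminder of the method only, here is a minimal sketch of direct evaluation (Monte Carlo policy evaluation): the estimate v̂_π(s) is the average of the returns observed from s over the recorded episodes. The episodes, states, rewards, and discount factor below are hypothetical.

from collections import defaultdict

# Hypothetical recorded episodes under a fixed policy pi:
# each episode is a list of (state, reward received on leaving that state) pairs.
episodes = [
    [("B", -1), ("C", -1), ("D", 10)],
    [("E", -1), ("C", -1), ("D", 10)],
]
gamma = 1.0  # hypothetical discount factor

returns = defaultdict(list)
for episode in episodes:
    # Compute the return following each visit by summing (discounted)
    # rewards from that point to the end of the episode.
    G = 0.0
    for state, reward in reversed(episode):
        G = reward + gamma * G
        returns[state].append(G)

# Direct evaluation: average the observed returns per state.
v_hat = {s: sum(gs) / len(gs) for s, gs in returns.items()}
print(v_hat)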

A+ presents the exercise submission form here.

Breadth-first search is complete even if zero step costs are allowed.
Greedy best-first search guarantees both completeness and optimality.
A* is of no use in robotics because percepts, states, and actions are continuous.
A simple hill-climbing search can never reach the global optimum of a function with multiple local optima.

Player MAX and player MIN are playing a zero-sum game with a finite number of possible moves. MAX calculates the minimax value of the root to be M. You may assume that at every turn, each player has at least 2 possible actions. You may also assume that a different sequence of moves will always lead to a different score (i.e., no two terminal nodes have the same score). Is the following statement True or False?

Assume MIN is playing sub-optimally at every turn, but MAX does not know this. The outcome of the game could be larger than M (i.e. better for MAX).

Player MAX and player MIN are playing a zero-sum game with a finite number of possible moves. MAX calculates the minimax value of the root to be M. You may assume that at every turn, each player has at least 2 possible actions. You may also assume that a different sequence of moves will always lead to a different score (i.e., no two terminal nodes have the same score). Is the following statement True or False?

Assume MIN is playing sub-optimally at every turn. If MAX plays according to the minimax strategy, the outcome of the game could be less than M.

Player MAX and player MIN are playing a zero-sum game with a finite number of possible moves. MAX calculates the minimax value of the root to be M. You may assume that at every turn, each player has at least 2 possible actions. You may also assume that a different sequence of moves will always lead to a different score (i.e., no two terminal nodes have the same score). Is the following statement True or False?

Assume MIN is playing sub-optimally at every turn. MAX following the minimax policy will guarantee a better outcome than M.

Player MAX and player MIN are playing a zero-sum game with a finite number of possible moves. MAX calculates the minimax value of the root to be M. You may assume that at every turn, each player has at least 2 possible actions. You may also assume that a different sequence of moves will always lead to a different score (i.e., no two terminal nodes have the same score). Is the following statement True or False?

Assume MIN is playing sub-optimally at every turn, and MAX knows exactly how MIN will play. There exists a policy for MAX to guarantee a better outcome than M.

In a fully observable, turn-taking, zero-sum game between two perfectly rational players, it does not help the first player to know what strategy the second player is using - that is, what move the second player will make, given the first player’s move.
In a partially observable, turn-taking, zero-sum game between two perfectly rational players, it does not help the first player to know what move the second player will make, given the first player’s move.
When we use alpha-beta pruning, the trade-off is that we may obtain a suboptimal solution.
Using alpha-beta pruning does not affect the optimality of the solution.
The ordering of nodes on the same level of the tree does not affect the runtime of the algorithm.
Alpha-beta pruning uses recursion to send back up min, max, alpha, and beta values from leaf nodes.
For every game tree, the utility obtained by MAX using minimax decisions against a suboptimal MIN will never be lower than the utility obtained playing against an optimal MIN.
There could exist a game tree in which MAX can do even better by using a suboptimal strategy against a suboptimal MIN.

For the following situation, should one use adversarial search?

A zero-sum game with one or more opponents.

For the following situation, should one use adversarial search?

Finding the right classification for a group of images.

For the following situation, should one use adversarial search?

A single-player puzzle game, such as Sudoku.

For the following situation, should one use adversarial search?

Playing chess against oneself.

If the CSP has no solution, it is guaranteed that enforcing arc consistency will result in at least one domain being empty.
If the CSP has a solution, then after enforcing arc consistency, you can directly read off the solution from the resulting domains.
In general, to determine whether the CSP has a solution, enforcing arc consistency alone is not sufficient; backtracking may be required.

We are given a CSP with only binary constraints. Assume we run backtracking search with arc consistency as follows. Initially, when presented with the CSP, one round of arc consistency is enforced. This first round of arc consistency will typically result in variables having pruned domains. Then we start a backtracking search using the pruned domains. In this backtracking search we use filtering through enforcing arc consistency after every assignment in the search.

Is the following statement True or False?

If after a run of arc consistency during the backtracking search we end up with the filtered domains of all of the not yet assigned variables being empty, this means the CSP has no solution.

We are given a CSP with only binary constraints. Assume we run backtracking search with arc consistency as follows. Initially, when presented with the CSP, one round of arc consistency is enforced. This first round of arc consistency will typically result in variables having pruned domains. Then we start a backtracking search using the pruned domains. In this backtracking search we use filtering through enforcing arc consistency after every assignment in the search.

Is the following statement True or False?

If after a run of arc consistency during the backtracking search we end up with the filtered domain of one of the not yet assigned variables being empty, this means the CSP has no solution.

We are given a CSP with only binary constraints. Assume we run backtracking search with arc consistency as follows. Initially, when presented with the CSP, one round of arc consistency is enforced. This first round of arc consistency will typically result in variables having pruned domains. Then we start a backtracking search using the pruned domains. In this backtracking search we use filtering through enforcing arc consistency after every assignment in the search.

Is the following statement True or False?

If after a run of arc consistency during the backtracking search we end up with the filtered domains of all of the not yet assigned variables being empty, this means the search should backtrack because this particular branch in the search tree has no solution.

We are given a CSP with only binary constraints. Assume we run backtracking search with arc consistency as follows. Initially, when presented with the CSP, one round of arc consistency is enforced. This first round of arc consistency will typically result in variables having pruned domains. Then we start a backtracking search using the pruned domains. In this backtracking search we use filtering through enforcing arc consistency after every assignment in the search.

Is the following statement True or False?

If after a run of arc consistency during the backtracking search we end up with the filtered domain of one of the not yet assigned variables being empty, this means the search should backtrack because this particular branch in the search tree has no solution.

We are given a CSP with only binary constraints. Assume we run backtracking search with arc consistency as follows. Initially, when presented with the CSP, one round of arc consistency is enforced. This first round of arc consistency will typically result in variables having pruned domains. Then we start a backtracking search using the pruned domains. In this backtracking search we use filtering through enforcing arc consistency after every assignment in the search.

Is the following statement True or False?

If after a run of arc consistency during the backtracking search we end up with the filtered domains of all of the not yet assigned variables each having exactly one value left, this means we have found a solution.

We are given a CSP with only binary constraints. Assume we run backtracking search with arc consistency as follows. Initially, when presented with the CSP, one round of arc consistency is enforced. This first round of arc consistency will typically result in variables having pruned domains. Then we start a backtracking search using the pruned domains. In this backtracking search we use filtering through enforcing arc consistency after every assignment in the search.

Is the following statement True or False?

If after a run of arc consistency during the backtracking search we end up with the filtered domains of all of the not yet assigned variables each having more than one value left, this means we have found a whole space of solutions and we can just pick any combination of values still left in the domains and that will be a solution.

We are given a CSP with only binary constraints. Assume we run backtracking search with arc consistency as follows. Initially, when presented with the CSP, one round of arc consistency is enforced. This first round of arc consistency will typically result in variables having pruned domains. Then we start a backtracking search using the pruned domains. In this backtracking search we use filtering through enforcing arc consistency after every assignment in the search.

Is the following statement True or False?

If after a run of arc consistency during the backtracking search we end up with the filtered domains of all of the not yet assigned variables each having more than one value left, this means we can’t know yet whether there is a solution somewhere further down this branch of the tree, and search has to continue down this branch to determine this.

Even when using arc consistency, backtracking might be needed to solve a CSP.
Even when using forward checking, backtracking might be needed to solve a CSP.
Solving a CSP means finding the assignment(s) that satisfy all constraints.
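
For reference only, a minimal sketch of the arc-consistency filtering that the questions above refer to, run on a hypothetical binary CSP (the variables, domains, and constraints below are made up for illustration):

from collections import deque

# Hypothetical binary CSP: variables X, Y, Z with small integer domains and
# the pairwise constraints X < Y and Y < Z (each stored in both directions).
domains = {"X": {1, 2, 3}, "Y": {1, 2, 3}, "Z": {1, 2, 3}}
constraints = {
    ("X", "Y"): lambda x, y: x < y,
    ("Y", "X"): lambda y, x: y > x,
    ("Y", "Z"): lambda y, z: y < z,
    ("Z", "Y"): lambda z, y: z > y,
}

def revise(xi, xj):
    """Remove values of xi that have no supporting value in xj; return True if pruned."""
    removed = False
    for v in set(domains[xi]):
        if not any(constraints[(xi, xj)](v, w) for w in domains[xj]):
            domains[xi].discard(v)
            removed = True
    return removed

def ac3():
    """Enforce arc consistency; return False if some domain becomes empty."""
    queue = deque(constraints.keys())
    while queue:
        xi, xj = queue.popleft()
        if revise(xi, xj):
            if not domains[xi]:
                return False
            # Re-examine arcs pointing into xi.
            queue.extend((xk, xi) for (xk, xl) in constraints if xl == xi and xk != xj)
    return True

print(ac3(), domains)  # -> True {'X': {1}, 'Y': {2}, 'Z': {3}} for this example
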
../_images/homealarm.png

Consider the Bayesian network in the figure above. In the Conditional Probability Tables (CPTs), the letters B, E, A, J and M stand for Burglary, Earthquake, Alarm, JohnCalls, and MaryCalls, respectively.

True or False: If no evidence is observed, Burglary and Earthquake are independent.

../_images/homealarm.png

Consider the Bayesian network in the figure above. In the Conditional Probability Tables (CPTs), the letters B, E, A, J and M stand for Burglary, Earthquake, Alarm, JohnCalls, and MaryCalls, respectively.

True or False: If we observe Alarm = true, Burglary and Earthquake are independent.

Is the following statement True or False?

If P(a|b,c) = P(b|a,c), then P(a|c) = P(b|c)

Is the following statement True or False?

P(a|b∧a) = 1

Is the following statement True or False?

If P(a|b,c) = P(a), then P(b|c) = P(b)

Is the following statement True or False?

If P(a|b) = P(a), then P(a|b,c) = P(a|c)

Suppose you are a witness to a nighttime hit-and-run accident involving a taxi in Athens. All taxis in Athens are blue or green. You swear, under oath, that the taxi was blue. Extensive testing shows that, under the dim lighting conditions, discrimination between blue and green is 75% reliable.

It is possible to calculate the most likely color for the taxi given the current information. (Hint: distinguish carefully between the proposition that the taxi is blue and the proposition that it appears blue.)

You are given three statements.

1. P(X,Y|Z) = P(X|Z)P(Y|Z)

2. P(X|Y,Z) = P(X|Z)

3. P(Y|X,Z) = P(Y|Z)

True or False: statement 1 is equivalent to (statement 2 ∧ statement 3).

If the only difference between two MDPs is the value of the discount factor then they must have the same optimal policy.
For an infinite horizon MDP with a finite number of states and actions and with a discount factor γ that satisfies 0 < γ < 1, value iteration is guaranteed to converge.
When running value iteration, if the policy (the greedy policy with respect to the values) has converged, the values must have converged as well.
If one is using value iteration and the values have converged, the policy must have converged as well.
For an infinite horizon MDP with a finite number of states and actions and with a discount factor γ that satisfies 0 < γ < 1, policy iteration is guaranteed to converge.
“Q-values” are determined by immediate expected reward plus the best utility from the next state onwards.
../_images/MDP.png

Note: One round of policy iteration = performing policy evaluation followed by performing policy improvement.

Is the following statement True or False?

It is guaranteed that ∀s ∈ S : V^{π_James}(s) ≥ V^{π_Alvin}(s)

../_images/MDP.png

Note: One round of policy iteration = performing policy evaluation followed by performing policy improvement.

Is the following statement True or False?

It is guaranteed that ∀s ∈ S : V^{π_Michael}(s) ≥ V^{π_Alvin}(s)

../_images/MDP.png

Note: One round of policy iteration = performing policy evaluation followed by performing policy improvement.

Is the following statement True or False?

It is guaranteed that ∀s ∈ S : V^{π_Michael}(s) > V^{π_John}(s)

../_images/MDP.png

Note: One round of policy iteration = performing policy evaluation followed by performing policy improvement.

Is the following statement True or False?

It is guaranteed that ∀s ∈ S : V^{π_James}(s) > V^{π_John}(s)

Is the following statement about value iteration True or False? We assume the MDP has a finite number of actions and states, and that the discount factor satisfies 0 < γ < 1.

Value iteration is guaranteed to converge.

Is the following statement about value iteration True or False? We assume the MDP has a finite number of actions and states, and that the discount factor satisfies 0 < γ < 1.

Value iteration will converge to the same value vector (V*) no matter what values we use to initialize V.
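
For reference only, a minimal sketch of value iteration on a tiny, hypothetical MDP (the states, actions, transitions, rewards, and discount factor below are made up for illustration):

# Hypothetical two-state MDP with deterministic transitions; rewards depend on (state, action).
states = ["s0", "s1"]
actions = ["stay", "move"]
transition = {("s0", "stay"): "s0", ("s0", "move"): "s1",
              ("s1", "stay"): "s1", ("s1", "move"): "s0"}
reward = {("s0", "stay"): 0.0, ("s0", "move"): 1.0,
          ("s1", "stay"): 2.0, ("s1", "move"): 0.0}
gamma = 0.9  # hypothetical discount factor, 0 < gamma < 1

V = {s: 0.0 for s in states}  # arbitrary initialization
for _ in range(1000):
    # Bellman optimality update, repeated until the values stop changing.
    V_new = {s: max(reward[(s, a)] + gamma * V[transition[(s, a)]] for a in actions)
             for s in states}
    if max(abs(V_new[s] - V[s]) for s in states) < 1e-9:
        V = V_new
        break
    V = V_new

# Greedy policy with respect to the converged values.
policy = {s: max(actions, key=lambda a: reward[(s, a)] + gamma * V[transition[(s, a)]])
          for s in states}
print(V, policy)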
