Note: The alpha-beta algorithm on p. 170 of the textbook does not cut off the search. Therefore, you need to modify it as follows: replace the terminal test with a cutoff test, and use a heuristic evaluation function instead of the utility function (see pp. 171-173).
You need to come up with your own evaluation function. The cutoff test should return true for all depths greater than some fixed depth d. Make d a constant in your program that can be changed easily, in one place.
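To make the modification concrete, here is a minimal sketch of depth-limited alpha-beta in Python (the assignment prescribes no language). The helpers legal_moves, apply_move, is_terminal, utility, and evaluate are hypothetical names for functions you would supply for your own Kalah representation. Note also that Kalah's extra-move rule means MAX and MIN do not strictly alternate; the sketch ignores that detail for brevity.

```python
CUTOFF_DEPTH = 6  # the fixed depth d; change it in this one place

def cutoff_test(state, depth):
    # Stop at terminal states, or once the search is deeper than d.
    return depth > CUTOFF_DEPTH or is_terminal(state)

def alpha_beta(state, depth, alpha, beta, maximizing):
    if cutoff_test(state, depth):
        # Exact utility at terminal states, heuristic estimate elsewhere.
        return utility(state) if is_terminal(state) else evaluate(state)
    if maximizing:
        value = float("-inf")
        for move in legal_moves(state):
            value = max(value, alpha_beta(apply_move(state, move),
                                          depth + 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: MIN will never allow this line
        return value
    else:
        value = float("inf")
        for move in legal_moves(state):
            value = min(value, alpha_beta(apply_move(state, move),
                                          depth + 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff: MAX will never allow this line
        return value
```

A top-level call would be alpha_beta(state, 0, float("-inf"), float("inf"), True), keeping track of which root move produced the best value.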
The Kalah player agent
Your Kalah player must accept one command-line argument. That argument must be either 1 or 2, indicating whether the program should assume the role of Player #1 or Player #2. Assume that Player #1 always makes the first move of the game.
The program must read stdin for a command, and then react to that command. There are three commands that the program must respond to: "move", "opponent <move>", and "quit".
For this part, your program must correctly react to the commands "opponent <move>" (for example: opponent 8) and "quit", and print a representation of the game state each time it changes. (You do NOT need to implement alpha-beta search yet.)
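A minimal command loop might look like the sketch below. The helpers initial_state, apply_move, print_state, and choose_move are hypothetical placeholders for your own Kalah code, and the exact output printed for "move" should follow the specification given later in this handout.

```python
import sys

def main():
    player = int(sys.argv[1])   # 1 or 2: which side this agent plays
    state = initial_state()     # hypothetical: build the starting Kalah board
    for line in sys.stdin:
        tokens = line.split()
        if not tokens:
            continue
        if tokens[0] == "quit":
            break
        elif tokens[0] == "opponent":
            # e.g. "opponent 8": apply the opponent's move to our board.
            state = apply_move(state, int(tokens[1]))
            print_state(state)  # hypothetical: print the game state
        elif tokens[0] == "move":
            move = choose_move(state)   # any legal move for now; later, alpha-beta
            state = apply_move(state, move)
            print(move, flush=True)     # flush so a game server sees it immediately
            print_state(state)

if __name__ == "__main__":
    main()
```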
What to turn in:
Your program must correctly react to all commands ("move", "opponent <move>", and "quit") and print a representation of the game state each time it changes.
It must use alpha-beta search to compute its moves. You need to come up with your own evaluation function(s). Depth should be a parameter in your alpha-beta search that you can change easily.
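A simple starting point for an evaluation function is the difference between the two stores. The sketch below is one possibility, not a prescription; the field names stores and pits are assumptions about your state representation.

```python
def evaluate(state):
    # Scores from MAX's (our) perspective. Hypothetical fields:
    # stores[0]/stores[1] are the two Kalahs, pits[0]/pits[1] the six
    # houses on each side.
    store_diff = state.stores[0] - state.stores[1]
    side_diff = sum(state.pits[0]) - sum(state.pits[1])
    # Captured seeds dominate; seeds still on our side are a weak tiebreaker.
    return store_diff + 0.1 * side_diff
```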
For debugging and checking purposes, after receiving the command "move", your program must output
Due at the beginning of the lecture on Thursday, November 18.
For this part you may use the following game servers and a game-playing agent "ai".
Assignment: Investigate how each of the following affects the performance of your agent.
Let two copies of your agent with different values of depth play against each other. Let copies of your agent with different values of depth play against the sample agent "ai". For each pair of agents, play two games, changing who moves first. Do at least 20 different experiments. Summarize the results in a table showing which agents played (specify the exact values of depth for your agents), which agent moved first, which one won, and with what score. In your report, mark with a marker or a pen the exact place in your code where the depth can be changed. Did increasing the depth make any difference in your agent's performance? Why?
Consider different evaluation functions (at least two). Let two copies of your agent that differ only in their evaluation functions (and have the same depth) play against each other and against the sample agent "ai". Try this for different values of depth. Again, for each pair of agents, play two games, changing who moves first. Do at least 20 different experiments. Summarize the results in a table showing which agent moved first, the value of depth for the agent(s), who won, and with what score. Provide a detailed description of the evaluation functions you used. In your report, mark with a marker or a pen the exact place in your code where you define your evaluation function. Do your evaluation functions agree with the utility function on terminal states? How accurately do they reflect the actual chances of winning? Did the evaluation functions make any difference in your agent's performance? Why?
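As an illustration of what a second, contrasting function might look like (the choice is entirely yours), here is a sketch that also rewards extra-move opportunities, i.e. pits whose seed count would drop the last seed exactly into our store. It uses the same hypothetical state fields as the earlier sketch and assumes pits[0][5] is the house adjacent to our store.

```python
def evaluate2(state):
    # Store difference plus a bonus for each of our pits that would end
    # its sowing exactly in our store, earning an extra move. With six
    # houses per side and pits[0][5] next to our store, pit i is (6 - i)
    # steps away. All field names are hypothetical.
    score = state.stores[0] - state.stores[1]
    for i, seeds in enumerate(state.pits[0]):
        if seeds == 6 - i:
            score += 0.5
    return score
```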
Due at the beginning of the lecture on Thursday, December 2.
Submit your agent to participate in the contest.
What to turn in: