... start recording
Questions?
I've posted the basics of the assignments for the rest of the semester; please plan your time accordingly. There will be more details - related readings and expectations - as we get closer to those dates.
I'm moving back to an "assignment due on Mondays" schedule, starting with the next assignment which is due a week from this coming Monday ... so you don't need to turn in anything next week.
I would also like you to propose a final project on that "week from this Monday" date; I'll go over some suggestions next week.
This week we'll be looking at and practicing more examples of "search" problems - graphs, puzzles, two player games - problems where the point is to find a goal state. Many computing problems are of this form, and there are many search variations ... some of which we've already seen.
I would like you each to pick one such problem to understand and work out in code by a week from Monday. I've given a sample in the assignment. Trying this on your own would be great, but you can also find many solutions online, including in these readings :
In Skiena's book, this material is mostly in chapter 7, "Combinatorial Search and Heuristic Methods".
We'll work through some examples next week in class.
First: let's discuss the assignment due today and get a better understanding of what I started to try to explain on Monday.
For Floyd's algorithm, the key is understanding what is meant by shortestPath(i, j, k)
: the length of the shortest path from vertex i to vertex j (vertices numbered 1 to n), using only vertices from {1, 2, 3, ..., k} as intermediate stops.
Once you have that, the recursion relation that is the algorithm makes sense. I mean, it's still
magic, but magic that makes sense. In practice, the n-by-n matrix distance[i, j]
can get big.
My code uses numpy (numerical python) integer matrices to try to keep things fast and efficient.
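
Here's a rough sketch of that recursion in plain Python, just to show the structure - it is not the posted numpy version, and it numbers the vertices 0 to n-1 rather than 1 to n:

    INF = float('inf')

    def floyd(dist):
        """dist[i][j] is the edge weight from i to j (INF if no edge, 0 on the
           diagonal). On return, dist[i][j] is the shortest-path distance."""
        n = len(dist)
        for k in range(n):                  # now allow vertex k as an intermediate stop
            for i in range(n):
                for j in range(n):
                    # shortestPath(i, j, k) = min( shortestPath(i, j, k-1),
                    #     shortestPath(i, k, k-1) + shortestPath(k, j, k-1) )
                    if dist[i][k] + dist[k][j] < dist[i][j]:
                        dist[i][j] = dist[i][k] + dist[k][j]
        return dist

The whole table of shortestPath values for a given k is built from the values for k-1, which is why a single distance matrix updated in place is enough.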
For Dijkstra's algorithm, the trick to implementing it is the data structure used for the "fringe", which stores the distance to each vertex: for efficiency, it needs to act as both a min heap (to get the next closest vertex quickly) and a hash table (to test for membership and look up the best distance so far to neighbors). In my code, I've combined a Python heapq and a dictionary into a HeapEtc class that does this.
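
Here's a minimal sketch of the same ingredients using just a heapq and a dict - this is not the HeapEtc class itself; instead of updating entries inside the heap it pushes duplicates and skips stale ones later, which is a common simpler variation:

    import heapq

    def dijkstra(graph, start):
        """graph maps each vertex to a list of (neighbor, weight) pairs.
           Returns a dict of shortest distances from start to each reachable vertex."""
        best = {start: 0}                 # best distance found so far (the hash-table role)
        fringe = [(0, start)]             # min heap of (distance, vertex)
        done = set()
        while fringe:
            d, v = heapq.heappop(fringe)  # closest unfinished vertex
            if v in done:
                continue                  # stale heap entry; skip it
            done.add(v)
            for w, weight in graph[v]:
                new_d = d + weight
                if new_d < best.get(w, float('inf')):
                    best[w] = new_d
                    heapq.heappush(fringe, (new_d, w))
        return best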
The peg solitaire example is an illustration of "recursive depth-first backtracking", which is a common search technique for several of the puzzles and games in the list. Most of the code in pegs.py is just simulating the puzzle. We think of each board position as a node in a tree, and the possible next moves as its children. If we know how to "undo" a move, then we can traverse the tree one node at a time, backing up if we don't find the goal that we're looking for (the solved puzzle).
          start
         /     \
    move1       move2
    /    \
  2a      2b
    ...
    /
 solved ?
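
In skeleton form the search looks something like this; the helper names (is_solved, legal_moves, do_move, undo_move) are placeholders standing in for the puzzle-simulation code, not the actual functions in pegs.py:

    def solve(board, path):
        """Return the list of moves that solves the puzzle from this board,
           or None if there is no solution below this node of the tree."""
        if is_solved(board):
            return path
        for move in legal_moves(board):
            do_move(board, move)                  # step down to a child node
            result = solve(board, path + [move])
            undo_move(board, move)                # back up to this node
            if result is not None:
                return result
        return None                               # dead end: backtrack

    # solution = solve(starting_board, [])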
Next I'd like to work on a classic problem that has several different solutions, illustrates several of these search ideas, and is small enough to try from scratch in class in at least one case. It's a nice example, and so we'll continue with it next week.
Given some specific types of coins (for example penny, nickel, dime, quarter),
find the smallest number of them that add up to a specific amount (for example $23.14).
First : let's try to do this in small groups, without any other prompting but with one hint: "greedy".
Second : does that approach work if the coins are (1, 3, 4)? (Hint: how many to reach 6?)
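
For reference afterwards, here's one way the greedy idea can come out in code (a sketch, not the only way to write it):

    def greedy_change(coins, amount):
        """Pick coins greedily: always take the largest coin that still fits."""
        chosen = []
        for c in sorted(coins, reverse=True):
            while amount >= c:
                chosen.append(c)
                amount -= c
        return chosen

    print(greedy_change((1, 5, 10, 25), 63))   # [25, 25, 10, 1, 1, 1] -- optimal
    print(greedy_change((1, 3, 4), 6))         # [4, 1, 1] -- but 3 + 3 uses fewer coins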
search idea:
        0                 tree of coin sums
       / \
      1   5               1 coin (if only using penny or nickel)
     / \ / \
    2  6 6  10            2 coins
       ...                search for goal
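
Here is one sketch of that tree search, done breadth-first (level by level) so that the first time we reach the goal amount we've used the fewest coins; remembering sums we've already seen keeps the tree from blowing up:

    from collections import deque

    def bfs_change(coins, amount):
        """Return the minimum number of coins that sum to amount, or None."""
        seen = {0}
        queue = deque([(0, 0)])                # (current sum, coins used so far)
        while queue:
            total, used = queue.popleft()
            if total == amount:
                return used
            for c in coins:                    # each child adds one more coin
                child = total + c
                if child <= amount and child not in seen:
                    seen.add(child)
                    queue.append((child, used + 1))
        return None

    print(bfs_change((1, 3, 4), 6))            # 2
    print(bfs_change((1, 5, 10, 25), 63))      # 6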
algorithms :
I've always found the "dynamic" approach to be one of the hardest to really understand ... though also very cool.
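
For what it's worth, here's a sketch of one common way the dynamic (table-building) version can look: best[a] is the fewest coins that make amount a, and each entry is built from entries for smaller amounts that are already filled in.

    def dp_change(coins, amount):
        """Return the minimum number of coins that sum to amount, or None."""
        INF = float('inf')
        best = [0] + [INF] * amount            # best[0] = 0 coins to make 0
        for a in range(1, amount + 1):
            for c in coins:
                if c <= a and best[a - c] + 1 < best[a]:
                    best[a] = best[a - c] + 1  # coin c on top of the best way to make a - c
        return best[amount] if best[amount] != INF else None

    print(dp_change((1, 3, 4), 6))             # 2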
sources :
(During class I typed into a few text files while explaining things, and also a bit of whiteboard drawing; I've attached those files.)
file              | last modified            | size
explain_floyd.txt | Thu Apr 22 2021 05:21 pm | 809B
search.txt        | Thu Apr 22 2021 05:21 pm | 704B
whiteboard.pdf    | Thu Apr 22 2021 05:21 pm | 238K