Saturday, September 30, 2006

Project 2 released. Will be due 10/17

Folks:

Project 2 is released (see the class page under Projects). It is a
straightforward application of the minimax and alpha-beta pruning
ideas we discussed, applied to Tic-Tac-Toe. Once again, you are given a
working minimax game player and are asked to improve it in a couple of
ways.
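Since the project builds on these ideas, here is a minimal sketch of minimax with alpha-beta pruning in Python, run on a toy game tree rather than Tic-Tac-Toe. This is only an illustration of the pruning idea, not the project's code (get that from the class page):

```python
# Minimax with alpha-beta pruning on a toy game tree. Leaves are numbers
# (utilities for MAX); internal nodes are lists of children. This is an
# illustration only, not the project's Tic-Tac-Toe player.

def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, (int, float)):        # leaf: return its utility
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                 # MIN above won't allow this: prune
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:                 # MAX above won't allow this: prune
                break
        return value

# Classic textbook-style tree: MAX root over three MIN nodes; minimax value is 3.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
best = alphabeta(tree)
```

With these leaf values, the second and third MIN nodes get cut off early: once a MIN child drops below alpha = 3, the rest of its leaves are never examined.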

Rao
[Sep 30, 2006]

Tuesday, September 26, 2006

PageRank digression... [Nothing really to do with 471]

Someone in the class (Betul?) mentioned that the convergence of the value iteration algorithm reminded them of the convergence of PageRank.

Like I said, the theory of convergence of PageRank is quite different. Basically, PageRank corresponds to the primary eigenvector of the page transition matrix. As you probably learned in your linear algebra class, multiplying a vector v by a matrix M has the effect of (1) first
taking the projections of v along the eigenvectors of M and (2) stretching each projection by the
corresponding eigenvalue. Thus, v gets transformed by having its components along the eigenvector directions stretched by amounts proportional to the eigenvalues. Since the primary eigenvalue is larger than (or equal to) all the other eigenvalues, v gets stretched most in the primary eigenvector direction. Thus, if you repeatedly multiply v by M, v mostly gets stretched and rotated into the primary eigenvector direction. This is a well-known way of computing the primary eigenvector (called the power iteration method). The error shrinks geometrically, at a rate governed by the ratio of the secondary to the primary eigenvalue, so the larger the gap between them, the faster the convergence.

Anyway, if this sort of thing strikes your fancy, you might want to check out CSE494--Information Retrieval, Mining and Integration ( http://rakaposhi.eas.asu.edu/cse494 ), which will be offered next spring.

Monday, September 25, 2006

About Relaxed Planning Graph

Some students are a little confused about the relaxed planning graph construction. Here I'll briefly explain how to construct it.

Please check the blocks-world example on slides 7 and 8 of week5-f06-1.ppt.

In progression or regression planning, each node is a state consisting of multiple state variable values. In a relaxed planning graph, however, each node is a single state variable value. As in slide 8, the initial level has 5 nodes, corresponding to the initial state (Ontable(A), Ontable(B), Clear(A), Clear(B), hand-empty).

To construct the next level, you apply all the actions whose preconditions hold at the current level to generate new state variable values. In the example, if you pick the action PICK-A, with preconditions hand-empty, clear(A), and ontable(A), the result would be holding(A), ~ontable(A), ~hand-empty, and ~clear(A). So if you check the first level, you will see new nodes such as h-A (holding(A)), ~cl-A (~clear(A)), and ~he (~hand-empty). However, you might notice that ~ontable(A) is not in the figure; I guess the instructor omitted some variables to keep the figure readable. ~ontable(A) should also be included in the first level.

Similarly, you can apply PICK-B to add more nodes. Note that all the nodes in the previous level are carried forward to the next level by no-op (persistence) actions.

After you get the first level, you can apply the same strategy to generate further levels.

You keep constructing the graph until it levels off, which means there is no change between two consecutive levels.
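The level-expansion loop above can be sketched in Python. Note that this sketch drops delete effects entirely (the usual "relaxed" simplification), so the negated facts discussed above do not appear; the fact and action names are just illustrative stand-ins for the slides' blocks-world example, not taken from any course code:

```python
# Sketch of relaxed planning graph expansion: level-by-level reachability.
# Each level is a set of state variable values (facts, as strings). Actions
# whose preconditions all hold at a level contribute their effects to the
# next level; no-ops carry every existing fact forward. We stop when the
# graph levels off (no change between two consecutive levels).

def expand_graph(initial_facts, actions):
    """actions: list of (name, preconditions, effects), all facts as strings."""
    levels = [set(initial_facts)]
    while True:
        current = levels[-1]
        nxt = set(current)                 # no-op actions: carry facts forward
        for name, pre, eff in actions:
            if pre <= current:             # all preconditions present at this level
                nxt |= eff                 # add the action's (positive) effects
        if nxt == current:                 # leveled off: stop
            return levels
        levels.append(nxt)

# Toy blocks-world stand-in (names are illustrative, delete effects omitted):
init = {"ontable(A)", "ontable(B)", "clear(A)", "clear(B)", "hand-empty"}
acts = [
    ("pick-A", {"hand-empty", "clear(A)", "ontable(A)"}, {"holding(A)"}),
    ("pick-B", {"hand-empty", "clear(B)", "ontable(B)"}, {"holding(B)"}),
    ("stack-A-on-B", {"holding(A)", "clear(B)"}, {"on(A,B)"}),
]
lv = expand_graph(init, acts)
```

Here holding(A) and holding(B) first appear at level 1, on(A,B) at level 2, and the graph levels off after that.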

Please check slides 15, 16, and 17 to find out how to obtain a relaxed plan.

Please correct me if I got anything wrong. Thanks!

-Lei

Saturday, September 23, 2006

Pass function as parameter in lisp

Several students asked me how to pass a function as a parameter.
Here is the method.

Suppose you have a function B, and you want to call A with B as a function parameter. Then
you can call it like this:
(A #'B)

#' is mostly the same as quote, but #' specifically denotes a function object, while quote is used to represent a symbol or a list. Inside A, you call the passed-in function with funcall, as in (funcall f 3).
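For comparison, here is the same pattern in Python, where functions are first-class values and need no special quoting. A and B are just stand-in names mirroring the Lisp example above:

```python
# Python analogue of (A #'B): a function is an ordinary value, so you pass
# it by name and call it with normal call syntax (no #' or funcall needed).

def B(x):
    return x + 1          # stand-in for some function of interest

def A(fn):
    return fn(41)         # call the passed-in function directly

result = A(B)             # same shape as (A #'B) in Lisp
```

Any callable works in the same slot, e.g. A(lambda x: x * 2).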

-Lei

Thursday, September 21, 2006

the Graphplan "EBL" discussion

I did a somewhat cursory description of intelligent backtracking and
EBL in the context of Graphplan in the class today. If you want to
read more about it, check out

http://rakaposhi.eas.asu.edu/ijcai-gp-ebl.pdf

(or the much longer http://rakaposhi.eas.asu.edu/kambhampati00a.pdf )

cheers
Rao

Readings for second part of today's class

I hope to complete the discussion on planning today, and then
shift to the discussion of MDPs. After MDPs will come adversarial search.

The readings are:

 MDPs: 17.1-17.3

 Adversarial search: 6.1-6.3



--
Subbarao Kambhampati
http://rakaposhi.eas.asu.edu

Wednesday, September 20, 2006

Joining the AIatASU mailing group..

Folks:
 I am not sure if I mentioned this, but there is a mailing list meant for people working on AI related areas. It is called
aiatasu@yahoogroups.com

You can see an archive of the mails at

http://groups.yahoo.com/group/aiatasu/

(and you will hear all about the spectacular performance of the AI group at the upcoming International Joint Conference on Artificial Intelligence ;-)

You can subscribe to it (or unsubscribe) by sending mail to one of

aiatasu-subscribe@yahoogroups.com

aiatasu-unsubscribe@yahoogroups.com



Rao

--
Subbarao Kambhampati
http://rakaposhi.eas.asu.edu