Saturday, September 30, 2006

Project 2 released. Will be due 10/17


Project 2 is released (see the class page under projects). It is a
straightforward application of the minimax and alpha-beta pruning
ideas we discussed to Tic-Tac-Toe. Once again, you are given a
working minimax game player and are asked to improve it in a couple of
ways.
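If you want to refresh the core idea before diving in, here is a tiny minimax sketch (my own illustration, not the project's code; TERMINAL-P, UTILITY, and SUCCESSORS are hypothetical game-specific functions you would supply, and non-terminal states are assumed to have at least one successor):

```lisp
;; Minimax sketch.  UTILITY is from MAX's point of view.
(defun minimax-value (state maximizing-p)
  "Return the minimax value of STATE."
  (if (terminal-p state)
      (utility state)
      (let ((child-values
              (mapcar (lambda (s) (minimax-value s (not maximizing-p)))
                      (successors state))))
        (if maximizing-p
            (apply #'max child-values)
            (apply #'min child-values)))))
```

Alpha-beta adds two extra bound parameters to prune subtrees that cannot affect the result; the project handout's code is what you should actually extend.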

[Sep 30, 2006]

Tuesday, September 26, 2006

Page Rank digression.. [Nothing really to do with 471]

Someone in the class (Betul?) mentioned that the convergence of the value iteration algorithm reminded them of the convergence of PageRank.

Like I said, the theory of convergence of PageRank is quite different. Basically, PageRank corresponds to the primary eigenvector of the page transition matrix. As you probably learned in your linear algebra class, multiplying a random vector v by a matrix M has the effect of (1) first
taking the projections of v along the eigenvectors of M and (2) stretching each projection by the
corresponding eigenvalue. Thus, v gets transformed by having its components along the eigenvector directions stretched by amounts proportional to the eigenvalues. Since the primary eigenvalue is larger than (or equal to) all the other eigenvalues, v gets stretched most in the primary eigenvector direction. Thus, if you repeatedly multiply v by M, v mostly gets stretched and rotated into the primary eigenvector direction. This is a well-known way of computing the primary eigenvector (called the power iteration method). The rate of convergence is exponential, with a rate governed by the gap between the primary and secondary eigenvalues.

Anyways, if this sort of thing strikes your fancy, you might want to check out CSE494--Information Retrieval, Mining and Integration, which will be offered next spring.

Monday, September 25, 2006

About Relaxed Planning Graph

Some students are a little confused about the relaxed planning graph construction. Here I'll briefly explain how to construct it.

Please check the blocks world example on slides 7 and 8 of week5-f06-1.ppt.

In progression or regression planning, each node is a state consisting of multiple state variable values. However, in relaxed planning, each node is just a state variable value. As in slide 8, the initial level has 5 nodes, corresponding to the initial state (Ontable(A), Ontable(B), Clear(A), Clear(B), hand-empty).

In order to construct the next level, you apply every action whose preconditions hold at the current level, generating new state variable values. In the example, the action PICK-A has preconditions hand-empty, clear(A), and ontable(A), and its effects are holding(A), ~ontable(A), ~hand-empty, ~clear(A). So at the first level you add nodes such as h-A (holding(A)), ~cl-A (~clear(A)), and ~he (~hand-empty). You might notice that ~ontable(A) is not in the figure; I believe the instructor omitted some variables to keep the figure readable, but ~ontable(A) should also be included in the first level.

Similarly, you can apply PICK-B to add more nodes. Note that all the nodes in the previous level are carried forward to the next level by no-op actions.

After you get the first level, you can apply the same strategy to generate further levels.

You keep constructing the graph until it levels off, which means there is no change between two consecutive levels.
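To tie the steps above together, here is a rough Lisp sketch of the level-expansion loop (my own illustration, not project code; the action representation, a list of (NAME PRECONDITIONS ADD-EFFECTS), is made up, and delete effects are ignored, which is exactly what makes the graph "relaxed"):

```lisp
;; Grow one proposition level of a relaxed planning graph.
;; A level is a list of propositions; no-ops are implicit because the
;; next level starts as a copy of the current one.
(defun next-level (level actions)
  (let ((new level))                       ; no-ops carry everything forward
    (dolist (a actions new)
      (destructuring-bind (name preconds adds) a
        (declare (ignore name))
        (when (subsetp preconds level :test #'equal)
          (setf new (union new adds :test #'equal)))))))

(defun grow-until-level-off (level actions)
  "Grow levels until two consecutive ones are identical."
  (let ((next (next-level level actions)))
    (if (null (set-difference next level :test #'equal))
        level
        (grow-until-level-off next actions))))
```

Because propositions are only ever added, never deleted, the levels grow monotonically and the graph is guaranteed to level off.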

Please check slides 15, 16, and 17 to find out how to obtain a relaxed plan.

Please correct me if I got anything wrong. Thanks!


Saturday, September 23, 2006

Passing a function as a parameter in Lisp

Several students asked me how to pass a function as a parameter.
Here is the method.

Suppose you have a function B, and you want to call A with B as a parameter. Then
you can call it like this:
(A #'B)

#' is similar to quote, but #' denotes a function object, while quote is used for data such as symbols and lists.
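For example (the functions here are toys of my own, just for illustration):

```lisp
(defun square (x) (* x x))

;; APPLY-TWICE takes a function F and applies it twice to X.
(defun apply-twice (f x)
  (funcall f (funcall f x)))

;; (apply-twice #'square 3)  =>  81
```

Note that inside APPLY-TWICE you use funcall to invoke the function that was passed in.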


Thursday, September 21, 2006

the Graphplan "EBL" discussion

I did a somewhat cursory description of intelligent backtracking and
EBL in the context of Graphplan in the class today. If you want to
read more about it, check out

(or the much longer )


Readings for second part of today's class

I hope to complete the discussion on planning today,  and
shift to the discussion of MDPs. After MDPs will be Adversarial search.

The readings are:


 Adversarial search -- 6.1--6.3

Subbarao Kambhampati

Wednesday, September 20, 2006

Joining the AIatASU mailing group..

 I am not sure if I mentioned this, but there is a mailing list meant for people working on AI related areas. It is called

You can see an archive of the mails at

(and you will hear all about the spectacular performance of the AI group at the upcoming International Joint Conference on Artificial Intelligence ;-)

You can subscribe to it by sending mail to


Subbarao Kambhampati

Project 1 - Search Algorithm Submission

Dear all, both hardcopy and softcopy are required for Project 1.

1). In your hardcopy, you should attach your source code with some description of your implementation of each task. You should include the output for all the test cases in Part III. There are 5 cases in total. For each case, you must print out the time, memory size, and statistics of the A* algorithm. You are also encouraged to include more difficult cases. Please do some analysis and comparison based on the output; this will count toward your grade.

If you implemented  extra tasks, run your algorithm on the test cases as well and give some  comments.

2). Please submit your lisp file through the blackboard system this time and in the future. DON'T email me your files as you did in the previous project.

Here are the steps to enter the Blackboard system, just in case you are a new student and have no idea about it.
1. Log in using your ASURITE ID.
2. Click the link of "COURSES & ORGS" at the top,  then you'll see "MY BLACKBOARD COURSES" at the left side.
3. Click the link below "MY BLACKBOARD COURSES" to enter the blackboard system.  You should see a list of courses. (You might encounter a page with ASU logo at the center. Just skip that page. It should automatically redirect to your blackboard system page.)

Here is the instruction to submit  your project.
4. After you enter the blackboard system, click the link of CSE471/598: INTRO TO ARTIFICIAL INTELLIGENCE(2006 Fall). Then, you'll see a list of links like "Announcements", "Course Information"... on the left.
5. Click "Assignments", then there's a link of "Project 1: Search Algorithm" for you to submit your source code. You can attach your files there and save them.  "Save" just saves your files there without submission. You can delete them or change them later.  You won't submit your files until you click the "Submit" button.   You can submit only ONCE!




Tuesday, September 19, 2006

Planning graph blog questions..

If you *really really* understood the structure of the planning graph, you should be
able to answer these questions (add your answers to the blog)

1. If you grew the planning graph (without mutexes) to level-off and
found that a particular literal p never entered the proposition
list at any level, what is the length of the plan needed to achieve
p from the initial state?

2. In the example above, suppose an action A never entered any action
layer. Can you lose any plans by refusing to consider A as an
option in the regression search (i.e., you never branch on A)? What
about partial order planning?

3. In level 5 of the planning graph propositions p1 and p2 are not
mutex. Is it possible for the propositions to be mutex at level 3?
How about level 9?

4. Suppose a planning graph with mutexes has been grown to level off
(i.e., until two consecutive proposition levels have the same
propositions and binary mutexes).

4.1. If propositions p, q, r appear first in level 17 with no pair of
them marked mutex, are we guaranteed to have a plan of length 17 for
achieving p, q, r? If not, how long could the plan for p, q, r be?

4.2. If propositions p, q appear first in level 12 without mutex, are
we guaranteed to have a plan of length 12 for them? If not, how long
can the plan be?

4.3. If proposition p appears first in level 7, are we guaranteed to
have a plan of length 7 for achieving p? If not, how long can the plan
be?

To the two students who asked me a question about the pattern databases extra credit portion

This is for the two students  who had a question about doing the pattern database part of the extra credit portion.

You might want to look at the second page of the short  paper at the following URL to get some additional information on
how to implement the PDB heuristic

Come and talk to me if you have additional questions

(By the way, when you asked me at the end of today's class, I was thinking of planning and was talking about Pattern Database
heuristics for planning. The pattern database heuristic for the 8-puzzle should be quite easy to implement--I may have made it sound
harder than it really is)


Subbarao Kambhampati

HW2 Recitation, Time and Place

In the interests of setting a schedule earlier rather than
later...(that is, so people can make plans accordingly)

The time will be Friday 5:00-6:00 pm; the vote is overwhelmingly in
favor of this slot, at the current state of my inbox (approx 90%). I
suppose the slot has the added advantage of coming after the thursday
class, so there will be more to discuss! My apologies to anyone who
intended to vote, but didn't -- in the case of future votes on
recitation times, I will try to give more than a day to respond.

The place will be in BYENG 576, especially since I foresee little
conflict with others at that time; I wouldn't be surprised if the
building was nearly empty after 4 pm on a friday ;). In the event of
some unforeseen conflict, we'll move somewhere else, like 510, and
leave some kind of note near 576 for the invariably late.


Monday, September 18, 2006

project problem description

I got some questions concerning the project; you might find the answers useful.

1). Yes. nodes is a list of node struct values, and goalp, children-fn, and fvalue-fn are all functions. "funcall" calls a function; you can check the syntax of funcall.

2). Here, sort means sorting the list of nodes in ascending order based on the f-value (fvalue-fn should return an f-value). One simple example of sorting is like this:
CL-USER> (defun fn (x)
(abs (- x 3)))
CL-USER> (sort '( 2 3 5 4 1) #'< :key #'fn)
(3 2 4 5 1)

in which fn is a function returning the value of |x-3|. The list is then sorted based on the value of |x-3|.

3). I think the instructor already explained the meaning of each entry. However, this definition is very general; you might have to specialize it for your problem.

(defstruct node
STATE ;;the state of the problem corresponding to this node
PARENT ;;the node which is this node's parent
ACTION ;;the action which was taken at the node's parent to get here
G-VAL ;;the g-value of the node
H-VAL ;;the h-value of the node
F-VAL ;;the f-value of the node
)

4). It's very difficult to generate testing input for just part 1, as it is basically a framework for implementing the A* algorithm. You have to combine it with part 2.


> I have a few questions about the function A*. I am not too good with lisp so bear with me. I think some examples will help me the most.
> 1) nodes is a list of node struct values. goalp, children-fn, and fvalue-fn are functions correct? 2) Can you explain to me how the sort function works here, with an example?
> 3) The struct node, can you explain what the values in the struct represent. A good way to do this would be to show me an example of a tree with these values.
> 4) What is a good input for testing this after completion?
> I've said a mouthful. I think most of my problems are just that. Understanding the problem (with a bit of lisp unfamiliarity). Any help will be great. I'll stop by your office tomorrow around 4pm if I can.
> (defun A* (nodes goalp children-fn fvalue-fn)
> (cond ((null nodes) nil)
> ;; Return the first node if it is a goal node.
> ((funcall goalp (first nodes)) (first nodes))
> ;; Append the children to the set of old nodes
> (t (A* (sort (append (funcall children-fn (first nodes))
> (rest nodes)) #'<
> :key fvalue-fn )
> goalp
> children-fn
> fvalue-fn))))
> (The sort function we are using is a built-in function for
> commonlisp. It takes three arguments: List to be sorted, the
> comparision predicate with which sorting is done (in our case <) , and
> an optional argument :key, which takes a function and applies it to
> the elements of the list, to get the field over which items of the
> list are compared (in our case, the f-value))
> To make our life simple, we shall assume that search nodes are defined
> as structures with at least the following slots:
> (defstruct node
> STATE ;;the state of the problem corresponding to this node
> PARENT ;;the node which is this nodes' parent
> ACTION ;;the action which was taken at the nodes parent to get here
> G-VAL ;;The g-value of the node
> H-VAL ;;THe h-value of the node
> F-VAL ;;THE f-value of the node
> )
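For what it's worth, here is a toy instantiation of the A* skeleton and node struct quoted above, just to show how the pieces fit together (this is my own illustration, not a project test case):

```lisp
;; Toy problem: states are integers, the start state is 0, the goal is
;; state 5, the only action increments the state, and the heuristic is
;; the exact remaining distance (5 - state).
(defun make-child (n)
  (let* ((s (1+ (node-state n)))
         (g (1+ (node-g-val n)))
         (h (- 5 s)))
    (make-node :state s :parent n :action 'inc
               :g-val g :h-val h :f-val (+ g h))))

;; Using the A* function and node struct quoted above:
;; (A* (list (make-node :state 0 :g-val 0 :h-val 5 :f-val 5))
;;     (lambda (n) (= (node-state n) 5))     ; goalp
;;     (lambda (n) (list (make-child n)))    ; children-fn
;;     #'node-f-val)                         ; fvalue-fn
;; returns the node whose STATE is 5.
```

From the returned goal node you can recover the solution path by following the PARENT slots back to the start.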

HW2 Recitation

Some people have expressed an interest in another recitation session.
So, the following times seem to be good times:

Thursday 10:00 - 11:00 (am)
Friday 5:00 - 6:00 (pm)

Everyone interested in attending is welcome to cast a vote on the
preferred time, by sending me an email. That's Thursday and Friday
this week, in case it wasn't obvious, i.e., before the assignment is
due, not after. So if you want to give input on the time, it makes
sense to do so soon :).

Also feel free to include any questions about the assignment or
request for a particular area to review more in depth.

Oh, and it will probably be held in the same place as last time, but I
don't know for sure yet -- when the time is decided, the place will be
as well (and I'll send a mail). Note that the Friday time is before
the building locks, an improvement over last time :).


Project 1 Assignment in .doc format

Having found the document for the Project 1 assignment unpleasant to read, I converted it to a Word document and made some formatting changes to the text to enhance readability. Feel free to use it yourself.

Disclaimer: though I did my best to leave the content unchanged, I make no promises. Use at your own risk.

Progress of project 1?

Hi, there.
It seems a little weird that there have been very few questions concerning
the project, which is due on Thursday. In the past, I would have
received a bunch of emails regarding the project by now. Please start the
project as soon as possible if you haven't yet. Also, please let me know
how much you have done, so that I can discuss with the instructor
whether or not the deadline should be extended.

btw: Some of you might find that the initial stack size for CLISP is too
small (3 MB). You might want to increase it so that your search does not
run out of memory. Please check the blog for more information.


Manhattan distance heuristic

When computing the Manhattan distance heuristic for the 8-puzzle, the book doesn't include the empty tile's distance when summing up the distances. Is this to ensure that the heuristic will not overestimate? Wouldn't it be more accurate to include the empty tile?
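For reference, here is a sketch of the usual blank-excluding version (my own illustration; the state representation, a list of 9 tile numbers in row-major order with 0 for the blank, is an assumption you would adapt to your own code):

```lisp
;; Manhattan distance for the 8-puzzle, skipping the blank tile.
(defun manhattan (state goal)
  (loop for tile in state
        for i from 0
        unless (zerop tile)                            ; skip the blank
        sum (let ((j (position tile goal)))
              (+ (abs (- (floor i 3) (floor j 3)))     ; row distance
                 (abs (- (mod i 3) (mod j 3)))))))     ; column distance
```

E.g., (manhattan '(1 2 3 4 5 6 0 7 8) '(1 2 3 4 5 6 7 8 0)) sums only over the eight numbered tiles.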

Approach to Increase the memory size of CLisp

Some of your test cases might require a larger stack. You can actually increase the stack size of CLISP. If you are running CLISP from the command line, you can call it like this: "clisp.exe -m 300mb", which allocates 300 MB of memory. Under Lisp in a Box, you can add "-m 300mb" next to the "-q" option in "RunCLISP.bat" under the .../lispbox/CLISP folder.


Thursday, September 14, 2006

Part C added to Homework 2. It is now due 9/26

I added Part C--which contains a bunch of planning-related
questions--to the homework. You should be able to do parts 1.a-1.d
right now, and the rest of the parts after next Tuesday's class.

[Sep 14, 2006]

Discussion questions for the blog...

Here are some things you can think about (and put your comments on the blog):

1. In the regression example given in the class, we found that there
was a state {~clear(b); holding(A)} that was generated as a child
by regression, but it is clearly dead. It would be a good idea if
we can kill every such state on birth so the search is improved. How
hard do you think such a strategy is going to be? (Specifically, is it
guaranteed to be a computational win?)

2. Given a domain with K state variables and A actions, we said that
the size of the space searched by progression is 2^K and that searched
by regression is 3^K. What is the size of the space searched by the
partial order planner?

3. We called it the "partial order" planning algorithm. However, the
example we did in the class actually had a total order. Does this
make sense?

4. I said that the partial order planner will work on two types of
flaws--open conditions and unsafe links. When it picks a flaw, it
has to consider all ways of resolving it (for open conditions, this
involves all possible steps that can be added + the possibility of
making it true from the initial state; for unsafe links it involves
promoting as well as demoting the threatening step).

Is the order in which the flaws are resolved going to have any
effect on the completeness (in particular, for our example, should we
have considered supporting ~cl(b) first and supporting hand-empty
first as two different choices in the search tree?)

5. (slightly harder) If I have actions with "conditional
effects"--where the action in essence says that I will give you P if
Q is true at the time I execute--will such conditional effects affect
the way a conflict between a step and a causal commitment gets resolved?

(think, for example, the action, drive car from home to school, which
not only brings the car to school, but also brings everything in the
car to school. Suppose you want to leave your laptop at home and bring
the AI book to school. Suppose further that in the beginning, the
laptop is in the car and the car is at home. How will you solve this
problem in terms of progression, regression, and partial order planning?)

[Sep 14, 2006]

Readings for next week as well as readings for today's discussion


The reading for the partial order planning discussion I did in the
class today is Section 11.3 in the text book. You should definitely
read the example in the text too so you can understand the idea.

I will be discussing 11.4 (finally, knock on wood!) in the next
class. It would also help if you read the optional reading I
suggested: first 5 sections of

I am also adding a bunch of slides to reflect all the discussion
that happened in the class today. Check out, in particular, the three
slides added to the beginning of the class today. (I will add a
hand-written example of the PO planning once I get to my tablet pc at

Subbarao Kambhampati
Professor, Department of Computer Science and Engineering
Ira A. Fulton School of Engineering
Arizona State University
P.O. Box 878809
Tempe, AZ 85287 - 8809 (email)
480-965-0113 (Phone) 480-965-2751 (FAX)

Tuesday, September 12, 2006

Homework 2 assigned..

I opened the socket for Homework 2. It already has a whole bunch of
questions. It will be due in about 2 weeks' time (I may add a
question or two--the socket is not closed yet).


Monday, September 11, 2006

partial solution for project 0 and some useful macros for lisp

The solution to some problems of project 0 is available at:

You might want to check it out. Keep in mind that this solution is just one version; there could be other ways of implementing it.

Besides, I'd like to introduce some useful macros you might be interested in:
1). CLISP doesn't provide a "while" loop like C or Java, but you can define one yourself as follows:

(defmacro while (condition &rest forms)
           `(loop (unless ,condition (return)) ,@forms))

A sample call would be:
CL-USER> (let ((x 3))
              (while (> x 0)
                 (format t "~d~%" x)
                 (decf x)))
Please include the macro definition in your source file if you use "while" in the follow-up projects.

2). How to interrupt an infinite loop in Lisp?
Sometimes your Lisp code might run forever. If you are using Lisp in a Box, just press "Ctrl-C Ctrl-C" and the interpreter will enter debugging mode. You can then type "q" to terminate the evaluation.

3). Some useful commands for debugging in Lisp. You can use "trace" to track a function.
Also, you can use the following macro to display variable values at any point.

(defmacro display (&rest symbols)
  (let ((format-statements
          (loop for sym in symbols
                collect `(format t "~a: ~s~%" ',sym ,sym))))
    `(progn ,@format-statements)))

A sample call is below:
CL-USER> (let ((a 2) (b 4))
           (let ((c (+ a b)))
             (display a b c MOST-POSITIVE-FIXNUM)))
A: 2
B: 4
C: 6
MOST-POSITIVE-FIXNUM: 281474976710655


Have fun! :-)

Searching in Large Spaces

I mentioned some recent advances in search algorithms for large (gargantuan) state spaces during the homework recitation. The first one reminds me of SMA*, in so far as SMA* keeps track of f values to guide its re-expansion strategy. Beam-Stack Search is clever, as I recall, in that it keeps track of upper and lower bounds to re-expand portions of the tree in a way reminiscent of the algorithm in 7.2 of homework 1. The second reference is an earlier algorithm by the same author which is not able to reconstruct the solution after finding the goal (for the first time) --- it has to start search over, recursively, to re-generate the solution.

Rong Zhou, Eric A. Hansen: Beam-Stack Search: Integrating Backtracking with Beam Search. ICAPS 2005: 90-98

Rong Zhou, Eric A. Hansen: Breadth-First Heuristic Search. ICAPS 2004: 92-100

The latter paper has been published in multiple places; KR, ICAPS, and AAAI, for example. Hopefully the paper, if not the title, differs in the details or exposition from conference to conference. If so, then one has a better chance of understanding the algorithm by reading all of the multiple publications.

Another line of work by the same author has to do with partially eliminating duplicates in undirected graphs, in the context of searches like the prior ones (i.e., huge spaces). Richard Korf also publishes extensively on techniques for searching large state spaces. For example,

Richard E. Korf: Best-First Frontier Search with Delayed Duplicate Detection. AAAI 2004: 650-657

There are many other authors of course. In the same time frame as all the prior references, there is one last algorithm that comes to mind:

Stefan Edelkamp: External Symbolic Heuristic Search with Pattern Databases. ICAPS 2005: 51-60

Which is interesting because it uses external memory, pattern databases, and symbolic methods.


Friday, September 08, 2006

On the homework extra credit problem on the IDDFS/DFS node expansions ratio

There was apparently a question in yesterday's recitation session as to whether the
ratio of nodes expanded by IDDFS to DFS should really be (b+1)/(b-1).

The (b+1)/(b-1) expression is the ratio of the _average_ numbers of nodes expanded by
the two algorithms.

The ratio of the worst-case numbers of nodes expanded will turn out to be slightly different.

(Getting the ratio for the average case is slightly more involved than getting the ratio for
the worst case.)


Subbarao Kambhampati

Wednesday, September 06, 2006

slides for lisp recitation

Dear all,
I've already posted the slides of the recitation online. They are available at

Most of the information is included. You may want to check them out.
For those who missed today's recitation, please stop by during my office
hours if you have any questions.


Tuesday, September 05, 2006

On sliding tile problems--history and math (and a question)

****Number of states in n-puzzle games:
Below are some statistics on the sizes of the state spaces, and the
time taken to solve them by brute-force enumeration, under the highly
optimistic assumption that you can expand 10 million (10,000,000 =
10^7) nodes per second:

8 Puzzle: 10^5 states, .01 seconds
15 Puzzle: 10^13 states, 6 days
3x3 Rubik's Cube: 10^19 states, 68,000 years
24 Puzzle: 10^25 states, 12 billion years

As I mentioned in the class, the Manhattan distance heuristic is enough to
bring the 15-puzzle down to solvability. The 24-puzzle, however, is a different
beast altogether. We are only now able to start cracking 24-puzzle
problems (with pattern database heuristics).

***An interesting point to ponder:
Here is a question for you to think about. The game of chess has about 10^40
legal positions--which is much bigger than 10^25, and yet a computer
is the chess champion. How come? (Answer by adding comments on the blog).

********History of the sliding puzzles is quite interesting..

The fifteen puzzle was invented by Sam Loyd in the 1870s, and appeared
in the scientific literature shortly thereafter. The editor of the
journal where the paper on the 15-puzzle appeared said at the front of the
paper: "The 15 puzzle for the last few weeks has been prominently
before the American public, and may safely be said to have engaged the
attention of nine out of ten persons of both sexes and of all ages and
conditions of the community." In other words, it was the Pokemon and
hula hoop of the 1870s.

Part of the reason for the world-wide fifteen puzzle craze was that
Loyd offered a $1000 cash prize to transform a particular initial
state to a particular goal state. Johnson and Story later proved that
it wasn't possible: the entire state space is divided into
even and odd permutations, and there is no way to transform one
into the other by legal moves (see below).


Also see the writeup
which explains, using the theory of permutations, why Loyd's original
puzzle could not be solved.

****The theory behind the disconnectedness of the state space of tile puzzles
has got to do with "even and odd permutations". In general, if you take
a state, physically lift two consecutive tiles in that state, and swap
their positions, then the new state will be disconnected from the old
state. You can see that

1 2 3
4 5 6
8 0 7

can be obtained by

starting with

1 2 3
4 5 6
7 8 0

physically lifting and swapping 7 and 8

to get

1 2 3
4 5 6
8 7 0

and then moving the blank to the left

The swapping ensures that the goal state and the state

1 2 3
4 5 6
8 0 7

are disconnected.
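For the curious, here is a sketch of the parity test behind this (my own illustration): read the tiles in row-major order, ignore the blank, and count inversions, i.e., pairs of tiles that appear out of numerical order. For the 3x3 puzzle, legal moves never change the parity of the inversion count, so two states with different parities are disconnected.

```lisp
;; Count inversions in a tile list such as '(1 2 3 4 5 6 8 0 7),
;; ignoring the blank (0).
(defun inversions (tiles)
  (let ((ts (remove 0 tiles))              ; drop the blank
        (count 0))
    (loop for (a . rest) on ts
          do (incf count (count-if (lambda (b) (> a b)) rest)))
    count))

;; (evenp (inversions '(1 2 3 4 5 6 7 8 0)))  =>  T   (even permutation)
;; (evenp (inversions '(1 2 3 4 5 6 8 0 7)))  =>  NIL (odd permutation)
```

Swapping two adjacent tiles changes the inversion count by exactly one, which is why the lifted-and-swapped state above lands in the other component.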


Subbarao Kambhampati

Project 1 assigned. Due 9/22

I put up project 1 online. We have covered everything you need to
know to do it. Please print and read the project
description. Don't let the length of the project description
intimidate you--in this project, I give you a lot of code
so what you need to write is not all that much.

I will answer questions about the project in Thursday's class. You can
also ask the TA before that ;-)

I have set the deadline to be 9/22. Since this is the first real
project, I will try to monitor the progress and adjust the deadline if
necessary.
[Sep 5, 2006]

Lisp recitation @BYENG 420

I am going to hold a recitation on Lisp programming at 3:30-4:30pm
tomorrow (Sep. 6th) in BYENG 420.
You are welcome to come, though this is a little late for your project.
I'll try to cover the basics, key concepts, and some tricks for
programming under Lisp in a Box.

See you.

Monday, September 04, 2006

Lisp for Mac Users

Here's what I found for Lisp on Mac OS X.

Install Darwin Ports, which is a really useful thing to have handy anyway, to install a lot of Linux ports.

Then open up a terminal and type "sudo port install sbcl" if you are using an Intel Mac, or "sudo port install clisp" if you are using a PPC Mac. SBCL is available for both PPC and Intel, and CLISP is available for PPC only right now, and CLISP is what most of the class will probably be using in Linux/Windows.

Right now the Xemacs site is down, but if it gets back up and you want to, you can do a "sudo port install xemacs" (but make sure you have Apple's optional X11 installed from your Mac install DVD). And you can also do a "sudo port install slime" to install a lisp thing for emacs, and get it configured.

All that I have done, however, is install SBCL. And I'm editing my "defuns" using vim, then doing a (load "myfile.l") from within SBCL.

It was frustrating trying to find what's available for Mac OS X, but I think this is pretty much it right now (that's free). I have WinXP installed on another partition on my (intel) Mac Mini, but I really didn't feel like using Windows.

-Corby Ziesman

HW 1 Recitation, Thursday 9/7 6:30 pm-7:30 pm

Among the respondents, Thursday is slightly more popular. At this
time, there are no specific questions about the homework, so the
general plan is to review the lecture notes with an eye towards the
homework questions. If you do have a specific question, send it to me
(not the list).

My plan is to use BYENG 576, but if that is unavailable (or too
small), then the backup plan is to use BYENG 510.


Sunday, September 03, 2006

Project 0 submission

Please hand in both hard copies and soft copies!

The hardcopies will be collected in class. Please print your name and ASU ID clearly at the beginning of your sheet.

As for soft copies, please include all your functions in one file. The file should be named "YourName.lisp". Please make sure it compiles.

Please send your soft copy with your name in the subject line. Thanks!



For question 6: finding the cube root of a number using the Newton-Raphson method requires an estimate x0, as stated in the question. However, do you want the estimate to be computed based on the given value, or do you want us to give the function an estimate?
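For what it's worth, here is one way the iteration itself can be set up, with the estimate as an optional parameter (my own sketch, not the required interface; the tolerance value is arbitrary):

```lisp
;; Newton-Raphson for the cube root of A: iterate
;; x <- x - (x^3 - a) / (3 x^2), starting from a positive guess X0.
(defun cube-root (a &optional (x0 1.0) (tolerance 1e-6))
  (let ((x x0))
    (loop until (< (abs (- (* x x x) a)) tolerance)
          do (setf x (- x (/ (- (* x x x) a) (* 3 x x)))))
    x))

;; (cube-root 27.0)  =>  approximately 3.0
```

Whether x0 should be a fixed default like this or derived from the input is exactly the question above, so check with the instructor.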

Problem Solved (Allegro in a Linux Box)

I downloaded an updated file of Allegro from this page:

The license could also be downloaded from the same place.
There were no problems during the installation, and
now everything is working nicely under Linux.


Saturday, September 02, 2006

lispbox under Linux

I had no problem installing Lisp in a box for Windows but in the
case of Linux I am getting an error:

Dumping under names emacs and emacs-21.3.1
make[2]: *** [emacs] Segmentation fault
make[2]: *** Deleting file `emacs'
make[2]: Leaving directory `/home/acardh/2006-2/LispLinux/lispbox/emacs-21.3/src'
make[1]: *** [src] Error 2
make[1]: Leaving directory `/home/acardh/2006-2/LispLinux/lispbox/emacs-21.3'
make: *** [all] Error 2
My emacs version is 21.4.1, so that should not be the problem.
If somebody has had the same error and knows how to fix it, I would appreciate their help.

Thanks in advance

Friday, September 01, 2006

Reminder: AI Seminar Today@10:30 BY660: Distributed PCA and network anomaly detection

Title: Distributed PCA and network anomaly detection
Speaker: XuanLong Nguyen, EECS, UC Berkeley

Place: BY 660; Friday 9/1; 10:30--11:30AM


We consider the problem of network anomaly detection given
the data collected and processed over large distributed systems.
Our algorithmic framework can be seen as a distributed
version of the well-known principal component analysis
method, which is concerned with tracking the behavior
of the data projected onto the residual subspace of the principal
components. Our approach consists of a protocol for
local processing at individual monitoring devices and
global decision-making and feedback at a coordinator.
A key ingredient of our framework is an analytical
method based on stochastic matrix perturbation theory for
balancing the tradeoff between the accuracy of network anomaly
detection, and the amount of data communication over the
network. This is joint work with Ling Huang, Minos Garofalakis,
Joseph Hellerstein, Michael Jordan, Anthony Joseph and Nina Taft.