Tuesday, October 31, 2006
FYI: CSE 574 (Planning & Learning) is being offered next semester
CSE 574: Planning & Learning is being offered next semester. Sungwook Yoon is the instructor of record and Will Cushing
and J. Benton will also be actively involved (they are all being cc'd on this mail). Between them, they know a whole lot about the
area. Although I am not teaching, I may play the role of an honorary TA (if they let me).
If you liked the planning, MDP etc, you will enjoy 574.
Information on the past offerings of CSE 574 is available at http://rakaposhi.eas.asu.edu/cse574
UG students who want to take it for graduate credit should contact Sungwook.
regards
Rao
Re: Book prizes..
by........the name escapes me, but I'm sure google knows :).
Will
On 10/31/06, Subbarao Kambhampati <rao@asu.edu> wrote:
> This mail is primarily meant for the student who correctly guessed that
> he/she is one of the students crossing 70 (he/she knows who she/he is).
>
> It is being cc'd to the class list just to set the record straight and
> stop any possible speculations of reneging on my part (most of which I
> probably
> mooted myself ;).
>
> Dear >70:
>
> Please feel free to send me an amazon link to any one general science book that
> you would like to get. Some nonbinding suggestions that I enjoyed
> reading include:
>
> http://www.amazon.com/LanguageInstinctStevenPinker/dp/0060976519
> (The Language Instinct by Steven Pinker)
>
> http://www.amazon.com/GunsGermsSteelFatesSocieties/dp/0393317552
> (Guns, Germs & Steel by Jared Diamond)
>
> http://www.amazon.com/GenomeAutobiographySpeciesChaptersPS/dp/0060894083
> (Genome by Matt Ridley)
>
> http://www.amazon.com/AmericanPrometheusTriumphTragedyOppenheimer/dp/0375726268
> (American Prometheus: The Triumph and Tragedy of J. Robert Oppenheimer by
> Kai Bird and Martin Sherwin)
>
> http://www.amazon.com/ManWhoKnewInfinityRamanujan/dp/0671750615
> (The Man Who Knew Infinity: A Life of the Genius Ramanujan by Robert
> Kanigel)
>
> cheers
> Rao
Fwd: Hw3 statistics
---------- Forwarded message ----------
From: Lei Tang <L.Tang@asu.edu>
Date: Oct 30, 2006 11:55 AM
Subject: Hw3 statistics
To: Subbarao Kambhampati <rao@asu.edu>
The full score of hw3 is 40 + 40. I'll post the details of the grading standard on the blog later.
The following are the statistics of hw3 grades:
HW3        Without Extra Credit (40)    With Extra Credit (80)
           Mean   Median  Highest      Mean   Median  Highest
Overall    25.1   28      36           34.9   30      66
Under      19.3   18.5    --           26.6   21.5    --
Graduate   29.6   31      --           41.1   36      --
Very few students did the policy iteration problem correctly.
These two did a good job:
XXX : 36/40.
YYY.: 35/40.
cheers
Lei
Book prizes..
It is being cc'd to the class list just to set the record straight and stop any possible speculations of reneging on my part (most of which I probably
mooted myself ;).
Dear >70:
Please feel free to send me an amazon link to any one general science book that you would like to get. Some nonbinding suggestions that I enjoyed reading include:
http://www.amazon.com/LanguageInstinctStevenPinker/dp/0060976519 (The Language Instinct by Steven Pinker)
http://www.amazon.com/GunsGermsSteelFatesSocieties/dp/0393317552 (Guns, Germs & Steel by Jared Diamond)
http://www.amazon.com/GenomeAutobiographySpeciesChaptersPS/dp/0060894083 (Genome by Matt Ridley)
http://www.amazon.com/AmericanPrometheusTriumphTragedyOppenheimer/dp/0375726268 (American Prometheus: The Triumph and Tragedy of J. Robert Oppenheimer by Kai Bird and Martin Sherwin)
http://www.amazon.com/ManWhoKnewInfinityRamanujan/dp/0671750615 (The Man Who Knew Infinity: A Life of the Genius Ramanujan by Robert Kanigel)
cheers
Rao
(**Important Modification to the extension policy**) Re: Project 3 Halloween extension
Sorry for the second mail on this topic, but it occurred to me that an extension announced at 9pm is probably unfair to the people who have already mostly finished
project 3 (there seem to be a fair number of them), or who have already irrevocably missed all the Halloween parties and so can't
actually put the extension to use.
I thought up a hopefully fairer extension process:
> Project 3 is still nominally due on Thursday in class
> Each of you has one penalty-free extension that you can either use for this project or save and use for any of the later ones.
Apologies for any confusion.
Rao
It has been brought to the attention of the management that the Simpsons are too busy filming their semiannual
Halloween special. In view of this, it has been decided that project 3 will be accepted until
Tuesday the 7th, in class.
Project 4 (on prolog) will be given out by the end of this week and will be due before Thanksgiving.
Homework 4 socket will be opened and filled with uncertainty and logic questions.
Happy Halloween.
Rao
Project 3 Halloween extension
It has been brought to the attention of the management that the Simpsons are too busy filming their semiannual
Halloween special. In view of this, it has been decided that project 3 will be accepted until
Tuesday the 7th, in class.
Project 4 (on prolog) will be given out by the end of this week and will be due before Thanksgiving.
Homework 4 socket will be opened and filled with uncertainty and logic questions.
Happy Halloween.
Rao
Hw3 grading standard
The required part is Qn I (2+3+3+3+2+3) + Qn II (3+3+2+3) + Qn III (3+3+3) + the wacky question (4, as only four were discussed in class) = 40.
Extra credit:
Wacky question providing an additional metaphor (1')
+ Qn I (14')
+ Qn II (5'; I downweighted this part as we've done most of the questions in a past homework)
+ Qn III (8'+4')
+ Qn V (3'+5')
= 40
Monday, October 30, 2006
In-class exam statistics
I graded the in-class exams; they will be returned to you at the end of class tomorrow.
Here are the stats for the in-class exam (out of 85):
Undergrad: Avg = 30.85; Std dev = 16.72; Max = 70.5; Min = 8.5
Grad: Avg = 42.10; Std dev = 17.52; Max = 74.5; Min = 5
For the overall class here is the distribution
Top three scores: 74.5 (g); 70.5 (u); 65(g)
>70 (2; 1g; 1u)
60-70 (1; 1g)
50-60 (8; 6g; 2u) -- means there were 8 students between 50-60, of which 6 were grads and 2 were undergrads
40-50 (3; 3g)
30-40 (9; 5g; 4u)
20-30 (9; 3g; 6u)
10-20 (4; 1g; 3u)
0-10 (2; 1g; 1u)
regards
Rao
ps 1: If any of the over-70 folks can guess who they are and send me an email (subject: I think I am the grad/undergrad who crossed 70),
I will buy them a geek present (i.e., a book) for doing well as well as being gutsy enough to guess that they did well ;)
ps 2: Check the attachment for some unbaked philosophy on exams..
ps 3: If you need to vent, feel free to use the anonymous channel http://rakaposhi.eas.asu.edu/cgi-bin/mail?rao
Friday, October 27, 2006
Re: Take home exam
Yes (in fact, anything but open friend and open internet).
rao
> Dr Rao: The take home exam is also open book and open notes, right?
Matt Martindale's qns re: project 3
The answer to the first question is yes (if both are false, then core meltdown is false).
The answer to the second is also yes -- you should make all the causations perfect and exhaustive.
rao
 matthew.martindale said...

In part 1 of the project, Bart says that either the presence of low-quality heavy water OR inferior plutonium will cause a core meltdown. Does that mean that we expect a core meltdown NOT to happen if both IP and LQHW are false? Similarly, can we guarantee that the presence of both LQHW and IP will lead to a core meltdown with 100% certainty?
10/24/2006 12:04 AM
 matthew.martindale said...

Also, on part II, number 1, are we supposed to apply the "perfect and exhaustive" causations to the glow-in-the-dark-employee and slurpee-liquification nodes? (i.e., CM causes SL = 1.0, CM causes GIDE = 1.0)?
Thanks again
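Concretely, the "perfect and exhaustive" reading of the answer above can be written down as a deterministic-OR conditional probability table. The variable names and the dictionary encoding below are just an illustrative sketch, not the project's required format:

```python
# "Perfect and exhaustive" causation: core meltdown (CM) is a
# deterministic OR of low-quality heavy water (LQHW) and inferior
# plutonium (IP). Each CPT entry is P(CM = true | lqhw, ip),
# indexed by the truth values of the two causes.
cpt_cm = {(lqhw, ip): 1.0 if (lqhw or ip) else 0.0
          for lqhw in (False, True) for ip in (False, True)}
```

In particular, `cpt_cm[(False, False)] == 0.0` is exactly the "no meltdown when both causes are false" reading of the first question.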
Thursday, October 26, 2006
An exam so nice, some do it twice... or the lowdown on the at-home version of the in-class exam...
I understand that the exam went without too many hitches. Just in case
you are wondering whether or not to do the exam again at home, here is
my standard FAQ.
Rao (shivering in Philly)

The at-home-version-of-the-in-class exam is a pedagogical innovation
next only to the socket-open-socket-close homework assignments (and
is apparently just as confusing ;)
So, here is the standard FAQ on ahvotic:
0. What are the ground rules for doing this
Only that (a) you have not talked to anyone about the exam and (b) you
have to submit it at the beginning of the class
on Tuesday 10/31
1. Do I lose anything if I don't do it at home?
No (okay -- you do lose the satisfaction of doing it twice ;). Your
in-class grade will stand.
2. How is the effective midterm grade computed?
Eff = max( in-class; w*in-class + (1-w)*at-home )
3. What is the range of w?
0.5 < w <1
(typical values in the past ranged between .6 and .666)
4. But if everyone else does it at home and improves their grade, and
I decide to watch Seinfeld reruns, don't I lose out?
No. First of all, *nobody* ever loses out by watching Seinfeld reruns
(Channel 10; weeknights 10:30 and again at 11:30; also Channel 14 on
Rao's TV).
The difference between your in-class score and the Eff score will be
considered as your _extra credit_ on the midterm (and thus those
points won't affect grade cutoffs).
5. How do you devise these ludicrously complex schemes?
This is the only way of making Excel do some real work.
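The scheme in questions 2-4 can be sketched in a few lines; the value w = 0.6 is purely illustrative (the FAQ only promises 0.5 < w < 1):

```python
# Sketch of the effective-midterm rule from the FAQ:
#   Eff = max(in_class, w*in_class + (1-w)*at_home)
# w = 0.6 is illustrative; the actual w is only promised to lie in (0.5, 1).
def effective_score(in_class, at_home, w=0.6):
    blended = w * in_class + (1 - w) * at_home
    return max(in_class, blended)
```

The max() is what makes question 1's answer true: doing the exam at home can only help. For example, effective_score(50, 70) returns the blended score (about 58), while effective_score(60, 40) keeps the in-class 60.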

Rao
Wednesday, October 25, 2006
Talk of interest: C# 3.0: Another reason why functional programming matters (Thu BYENG 210; 12 noon; Pizza)
Folks:
For those of you still wondering why you should learn Lisp, here is a talk that should be of interest
rao

LISP is worth learning for a different reason — the profound enlightenment experience you will have when you finally get it. That experience will make you a better programmer for the rest of your days, even if you never actually use LISP itself a lot.
Eric Raymond
http://catb.org/esr/faqs/hackerhowto.html
It will be on 10/26, 11 a.m.-12 p.m., BYENG 210
Speaker: Wes Dyer, Microsoft
C# 3.0: Another reason why functional programming matters
Recently, there has been renewed interest in adopting language features pioneered by functional languages into mainstream languages. C# is an interesting example of an object-oriented language that has adopted functional features such as iterators, lambdas, and the ability to quote expressions. These features form the basis of LINQ, which is a powerful language-integrated querying model designed to query data from many domains with one medium.
Wes Dyer did his Bachelor's at ASU (and did his honors thesis with Rao ;)
Monday, October 23, 2006
Re: Makeup office hours on Monday afternoon 2:30pm (RSVP; can optionally be made a review session on demand)
office door). If not, I will talk to them one on one in my office.
rao
Folks
Since I will be away from Wednesday morning, I am willing to hold office hours on Monday 2:30pm
to answer any exam related question.
If you plan to show up, let me know. If several people plan to
show up and want to have a group review for the exam, I will arrange for a meeting room.
rao
Midterm Recitation, Tuesday (tomorrow) morning, 9:30-10:30
favorite among the respondents. The place will presumably be BYENG
576, although 510 is a fallback plan.
Will
On 10/21/06, Kyle Sanner <ksanner@asu.edu> wrote:
> Hi William,
> I'd prefer tuesday 9:30-10:30 for the recitation, and I'd like to see
> Planning reviewed if possible.
> Thanks,
> Kyle
>
> On 10/19/06, William Cushing <william.cushing@gmail.com> wrote:
> > Well, a fair number of people seem to have a conflict from 10:40-11:55
> > on tuesday, and tuesday is winning anyways (by a little bit; perhaps
> > it is only the case that the email obsessive (=early voters) prefer
> > mornings?). So I figure if people are free 10:00-11:00, they must not
> > have a 9:15-10:30 class (or a 10:40).
> >
> > Long story short, there is a late entry on the ballot, tuesday' --
> 9:30-10:30.
> >
> > It will not be at a serious disadvantage despite the late entry,
> > because I will assume that everyone who has said tuesday so far also
> > likes tuesday' -- you only need to email me again if that isn't true.
> > Ideally, tuesday' will absorb all the monday people and the people who
> > are conflicting with both days.
> >
> > Will
> >
>
>
Saturday, October 21, 2006
Makeup office hours on Monday afternoon 2:30pm (RSVP; can optionally be made a review session on demand)
Since I will be away from Wednesday morning, I am willing to hold office hours on Monday 2:30pm
to answer any exam related question.
If you plan to show up, let me know. If several people plan to
show up and want to have a group review for the exam, I will arrange for a meeting room.
rao
Thursday, October 19, 2006
Project 3 due date set to Nov 2nd
should be able to do it easily while bayes nets are fresh in your
mind.
The possible projects in the pipeline are:
A project on first-order theorem proving (aka writing your own prolog
interpreter)
Possibly a project on learning.
Rao
Re: Midterm Recitation
on tuesday, and tuesday is winning anyways (by a little bit; perhaps
it is only the case that the email obsessive (=early voters) prefer
mornings?). So I figure if people are free 10:00-11:00, they must not
have a 9:15-10:30 class (or a 10:40).
Long story short, there is a late entry on the ballot, tuesday' -- 9:30-10:30.
It will not be at a serious disadvantage despite the late entry,
because I will assume that everyone who has said tuesday so far also
likes tuesday' -- you only need to email me again if that isn't true.
Ideally, tuesday' will absorb all the monday people and the people who
are conflicting with both days.
Will
Midterm Recitation
is nice to hear :).
I'm available on Monday starting at 3 pm, nominally ending at 4 pm
(but I think history shows that recitation sessions go for an extra
half hour of specific questions), or tuesday starting at 10 am
(nominally ending at 11 am).
If you have a preference over those two times, send me a note -- I'll
be picking which day for sure no later than saturday.
There is a ton of material that could be covered (and not enough time
to review it all), so I'd really appreciate input on what particular
topics are the most confusing, i.e., most interesting to review.
Otherwise, the default plan is to review my favorite ideas covered in
class since the end of recitation 2 till present, maybe throw in some
of the really important older material, like A*.
Good luck on the test!
Will
Tuesday, October 17, 2006
Project 3 preview ready on the projects page
(No lisp coding...!)
rao
Qns for the class/Blog (on Bayes nets)
1. You have been given the topology of a Bayes network, but haven't yet gotten the conditional probability tables
(to be concrete, you may think of the Pearl alarm-earthquake scenario Bayes net).
Your friend shows up and says he has the joint distribution all ready for you. You don't quite trust your
friend and think he is making these numbers up. Is there any way you can prove that your friend's joint
distribution is not correct?
2. Continuing bad friends, in the question above, suppose a second friend comes along and says that he can give you
the conditional probabilities that you want to complete the specification of your Bayes net. You ask him a CPT entry,
and pat comes a response -- some number between 0 and 1. This friend is well meaning, but you are worried that the
numbers he is giving may lead to some sort of inconsistent joint probability distribution. Is your worry justified (i.e., can your
friend give you numbers that lead to an inconsistency)?
(To understand "inconsistency", consider someone who insists on giving you P(A), P(B), P(A&B) as well as P(AVB), and they
wind up not satisfying P(AVB) = P(A) + P(B) - P(A&B)
[or alternately, they insist on giving you P(A|B), P(B|A), P(A) and P(B), and the four numbers don't satisfy Bayes rule].)
3. Your friend heard your claims that Bayes nets can represent any possible conditional independence assertions exactly. He comes to you
and says he has four random variables, X, Y, W and Z, and only TWO conditional independence assertions:
X .ind. Y | {W, Z}
W .ind. Z | {X, Y}
He dares you to give him a Bayes network topology on these four nodes that exactly represents these and only these conditional independencies.
Can you? (Note that you only need to look at 4-vertex directed graphs.)
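For question 1, one mechanical approach is to check whether the claimed joint satisfies the conditional independencies that the given topology implies. A minimal sketch for binary variables follows; the tuple-keyed representation and the three-variable test joint are my own illustration, not part of the question:

```python
# joint: dict mapping full assignments (tuples of 0/1) to probabilities.
def marginal(joint, idxs):
    """Marginal distribution over the variables at positions idxs."""
    out = {}
    for assign, p in joint.items():
        key = tuple(assign[i] for i in idxs)
        out[key] = out.get(key, 0.0) + p
    return out

def cond_indep(joint, i, j, cond, tol=1e-9):
    """Check (variable i .ind. variable j | cond) by testing
    P(i, j, c) * P(c) == P(i, c) * P(j, c) for every assignment."""
    pij_c = marginal(joint, [i, j] + cond)
    pi_c = marginal(joint, [i] + cond)
    pj_c = marginal(joint, [j] + cond)
    p_c = marginal(joint, cond)
    for xc, pxc in pi_c.items():
        x, c = xc[0], xc[1:]
        for yc, pyc in pj_c.items():
            if yc[1:] != c:
                continue
            lhs = pij_c.get((x, yc[0]) + c, 0.0) * p_c.get(c, 0.0)
            if abs(lhs - pxc * pyc) > tol:
                return False
    return True

# Illustration: X = Z and Y = Z deterministically, Z a fair coin.
# Then X .ind. Y | Z holds, but X and Y are marginally dependent.
joint = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}
```

Here cond_indep(joint, 0, 1, [2]) is True while cond_indep(joint, 0, 1, []) is False; a claimed joint that fails any independence test implied by the topology provably cannot be a distribution for that network.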
=================
Rao
Current raw totals
Here are the current raw totals (Projects 0 and 1; Homeworks 1 and 2). The UGs (white) and Grads (green) are
sorted into two groups. Each group is sorted in descending order of Raw Total (purple column). Raw extra marks are kept
separate. Please note that these are RAW (unweighted) totals. The two homeworks and projects 0 and 1 together
probably won't count for more than about 25% of your overall grade (the exact percentage depends on how many homeworks and
projects get given).
This should give you some idea about your relative standing.
The grades are posted by posting ID which you should know (this is not your student id).
Rao
=====================================
Fwd: Project 1 statistics
The following is the information after regrading Project 1:
           Without Extra Credit (100)   With Extra Credit (120)
           Mean   Median  Highest       Mean   Median  Highest
Overall    83.3   90      100           88.0   95      120
Under      74.3   85      --            76.8   85      --
Graduate   89.9   90      --            96.1   97.5    --
The full score is 100 + 25.
The required part is weighted as 20(part I) + 25 (part II) + 30(part III) + 25 (presentation and analysis) = 100.
For extra credit, the first 3 tasks have 5 points each. The last task (pattern database) has 10 points.
Attached are the grade sheet for the project and the records with updated information (I split the extra credit from the required part).
The following students did a good job:
<removed>
Lei
Monday, October 16, 2006
Project 2 will be taken without penalty until Thursday (so you will have seen the graded version of Project 1)
Since project 1 is being returned tomorrow (Tuesday), to allow for the possibility that some of you might learn from the
grading of the project to improve your project 2 presentation, we will take project 2 until Thursday without penalty.
The homework 3 will still be due on Thursday.
FYI
Rao
Some comments on Project 1
The project reports will be returned to you in tomorrow's class.
Here are the guidelines for the grading:
Part I(20) + Part II (25) + Part III(30) + Presentation & Analysis (25) + Extra (5+5+5+10).
Thus, the full score is 100+25.
Here are some general suggestions to write a good report:
1. Write the report clearly so that I can tell that you finished each task. Some description of your implementation would be appreciated. Don't just give me the source code.
2. Give the output of some sample test calls to show that your code has no bugs.
3. Given the output, do some thorough analysis. You can plot some figures to show the differences between methods, and check whether they are consistent with the theoretical results (like the time complexity, space complexity, order of increase, etc.). What are the conclusions? Do they make sense?
4. Did you encounter any problem during the project? Anything unexpected? Can you conjecture any method to resolve it? Are there possible ways to get better solutions?
The basic philosophy is that the more time you spend on this project, the more you gain.
Please don't treat the project as just a "project", but as a chance for exploration.
Some students just gave the source code. That's too bad. Some students just gave me the output for each test case without any comment or analysis. Unfortunately, these students will probably lose credit for the analysis part. On the other hand, some other students really did a good job on the analysis part, with beautiful figures showing the trends for comparison.
BTW: The link for project 2 submission is in the Blackboard system under the assignments directory. Please hand in your program through that link. Don't use the digital drop box. Thanks!
regards,
Lei
Sunday, October 15, 2006
Fwd: Planning Questions
Initial Conditions: (and (garbage) (cleanHands) (quiet))
Goal: (and (dinner) (present) (not (garbage)))
Actions:
  cook   :precondition (cleanHands)
         :effect (dinner)
  wrap   :precondition (quiet)
         :effect (present)
  carry  :precondition ()
         :effect (and (not (garbage)) (not (cleanHands)))
  dolly  :precondition ()
         :effect (and (not (garbage)) (not (quiet)))
The correct planning graph can be found in the paper
http://www.cs.washington.edu/homes/weld/papers/pi2.pdf
(Figure 5 shows the graph with mutexes)
---------- Forwarded message ----------
From: Archana Ramesh <Archana.Ramesh@asu.edu>
Date: Oct 15, 2006 1:27 AM
Subject: Planning Questions
To: rao@asu.edu
Hi,
I did some reading on Planning and I think I've understood it much better now than I had before HW2. If you have access to any questions on Planning similar to the ones asked in HW2, could you please email them to me, as I would like to check my understanding now.
Thank you very much,
Archana
Friday, October 13, 2006
A site with many cool Java applets for the ideas discussed in the class..
The following site contains many cool java applets that you can use to learn about/experiment with techniques we are learning in the class.
http://www.cs.ubc.ca/labs/lci/CIspace/index.html
We will use the belief net tool from here in the next project; but other tools are neat too..
Rao
Thursday, October 12, 2006
P(S|~M) vs. P(S)
In general, there don't seem to be good and convincing arguments that it will always be easier to assess P(S|~M) compared to P(S). (So I oversold my
case :(
The most reasonable explanation I can offer on further reflection as to why we look at P(S|~M)-type probabilities rather than P(S) is that we are interested in computing the posterior probability distribution of a variable (rather than just its most probable value). If the patient's disease can be one of 7 types, there may be a prior distribution over these diseases, and after seeing some evidence, the doctor wants to get the posterior *distribution* on the diseases (not just the most likely disease but the distribution). If we are doing that, we will anyway need
probabilities of the type P(S|disease=di) (note that P(S|~M) can be seen as P(S|M=false)).
I added a slide to the class notes making this point.
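To make the posterior-distribution point concrete, here is a tiny sketch; the disease names, priors, and likelihoods are all invented for illustration:

```python
# Posterior over mutually exclusive diseases d1..d3 given a symptom S,
# using exactly the P(S|disease=di) numbers discussed above.
# All numbers below are invented for illustration.
prior = {"d1": 0.6, "d2": 0.3, "d3": 0.1}        # P(disease = di)
likelihood = {"d1": 0.9, "d2": 0.5, "d3": 0.1}   # P(S | disease = di)

unnorm = {d: prior[d] * likelihood[d] for d in prior}
p_s = sum(unnorm.values())                       # P(S), by total probability
posterior = {d: unnorm[d] / p_s for d in unnorm} # P(disease = di | S)
```

The result is the full posterior distribution the doctor wants, not just its argmax.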
cheers
Rao
Links from Today's class + Bayesian brain..
reason why Bayes is considered big -- the Bayesian view in statistics allows predictions even with little data, since it assumes that there may be a prior distribution. Frequentists, on the other hand, distrust priors
and depend only on the data. In practice this means that Bayesians can jump to conclusions with much less data.
http://www.economist.com/science/displaystory.cfm?story_id=5354696&no_na_tran=1
(and here is a more technical paper on which that article is based: http://web.mit.edu/cocosci/Papers/GriffithsTenenbaumPsychSci06.pdf )
See also the Wikipedia entry for Bayesian Probability http://en.wikipedia.org/wiki/Bayesian_probability which makes it clear that a Bayesian does not tie
probabilities to relative frequencies -- she is happy to give a probability to some event such as "seeing a grue wearing green shorts". This view of probability
is sometimes called "personal" probability. Bayesianism has come to dominate probability and statistics.
===========
The Human Language series I mentioned is the series described at http://www.thehumanlanguage.com/page1.html
It is an eminently watchable video series. The ASU media library has copies of this video.
================
Re: Doubt regarding Thursday's class
of the random variables involved in that event). For example, questions such as "what is the probability that
you will get at least 4 heads when you toss a coin 25 times?" Here, the distribution is the binomial one, and given the
distribution, you can compute the required probability.
In statistics, you are given the data, and are being asked to compute the model.
A general prediction problem may start with observed data, fit a model to it, and use the model to compute the probability of some event.
In the coin example above, suppose the coin can be a biased one. You are given the outcomes of 200 coin-toss experiments with that
coin, and are then asked to predict the probability that in the next 25 coin tosses, it will come up heads at least 4 times.
In this case, you will first learn the "parameter" (i.e., p, the probability that the coin comes up heads) from the data. This is statistics.
Then, given p, you plug it into the binomial distribution and can compute the probability that the coin will come up heads at least 4 times in the
next 25 tosses.
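The two steps above can be sketched directly; the 120-heads count is invented for illustration:

```python
from math import comb

# Step 1 (statistics): learn p from data.
# Suppose the 200 recorded tosses showed 120 heads (invented numbers).
heads, tosses = 120, 200
p = heads / tosses                      # maximum-likelihood estimate of p

# Step 2 (probability): plug p into the binomial distribution to get
# P(at least 4 heads in the next 25 tosses).
p_at_least_4 = sum(comb(25, k) * p**k * (1 - p)**(25 - k)
                   for k in range(4, 26))
```

With a p this large, at least 4 heads in 25 tosses is nearly certain, which matches the intuition that the expected number of heads is 25*p.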
Does this help?
Rao
Hi,
You have made the statement that "Probability is inferring from the model of the world and statistics is creating a model from data". Defining statistics as creating a model from data is okay, but I found it difficult to understand how probability is about inferring from the model of the world. Have you made this statement in general, or specifically to say that you can calculate the probability of different things once you know the joint probability? If you have made it in general, we can say that anything we do is an inference, thinking very abstractly -- even statistics.
Thanks,
Srianth Reddy B.
Qns for Today's class
1. We saw that propositional logic is monotonic and that the real world requires "defeasible" or "nonmonotonic" reasoning. Is probabilistic reasoning
monotonic or nonmonotonic? Explain.
2. What is the difference between "Probability" and "Statistics"?
3. We made a big point about the need to represent the joint distribution compactly. Much of elementary probability/statistics handles
continuous and multi-valued variables, where specifying the distribution of even a single variable would need a huge number of numbers.
How is this normally sidestepped in elementary probability?
Also, make sure to read/review chapter 13 which reviews elementary probability.
Rao
Announcement re: Homework 3 and Sample Midterm
I added one last question to homework 3 and closed the socket. It is now due in class next
Thursday (October 11th).
(The additional question is the wacky question).
Also, due to popular demand, I am releasing a sample midterm. You can see it at
http://rakaposhi.eas.asu.edu/cse471/samplemidterm06.pdf
You can optionally complete and submit the sample midterm as extra credit portion for homework 3.
Rao
Friday, October 06, 2006
Homework 2 Grading details
For part A, the grade points for each problem are the same as in the solution (41 points in total).
For part B, the score for each problem is just 2 points, NOT 3 points as in the solution. So there are actually only 12 points.
As for part C, a, b, c, d and e are 3 points each; f, g, and h are 2, 4 and 5 points respectively. This part has in total 3*5 + 2 + 4 + 5 = 26 pts.
Thus, the total score is
41+12+26=79.
Thursday, October 05, 2006
Homework 2 Stats
The following are some statistics of homework 2 (the full score is 79).
HW2: Mean / Median / Highest
Overall: 44.8 / 49 / 73
UG: 33.8 mean / 37.5 median
Graduate: 52.0 mean / 54.5 median
Wednesday, October 04, 2006
Class Survey Statistics available for the curious
In case you are curious, the statistics from the survey that I did in the class a couple of weeks back are available at
http://rakaposhi.eas.asu.edu/cse471/survey06.htm
Feel free to comment/interpret either on the blog or anonymously
cheers
Rao
Homework 2 solutions posted (graded homeworks will be returned in class on Thursday)
The homeworks will be returned in class on Thursday.
rao
AI to Monitor Foreign Press for Threats
from the sudden-rise-in-the-number-of-bush-sucks-articles-abroad dept.
Corby
Tuesday, October 03, 2006
Is DeepBlue intelligent? Some extracurricular philosophy
This is a mail I had sent to the class in Fall 2003. As they say
about reruns, if you haven't seen it, it is *new* for you ;)
Rao
Here is an article that discusses the question of whether Deep Blue -- the
Kasparov-beating chess program that we have been discussing -- is
"intelligent".
I send this to you because this is pretty much my bias/position too on
this issue (plus I like Drew McDermott's style -- if you ever get a
chance, you should read his paper "Artificial Intelligence meets
Natural Stupidity" -- which can be found at
http://rakaposhi.eas.asu.edu/mcdermott.pdf -- and was written in the
early days of AI (~1978) to criticize researchers' tendency to
self-delude... (which is also related to the AI/Thermos Flask
joke -- ask me about it sometime)).
Bottom line: Introspection is a lousy way to theorize about thinking.
See the end for a pointer to a different perspective
Rao
[9/26/2003]
How Intelligent is Deep Blue?
Drew McDermott
http://cs-www.cs.yale.edu/homes/dvm/
[This is the original, long version of an article that appeared in the
May 14, 1997 New York Times with a more flamboyant title.]
IBM's chess computer, Deep Blue, has shocked the world of chess by
defeating Garry Kasparov in a sixgame match. It surprised many in
computer science as well. Last year, after Kasparov's victory against
the previous version, I told the students in my class, ``Introduction
to Artificial Intelligence,'' that it would be many years before
computers could challenge the best humans. Now that I and many others
have been proved wrong, there are a lot of people rushing to assure us
that Deep Blue is not actually intelligent, and that its victory this
year has no bearing on the future of artificial intelligence as such.
I agree that Deep Blue is not actually intelligent, but I think the
usual argument for this conclusion is quite faulty, and shows a basic
misunderstanding of the goals and methods of artificial intelligence.
Deep Blue is unintelligent because it is so narrow. It can win a
chess game, but it can't recognize, much less pick up, a chess piece.
It can't even carry on a conversation about the game it just won.
Since the essence of intelligence would seem to be breadth, or the
ability to react creatively to a wide variety of situations, it's hard
to credit Deep Blue with much intelligence.
However, many commentators are insisting that Deep Blue shows no
intelligence whatsoever, because it doesn't actually ``understand'' a
chess position, but only searches through millions of possible move
sequences ``blindly.'' The fallacy in this argument is the assumption
that intelligent behavior can only be the result of intelligent
cogitation. What the commentators are failing to acknowledge is that
if there ever is a truly intelligent computer, then the computations
it performs will seem as blind as Deep Blue's. If there is ever a
nonvacuous explanation of intelligence, it will explain intelligence
by reference to smaller bits of behavior that are not themselves
intelligent. Presumably *your brain* works because each of its
billions of neurons carry out hundreds of tiny operations per second,
none of which in isolation demonstrates any intelligence at all.
When people express the opinion that human grandmasters do not examine
200,000,000 move sequences per second, I ask them, ``How do you
know?'' The answer is usually that human grandmasters are not *aware*
of searching this number of positions, or *are* aware of searching
many fewer. But almost everything that goes on in our minds we are
unaware of. I tend to agree that grandmasters are not searching the
way Deep Blue does, but whatever they are doing would, if implemented
on a computer, seem equally ``blind.'' Suppose most of their skill
comes from an ability to compare the current position against 10,000
positions they've studied. (There is some evidence that this is at
least partly true.) We call their behavior insightful because they
are unaware of the details; the right position among the 10,000 ``just
occurs to them.'' If a computer does it, the trick will be revealed;
we will see how laboriously it checks the 10,000 positions. Still, if
the unconscious version yields intelligent results, and the explicit
algorithmic version yields essentially the same results, then they
will be intelligent, too.
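The position-matching idea above can be sketched as an explicit, laborious loop. Everything here is hypothetical: the similarity measure (count of agreeing squares), the position encoding, and the stored positions are all invented for illustration, not drawn from any real chess program. What the sketch shows is that the ``trick,'' once written down, is an exhaustive check of every stored position.

```python
def similarity(pos_a, pos_b):
    """Count the squares on which two positions agree (a toy measure)."""
    return sum(1 for sq in pos_a if pos_a[sq] == pos_b.get(sq))

def best_match(current, studied):
    """Explicitly score every studied position; nothing 'just occurs'."""
    return max(studied, key=lambda name: similarity(current, studied[name]))

# Positions as square -> piece maps (uppercase White, lowercase Black).
studied = {
    "greek_gift": {"e4": "P", "d4": "P", "h7": "p"},
    "lucena":     {"e7": "P", "f2": "K", "d1": "r"},
}
current = {"e4": "P", "d4": "P", "h7": "p", "g8": "k"}
print(best_match(current, studied))  # → greek_gift
```

Run over 10,000 stored positions instead of two, the same loop is exactly the ``laborious'' checking the essay describes, even though its output looks like recognition.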
Another example: Most voice-recognition systems are based on a
mathematical theory called Hidden Markov Models. Consider the
following argument: ``If a computer recognizes words using Hidden
Markov Models, then it doesn't recognize words the way I do. I don't
even know what a Hidden Markov Model is. I simply hear the word and
it sounds familiar to me.'' I hope this argument sounds silly to you.
The truth is that we have no introspective idea how we recognize
spoken words. It is perfectly possible that the synaptic connections
in our brains are describable, at least approximately, by Hidden
Markov Models; if they aren't, then some other equally
counterintuitive model is probably valid. Introspection is a lousy
way to theorize about thinking. There are fascinating questions about
why we are unaware of so much that goes on in our brains, and why our
awareness is the way it is. But we can answer a lot of questions
about thinking before we need to answer questions about awareness.
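For readers who have never seen a Hidden Markov Model, here is a minimal sketch of the standard Viterbi algorithm, which mechanically scores every hidden-state path against the observations. The two-state model below is a toy stand-in for an acoustic model (real speech systems use far larger models); the states, probabilities, and symbols are invented for illustration.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable hidden-state path for the observations.

    Classic dynamic programming: for each state, keep the best-scoring
    path ending there, and extend all of them at every step.
    """
    best = {s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}
    for o in obs[1:]:
        best = {
            s: max(
                ((p * trans_p[prev][s] * emit_p[s][o], path + [s])
                 for prev, (p, path) in best.items()),
                key=lambda t: t[0],
            )
            for s in states
        }
    return max(best.values(), key=lambda t: t[0])[1]

# Two hidden phoneme-like states emitting two observable sounds.
states = ("A", "B")
start  = {"A": 0.6, "B": 0.4}
trans  = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit   = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.2, "y": 0.8}}
print(viterbi(["x", "x", "y"], states, start, trans, emit))  # → ['A', 'A', 'B']
```

Nothing in this computation ``hears'' anything; it multiplies probabilities. Yet that is the sort of machinery that may well underlie the effortless-feeling act of recognizing a word.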
I hope I am not taken as saying that all the problems of artificial
intelligence have been solved. I am only pointing out one aspect of
what a solution would look like. There are no big breakthroughs on
the horizon, no Grand Unified Theory of Thought. Doing better and
better at chess has been the result of many small improvements (as was
the proof of a novel theorem last year by a computer at Argonne Lab).
There have been other such developments, such as the
speech-recognition work I referred to earlier, and many results in
computer vision, but few ``breakthroughs.'' As the field has matured,
it has focused more and more on incremental progress, while worrying
less and less about some magic solution to all the problems of
intelligence. A good example is the reaction by AI researchers to
neural nets, which are a kind of parallel computer based on ideas from
neuroscience. Although the press and some philosophers hailed these
as a radical paradigm shift that would solve everything, what has
actually happened is that they have been assimilated into the AI
toolkit as a technique that appears to work some of the time, just
like Hidden Markov Models, game-tree search, and several other
techniques. Of course, there may be some breakthroughs ahead for the
field, but it is much more satisfying to get by on a diet of solid but
unglamorous results. If we never arrive at a nonvacuous theory of
intelligence, we will no doubt uncover a lot of useful theories of
more limited mental faculties. And we might as well aim for such a
theory.
So, what shall we say about Deep Blue? How about: It's a ``little
bit'' intelligent. It knows a tremendous amount about an incredibly
narrow area. I have no doubt that Deep Blue's computations differ in
detail from a human grandmaster's; but then, human grandmasters differ
from each other in many ways. On the other hand, a log of Deep Blue's
computations is perfectly intelligible to chess masters; they speak
the same language, as it were. That's why the IBM team refused to
give game logs to Kasparov during the match; it would be equivalent to
bugging the hotel room where he discussed strategy with his seconds.
Saying Deep Blue doesn't really think about chess is like saying an
airplane doesn't really fly because it doesn't flap its wings.
It's entirely possible that computers will come to seem alive before
they come to seem intelligent. The kind of computing power that fuels
Deep Blue will also fuel sensors, wheels, and grippers that will allow
computers to react physically to things in their environment,
including us. They won't seem intelligent, but we may think of them
as a weird kind of animal, one that can play a very good game of
chess.
==========
[[
For a radically different viewpoint, see
http://www.cs.yale.edu/oped/how_hard_is_chess.html
This one is by David Gelernter, who, get this, is a colleague of Drew
McDermott at Yale. On an unrelated note, Gelernter is also one of the
scientists who was targeted by the Unabomber (Ted Kaczynski), and was
seriously injured by a letter bomb; hence the title of his book at the
end of the article.]]