Thursday, November 30, 2006
A reference for the discussion on support vector m/c
http://www.acm.org/sigs/sigkdd/explorations/issue2-2/bennett.pdf
Rao
Fwd: Instructor/Course Evaluations
As per the mail below, I am encouraging you to complete the course evaluations before next Wednesday.
Your feedback--especially written comments--will be carefully read.
(By the way, to state the obvious, we get these only after all the grades are submitted etc. So, you can be candid ;-)
rao
---------- Forwarded message ----------
From: James Collofello <JAMES.COLLOFELLO@asu.edu >
Date: Nov 29, 2006 5:41 PM
Subject: Instructor/Course Evaluations
To: "DL.WG.CEAS.Faculty" <DL.WG.CEAS.Faculty@mainex1.asu.edu>
Colleagues,
The Fall 2006 teaching evaluations are scheduled to be available to students starting Wed 11/29 around 9:00 am and will close on Wed 12/6 (reading day) at 12:00 midnight. Students will be able to access the evaluation tool at: https://intraweb.eas.asu.edu/eval
Please encourage your students to complete the evaluations or face several nagging email requests. Good luck on your scores!
James S. Collofello
Associate Dean for Academic Affairs
Ira A. Fulton School of Engineering
Wednesday, November 29, 2006
Re: Next Tuesday's class format: Interactive Review...
interesting things to say, since it is somewhat probable that if you
find something interesting, someone else in the class, speaking before
you, will have found it interesting as well.
-Will
On 11/29/06, Subbarao Kambhampati <rao@asu.edu> wrote:
> Folks
>
> It is customary to do an end-of-course review in the last class. My version
> of this is to let *you* do the review. This is called the
> "interactive review" session.
>
> A significant portion of Tuesday's class--which will also be the last class
> of the semester--will consist of interactive review and discussion.
>
> If you are registered for this course, attendance to this class is
> mandatory.
>
> Each of you will get about 2min to hold forth on any of the following:
>
> -->topics covered in the course that particularly caught your fancy (and
> why)
> --> intriguing connections *between* the various topics covered in the
> course that struck you
> --> what topics--if any--got overplayed or should have gotten more coverage
>
> It may be useful for you to make some notes along these lines *before*
> coming to class--so
> you have everything ready to hold-forth when called on.
>
>
> rao
>
> ps: Tomorrow--Thursday--we will have our regular class which will continue
> discussion of Machine Learning
>
Next Tuesday's class format: Interactive Review...
It is customary to do an end-of-course review in the last class. My version of this is to let *you* do the review. This is called the
"interactive review" session.
A significant portion of Tuesday's class--which will also be the last class of the semester--will consist of interactive review and discussion.
If you are registered for this course, attendance to this class is mandatory.
Each of you will get about 2min to hold forth on any of the following:
-->topics covered in the course that particularly caught your fancy (and why)
--> intriguing connections *between* the various topics covered in the course that struck you
--> what topics--if any--got overplayed or should have gotten more coverage
It may be useful for you to make some notes along these lines *before* coming to class--so
you have everything ready to hold-forth when called on.
rao
ps: Tomorrow--Thursday--we will have our regular class which will continue discussion of Machine Learning
Monday, November 27, 2006
Sunday, November 26, 2006
Pointer to Geoff Hinton IJCAI research excellence award lecture..
In Wednesday's make-up class, I mentioned that Geoff Hinton is foremost among the people working on brain-inspired neural networks (there isn't that much work on neural nets in the ML community these days, because you can get by with max-margin kernel classifiers instead of multi-layer neural nets. However, understanding the training of multi-layer and recurrent nets does offer the possibility of understanding how the human brain works).
Anyways, I mentioned Hinton's research excellence award lecture at IJCAI last year. Here is a link to his slides (he calls this version the "gentle/after-dinner version" of his talk):
http://www.cs.toronto.edu/~hinton/talks/gentle.ppt
(The other versions of the talk can be found at http://www.cs.toronto.edu/~hinton/talks.html )
enjoy..
Rao
Saturday, November 25, 2006
Project 4 help
like:
(defun varsubst (pattern binding)
  (if (null pattern) nil
      (if (atom (car pattern))
          (if (eq '? (car pattern))   ;find matching and return var
              ;; if pattern is (? var)
              (if (equal pattern (caar binding)) (list (cadar binding))
                  ;; if binding doesn't match, traverse through bindings
                  (list (match-var pattern (rest binding))))
              ;; else, atom in pattern isn't ?, so append car pattern to return list
              (append (list (car pattern))
                      (varsubst (rest pattern) binding)))
          ;; else, not an atom...pattern is a list
          (append (varsubst (car pattern) binding)
                  (if (null (varsubst (rest pattern) binding)) nil
                      (list (varsubst (rest pattern) binding)))))))

(defun match-var (pattern binding)
  (if (equal pattern (caar binding)) (cadar binding)
      (match-var pattern (rest binding))))
I'm not sure if it is supposed to look that customized, but it works.
I am now on the rename function and understand that the same generated number
suffix is appended to each variable. I can generate a number such as "112",
but it is in string format. I've been looking around and can't figure out how
to combine "112" and 'rao to produce a variable like 'rao112.
Thanks in advance.
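In case it helps, one standard Common Lisp idiom for this is to build the new name as a string with FORMAT and then INTERN it back into a symbol. This is only a sketch: the function name RENAME-VAR below is my own illustration, not part of the project code.

```lisp
;; Glue a numeric suffix onto an existing symbol's name.
;; FORMAT with a NIL destination returns the result as a string;
;; INTERN turns that string into a (possibly new) symbol.
(defun rename-var (var suffix)
  (intern (format nil "~A~A" (symbol-name var) suffix)))

;; Example: (rename-var 'rao 112)  ; => RAO112
```

Note that SYMBOL-NAME gives the upcased name ("RAO"), so the interned symbol prints as RAO112 but still reads back as 'rao112.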
Friday, November 24, 2006
Optional reading on Max-margin classifiers..
http://www.acm.org/sigs/sigkdd/explorations/issue2-2/bennett.pdf
Rao
Thursday, November 23, 2006
re: Recitation Session, Tuesday 28th November, 1:40->
Just a clarification so you don't think I am sneaking in an
extra class ;-) The recitation session is purely optional.
The regular class for 28th is what was made up yesterday.
rao
==========
Rao brings up the excellent point that everyone ought to be free to
meet on the 28th of November at the normal class time.
So the next recitation session will be held then, in the classroom
itself. As always, send any specific questions you'd like covered in
more depth, for example, questions on the midterm that you still don't
understand fully. Other than that the plan is as usual; cover the
material from the end of the last recitation up till now and comment
on the connections to the homework.
-Will
Audio and video of yesterday's make-up class available online
The audio, video and slides of the lecture are available online. Some notes:
1. The video is 320x160 (if you view it at 200%, the size and resolution are reasonable).
The size of the video is a whopping 718MB (so if you want to download it, you will have to be
on a high-speed internet connection).
2. In two places in the slides, I replaced the slide used in the class with a better one. In both cases,
I left the older version in, with a warning saying that the replacement slide right next to it is better.
that is all.
Happy thanksgiving!
Rao
Wednesday, November 22, 2006
Recitation Session, Tuesday 28th November, 1:40->
meet on the 28th of November at the normal class time.
So the next recitation session will be held then, in the classroom
itself. As always, send any specific questions you'd like covered in
more depth, for example, questions on the midterm that you still don't
understand fully. Other than that the plan is as usual; cover the
material from the end of the last recitation up till now and comment
on the connections to the homework.
-Will
Tuesday, November 21, 2006
request for feedback on the first two learning lectures..
I know that several of you are simultaneously taking either the grad
or undergraduate level data-mining courses. I was wondering, for those
of you, whether the learning lectures were useful at all or are they
too much of the same thing that you have learned. If you have any
comments let me know--either directly or via the anonymous email
http://rakaposhi.eas.asu.edu/cgi-bin/mail?rao
(Feedback is also welcome from others who are learning about learning
for the first time. I was struck by the unusually stoic demeanors of
the students in the class and was wondering whether the cause was
enlightenment or boredom).
thanks
Rao
[Nov 21, 2006]
Monday, November 20, 2006
*Important*--Please read (about Project 4; upcoming gigantic homework assignment; making up for early projects etc)
Here are several announcements:
1. It looks like there will not be enough time for a coding oriented project 5. So, project 4 will be the last coding project.
(I can see your disappointed looks; I am sorry it had to be this way..)
2. The main assignment after project 4 is a double homework that has been evolving. I will extend it with problems (and perhaps mini-applet assignments) on learning. This homework will be due on the last day of the class (5th December--no exceptions). Furthermore, there may not be an explicit one-week gap between the addition of the last question and the due date. So, I strongly suggest that you all start working on the problems that are already there.
3. Since there is no project after the prolog one, I am willing to allow students who did very badly in any of the earlier projects a chance to improve their project grades. If you want to do that, let me and the TA know (and also let us know which project(s) you want to work on and what your grade on it was); we may be able to tell you whether it will be worth your time. (Your final project grade will be a weighted average of the two grades). *All* work must be submitted by the last day of the class--no exceptions.
4. Since project 4 is the last project, I am also willing to take it without penalty until Wednesday make-up class; and with a flat 3*pi % late penalty until next Monday. Those who have banked an extension can give it until next Monday without penalty.(I know there is no class on Monday--it is your responsibility to hand-in the hard copy--either at the dept office or to the TA)
[Of course, you can hand it in tomorrow and spend your time on other things such as the homework.]
that is all
regards
rao
Sunday, November 19, 2006
Re: project 4 report question
Rao
Hi Lei and Prof Rao,
I just wanted to ask what exactly we are expected to show regarding
the output of Prolog with domain 2, that is, the family tree.
(For the earlier parts I am including edited output, as the trace was very crisp.)
The dribble file for the domain 2 questions is very large. Should we just mention the logical steps that Prolog has taken
to prove/answer the question (not the actual console statements), or just mention our final answer (along with observations, if any)?
The actual output can be verified, as we are submitting the code also.
If any one of you can throw some light on how much we should include in the report, it would be great.
--
Regards,
Mandar
Friday, November 17, 2006
Fwd: CSE598 - Project 4 - small correction
since adding additional background knowledge is not going to
hurt you (and, if Peyman is right, will help you in this case).
Rao
---------- Forwarded message ----------
From: Peyman Nayeri <Peyman.Nayeri@asu.edu>
Date: Nov 16, 2006 4:26 PM
Subject: CSE598 - Project 4 - small correction
To: Subbarao Kambhampati <rao@asu.edu>
Hi,
For the 4th task in the 4th project there is a hint
saying to add knowledge to the effect that sibling is
a transitive relation. This won't help. We need to add
knowledge to make sibling a symmetric relation.
Cheers,
Peyman
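For what it's worth, the fix Peyman describes can be sketched in Prolog roughly as follows. The predicate names sibling/2 and sib/2 here are my own illustrative assumptions, not necessarily the ones used in the project.

```prolog
% Assumed base fact, for illustration only.
sibling(amy, bob).

% Making the relation symmetric: if X is a sibling of Y, then Y is a
% sibling of X.  Using a separate wrapper predicate avoids the infinite
% recursion that a direct rule  sibling(X,Y) :- sibling(Y,X).  would cause.
sib(X, Y) :- sibling(X, Y).
sib(X, Y) :- sibling(Y, X).
```

With this, the query ?- sib(bob, amy). succeeds even though only sibling(amy, bob) was asserted.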
Thursday, November 16, 2006
Bias, generalization and stereotypes: A half-baked lesson in Ethics
[[extra-curricular]]
[ACM suggests that some percentage of CS courses should be spent on discussing
ethics. May be this will fill that role... ]
Inductive generalizations are what allow the
organisms with their limited minds to cope with the staggering complexity
of the real world. Faced with novel situations, our ancestors had to
make rapid "fight or flight" decisions, and they had to do biased
learning to get anywhere close to survival. So, we can't really
seriously ask people not to generalize or not to have biases!
The problem of course is where does this leave us vis-a-vis
stereotypes--the "all Antarcticans are untrustworthy", "all
Krakatoans are smelly" variety. After all, they too are instances of
our mind's highly useful ability to induce patterns from limited
samples.
So, what, if any, is the best computational argument against stereotyping? One
normal argument is that the stereotype may actually be wrong--in
other words, a wrong (non-PAC) generalization, either
because it is based on selective (non-representative) samples, or
because the learner intentionally chose to ignore training samples
disagreeing with its hypothesis. True, some
stereotypes--e.g. "women
can't do math", "men can't cook" variety--are of this form.
However, this argument alone will not suffice, as it leaves open the
possibility that it is okay to stereotype if the stereotype is
correct. (By correct, we must, of course, mean "probably approximately
correct," since there are few instances where you get metaphysical
certainty of generalization.)
What exactly could be wrong in distrusting a specific Antarctican because
you have come across a large sample of untrustworthy Antarcticans?
I think one way to see it is perhaps in terms of "cost-based
learning". In these types of scenarios, you, the learning agent, have
a high cost on false negatives--if you missed identifying an
untrustworthy person, or a person who is likely to mug you on a dimly
lit street, or a person who is very likely to be a "bad" employee in
your organization, your success/survival chances slim down. At the
same time, the agent has much less cost on false positives, despite
the fact that the person who is classified falsely positive by your
(negative) stereotype suffers a very large cost. Since the false
positive *is* a member of the society, the society incurs a cost for
your false positives, and we have the classic case of individual good
clashing with societal good.
This then is the reason civil societies must go the extra mile to
discourage acting on negative stereotypes, so that we do not round up all
Antarcticans and put them in bootcamps, or stop all Krakatoans at
airport security and douse them with Chanel No. 5. And societies, the
good ones, by and large do, or at least try to. The golden rule,
the "better a thousand guilty go free than one innocent be imprisoned", and
the general societal strictures about negative stereotypes--are all
measures towards this.
You need good societal laws precisely when the individual
good/instinct clashes with the societal good.
So, you are forced to learn to sometimes avoid acting on the highly
efficient, probably PAC, generalizations that your highly evolved
brain makes. I think.
Yours illuminatingly... ;-)
Rao
Epilogue/can skip:
It was a spring night in College Park, Maryland sometime in
1988. Terrapins were doing fine. The Len Bias incident was slowly
getting forgotten. It was life as usual at UMD. About the only big
(if a week-old) news was that of a non-caucasian guy assaulting a
couple of women students in parking lots. I was a graduate student,
and on this particular night I did my obligatory late-evening visit to
my lab to feign some quality work. My lab is towards the edge of the campus;
just a couple more buildings down the Paint Branch Drive, and you get
to the poorly lit open-air parking lots.
On that night I parked my car, walked down the couple of blocks to my
lab, only to remember that I left a book in the car. So, I turned, and
started walking back to the parking lot. As I was walking, I noticed
that this woman walking in front turned a couple of times to look back at me. I remembered
that I had passed her by in the opposite direction. Presently I
noticed her turning into the cryogenics building, presumably her
lab. As I passed by the cryo lab, however, I saw the woman standing
behind the glass doors of the lab and looking at me.
Somewhere after I took a few more steps it hit me with lightning
force--I was a false positive! The woman was basically ducking into
the lab to avoid the possibility that I might be the non-caucasian
male reportedly assaulting campus women. I knew, at a rational level,
that what she was exhibiting was a reasonably rational survival
instinct. But it did precious little to assuage the shock and
diminution I felt (as evidenced by the fact that I still remember the
incident freshly, after over 15 years). There is no better way
to assess the cost of false positives than to be a false positive
yourself sometime in your life...
--------------
....not to make up your minds, but to open them.
To make the agony of decision-making so intense that
you can escape only by thinking.
-Tag line from Columbia School of Journalism Seminars
"Induction extends your expectation, not your experience"
Colorless green ideas sleeping furiously (Chomsky, Universal Grammars etc. (Long.))
As I mentioned in the class today, for the longest time, and by that I
mean, until well into late 50's, the conventional scientific wisdom
was that infants come into this world with a "Tabula Rasa" (blank
slate) mind, and pick up everything by learning (supervised or
reinforced) and observation. The reigning doctrine was
"behaviorism"--you can condition/reinforce any behavior into any
organism. To behaviorists, children were but cuter (pavlovian) rats,
and language acquisition was no different from the acquisition of maze
following skills. B.F. Skinner was the leading exponent of behaviorism
and was, in early fifties, writing book after book expounding on how
behaviorism can explain all sorts of human behavior.
[Skinner was such an ardent behaviorist that there was even an
apocryphal urban legend that said he raised his own daughter in a
"skinner box" to test his behaviorism hypotheses--see
http://www.snopes.com/science/skinner.htm ]
When Skinner came around to applying behaviorism explanations to
language acquisition and wrote the book on "Verbal Behavior", it was
expected to further shore up the behaviorism doctrine, and become a
classic. What became a classic instead is a critical scholarly 1959
"review" of the book by a then little-known linguist named Noam
Chomsky (
http://cogprints.ecs.soton.ac.uk/archive/00001148/00/chomsky.htm ).
Chomsky essentially killed the book as well as much of the euphoria
of behaviorism with an argument that has since come to be known as
the "poverty of stimulus" argument. He said that behaviorism and
stimulus/response reinforcement does not quite explain how it is that
children seem to be able to generate sentences that they have never
heard. In other words, there are not enough examples (hence a
poverty of "stimuli") for children to learn the entire
language--grammar and sentences together (even for children--such as
mine--with overly talkative parents ;-) [Note that the argument
that something cannot be learned is made in terms of
the inordinate number of examples needed to learn it. As we saw in
class, the difficulty of learning tasks is measured in terms of
"sample complexity".]
As an alternative explanation, Chomsky cited his own work on
"generative grammars"--a set of grammar rules that can generate
"grammatically correct" sentences from a language. He said that it
must be the case that children come into the world with grammar rules
already in their head. Since the grammars of different world languages
are different, he posited that the children come into this world with
a "universal" grammar. Using the language being spoken around them,
they then set the "parameters" (or knobs, if you will) on their
universal grammar such that it becomes customized to the specific
language environment they are in. Once they have the customized
grammar, they then are in the business of learning word sense (or
semantics). "Colorless green ideas sleep furiously" is one of
Chomsky's famous examples, which he uses to show that even small kids
with limited vocabularies can tell automatically whether a sentence is
grammatically correct. [Even here, in learning semantics, children
come into the world with pretty strong biases--including the so-called
"whole object hypothesis". If I point towards a big moving bus and say
"Bus", the kid hypothesizes that the whole big thing is called
bus--not just the wheels, or the hubcaps, or some subset of the
parts. Pretty strong bias, come to think of it--what if I said
"Driver" pointing towards the driver side of the bus?]
Chomsky of course went on to become the demi-god of cognitive science
in general, and mathematical linguistics in particular (and you
obviously heard of him in your CSE 355, when you probably learned
about the Chomskian hierarchy of languages--which is in terms of their
grammar complexity). A lot of research has been done since Chomsky's
original work, to shore up the support for the universal grammar
hypothesis. It is so much of an accepted fact (dogma) now that it
(universal grammar) in turn is seen as yet another evidence that all
humans evolved from a common set of ancestors--as against evolving
separately and independently (the "Lucy" theory,
http://www.pbs.org/wgbh/evolution/library/07/1/l_071_01.html ; by the
way Don Johanson, who found Lucy skeleton in Ethiopia, is right here
at ASU--check out http://www.asu.edu/clas/iho/dcj.html ). The basic
argument is that the rank similarity of the human languages cannot be
explained without it. (Of course, there are much stronger arguments
for the common ancestor theory--including the fact that we are all
same species--any man from anywhere in the world can mate with any
woman from anywhere in the world and produce healthy offspring).
So that is some of the background on the universal grammar. By the
way, note that none of the above says that conditioning will not be
effective in changing ones behavior--you probably saw the recent press
accounts of the infamous Wendell Johnson orphan stuttering experiments
(http://www.jsonline.com/news/nat/jun01/stutter11061001.asp). All
Chomsky's argument says is that conditioning and reinforcement are
only part of the story and cannot alone explain language acquisition;
evolution did a whole other part too.
Now for a couple of references. Probably the best-written lay-person
book on human language acquisition is Steven Pinker's "Language
Instinct"( http://www.amazon.com/exec/obidos/tg/detail/-/0060976519/104-1220170-3641559).
A very nice and eminently watchable 3-part PBS series on language
acquisition is "Human Language"
( http://equinoxfilms.home.mindspring.com/HLseries.html).
That is all for now. Now for more important things like Seinfeld rerun
already in progress.
Rao
Epilogue/Postscript: These days of course, a search on Chomsky on the
web is more likely to turn up references to his political views than
his linguistic ones. He is sort of a one-man vocal and loyal
opposition for many of the US government policies. For example, he
wrote one of the first dissenting "non-party line" opinions of the
9/11. Whether I agree with him or not, I am grateful that there is
someone unafraid of speaking his mind--especially in these days of
hyper-patriotism, where FBI thinks it normal to monitor you because
you are against war, and your freedoms are being re-defined as the
ones Ashcroft hasn't yet gotten around to excising into the "Patriot
Act".
----------------
"I simply want to tell you that there are some men in this world who
were born to do our unpleasant jobs for us. Your father's one of
them."
[..] "I always thought Maycomb folks were the best folks in the world,
least that's what they seemed like."
"We're the safest folks in the world" said Miss Maudie. "We're so
rarely called on to be Christians, but when we are, we've got men like
Atticus to go for us."
--Miss Maudie and Jem talking about Atticus Finch, as
Scout and Dill look on..
Re: CSE 598
in Florida). We are doing a make-up on
Wednesday 22nd (same time; room to be decided)
Rao
Dr. Rao,
What day is class not being held which is causing us to have the make up class on November 22nd?
-James
Wednesday, November 15, 2006
Several clarification of project 3 & Solution for the extra credit problem
Concerning the 2 pts off for the comment on Task IV: I will take it back (I guess I was expecting too much). So the final effective full score would be 25.
Also, since this is a mini-project, its contribution to your final grade will be less than other projects.
The solution for the extra credit problem is below:
If we do not take the test, the expected cost would be $10,000 (take the insurance).
If we take the test, then depending on the test result, we can decide whether or not to take insurance.
So with 0.4 probability, LHW would be true. Then P(CM|LHW) = 0.27. Since 100,000*0.27 > 10,000, we definitely need to take the insurance. Hence, when LHW turns out to be true, the cost would be 10,000.
With 0.6 probability, LHW is false. Then P(CM|not LHW) = 0.09. In this case, it is no longer necessary to take the insurance, and the expected cost would be 100,000*0.09 = 9,000.
Therefore, the expected cost after taking the test would be
0.4*10,000 + 0.6*9,000 = 9,400.
Thus, we can pay at most 10,000-9,400 = 600 for the test. But here the test price is $1,000 > $600, so it is not worth taking the test.
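The reasoning above is a value-of-information calculation. Written out in one place (using P(CM) = 0.4*0.27 + 0.6*0.09 = 0.162, which follows from the numbers given), it is:

```latex
\begin{aligned}
EC(\text{no test}) &= \min\bigl(10{,}000,\; 100{,}000 \cdot P(CM)\bigr)
                    = \min(10{,}000,\; 16{,}200) = 10{,}000\\
EC(\text{test})    &= P(LHW)\cdot\min\bigl(10{,}000,\; 100{,}000\cdot P(CM\mid LHW)\bigr)\\
                   &\quad + P(\lnot LHW)\cdot\min\bigl(10{,}000,\; 100{,}000\cdot P(CM\mid \lnot LHW)\bigr)\\
                   &= 0.4\cdot 10{,}000 + 0.6\cdot 9{,}000 = 9{,}400\\
VOI(\text{test})   &= EC(\text{no test}) - EC(\text{test}) = 600 < 1{,}000\ (\text{the test's price}).
\end{aligned}
```

The test is worth buying exactly when its value of information exceeds its price, which is why the $1,000 test is declined here.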
AI Project 3 statistics
P3         Without Extra Credit (27)       With Extra Credit (33)
           Mean    Median   Highest        Mean    Median   Highest
Overall    19.6    21       25             22.4    24       30
Under      17.4    16                      19.4    17
Graduate   21.2    22                      24.7    26
-Lei
Tuesday, November 14, 2006
Project 3 Grading Standard
Task I: 6 = 2 (for the network&CPT) + 2 (for the probability) + 2 (for comment)
Task II: 8 = 2 (for CPT) + 2 (for Propositional logic) + 2 (for probability) + 2 (for logic proof to show equivalence)
Task III: 7 = 3(for the network) + 2 (for probability) + 2(for comment)
Task IV: 6 = 2(for network) + 2(for change to singly connected tree) + 2 (for comment)
Extra: 6= 2+4.
Re: Grade cutoffs from Fall 2003 (with *no* implicit guarantees about your future..)
Several of you wanted some guidance about how the scores translate into
letter grades at the end of the semester. As I said in the class, there is no automatic translation program. It requires all the deliberative powers of a bleary-eyed full professor to convert them into letter grades.
If it helps, the following are the cumulative scores and lower-bound grade cutoffs that were used last time around.
This is strictly to give you a non-binding example. Every class is different, and
the actual grades this time will again be determined adaptively. (In particular, last time's lowerbounds may or may not be admissible heuristics on this time's grades...)
Feel free to ask me questions either anonymously or in person. Like I said, at this point, after this many classes, tests, projects and exams, if you are still enjoying the class, it would be a shame to lose you purely because of grade anxiety...
regards
Rao
From: Subbarao Kambhampati <rao@asu.edu>
To: cse471-f03@parichaalak.eas.asu.edu
Date: Fri, 19 Dec 2003 08:52:33 -0700
Subject: Admissible heuristic for letter grades...
People started asking me for letter grades.
Your final letter grades will be available sometime next week online.
However, I think it is reasonable to give you a lowerbound on your grade.
Here then is an admissible--and reasonably informed (since I am making it
;-) heuristic on estimating your grade:
For Graduate students:
If your cumulative is >80% your lower bound grade will be an A
Above 70, lowerbound grade is B.
For UG students:
If your cumulative is >75% your lower bound grade will be an A
if your cumulative is > 65, your lowerbound grade will be B
if your cumulative is >50, your lowerbound grade will be C
if your cumulative is >35, your lowerbound grade will be D
else E.
***In both cases, if your cumulative+extra credit pushes you over a threshold, then
you get that higher grade.
Rao
ps: I am willing to take comments from people about grade thresholds that
are _below_ the category they
are in (i.e., A folks can tell me whether the A,B,C thresholds should be
changed. B folks can tell me if B and C thresholds can be changed
and so on). You can comment on whether the current thresholds are too
generous or too tight etc.
(The top part is graduate and the bottom part is UG in the bitmap below)
Saturday, November 11, 2006
Current cumulatives (only required portion--extra credits not included)
Some of you wanted to know your class standing currently. I weighted the materials graded until now
as follows: Pr0: 1pt; Pr1 and Pr2: 10pt each; HWs: 5pt each; exam: 20pt (which comes to 56pts, with the remaining
points to come from the rest of the semester). The last two columns give your percentage assuming you
score at the same rate, and also your relative percentage w.r.t. the top scorer in each section.
The extra credit is not taken into account (I try not to look at them
until I decide the grades). Your extra credit points will be scaled
the same way your normal points are scaled.
As I mentioned, I like the +/- system and expect to give all the
grades A+/A/A- etc.
Let me know if you have any questions and need any anxiety
amelioration.
regards
rao
CSE471
CSE598
===============================================================
Re: code for unify
your lisp file)
rao
Wednesday, November 08, 2006
Readings for tomorrow--10.3 and 15.5
then 15.1-->15.4 as time permits)
rao
Tuesday, November 07, 2006
In case the (ir)rationality of sqrt(2)^sqrt(2) is bugging you... + Constructive vs. Existential math.
It turns out that sqrt(2)^sqrt(2) is irrational (actually transcendental (*)). So a
constructive proof for our theorem is p=sqrt(2)^sqrt(2) and q=sqrt(2), since then
p^q = (sqrt(2)^sqrt(2))^sqrt(2) = sqrt(2)^2 = 2, which is rational;
see http://www.math.hmc.edu/funfacts/ffiles/10004.3-5.shtml
(which also points out a more general and easy-to-understand constructive proof. Consider
e^{log_e q} for any transcendental number e and any rational number q with q > 0 and q != 1--this is just q. All you need to show is that log_e(q) is irrational, and you can show this easily: if log_e(q) = m/n with integers m and n without common factors, then
q = e^{m/n}, i.e., e^m = q^n. This would mean that e is a root of the algebraic equation x^m - q^n = 0. But the definition of a transcendental number is that it cannot be a root of any algebraic equation!)
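Restating the two constructive witnesses in symbols (nothing new here, just the argument above written out):

```latex
% Witness 1: p = \sqrt{2}^{\sqrt{2}} (irrational, in fact transcendental),
% q = \sqrt{2}
\left(\sqrt{2}^{\sqrt{2}}\right)^{\sqrt{2}}
  = \sqrt{2}^{\,\sqrt{2}\cdot\sqrt{2}}
  = \sqrt{2}^{\,2} = 2 \in \mathbb{Q}

% Witness 2: transcendental e, rational q with q > 0, q \neq 1
e^{\log_e q} = q \in \mathbb{Q}, \qquad \log_e q \notin \mathbb{Q}
% (if \log_e q = m/n, then e^m = q^n, so e would be a root of x^m - q^n = 0,
%  contradicting transcendentality)
```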
Rao
(*) By the way, transcendental => irrational, but not vice versa. In particular, transcendentals are those irrational numbers that cannot be roots of any algebraic equation. Two famous examples, of course, are e and pi. Notice that proving that a number e *is* transcendental involves showing, among other things, that e^r cannot be rational for any nonzero rational number r (since if it were, then e would be the root of an algebraic equation). Thus, proving transcendentality is not all that easy.
(ps 2:
Check out
http://digitalphysics.org/Publications/Cal79/html/cmath.htm
for a nice discussion of constructive vs. classical mathematics--and of how, in Hilbert's time, there was a pretty big controversy in mathematics, with mathematicians such as Brouwer insisting that all math depending on existential proofs be thrown out. Papa Hilbert had to come to the rescue--pretty heady stuff.
You might also look at
http://plato.stanford.edu/entries/mathematics-constructive/
which also talks about the "slick" proof that an irrational raised to an irrational power can be rational.)
Fwd: Project 2 & Take home mid-term statistics
The following are the statistics for Project 2 (game playing) and the take-home mid-term.
Project 2 (out of 45 without extra credit, 65 with):

              Without Extra Credit (45)    With Extra Credit (65)
              Mean    Median   Highest     Mean    Median   Highest
Overall       38      40       45          41      42       63
Undergrad     35      36.5     -           35.25   36.5     -
Graduate      40.2    42       -           45.3    45       -

Mid-term (take-home):

              Mean    Median   Highest
Overall       59      61       82
Undergrad     49.3    45       78
Graduate      66      71       82
Monday, November 06, 2006
Thursday, November 02, 2006
Wednesday, November 01, 2006
Re: Converting multiply connected networks to singly connected ones
the multiply connected rain-sprinkler network is converted into a singly connected one. (Basically, you convert a multiply connected net into a singly connected one by
merging some of the nodes into new supernodes, which take values in the cartesian product of the domains of the merged nodes.)
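As a quick sketch of the supernode construction (purely illustrative; the function name is mine, and this only builds the merged domain, not the merged conditional probability table):

```python
from itertools import product

def merge_into_supernode(domain_a, domain_b):
    """Domain of the merged supernode: the cartesian product of the
    two original nodes' domains (every pair of joint values)."""
    return list(product(domain_a, domain_b))

# e.g., merging the boolean Sprinkler and Rain nodes of the example network
# yields a single supernode with four joint values:
sprinkler_rain = merge_into_supernode([True, False], [True, False])
```

The merged node's CPT would then be defined over these joint values, which is what makes the resulting network singly connected at the cost of larger domains.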
rao
Dr. Rao, Can you give me some pointers (for part IV) on how to convert multiply connected networks into singly connected ones?
KABOOM and Core-meltdown (Re: late in the day question about Project Three)
(I just assumed that a core meltdown would be accompanied
by a big noise; I didn't think that a core meltdown would be undetectable to outsiders, as you seem to suggest.
You may well be right from a strict nuclear-technology standpoint--but I stand by my theory.)
Rao
ps: Here is a link to the Chernobyl disaster, which says that the meltdown produced enough of a fallout cloud for people to realize:
http://en.wikipedia.org/wiki/Chernobyl_accident
Dear Rao,
As I prefer to "save" any "get out of jail free" cards I'm given, I'm trying to finish Project Three tonight/tomorrow. I have a couple of questions about Task Part One, Questions 3A and 3B.
First, are they, in fact, extra credit?
Second, when you describe the "Kaboom during the holiday season," I feel there is a little vagueness about the criteria for determining if a "Kaboom" happens (that is, how would the residents of Springfield "prove" or "feel" that a Kaboom happened). Is it:
a) a CoreMeltdown occurs (is this nominally something those outside the plant can detect)
b) either of Glow-In-the-Dark employees / Apu's Slurpees Liquify are detected (even though both of these conditions might occur even if a Kaboom doesn't happen during the holiday season)
c) any of CM/GID/ASL occurs/is detected (even if no one glows and the slushies are ok, someone rats Burns out for the reward if CM happens)
I was originally leaning toward b), but as I consider the problem in more depth I think c) is actually the best definition of a Kaboom.
Thanks for any clarification you feel you can give,
Al