The overview lectures comprise the first two class meetings. They serve several purposes: first, to give you exactly that, an overview of Artificial Intelligence; second, to let you consider a variety of topics when choosing your course project; third, to link the course material to the various books on reserve for the course. This last connection is important, since you are required to use material from at least two of the books as part of your project.
You are not normally required to read the Bibliographical and Historical Notes at the end of each chapter. But if your project is based on one or more of the chapters, you should study those sections.
It is important not to think of these lecture notes as the course content. They are deliberately brief and serve to point you to the material in the book and in the readings.
Thursday, January 10th - first half of class
1.1, 1.2, 1.3 optional, 1.4, 1.5
2.1, 2.2, 2.3, 2.5; skim 2.4
3.1, 3.2, 3.3, 3.7; skim 3.4
4.1, 4.2, skim 4.3 and 4.4, 4.6
5.1, 5.5; skim 5.2
6.1, 6.2, 6.8; skim 6.3, 6.6
These first six chapters focus on concepts and strategies, rather than more specific domains and applications that could lead to a project. But the ideas in these chapters underlie much of AI and AI applications.
Chap. 1 - Easy reading, but important in giving you an overview of the entire field, as well as its history.
Chap. 2 - Agents are a major theme of our textbook. From a Cognitive Science perspective, the "?" in Figure 2.1 could be labeled "Cognition", and would cover a range of actions in animals from reflexes to complex reasoning. (I include humans in the category "animals", of course.) The "agent" perspective pervades the book, so you should pay careful attention to this chapter. My own views on agents differ somewhat from the book's: I believe that memory and learning need to be emphasized from the outset, and that it is possible to operate in a perceive/cogitate loop quite a bit, without any actions intervening. Naturalists spend hours and days just watching animal behavior, as do baseball fans watching a game. The actions on the observer's part in such activities are inconsequential. Funge's game AI book on reserve has sections related to this chapter, e.g., Acting, Perceiving, Reacting, and Remembering. Nilsson's brief Chapter 2 on "Reactive Machines" is useful here. There are some fascinating short chapters in the Bekoff book as well as in The Minds of Birds book. Comparing different types of task environments, Sec. 2.3, adds some perspective to the notion of agent. In a similar way, comparing the structure of a variety of more sophisticated agents in Sec. 2.4 is useful. I will bring my own perspective to the agent architectures.
Chap. 3 - Search: Most of the important and worthwhile problems in AI do not have any analytic or closed-form solutions. Ultimately then, they are solved by search. Later in the course we will dig into issues of space and time complexity of search. This chapter focuses on "uninformed" or "blind" search. The next chapter goes beyond these limitations; more complex strategies are needed to find solutions efficiently enough to be practical or even to have any hope of finding any solution at all. Chapter 8 of Nilsson is a good match to this one. For search in game AI, see Chap. 6 in Funge or Chap. 5 in Buckland. The classic formulation of search, bottom of pg. 69, is used throughout AI. You will recognize many classic approaches to search in Sec. 3.4.
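To make the classic formulation concrete, here is a minimal sketch of breadth-first ("blind") search built from its three elements: an initial state, a successor function, and a goal test. The toy road map and names below are my own invention, not from the text.

from collections import deque

def breadth_first_search(initial, goal_test, successors):
    """Return a list of states from initial to a goal state, or None."""
    frontier = deque([[initial]])          # paths awaiting expansion
    explored = {initial}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if goal_test(state):
            return path
        for nxt in successors(state):
            if nxt not in explored:
                explored.add(nxt)
                frontier.append(path + [nxt])
    return None                            # no path to any goal

# Invented example: a tiny road map as an adjacency dict.
roads = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}
print(breadth_first_search("A", lambda s: s == "E", lambda s: roads[s]))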
Chap. 4 - Informed search: This is a complex chapter, so for an overview you need only skim one section and read the brief summary. A typical "informed" search strategy as described in this chapter would be to find a road route to a goal city through a set of intermediate cities by using the air distance from each city to the goal as a guide, a "heuristic". Sections 4.1 and 4.2 need to be read together so you can understand the most famous of the informed search strategies, A* ("A-Star").
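Here is a minimal A* sketch in the spirit of Secs. 4.1 and 4.2: expand by f(n) = g(n) + h(n), where h is an air-distance heuristic. The place names, road costs, and air distances below are invented for illustration.

import heapq

def a_star(start, goal, neighbors, h):
    """Best-first search ordered by f = g + h; returns (path, cost)."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nxt, cost in neighbors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Invented road costs and air distances to the goal city.
roads = {"Home": [("Midway", 80), ("Bypass", 99)],
         "Midway": [("Goal", 97)], "Bypass": [("Goal", 211)], "Goal": []}
air = {"Home": 160, "Midway": 100, "Bypass": 90, "Goal": 0}
print(a_star("Home", "Goal", lambda s: roads[s], lambda s: air[s]))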
Chap. 5 - Constraint satisfaction: The classic problem of map coloring (so no adjacent countries have the same color) is described in section 5.1 as a constraint satisfaction problem (CSP). Other examples would be assigning airline flights to gates, working out employee shifts, or assigning classrooms to scheduled classes. Virtually all problems of interest in AI (or in life!) are limited by constraints of one sort or another. You might find Nilsson's short Chap. 8 useful. Sec. 5.2 on backtracking search for the solution of constraint problems is fundamental and deserves serious skimming, short of full study.
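Here is a minimal backtracking sketch for map coloring in the spirit of Sec. 5.1, using a simplified three-region map; the region names echo the book's example, but the adjacency data is trimmed for illustration.

def backtrack(assignment, variables, domains, neighbors):
    """Assign colors one region at a time; back up on dead ends."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # Constraint: no two adjacent regions share a color.
        if all(assignment.get(n) != value for n in neighbors[var]):
            result = backtrack({**assignment, var: value},
                               variables, domains, neighbors)
            if result:
                return result
    return None   # triggers backtracking in the caller

regions = ["WA", "NT", "SA"]
colors = {r: ["red", "green", "blue"] for r in regions}
adjacent = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
print(backtrack({}, regions, colors, adjacent))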
Chap. 6 - Adversarial search: This chapter is mostly devoted to board games such as chess and checkers. But the fundamental concepts of multiple competing agents extend well to computer/console games of all kinds. Sec. 6.2, and especially the minimax algorithm, contains the essential underpinnings of game strategies. Later sections deal with efficiency and with additional variations and complications.
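Here is a minimal minimax sketch (Sec. 6.2) on a tiny invented game tree: MAX moves at the root, MIN replies, and the leaves hold utilities from MAX's point of view.

def minimax(node, maximizing):
    """Return the minimax value of a game-tree node."""
    if isinstance(node, (int, float)):   # terminal node: utility value
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# MAX chooses among three moves; MIN picks the worst leaf of each.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))   # MIN leaves MAX 3, 2, or 2; MAX takes 3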
Thursday, January 10th - second half of class
7.1, 7.2, 7.3, 7.8; skim 7.4 and 7.5
8.1, 8.2, 8.5; skim 8.3
9.1, 9.3, 9.6; skim 9.2 and 9.4
10.1, 10.2, 10.9
11.1, 11.2, 11.7
12.1, 12.3, 12.8; skim 12.2
Chap. 7 - Logic is the foundation for much of AI. The simplest form of logic, propositional logic, is presented here. The way to think about propositional logic is that any declarative sentence in English expresses a proposition that can be considered true or false, e.g., "The Empire State Building is in Alaska." (false). Rules of logic are developed from this, e.g., the truth value of the compound statement "Some apples are red AND the earth is a cube" is false, because the second proposition is false. In the sections you are to skim, study the notions of semantics, inference, equivalence, validity, and satisfiability, which are foundational. Understanding resolution will also require careful study, so start to get acquainted with it now.
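A tiny illustration of propositional semantics, using made-up proposition names: the truth value of a compound sentence is computed from the truth values its model assigns to the parts.

# A model assigns a truth value to each proposition symbol.
model = {"SomeApplesAreRed": True, "EarthIsACube": False}

# "Some apples are red AND the earth is a cube"
conjunction = model["SomeApplesAreRed"] and model["EarthIsACube"]
print(conjunction)    # False: one conjunct is false

# Implication P => Q is equivalent to (not P) or Q.
implies = (not model["EarthIsACube"]) or model["SomeApplesAreRed"]
print(implies)        # True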
Chap. 8 - First-order logic goes beyond the facts of propositional logic to introduce objects and relations, e.g., Shape(Earth, Sphere). "Shape" in this first-order logic example is analogous to a predicate in a programming language, a function that can have the value true or false. But they are different, since such a predicate in logic is not an executable function; it is a declarative statement. This is similar to propositional logic, in which a proposition corresponds to a statement and has a value; no concept of executability is implied. A collection of propositions or complex statements in first-order logic is best considered as a database or knowledge base that depends on external reasoning methods to construct proofs or to add new statements. This is a short chapter, so the material you are to read covers most of the basics of first-order logic, including an introduction to how to use it. The details of inference are handled in Chapter 9.
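A tiny illustration of this declarative view (the tuple representation here is my own, not the book's): facts sit in a knowledge base as data, and a separate routine answers queries about them; nothing "executes".

# Facts as tuples in a knowledge base; the second fact is invented.
kb = {("Shape", "Earth", "Sphere"),
      ("Shape", "Die", "Cube")}

def holds(predicate, *args):
    """True if the fact is in the knowledge base; the fact itself never runs."""
    return (predicate, *args) in kb

print(holds("Shape", "Earth", "Sphere"))   # True
print(holds("Shape", "Earth", "Cube"))     # False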
Chap. 9 - Inference in first-order logic is the most powerful and concise method of reasoning normally encountered in AI. But it is "brittle": it has difficulty dealing with time and its passage (Chap. 10), with beliefs (mental states), and, importantly, with uncertainty, in which our knowledge only allows us to say that something might be true or false, but we do not, or cannot, know for certain (Chaps. 13, 14, 15). Sec. 9.3 is useful, as it exhibits a readily understandable method for reasoning in first-order logic, forward chaining. Neither forward chaining (Sec. 9.3) nor backward chaining (Sec. 9.4) is a complete proof procedure. Even the most powerful method we will study, resolution (Sec. 9.5), can only attempt to prove a given statement; it cannot generate all the logical consequences that follow from a knowledge base.
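Here is a minimal forward-chaining sketch for propositional definite clauses; the first-order version in Sec. 9.3 adds unification on top of essentially this same loop. The facts and rules below are invented.

def forward_chain(facts, rules):
    """rules: list of (premises, conclusion) pairs. Fire rules to a fixed point."""
    inferred = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in inferred and all(p in inferred for p in premises):
                inferred.add(conclusion)   # rule fires, adding a new fact
                changed = True
    return inferred

facts = {"Rains", "Outside"}
rules = [({"Rains", "Outside"}, "GetWet"), ({"GetWet"}, "Cold")]
print(forward_chain(facts, rules))   # adds GetWet, then Cold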
Chap. 10 - Knowledge representation: It is vital in AI to build knowledge bases that represent what is known for a particular problem. Given such a knowledge base, we can apply various reasoning methods to answer queries. This can be done efficiently through the introduction of ontologies: for example, animals and plants share certain properties of all living things, such as being made up of cells containing DNA. (For the record, red blood cells in humans are a rare exception, containing no DNA.) Sec. 10.2 is a useful overview of the types of basic categories and objects used in knowledge representation.
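To illustrate why ontologies help, here is a tiny sketch in which categories inherit properties from their parents, so shared properties are stored once; the hierarchy and property names are invented.

# Each category names its parent and its own properties.
ontology = {
    "LivingThing": {"parent": None, "props": {"has_cells": True}},
    "Animal":      {"parent": "LivingThing", "props": {"moves": True}},
    "Plant":       {"parent": "LivingThing", "props": {"photosynthesizes": True}},
}

def lookup(category, prop):
    """Walk up the hierarchy until the property is found."""
    while category:
        node = ontology[category]
        if prop in node["props"]:
            return node["props"][prop]
        category = node["parent"]
    return None

print(lookup("Animal", "has_cells"))   # True, inherited from LivingThing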
Chap. 11 - Planning: This chapter gets us to the action aspects of agents. The treatment of actions in AI centers around the planning of actions, hence the title of this chapter, Planning. The basic task of planning is to come up with a set of actions that will result in moving from some initial state, e.g., you're at home in Massachusetts, to some final state, e.g., being at a friend's place in Washington, DC. The search for plans needs to be constrained to make it tractable, e.g., excluding a step such as climbing Mount Everest during your trip to DC. Sec. 11.2 presents the basic strategy of searching for a plan that achieves a goal.
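To make this concrete, here is a minimal sketch of planning as search over world states, with STRIPS-style actions (name, preconditions, add-list, delete-list). The travel actions and state fluents for the Massachusetts-to-DC example are my own invention, not the book's.

from collections import deque

# Invented actions: (name, preconditions, adds, deletes).
actions = [
    ("drive-to-airport", {"AtHome"}, {"AtAirport"}, {"AtHome"}),
    ("fly-to-DC",        {"AtAirport"}, {"InDC"}, {"AtAirport"}),
    ("visit-friend",     {"InDC"}, {"AtFriends"}, set()),
]

def plan(initial, goal):
    """Breadth-first search through states; returns a list of action names."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # goal conditions all hold
            return steps
        for name, pre, add, delete in actions:
            if pre <= state:                   # preconditions satisfied
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"AtHome"}, {"AtFriends"}))   # three-step plan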
Chap. 12 - In the real world, plans typically have to deal with time, schedules, and resources, covered in Sec. 12.1. Many planning tasks can take advantage of the hierarchical structure of a problem and deal with the various components of the problem in a somewhat separate manner (Sec. 12.2). E.g., you decide to fly to DC; that then produces a separate task: how to get from your home in Massachusetts to the airport.
Thursday, January 17th - first half of class
13.1, 13.2, 13.3, 13.8; skim 13.4 and 13.6
14.1, 14.2 to end of pg. 498, 14.5 to pg. 512, 14.7
15.1, 15.2 to the bottom of pg. 542, 15.6 (pgs. 568 and 569 only), 15.7
16.1, 16.2, 16.3, 16.5, 16.8
Skim 17.1, 17.7, 17.8
18.1, 18.2, 18.3, 18.6
19.1, 19.2, 19.6
Skim 20.1, 20.5 through pg. 743, 20.7, 20.8
21.1, 21.2 (pages 780 and 781), 21.6
Chap. 13 - Including uncertainty in the representation of knowledge and devising ways to reason about uncertain knowledge are some of the most important new directions in AI. The readings introduce probability and the all-important Bayes' Rule. You should certainly know the axioms of probability in Sec. 13.3. Start getting familiar with inference in Sec. 13.4 and especially Bayes' Rule in Sec. 13.6, which is the foundation of many systems today.
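As a quick worked example of Bayes' Rule (all the numbers here are invented for illustration): given a prior for a disease and the test's accuracy, we can compute the probability of disease given a positive test.

p_disease = 0.01                  # prior P(D), invented
p_pos_given_disease = 0.95        # sensitivity P(+|D), invented
p_pos_given_healthy = 0.05        # false-positive rate P(+|~D), invented

# Total probability of a positive test.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' Rule: P(D|+) = P(+|D) P(D) / P(+)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))   # about 0.161: surprisingly low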
Chap. 14 - The important property to understand about Bayesian networks, e.g., Fig. 14.2, is that they focus on the causal relations and ignore connections between items that are not causally connected. This enables them to represent a potentially enormous number of joint probability values with a small number of conditional probability tables (CPTs). The direct sampling methods of Sec. 14.5 are simple to implement and work well for modest-sized problems; students in past editions of this course have used them as the implementation base for interesting projects, such as using probabilities to simulate reasoning by game characters, whether cooperating with one another or opposing.
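To make Sec. 14.5 concrete, here is a minimal direct-sampling sketch for an invented two-node network, Rain -> WetGrass; the CPT numbers, and the rejection step used to answer a query from the samples, are my own illustration, not the book's code.

import random

def sample_once():
    """Sample the network in topological order: parent first, then child."""
    rain = random.random() < 0.3                 # invented prior P(Rain) = 0.3
    p_wet = 0.9 if rain else 0.1                 # invented CPT for WetGrass
    wet = random.random() < p_wet
    return rain, wet

# Estimate P(Rain | WetGrass) by keeping only samples where WetGrass holds.
samples = [sample_once() for _ in range(100_000)]
wet_samples = [rain for rain, wet in samples if wet]
print(sum(wet_samples) / len(wet_samples))   # near 0.27/0.34, about 0.79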
Chap. 15 - Probabilistic reasoning over time: This chapter is reasonably difficult, but the bit of reading assigned will give you some insights into how probabilistic reasoning can be applied to processes occurring over time. The brief section on speech recognition gives you a bit of insight into this common application of what used to be considered AI, but has now been absorbed into the mainstream.
Chap. 16 - Decision making adds utility theory to probabilistic reasoning. That is, decisions are based both on the probability of something occurring, and on what it's worth to you. For example, if an inexpensive flight has a non-negligible probability of causing you to miss getting to someone's wedding on time, you might prefer a more expensive flight with a smaller probability of being late. Human decisions in this regard are often illogical (page 592), a fascinating topic in cognitive science. Section XII of the Levitin book is devoted to Decision Making and starts off with a paper by the famous investigators, Tversky and Kahneman. (Kahneman received the Nobel Prize in Economics for this and related work, but Tversky had passed away some six years before the prize was awarded. Kahneman is married to Anne Treisman, whose work on visual perception I have followed for years.) Sec. 16.2 can help you understand the concept of lotteries, which is not trivial at first blush. It might be helpful to write down some simple examples to see what lotteries describe. For example, on page 588, you could have A=eat, B=go-to-movie, C=shop, and p=0.8. You can imagine each of A, B, and C as "prizes", to be won in a lottery (pg. 586). Utility functions, Sec. 16.3, are reasonably intuitive. Decision networks, as presented in Sec. 16.5, are equally intuitive.
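To see how probability and worth combine, here is a small expected-utility calculation for a flight choice like the one above; the probabilities and utility numbers are invented for illustration.

def expected_utility(lottery):
    """lottery: list of (probability, utility) pairs."""
    return sum(p * u for p, u in lottery)

# Cheap flight: invented 25% chance of missing the wedding (very bad).
cheap = [(0.75, 100), (0.25, -500)]
# Expensive flight: almost surely on time, but the prize is smaller (cost).
pricey = [(0.98, 60), (0.02, -500)]

print(expected_utility(cheap))    # -50.0
print(expected_utility(pricey))   # 48.8: the pricier flight wins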
Chap. 17 - Often, decisions involve sequential problems, e.g., navigating during a trip. A general solution that specifies what to do in any state of a particular class of problems is called a policy. In Sec. 17.1, pay special attention to the material through pg. 616.
Chap. 18 - Learning: Given the huge amounts of information that are being gathered and stored for analysis, there is more interest than ever in learning in AI (commonly called Machine Learning). An agent can steadily improve its performance by learning from the success or failure of past decisions and actions. The decision tree in Fig. 18.2 is a fine example of a decision system that can be built by analyzing scored examples; this is called supervised learning. Sec. 18.3 is of variable difficulty; read the parts you can. In Sec. 18.3, the information-theoretic approach to choosing attributes and the treatment of avoiding overfitting are both interesting and important for machine learning systems, even beyond the decision tree approach discussed in this section.
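As a taste of the information-theoretic attribute choice in Sec. 18.3, here is a minimal sketch that computes information gain on a tiny invented yes/no dataset; the attribute names and labels are made up, and a decision-tree learner would simply split on the highest-gain attribute.

import math

def entropy(labels):
    """Entropy in bits of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(v) for v in set(labels)))

def info_gain(examples, attr, labels):
    """Entropy reduction from splitting the examples on one attribute."""
    gain = entropy(labels)
    for value in set(e[attr] for e in examples):
        subset = [l for e, l in zip(examples, labels) if e[attr] == value]
        gain -= len(subset) / len(labels) * entropy(subset)
    return gain

examples = [{"raining": True,  "hungry": True},
            {"raining": True,  "hungry": False},
            {"raining": False, "hungry": True},
            {"raining": False, "hungry": False}]
labels = ["stay", "stay", "go", "go"]            # 'raining' predicts perfectly
print(info_gain(examples, "raining", labels))    # 1.0 bit
print(info_gain(examples, "hungry", labels))     # 0.0 bits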
Chap. 19 - This chapter on knowledge in learning contains difficult material, so we'll cover only Secs. 19.1, 19.2, and the Summary in this overview. Sec. 19.1 is interesting from the point of view of theory, but do not expend a lot of energy trying to understand it all. Sec. 19.2 is short, useful reading.
Chap. 20 - This chapter on statistical learning is not easy either, so I've suggested a few snippets for your overview. There are important applications for learning that have to deal with noisy data, e.g., learning to recognize speech, with all the variations among speakers and in the "identical" utterances of a single speaker. Economic data normally needs statistical approaches too.
Chap. 21 - This chapter is devoted to how agents learn, not from a fixed set of data, but from reinforcement received during or at the end of a task. Systems such as this are of great practical importance. Study Fig. 21.1 and the notes about it before diving into Sec. 21.2; they will give you an idea of what that section covers.
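For a flavor of reinforcement learning, here is a minimal Q-learning sketch: the agent improves its action-value estimates from rewards alone. The tiny two-state world, its rewards, and the parameter values are all invented for illustration (the book develops these ideas much more carefully).

import random

ALPHA, GAMMA, EPISODES = 0.5, 0.9, 500
Q = {}   # (state, action) -> estimated long-term value

def step(state, action):
    """Invented dynamics: from 'start', moving 'right' reaches the goal."""
    if state == "start" and action == "right":
        return "goal", 10        # reward for reaching the goal
    return "start", -1           # small cost for wandering

for _ in range(EPISODES):
    state = "start"
    while state != "goal":
        action = random.choice(["left", "right"])   # explore at random
        nxt, reward = step(state, action)
        best_next = max(Q.get((nxt, a), 0) for a in ["left", "right"])
        old = Q.get((state, action), 0)
        # The Q-learning update rule.
        Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
        state = nxt

print(Q)   # 'right' from 'start' should score near 10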
Thursday, January 17th - second half of class
22.1, 22.2, 22.3 through the top of pg. 800 (skim the rest of 22.3), 22.6, 22.7, 22.9
23.1 pgs. 834 and 835, 23.2, 23.3, 23.5
24.1, 24.2, 24.3, 24.4 (focusing on the figures), 24.7
25.1, 25.2, skim 25.3 and 25.4, 25.8, 25.9
Read all of Chap. 26 (Secs. 26.1 through 26.4)
Read all of Chap. 27 (Secs. 27.1 through 27.4)
Chap. 22 - By "communication", the book means natural language, such as English. Communication can, and often does, involve the visual modality too, as in television, or when one person shows others certain things, procedures, and so forth. A major component of my own research is the study of diagrams, which can communicate complex data, structures, and relationships. Typically the language and visual modalities work together to communicate information. Some argue that a proper study of language is not even possible without grounding it in the world, that language as pure symbolism is not a proper container of meaning. Certainly, children learn language in such a grounded way. Secs. 22.1 and 22.2 are a useful introduction to the study of language from an AI perspective. Secs. 22.6 and 22.7 introduce important topics briefly and readably. A reasonable understanding of algorithms will help you get through Sec. 22.3; it's all about data structures and the reduction of computational complexity.
Chap. 23 - Language is so varied and complex that probabilistic methods have been developed to analyze language by machine. The latter parts of Sec. 23.1 draw on concepts we haven't covered, so you only need to read the two pages assigned. Secs. 23.2 and 23.3 cover the practical uses of natural language analysis as applied to information retrieval and information extraction. Information retrieval and extraction for images and diagrams is not nearly as well developed or understood as for language - I do research on these problems.
Chap. 24 - The implementation of systems for visual perception is a complex technical problem, as reflected in this chapter. That is why I suggest that you focus primarily on the figures. (Figures are certainly an appropriate way to describe visual perception.) There's math to study here, but it is helped by the tie-in to the figures.
Chap. 25 - Robotics has to deal with space and motion, adding complex technical details beyond the standard symbol manipulation strategies of much of AI. Robotic perception is particularly messy, since robots often have to navigate through complex environments. Game AI is very much a collection of strategies to build "robots", the non-player characters (NPCs) in a game. Though a lot of tricks can be employed to reduce the complexity of the problems and solutions, the more realistic a game is, the more AI, including robotics-style AI, has to be included. Our textbook says little, if anything, about computer and console games, but I will discuss them throughout the course.
Chap. 26 - Everyone wonders about the basic problem of how machines might be made to act intelligently. Such systems will play greater and greater roles in the future. This chapter discusses a number of basic issues surrounding these questions.
Chap. 27 - This is a summary chapter, looking back at what has been discussed in the book and forward to what might be.
Whew! Quite a whirlwind tour.