icosilune

Joseph Weizenbaum: Computer Power and Human Reason

[Readings] (08.29.08, 3:51 pm)

Overview

Weizenbaum’s influential 1976 text reflects a growing concern over the philosophical ramifications of artificial intelligence. Weizenbaum is specifically concerned with the interpretation of his Eliza program, and with the way its simplistic processing was mistaken for the wisdom of an actual human psychotherapist. Weizenbaum argues that this is indicative of a larger problem, wherein the application of science and technology (and computer science specifically) reduces humans to the status of machines. Weizenbaum argues against the development of AI, not because AI cannot achieve its goals, but because AI should not achieve its goals.

Notes

The computer is a vehicle that is making the world into a computer. Weizenbaum’s initial concern seems very similar to Heidegger’s question concerning technology. Weizenbaum also interacted with Daniel Dennett in fleshing out his ideas.

Eliza is not embodied but begins with cultural conceptual knowledge (or at least pretends to have such knowledge). Weizenbaum’s shocks at the public reception of Eliza: 1) Model of therapy (that could be made artificial). 2) Anthropomorphization of the computer. 3) Overcrediting of limited text processing power. (p. 5)
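The "limited text processing power" point becomes concrete in code. The sketch below is my own minimal ELIZA-style responder (hypothetical, not Weizenbaum's actual DOCTOR script): a handful of keyword patterns plus pronoun "reflection" is enough to produce superficially responsive therapy-talk.

```python
import re

# Pronoun "reflection": swap first- and second-person words so the
# echoed fragment sounds like a response rather than a repetition.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Keyword rules, tried in order. Each captures a fragment to echo back.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap person-words in the captured fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Return a canned response for the first matching keyword rule."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # content-free default when nothing matches
```

The program understands nothing; it only rearranges the user's own words. That users nonetheless credited such a mechanism with therapeutic wisdom is precisely the shock Weizenbaum describes.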

Weizenbaum’s concerns and questions: 1) Man recognized as/equated to clockwork. 2) Role of instrumentality (Freud reference about man becoming like a god with his prosthetics). 3) Human trust and autonomy (machines diminish autonomy through dependence). What is the retrospective modern audience’s take on these? (pp. 8-9) Weizenbaum sees a dogmatic, rationalist perspective in colleagues and students. These culturally embedded perceptions persist to this day. (p. 10)

Consider the Judaic tradition of the contract between God and man. It depends on free action and decision-making by both parties. With the rationalistic perspective, truth is equated to provability. Science is a drug and a slow-acting poison. (p. 12)

Behaviorist reference to B.F. Skinner: human values are illusory. Consider the comparison to the embedded nature of value in linguistic/semiotic systems. (p. 14) Weizenbaum introduces theatre as a school: “The Greek and Oriental theatres, the Shakespearian stage, the stages peopled by the Ibsens and Chekhovs nearer to our day–these were schools. The curricula they taught were vehicles for understanding the societies they represented.” Weizenbaum is looking at this from a cultural studies perspective. (p. 16)

Weizenbaum discusses tools as extensions of people. Tools are imaginative extensions, but limited by imagination. A tool informs and alters its user’s perception of the world. (p. 18) The computer has closed some doors while it opened others. Weizenbaum’s fear of the computer as a tool is technologically determinist in nature. Technology changes the people who use it. People have become over-confident in the computer’s ability to solve problems, and under-confident in themselves. (p. 38)

Weizenbaum discusses the power of machines as regular, lawful entities. “Machines, when they operate properly, are not merely law abiding; they are embodiments of law.” We place our faith in the law of the computer. We defer our laws and knowledge to the law of the computer. (p. 40) He uses the logic of games to describe state and rules in systems. Weizenbaum uses some abstract examples: greed is not a rule, but an implicit law embedded in Monopoly. Other games may be more abstract, but still encode rules and embody laws. (p. 44)
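The "embodiment of law" idea can be made literal in a program. Below is a hypothetical mini-game of my own devising (not one of Weizenbaum's examples): the rules are not advice the player may break but code that makes illegal moves impossible.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class State:
    """Full game state: position on a track, and cash to pay tolls."""
    position: int
    cash: int

def legal_moves(state: State) -> list:
    """The rules: advance 1 or 2 squares, paying 1 unit of cash per square."""
    return [step for step in (1, 2) if state.cash >= step]

def apply_move(state: State, step: int) -> State:
    """Transition function. The law is embodied: an illegal move
    cannot produce a new state at all."""
    if step not in legal_moves(state):
        raise ValueError("move violates the embodied law of the game")
    return replace(state, position=state.position + step, cash=state.cash - step)
```

Nothing in the data says "greed" or "fairness", yet the transition function silently enforces a law on every player, which is Weizenbaum's point about implicit laws embedded in games.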

Some complexity theory here: also, the matter of the translatability of X into an algorithm for representing X. Translatability is directly related to simulation. This relates to the notion of a formal description, an “effective procedure”, but glosses over ideas of subtext or nuance. “Can anything we may wish to do be described in terms of an effective procedure?”: No. (p. 65) Natural language encodes grammatical validity. (p. 69)
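An "effective procedure" is just a finite, unambiguous recipe guaranteed to terminate. A textbook instance (my illustration, not from Weizenbaum) is Euclid's algorithm:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a canonical effective procedure.
    Each step is mechanical and unambiguous, and the loop must
    terminate because the remainder strictly decreases."""
    while b != 0:
        a, b = b, a % b
    return a
```

The contrast carries Weizenbaum's point: computing a gcd translates perfectly into such a procedure, while recognizing subtext or nuance in an utterance resists any comparably complete formal description.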

Baudrillard reference here: the representation is equated to the subject. Semiotic simulation implies that the boundary between the two is blurred. (p. 106) The process of programming, of defining a model, is two-sided: writing/programming reveals flaws in our logic; we cannot conceal them via ambiguity as in natural language. Programming may also be used to explore ideas and come to understand a subject. (p. 108)

Computers are physically embodied, but play at ideas in a purely cognitive realm. The computer does not understand what it is doing. The underlying references of its reasoning are lost on it. (p. 112) The solitary power of the lone programmer: the comparison to compulsive gambling is reminiscent of Alison Adam’s discussion of the male programmer mating with the female program. “The computer programmer, however, is a creator of universes for which he alone is the lawgiver. So, of course, is the designer of any game. But universes of unlimited complexity can be created in the form of computer programs. Moreover, and this is a crucial point, systems so formulated and elaborated act out their programmed scripts. They compliantly obey their laws and vividly exhibit their obedient behavior. No playwright, no stage director, no emperor, however powerful, has ever exercised such absolute authority to arrange a stage or a field of battle and to command such unswervingly dutiful actors or troops.” (p. 115)

Weizenbaum draws a slick comparison between the compulsive programmer and the compulsive gambler. Both blind themselves to realistic laws and prefer to live in their own artificial domains. (pp. 124-125)

Weizenbaum explores the ambitions of the AI project, namely to extend to and handle any problem that could be solved by a human. At the least, AI aims at “nothing less than to build a machine whose linguistic behavior, to say the least, is to be equivalent to that of humans.” (p. 138)

Weizenbaum explores the differences between theories and models. A theory is limited by its textual nature. Models may be said to satisfy theories. Computers enable an alternate linguistic expression of theory via programming. (p. 144) Computer models enable immediacy and responsiveness in the validation of a theory, but also let pass the indeterminability of false premises, and the concealment of fault. (p. 152)

AI attempts to replicate how people solve problems. Heuristics and problem-solving methods are protocols for understanding the subjects to be modeled. This can reduce all problems to being approached by one possibly faulty model. This is a description of Newell and Simon’s work, which we know is somewhat gender biased (it is more about how young male college students solve problems). (p. 169) The behaviorist model treats the human as a black box. Skinner refuses to look inside the black box, whereas AI theory sees the inside of the black box as a computer to be replicated. (p. 175)

The transformation of problems into technical ones: the application of the FPS machine and objective treatment to human subjects. (p. 180) The method of scripting the interactor to some extent preps and pre-programs the human interactor. Compare with the Alice chatterbot movement. (p. 188)

AI suffers from a confusion of intelligence with IQ, and a neglect of other modes of intelligence. (p. 205) The sum here is a deeper emphasis on the depth of human knowledge and the extreme limitation of the computer. The limits of computers should be thought of in terms of “oughts”. (p. 226)

Most successful programs are built on heuristics, not theory. This is notable in games such as SimCity, The Sims, etc. Computer programs are based on strategies that seem to work under most unforeseen circumstances, as opposed to strong theory. (p. 232) The capacity of the computer, through its closedness, can re-create history. Like Baudrillard, but applied. When the computer is applied to war, it introduces tremendous dehumanization and distance between the ones making decisions and the ones in battle. Weizenbaum implicates the computer as a device that led to terrible waste of life and destruction in Vietnam. (p. 239) A common viewpoint holds a persistent confidence in, and sense of inevitability of, machines. Professor J. W. Forrester: our social systems are no utopias. (p. 247)

Weizenbaum argues: “But I argue that rationality may not be separated from intuition and feeling. I argue for the rational use of science and technology, not for its mystification, let alone abandonment. I urge the introduction of ethical thought into science planning. I combat the imperialism of instrumental reason, not reason.” (p. 256)

What is a human voluntary act? Is voluntary nature illusory? Machines cannot be voluntary since they follow rules, but what is the human voluntary process? If we believe Herbert Simon, we are non-voluntary, merely reacting to our environments. (p. 260) “An individual is dehumanized whenever he is treated as less than a whole person.” But how would it be possible to treat someone as whole? Through technology or otherwise? (p. 266) The final ought of computer science projects: “… there are some human functions for which computers ought not to be substituted. It has nothing to do with what computers can or cannot be made to do. Respect, understanding, and love are not technical problems.”

Reading Info:
Author/Editor: Weizenbaum, Joseph
Title: Computer Power and Human Reason
Type: book
Tags: specials, media theory, ai
