Use of Artificial Intelligence in Organizational Settings: An Evaluation of the Implementation, Problems, and Advantages
The managers, engineers, programmers, and technical communicators actually working with knowledge and information want something concrete they can work with to help their organizations save time and money through the use of artificial intelligence. Dreyfus (1997) identifies four traditional areas for applied AI research and development: game playing, language translation, problem solving, and pattern recognition.
Unfortunately, the evolution of artificial intelligence as a practical field has been somewhat disappointing. Consider the stanza of poetry shown below in Figure 1.
|Figure 1: An early poem by RACTER (Güzeldere and Franchi online, 2002)|
|“Awareness is like consciousness. Soul is like spirit. But soft is not like hard and weak is not like strong. A mechanic can be both soft and hard, a stewardess can be both weak and strong. This is called philosophy or a world-view.”|
The poem shown in Figure 1 provides a thought-provoking analysis of some abstract ideas such as awareness and our general state of consciousness. It mentions the ethereal qualities of our souls, and then seems to wander off subject to discuss random issues not related to these abstract ideas. In fact, by the end of the poem, the reader is somewhat bewildered as they try to piece together what exactly the author is trying to say. The bad craftsmanship of this poem is not particularly interesting, nor are the metaphors and figurative language supporting this rambling verse. What is meant to be interesting about this poem is that it was written by a computer. RACTER is an “artificially insane” computer program (Güzeldere and Franchi online, 2002). Whenever software programs enter into the creative realm to do things like converse with humans, write poetry, compose songs, and go insane, we are looking at an example of applied AI.
As human beings we cannot help but be fascinated by what applied AI has to offer. The possibilities of intelligent machinery are constantly being regurgitated in literature, film, and modern art. Most recently, a resurgence of popular interest in AI has emerged in the movie theatres with films such as AI: Artificial Intelligence, The Matrix and its two sequels, and Bicentennial Man drawing in large numbers to the box office.
Historically speaking, there were many important social and technological events that led up to our modern ideas about (and implementations of) applied artificial intelligence. In 1818, Mary Shelley published Frankenstein, which remains one of the most cautionary tales of the relationship between man and machine. The year 1835 brought about the invention of the electric relay by Joseph Henry, which allowed electrical current to be controlled by switching the relay into the on or off position. This coupled with George Boole’s development of symbolic and binary logic in 1847 formed the basic foundation for computational logic.
Karel Capek coined the term “robot,” which in Czech means “worker,” in his play R.U.R. (Kantrowitz, 2002). Following shortly thereafter was the pioneering work of John Von Neumann, who is considered the father of today’s stored-memory computer architecture. One of Von Neumann’s most significant contributions to the field of applied AI was his construction of an algorithm, or ordered process, for minimizing losses and maximizing gains. This algorithm was aptly named the minimax theorem. The minimax theorem was arguably the first real application of weak artificial intelligence. Rule-based programming would allow computers to “play” games of strategy with a human opponent, where the objective would be to maximize the points for the computer and eventually win the game. Von Neumann’s theory was later applied to computerized versions of popular household games like Checkers and Backgammon.
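Von Neumann’s minimax rule can be sketched in a few lines of code. The game tree below is a made-up illustration rather than any particular game: leaves hold payoffs for the maximizing player, and the two players alternate, each choosing the child that is best from their own point of view.

```python
def minimax(node, maximizing=True):
    """Return the best achievable payoff from a game-tree node.

    Leaves are numeric payoffs for the maximizing player; internal
    nodes are lists of child nodes. The maximizer picks the child with
    the highest value, the minimizer the lowest -- Von Neumann's
    "minimize losses, maximize gains" rule.
    """
    if not isinstance(node, list):  # leaf: a terminal payoff
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A tiny two-ply game: the opponent replies after each of our three moves.
tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(tree))  # 3 -- the best payoff the maximizer can guarantee
```

Each inner list is the set of replies available to the minimizing opponent, so the maximizer's guaranteed payoff is the largest of the opponents' worst-case answers.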
The first “neural-network architecture for intelligence” was proposed by Warren McCulloch and Walter Pitts in 1943 (Kantrowitz, 2002). This groundbreaking research not only introduced the finite-state machine as a model for computation (McCulloch and Pitts online, 2002), but it also planted the idea that the computation done by the human brain could somehow be codified into a mathematical language and understood. This idea eventually came to be known as the symbol-system hypothesis. Unfortunately for researchers, they would soon discover that the number of states available to the human mind was so vast as to be impossible to predict with mathematical certainty.
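The finite-state machine model mentioned above can be illustrated with a small sketch. The machine below is a standard textbook example (an even-parity checker), not anything drawn from McCulloch and Pitts’ paper: the system is always in exactly one of a finite set of states, and each input symbol moves it deterministically to the next state.

```python
def run_fsm(transitions, start, accepting, inputs):
    """Simulate a deterministic finite-state machine.

    `transitions` maps (state, symbol) -> next state. The machine
    occupies exactly one of a finite set of states at any moment and
    accepts the input if it halts in an accepting state.
    """
    state = start
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state in accepting

# A two-state machine that accepts bit strings with an even number of 1s.
T = {("even", "0"): "even", ("even", "1"): "odd",
     ("odd", "0"): "odd", ("odd", "1"): "even"}
print(run_fsm(T, "even", {"even"}, "1001"))  # True: two 1s is even
```

However simple, this is the model of computation the 1943 paper tied to networks of idealized neurons.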
In 1950, Alan Turing, another computer science pioneer and founding father, published “Computing Machinery and Intelligence.” This article built upon his earlier theories addressing the procedure by which computers could be used to “handle symbols such as humans do in the process of thinking” (Smith, 1993). The term “artificial intelligence” was coined in 1956 by John McCarthy at a Dartmouth College conference and marked the point at which AI began to be considered a distinct entity from the information sciences (Buchanan, 2002). In 1958, Von Neumann’s famous article comparing the human nervous system to a digital computer was published. The MIT Artificial Intelligence Lab was founded in 1959 by Marvin Minsky and John McCarthy (Generation5 online, 2002). Minsky is considered by many to be the father of AI.
After the 1950s, many researchers began to consider the emergence of artificial intelligence technologies as an actual possibility. Despite these high hopes, fifty years of progress have still not delivered these technologies to our society. Our current technologies have not brought us any closer to true artificial intelligence any more than they have delivered the paperless office that has been hyped for the past few decades. As Fred Tonge writes, “… there is a large difference between saying that some accomplishment ‘ought to’ be possible and doing it. Too often, when some interesting behavior is produced, the common reaction is, ‘So what. What’s his name (was it Turing, or Jules Verne, or Isaac Asimov?) suggested that years ago’” (Tonge, 2002). With our modern technologies, however, we are moving into new and alternative domains for the processing of encoded instructions. These new domains may give the field of AI the breakthrough it needs in order to produce more compelling examples of artificial life and intelligence.
The late 1960s proved to be a monumental period for research in artificial intelligence. In 1966, Joseph Weizenbaum was working in the MIT AI Lab when he created the most famous AI application yet, a small program named ELIZA. ELIZA was a natural language processing program that could converse with users based on a script which gave the program a set of rules to follow for different types of conversations. DOCTOR, which was ELIZA using a Rogerian psychotherapy script, soon became famous around the world for listening to people’s problems and offering advice that seemed to show reasoning abilities within the program. In reality, though, ELIZA was “based on very simple pattern recognition, based on a stimulus-response model” (Wallace, 2003). Weizenbaum’s contribution to AI was especially unique because he “paid no less attention to the moral aspects of AI than to the research itself” (Generation5 online). Weizenbaum saw computers as “tools to expedite our daily lives” and did not believe in putting the “technical advances of AI above the ethics” (Generation5 online).
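ELIZA’s stimulus-response model can be suggested with a toy sketch. The rules below are invented for illustration and are far simpler than Weizenbaum’s actual DOCTOR script, but they show the basic mechanism: match a pattern in the user’s sentence, then echo a fragment of it back inside a canned response.

```python
import re

# Each rule pairs a regex "stimulus" with a response template that reuses
# part of the user's own sentence. These rules are illustrative only.
RULES = [
    (re.compile(r"I need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"I am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # default when no rule fires

print(respond("I am worried about my thesis"))
# How long have you been worried about my thesis?
```

The apparent “reasoning” is nothing more than pattern substitution, which is exactly the point Wallace makes about ELIZA above.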
By the late 1960s, we had computers that could play checkers well enough to challenge strong human players. In 1974, Lewis Thomas published his article “Computers,” proposing that true AI could only be found in a system of computers and not in a solitary supercomputer-type machine. In 1975, Marvin Minsky “published his widely-read and influential article on Frames as a representation of knowledge, in which many ideas about schemas and semantic links are brought together” (Buchanan, 2002).
The most recent advancements in AI (gaining popularity in the late 1990s) have occurred in the areas of autonomous agents and ubiquitous computing. Autonomous agents are knowledge-based systems that perceive their environment and act on that environment to achieve one or more goals (Tecuci, 1998). Ubiquitous computing devices are “computers on the go.” Using these technologies, firefighters and paramedics literally wear computers in order to have access to large amounts of information without losing their mobility (Zimmerman, 2001).
To define applied artificial intelligence operationally, we must first define what is meant by intelligence. Intelligence can be defined as “a property of a system, a judgment based on observation of the system’s behavior and agreed to by ‘most reasonable men’ as intelligence” (Tonge, 2002). Using this definition, “artificial intelligence” becomes “that property as observed in non-living systems” (Tonge, 2002). So, in order to talk about a computer or a mechanical device as having artificial intelligence, we would need to identify observable properties that would convince most reasonable people that it is acting intelligently. Of course this idea is based on one key assumption, that “intelligence is not restricted to living systems” (Tonge, 2002). For those believing in the possibility of artificial intelligence, this is obviously an assumption they are forced to make.
Artificial intelligence as an academic interest emerged sometime in the mid-20th century following the publication of Turing’s famous article “Computing Machinery and Intelligence” in 1950. In this article, Turing introduced and explained the Turing Test, which is mentioned in virtually every book or article ever published about computer-based intelligence. McCarthy’s most recently revised definition of the term describes artificial intelligence as “the science and engineering of making intelligent machines, especially intelligent computer programs. Use of artificial intelligence is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable” (McCarthy online, 2003).
While a more precise definition of artificial intelligence is debated among researchers, it is generally agreed that AI studies are concerned with various information processing systems. Some claim that AI should only be concerned with artificial systems of information, such as those found in computers or other electro-mechanical devices. Others, like Aaron Sloman (2003), believe that the field should consider natural information processing systems as well. Sloman identifies several principles related to AI, including studies of the ways in which “knowledge is acquired and used, goals are oriented and achieved, information is communicated, collaboration is achieved, concepts are formed, and languages are developed” (online 2003).
To understand the concept of artificial intelligence, it is important to differentiate between true artificial intelligence and artificial pseudo-intelligence. Most experts in the field use the dichotomy between “weak” and “strong” AI to separate these theories. Strong AI theory operates under the assumption that “computers can be made to think on a level (at least) equal to humans” (Crabbe and Dubey online, 2002). We have yet to see an actual application of strong AI in our technology. Weak AI, on the other hand, has a dramatic presence in modern technology. Expert systems found in search engines like Google use rule-based systems to narrow down information and find the most pertinent “hits” for a given series of keywords. Voice recognition software uses intelligent algorithms to find the most likely match for a spoken phrase. Computer games use weak AI in “a combination of high-level scripts and low-level efficiently-coded, real-time, rule-based systems” (Crabbe and Dubey online, 2002). Software programs frequently incorporate AI technologies because “learning is essential for a software agent; no software developer can anticipate the needs of all users” (Lieberman and Maulsby, 2002). In other words, weak AI applications make existing computer programs more useful, convenient, and speedy. What weak AI applications do not do, however, is think creatively and critically in the same way that humans can when processing information.
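The rule-based narrowing that such systems perform can be hinted at with a minimal sketch. Real search engines apply vastly richer ranking rules; this keyword count, with invented example documents, only shows the weak-AI principle of mechanically filtering and ordering “hits.”

```python
# Score each document by how many query keywords it contains and
# return the best "hits" first -- a deliberately crude ranking rule.
def rank(documents, keywords):
    keywords = [k.lower() for k in keywords]
    scored = []
    for doc in documents:
        words = doc.lower().split()
        score = sum(words.count(k) for k in keywords)
        if score:  # drop documents with no matching keywords at all
            scored.append((score, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored]

docs = ["weak AI powers search engines",
        "strong AI remains hypothetical",
        "search engines rank pages with rules"]
print(rank(docs, ["search", "engines"]))
```

No understanding of the documents is involved, which is precisely the distinction drawn above between weak AI and human information processing.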
So, as a society, we longingly wait for the wonders of strong AI to materialize while we are already immersed in the influences of weak AI.
Richard Lanham writes in his thoughts about artificial life: “Evolution, human and animal, actual and potential, is being charted by a field of computer-based technology called ‘artificial life.’ Into all these interactive environments the literary imagination, the fictional impulse, enters vitally. The personal computer has proved already to be a device of intrinsic dramaticality” (1993).
In other words, if we believe what Lanham has to say, we will inject our own sense of drama and humanity into these virtual worlds. He suggests that our own creative energies are required to put the “life” into “artificial life” and thus these energies are needed to encourage advancements in this field. From this viewpoint, artificial intelligence will not replace humanity but will act more in accordance with the notion of Derrida’s supplement: it is both an extension to and a replacement of our current notion of humanity.
The purpose of this dissertation is to examine the implementation of AI in organizational settings, the barriers and limitations it faces, and the advantages it offers business corporations. For this purpose, an exploratory, descriptive analysis will be presented covering a wide range of AI perspectives, as indicated in the title of the dissertation.
Further, the researcher aims to determine whether business corporations are adequately aware of the advantages and benefits of AI, a critical factor for the proper implementation of artificial intelligence in organizational settings. The other purpose of this dissertation is to identify any problems or barriers encountered by end users during the implementation and use of artificial intelligence. The findings of this study are expected to significantly advance understanding of how the use of artificial intelligence can improve the performance of routine workplace tasks through automated systems.
Without properly evaluating the usability of artificial intelligence, organizations cannot ascertain whether they are using the latest computer technology for an efficient and cost-effective system of handling work. Without conducting a usability study in the field of AI, management has no way of knowing whether its employees are realizing the full potential of artificial intelligence for completing their work quickly and effectively.
The specific objectives of this study are:
- To examine the implementation of artificial intelligence in the workplace
- To examine the benefits and advantages of AI for business organizations
- To investigate the barriers to its implementation
An evaluation of the implementation and benefits of artificial intelligence in the business setting will help us understand its true benefits and advantages, and will also help us understand the potential barriers to AI in our business processes. This evaluation is vitally important for further developments in the field of AI, which has promised to change our lives dramatically.
Knowledge management (KM) is a broad term covering many different fields of research and areas of development within various organizations across the world. Knowledge management techniques have been programmed into applied tools in many different forms and variations, and they are often classified as artificial intelligence (AI) tools and touted as evidence of progress in that field. Unfortunately, these techniques are difficult to define because of the problems involved in representing knowledge, which of course must be done before knowledge can be managed. This task is known as knowledge representation (KR), while the process of gathering and collecting knowledge is referred to as knowledge acquisition (KA). As Tim Berners-Lee et al. (2004) explain, traditional forms of “knowledge representation, as this technology is often called, is currently in a state comparable to that of hypertext before the advent of the web: it is clearly a good idea, and some very nice demonstrations exist, but it has not yet changed the world” (Berners-Lee et al. “What the Semantic Web Can Represent” online).
The first step in any knowledge management process is to collect data. Data are “facts collected on an entity” (Desouza, 2002) or “observations of states of the world” (Davenport, 1997). These data must then be somehow filtered or processed in order to become information, which can be represented as processed data put into some context (Desouza). Thus the relationship between data and information can be expressed as follows: Information = Context [Processed Data] (Desouza).
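Desouza’s formula can be made concrete with a toy example; the temperature readings and the context string below are invented purely for illustration.

```python
# Information = Context [Processed Data]:
# raw readings (data) are processed (averaged here) and then wrapped
# in a context that tells a reader what the number means.
raw_data = [21.0, 22.5, 21.5, 23.0]        # facts collected on an entity

processed = sum(raw_data) / len(raw_data)  # the processing step

context = "average office temperature in degrees Celsius"
information = f"{context}: {processed:.1f}"  # context applied to processed data

print(information)
# average office temperature in degrees Celsius: 22.0
```

The bare list of numbers is data; only once it is processed and placed in context does it become something a decision-maker can use.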
Knowledge, then, is the apex of useful growth for any collected data. If data is considered as the fundamental building block of the data/information/knowledge pyramid, then knowledge is the purest form that resides at the pinnacle of this structure. Thomas Davenport defines knowledge as “information with the most value (which) is consequently the hardest form to manage. It is valuable precisely because somebody has given the information context, meaning, a particular interpretation; somebody has reflected on the knowledge, added their own wisdom to it, and considered its larger implications” (1997). In a storytelling mechanism, such knowledge can be found in one or more stages of the story submission and retrieval process. For example, knowledge can be encapsulated into user-submitted stories at the beginning stage by adding descriptions of a user’s experiences and that particular user’s reactions to those experiences into a story. Likewise, it can emerge as an end product when a new user retrieves a story and adds his or her own life experiences and interpretations to that story. A helpful analogy for this process is the Internet forum or bulletin board, where one user starts a thread related to a certain topic and then additional users add their own thoughts, feelings, and suggestions in an asynchronous fashion. A third party can then come along at any point and make an informed decision about an issue or product simply by reading the thread of discussion from start to finish.
The idea behind knowledge management is to develop a set of processes that make it easier for humans to work with the vast amounts of data that bombard us from all angles of our lives, and to transform this data into purposeful, context-specific information. Knowledge management can therefore mean a variety of things to a variety of different people from different professions. Corey Wick explains,
It is well documented that there is a multitude of seemingly divergent definitions of knowledge management. Some emphasize information development and document management … some concentrate primarily on the technology surrounding documents and information … others emphasize the financial value of knowledge as intangible assets … some emphasize cultural issues … and still others talk about ‘knowledge organizations.’ (2000)
This dissertation intends to explore and examine the implementation of artificial intelligence for knowledge management. As a result of this focus, my idea of knowledge management often encroaches on several of the various definitions mentioned by Wick above. Regardless of the specific definition used, in each model it is clear that the relationships between data, information, and knowledge play very important roles in a given organization’s knowledge-building or knowledge-distributing process. For example, we can consider the difference between data, information, and knowledge in relation to a storytelling system.
A detailed explanation of narrative knowledge management requires a historical summary of major developments in computing technology, artificial intelligence software, and narrative theory. The phrase “intelligent system” is defined as a system, often computer-based, that can perform some function that would be considered intelligent if it were performed by a human user. This is, in fact, a definition that is often used for the similar phrase “artificial intelligence.” For the purposes of this dissertation, however, I examine intelligent systems that also include more primitive computer systems that served as precursors for our modern computer technologies. While some of these machines were only used for measurement or simple mathematical calculations, at the time of their creation they were certainly considered intelligent.
At present, in the early 21st century, we find ourselves in the midst of a very serious problem: information overload in the information age. Luciano Floridi (1990) describes this as the problem of “infoglut,” which “concerns the degree to which retrievable information can actually be managed.” In the following paragraphs, I will show how our computing technologies evolved into a system capable of generating so much accessible information that infoglut is now a serious concern for organizations large and small.
Computers and mechanical systems in general have always been designed with one primary purpose in mind: to make our lives as humans easier. It is therefore surprising to see how hard humans in our current century are working; with the amount of technical innovation and ingenuity we have seen in the last hundred years, it is ironic that we have to do any work at all. Unfortunately, an increase in technological sophistication does not equal an increase in leisure and relaxation time. To understand why this is true, it is necessary to examine the history of technology and specifically the evolution of intelligent systems. It is then interesting to consider the idea of intelligence in terms of other non-mechanical and non-computerized systems. One such example is the gender-based system of classification.
Every major technological change in the history of our culture has brought with it a profound change in perspective in how we view the world around us. Improved scientific processes and observation techniques have gradually immersed us into a more detailed environment, bringing to light previously invisible forms of data for us to observe. These paradigm shifts accompanying new technologies are inevitable considering the social and economic impact of new tools and techniques. The beginnings of organized abstract thought can be traced back to the Greeks and their introduction of classified scientific knowledge.
Russell Shackelford writes, “The ancient Greeks provided the founders of virtually every field of human knowledge: philosophy, mathematics, geometry, engineering, astronomy, anatomy, and medicine, to name but a few. All of their accomplishments share a central quality. They were able to consider phenomena from a new perspective: they were able to engage in abstraction” (1998). This may seem like a minor detail to note, but in fact the idea of abstraction is what makes any system intelligent. An engineer or a designer must design a device to solve all instances of a given problem, not merely one manifestation of that problem. By organizing principles and theorizing using abstract thought, the Greeks were able to actually advance their studies far more than they would have been able to using only observation and the technology of their time. While many of their theorems and ideas had to be modified or even discarded altogether, many of their other ideas were of sufficient quality to stand the test of time. In addition, their emphasis on abstraction paved the way for future pioneers and mechanical enthusiasts who would later use these principles in their own work.
The idea of abstraction is also critical in differentiating computing from pure mathematics. In mathematics, we have the flexibility of infinitely approximating analog values, whereas a computer must at some point stop and make a decision between two discrete points. Daniel Kohanski elaborates,
The reduction of phenomena to numbers that fit into the orderly structures of mathematics also seduces us into believing we can avoid the unruly and chaotic real world. But the raw material of thought is information, and to the extent that we accept computer-digested data instead of seeking it on our own, our ideas about the world are based on incomplete approximations filtered through some programmer’s judgment calls and the limitations of the machine. (1993)
These world observations are what Davenport describes as data, and estimated data will eventually be merged into approximated information and finally personalized into knowledge. With each transformation there is a loss of precision and also a tendency towards more abstraction.
We can also address the idea of abstraction from a linguistic perspective. Derrida begins Of Grammatology with an all-out assault on structuralism and semiology. From the very beginning of his diatribe, he warns of the dangers of dichotomy. How, for instance, do we break down an abstract concept such as language into a signifier and a signified? Derrida explains this as “inflation of the sign ‘language’” and warns that it is “the inflation of the sign itself, absolute inflation, inflation itself” (1976). Writing, therefore, is not a secondary by-product of language, but is an essential component of language itself. It “comprehends language” (Derrida). Derrida is looking at the science of the study of writing as something different from the science of the study of language. While there is still the technology of the signified and the signifier, the act of writing is a continuously morphing activity that redefines and reconstructs the reality of the signified. For instance, is Derrida’s own writing an elaborate joke, a legitimate defense of the importance of writing, or a hell-bent joyride through classical and modern philosophical theories? Part of the point of his writing seems to be that it is often hard to tell. While Derrida uses painstaking detail to examine classical text and theory to support his argument, he seems to take genuine pleasure in deconstructing the arguments of Saussure and the structuralists.
One of Derrida’s main quarrels with structuralism is the attempt to withdraw a single, ultimate meaning from “the movement of signification” (Derrida). He finds it impossible to arrive at a solitary meaning through language – a concept he defines as presence. It is such a property of language that allows two different people to arrive at two very different conclusions from observing a piece of data; to one, this data might be purposeful and therefore would represent information or possibly even knowledge. To another, the data might be irrelevant or imperfect for the task at hand, leaving it represented as simple data. Another prevalent theme in this chapter is Derrida’s distrust of the binary world, which is interesting considering our reliance on binary computers for intelligent interactions. As Hubert Dreyfus explains, there are two types of computing devices: analogue and digital. He writes,
Analogue computers do not compute in the strict sense of the word. They operate by measuring the magnitude of physical quantities. Using physical quantities, such as voltage, duration, angle of rotation of a disk, and so forth, proportional to the quantity to be manipulated, they combine these quantities in a physical way and measure the result. A slide rule is a typical analogue computer. A digital computer—as the word digit, Latin for “finger” implies—represents all quantities by discrete states, for example, relays which are open or closed, a dial which can assume any one of ten positions, and so on, and then literally counts in order to get its result. (Dreyfus, 1997)
Our modern computers are digital, meaning that they “operate with abstract symbols which can stand for anything” (Dreyfus, 1997). This ability prompted Alan Turing to describe the digital computer as the universal machine, which means that “ … any process which can be formalized so that it can be represented as series of instructions for the manipulation of discrete elements, can, at least in principle, be reproduced by such a machine” (Dreyfus, 1997). It is precisely this characteristic of universality that led early artificial intelligence researchers to believe human intelligence could eventually be encoded into symbols and therefore possessed by computers as an observable property.
The very acknowledgement of using a binary system of order to represent complex human characteristics suggests that there is some point of origin at which there is a neutral value given to the signified concept. Derrida finds this concept to be impossible. The real world is analog, not digital. There is no clearly defined point of neutrality for abstract concepts such as good and evil, and these concepts rely on oppositional definitions for their very existence. When the abstract idea must further be taxonomized within itself, this binary perspective seems even more unlikely. Such is the case with language and writing. As Derrida points out, “The question of the origin of writing and the question of the origin of language are difficult to separate” (Derrida). If the study of writing were to look at the field of linguistics for a basis of definition, this would annoy the binary enthusiasts who claim that territory as belonging to the science of oration and speech.
This concept of digitization can also be applied to the mechanics of computer architecture. Although computers are designed within the binary world, the components inside them in fact work with analog voltages, which are then encoded into their respective binary values. For example, +5 volts could be represented as a binary “1,” while -5 volts could represent a binary “0.” While there is a seeming point of origin within this system (zero volts), this is only a theoretical value that is rarely ever seen within computers (due to resistance in the wiring, interference from other signals, etc.). In fact, the digital system is used in contrast to the analog system precisely because of such irregularities. Analog is reality, but binary is robust. While +5 volts is the theoretical value of the binary “1,” in practice it is a range of voltages from 0 to +5 volts that is interpreted as a “1,” and likewise in the negative direction for the binary “0.” Using this system of encoding, it takes a significant amount of interference within an electronic circuit for a “0” to be interpreted as a “1,” or vice-versa.
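The noise-margin idea described above can be sketched as a simple thresholding function. The +5/-5 volt levels follow the text’s example; real logic families define their own thresholds, and the sample voltages below are invented.

```python
# Any voltage above the threshold decodes as binary 1, anything below
# as 0, so modest interference never flips a bit: analog is reality,
# but binary is robust.
def decode(voltage, threshold=0.0):
    return 1 if voltage > threshold else 0

# Noisy measurements of signals whose ideal levels were +5 V or -5 V.
noisy_samples = [4.7, 4.2, -4.9, -3.8, 4.95]
bits = [decode(v) for v in noisy_samples]
print(bits)  # [1, 1, 0, 0, 1] -- the noise never crosses the threshold
```

Only interference large enough to push a signal across the zero-volt origin would corrupt the encoded bit.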
Even stranger is the appearance of a third possible value within computer architecture: the tri-state. While components and circuits are indeed restricted to high or low levels of voltage within their system, there is also the transition from a high to low (or low to high) voltage that can take on a new value in itself. This very idea has led to numerous improvements in efficiency within the computer that were previously thought to be impossible within a binary system. While this metaphor may seem strained, we can think of the computer as representing language, a new technology that enables us to communicate with one another and produce greater amounts of work in shorter amounts of time. Speech, then, would be that which is produced by the computer in its final form. And writing, if we are to consider the traditional structuralist definition, would be the binary encoding of the final material – a symbol of a symbol, or a sign of a sign.
Thus, just as in language, the true power of computer architecture is found in the “play” of the encoded signals. A “1” can only be accomplished by applying a high voltage (in either the positive or the negative direction), and a “0” can only be formed using a low voltage, but the play between them (the tri-state) is very powerful because it can be achieved in many different ways (the transition from a zero to a one, the transition from a one to a zero, the maintaining of a signal for a certain amount of time, etc.). If we think of writing as the programming of language, the parallel becomes even closer. Just as hardware architects make the most of their available materials to make communication within a computer as efficient as possible, so must writers work with their own materials to achieve efficiency within the bounds of printed language. Here the intelligence of the system is not confined to the normal boundaries of a dichotomy but instead is manifested through creative manipulations of abstract states.
A definition of what is being studied is important in the field of systems science. The phenomenon must be scientifically observable, and it should qualify and be classified as a system. A system is a group of related objects and their attributes (Hall & Fagen, 1968). A given system is made of properties, functions, relationships, and attributes. The relationships tie the system together. Succinctly, a system is a unit, the whole percept, with certain attributes perceived relative to its external environment. The unit has the quality of containing subunits operating together to manifest the perceived attributes of the unit (Lendaris, 1986). The subunits, as systems in their own right, are the internals of the unit. The percept is their environment (Hall & Fagen; Lendaris).
Hall (1989) added that an abstract system is similar to a surrogate or an analog that performs as the original irrespective of the operations or physical appearance of the mechanism. There is a derived macroscopic behavior (Hall & Fagen, 1968). One such behavior is wholeness: when every part of a system relates to every other part and changes in one cause the rest to change, the system is said to behave coherently. Another macroscopic behavior is progressive segregation, a process of decay or growth that unfolds over time and is noticeable from the physics of the system. A system gains experience as time passes and decays as time progresses. Another is progressive systematization: a system undergoes progressive systematization if there is a change toward wholeness. The progressiveness may be a strengthening of existing relationships among the subsystems or the development of a new relationship among parts that previously had none.
An object is a system if certain rudiments are satisfied. First, the object must be corporeal, with an environment consisting of its upper or outer level. Lendaris (1986) called the outer level the relevant environment, the immediate surrounding. The outer level specifies other objects that affect and are affected by the unit. An irrelevant environment does not change the attributes of the unit, nor do its attributes change the unit.
It follows that a unique system maintains identity and individuality and is distinguished in terms of its properties. Even in the case of similarity, the notion of bundling differentiates one system from another (Sanderson, 1999). In addition to bundling, a system should satisfy impenetrability criteria. Thus, two objects cannot maintain or possess the same spatio-temporal bundles, due to a table of presence, a table of absence, and a table of comparison. A table of presence indicates the unit must be either physically or conceptually present to an observer and to those with whom the observer communicates. The most important attribute is that the system must contain and illuminate a set of interests that attract observation. A table of absence consists of the other objects in the universe of discourse. These other objects do not satisfy the table-of-presence criteria; they do not contain the same spatio-temporal bundle that the identified system has. A table of comparison is based on the degree to which a set of bundles is present or absent in the identified system, for comparing it with other objects in the universe.
A set of behaviors also adds requisite meaning to what a system is. One of the essentials is centralization, a behavioral aspect of a system in which one subsystem dominates and is attached to the others. For example, a small change in a leading subunit will trigger an amplified change in the system because of the attachment. Another common feature is openness or caginess. A system is said to be open when it exchanges information with its relevant environment; it is considered cagey if there is no such exchange. Both types depend on how much of the universe is included in the system or the environment.
A possible feature of systems is adaptability. A system is adaptable when it is able to adjust to environmental changes while continuing to operate in its normal state. What may be intrinsic to natural systems in this regard remains a mystery to the machine intelligence enterprise. The adaptive, desired behaviors are embedded as needed and may be lacking depending on the ability, tools, psychology, sociology, and ethics of the developers. Such behavior generally serves continuity.
Feedback is among the features that make this behavior possible: some systems have a portion of their output rerouted back along with new input to be processed. The rerouting affects succeeding output and the stability or instability of the system. A system is stable if some or all of its variables behave or react within a defined behavioral limit. Even adaptive systems may maintain a certain stability with respect to the set of variables that maintain desired behavioral limits. As such, intelligent artificial systems (IASs) are conceptualized, designed, and produced based on the principles of systems science.
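A toy numerical sketch of this feedback idea, with an invented gain parameter, shows how rerouting a portion of the output back into the input determines whether the system stays within a behavioral limit or diverges:

```python
def run_feedback(inputs, gain):
    """Combine each new input with a gain-scaled portion of the prior output."""
    output = 0.0
    history = []
    for x in inputs:
        output = x + gain * output  # new input plus rerouted prior output
        history.append(output)
    return history

stable = run_feedback([1.0] * 20, gain=0.5)    # settles near 2.0
unstable = run_feedback([1.0] * 20, gain=1.5)  # grows without bound
print(stable[-1], unstable[-1])
```

With a gain below 1 the rerouted output damps out and the variable stays within a limit; with a gain above 1 the same rerouting amplifies itself, illustrating instability.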
Therefore, this study is in accord with Kurth-Schai’s (1984) conception of a system. The system is a complex, organized, dynamic, and purposive creation. It is a complex human creation in the sense that it shares common goals with cognitive and affective components that communicate through language, graphics, or emotions. The system’s behavior may also result from its experience, current and future engagements, or expectations. It is an organized creation because of the relationships among its components or variables. The system’s dynamism results from its elastic responsiveness to environmental changes and its capacity to be altered to incorporate newly synthesized subunits. It is a purposive creation, exhibiting choices to satisfy certain engagements and goals and able to generate multiple goals and alternative actions to fulfill them.
One of the fallacies of natural and philosophical sciences is an inadequate definition of life (Clarkson, 1993; Levy, 1992; Ludwig, 1993). Biologists define life by certain properties that some artificial systems satisfy. These attributes include the ability to maintain a steady state with lower entropy, or more order, than the environment. Levine (1992) determined that self-organization enables both a living system and an IAS to function normally in a high-energy environment. Although living systems possess nucleic acids and proteins, their functions could be algorithmically embedded in IASs. Both living systems and IASs process information, and, like living systems, the life of IASs ends when matter-energy and information end.
Because properties of life such as growth, reproduction, self-maintenance, metabolism, genetics, and death (Ludwig, 1993) could be mapped into artificial systems, Adami (1998) inferred,
Life is a property of an ensemble of units that share information coded in a physical substrate and which, in the presence of noise, manages to keep its entropy significantly lower than the maximum entropy of the ensemble, on time scales exceeding the “natural” time scale of decay of the (information-bearing) substrate by many orders of magnitude. (p. 6)
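Adami’s criterion, that life keeps the ensemble’s entropy well below its maximum, can be illustrated with Shannon entropy; the distributions below are invented purely for the example.

```python
from math import log2

def entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# An information-bearing (ordered) ensemble versus a maximally random one,
# each over four possible states.
structured = [0.85, 0.05, 0.05, 0.05]
uniform = [0.25, 0.25, 0.25, 0.25]
print(entropy(structured))  # well under the maximum
print(entropy(uniform))     # 2.0 bits, the maximum for four states
```

In Adami’s terms, the structured ensemble is “alive” to the extent that it holds its entropy far below the 2-bit maximum for longer than the substrate’s natural decay time.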
An important property this study investigated is the intelligence of the artificial. Because intelligence is a complex manifestation of certain natural laws, properties, and the states of the system that exhibits it, the manifestation possesses qualities that distort observation and measuring instruments.
The effect of measuring instruments depends on the relationship of a system with its environment, with which it conforms in action and reaction. Observers rely on biological senses, artificial apparatus, and exhaustive inquiry because in science a researcher cannot make claims about a phenomenon unless it is measurable. Therefore, machine intelligence scientists confront complex synergies of digital and human manifestations in disguise. Evolutionary psychology, for its part, posits that innate capacity, which is genetically encoded, determines intelligence (Eysenck, 1990; Reiss, 1997), and that the size of the innate capacity determines the level of intelligence. This notion is the theory of innateness.
It follows that intelligence differences are statistically or mathematically determined on the basis of genetics and innate capacity when other factors are held constant (Eysenck, 1990; Reiss, 1997). Because the mind evolves according to the principles or laws of genetics, intelligence becomes a collection of information-processing mechanisms designed by natural selection for solving adaptation problems (Cosmides & Tooby, 1997). The brain is assumed to be a physical system whose circuits are dynamically designed for any specific task. Cosmides and Tooby (1997) emphasized that certain aspects of machine intelligence, as in human behavior, should be based on locating relevant circuits and determining how they physically work, the type of information processed by the discovered circuits, the embedded information programs the circuits use, and the goals the circuits were originally designed to accomplish. Sternberg (1990) added that intelligence is a result of cell regulation. The overall implication, according to Wang (1995), is that a working definition of intelligence differentiates artificial systems. Steels (1990) and Wang concluded that a wide array of definitions exists: intelligence has been defined as the ability of a system to rigidly derive an adequate solution in what appears to be a complex search space, the ability to solve hard problems, the ability to get better over time, the ability of an information-processing system to adapt to its relevant environment with insufficient knowledge and resources, and the ability to contribute directly or indirectly to the survival of the system. Steels (1990) added that working definitions of machine intelligence fall into two broad categories. The first is comparative performance, which includes all types of Turing tests. Comparative performance indicates the possibility of building complex systems that perform complex tasks but at the same time lack qualities of intelligence such as evolution.
The second category is knowledge and intentionality. Under this postulation, intelligence is knowledge or a principle of rationality that gives rise to descriptions about how the system maximizes knowledge resources. The intentionality is then the extraction of the right knowledge for a specific task.
Based on these principles and a search for a universal definition of machine intelligence, Onelife (1998) suggested the evolution of intelligence favored three changes. The first is instincts that satisfy problem avoidance. The second is instincts that satisfy problem solving. The third is instincts that make an organism more dynamic. The instincts are assumed to be neural, carried by signals rather than mechanisms. Whereas the neurons can be collectively strengthened and made inactive through inattention, they cannot be trained.
Bradshaw (1999) postulated that machine intelligence could be defined either as an ascription or as a description. Under this premise, intelligence is what IASs do and how they do it (Bradshaw), because they act on behalf of their users to satisfy certain engagements. Dennett (1987) also supported physical design and intentional stances for defining intelligence.
Furthermore, Bradshaw (1999) noted that humans could ascribe intelligence based on intentionality because it is natural to the designers, analyzers, and users; it helps in understanding and explaining complex systems; and it exposes available regularities and patterns of the agency that are independent of its philosophical and physical configuration.
Based on the ascription perspective, agency is constructed to exhibit certain aspects of intelligent behavior (Wooldridge & Jennings, 1995). Principles of ascription hypothesize that understanding machine intelligence is based on the system’s properties. Wooldridge and Jennings also theorized that the properties exhibit intelligence. A more tacit assumption involves intentionality, such as belief or desire (folk psychology). Folk psychology stresses the prediction of behavior from the attribution of attitudes. Dennett (1987) used ascription to show that intelligence can be ascribed to artificial systems. McCarthy (1979) noted intentionality is sufficient for ascribing mental qualities to IASs because “to ascribe certain beliefs, knowledge, free will, intention, consciousness, abilities or wants to a machine or computer program is legitimate when such an ascription expresses the same information about the machine that it expresses about a person” (p. 9).
Ascription helps humans to understand the structure and the past, present, and future behavior of IASs, given the following. The state of an IAS at a given moment cannot be known except by ascribing certain beliefs and goals to it. Ascription distinguishes an IAS’s past, present, and future states. Ascription differentiates intelligent artificial systems from other artificial systems. Also, ascribing beliefs allows the construction of new hypotheses about IAS behaviors that could not be reached through finitely many simulations or other scientific methods. Finally, it is easy to measure beliefs and goals because they parallel the intent and structure of IASs. This enhances the effectiveness of the IASs.
According to Brenner, Zarnekow, and Wittig (1998), there are several significant internal properties of the IAS for determining actions within the system. The first is reactivity. Reactivity, or situatedness, indicates that IASs are able to react appropriately to environmental resources. Thus, IASs should be equipped with reasonable sensors for monitoring and modeling their environment. Such IASs are called deliberative systems, whereas IASs without the models are called reactive agents. There are also proactivity and goal orientation: IASs exhibit proactive behaviors if they initiate tasks in addition to reacting to environmental changes. Proactivity and goal orientation require a well-defined goal embeddable in a complex goal system. The system may also consist of subgoals that allow the IAS to perform certain precise tasks.
Brenner et al. (1998) articulated that IASs should maintain an internal knowledge base, retain reasoning capabilities from the knowledge base, and be able to adapt to the environment through learning. Thus, an artificial system should maintain a certain degree of intelligence to be considered an IAS. For example, IASs should be able to learn from mistakes and users’ preferences and be capable of updating their knowledge bases to avoid future mistakes. Autonomy differentiates IASs from other artificial systems because they are independent of the user for each step of decision making. Thus IASs must have control over their activities (Wooldridge & Jennings, 1995). In contrast, the meaning of mobility differs within the fields of artificial intelligence and between scientists and technocrats. These differences are compromised here to understand and effectively apply IAS technology. Mobility can apply to software or hardware. Both have unique features for performing tasks independently for their users (Brenner et al., 1998). Mobility exhibits several properties, including roaming and navigating within a given environment.
Other properties include communication, cooperation, and character. IASs cooperate and communicate with the environment and require channels for interacting with it, such as other IASs, humans, and mathematical data. Communication protocols are used for maintaining contact with the environment, and with well-defined rules IASs are able to communicate effectively with it. IASs perform more effectively by cooperating with others and the environment to share information and obtain additional knowledge (Nwana & Azarmi, 1991). Collectively, a smart IAS is one that collaborates with other systems and has interface capabilities or properties. Each engagement requires mutual negotiation and compromise among the IASs. Collaborative IASs tend to be static, large, and coarse-grained. Such IASs might be benevolent, truthful, or both, or may lack these qualities. The theoretical reason for collaboration is to create a unified microworld for interconnection and reasonable functions that are beyond the capabilities of each system’s functions. A microworld is a computer environment.
Based on the character property, IASs are able to demonstrate certain personalities similar to humans’. In most cases, the appearances of virtual agents and robots are human-like. Even though some characters are ascribed implicitly, important characters include honesty, trustworthiness, and reliability. A major character property is interface. Interface consists of learning and autonomy for performing certain engagements and collaborating, which includes learning about human preferences. Nwana and Azarmi (1997) also noted the IAS acts by observing, monitoring, learning from, and imitating its owners; receiving both positive and negative feedback from its owners; receiving explicit instructions from the users; and seeking advice from another IAS.
Accepting Penrose’s (1994) assertion that genuine intelligence is a manifestation of awareness and understanding, McCarthy (1995) asserted genuine intelligence can be embedded in IASs. McCarthy suggested genuine intelligence has four traits. The first trait is to be mathematically reducible. Because knowledge and beliefs could be represented in a form of pure logic, they are mathematically reducible for an IAS with a part of its memory reserved as consciousness. The memory stores sentences used for reasoning. To construct an intelligent system, according to Kurzweil (1999), a person needs to define the objectives well and to reduce and combine the objectives with some dose of mathematical equations for achieving certain goals. Kurzweil, however, agreed that an algorithmic procedure could not emulate the most powerful, complex, and mysterious processes of human intelligence.
The second trait is to be logically deducible. That is, reasoning involves logical deduction and consciousness. The third trait is that the system is automated. Awareness of the system’s environment is accomplished through the automation of certain classes of programmable sentences about the environment. The class is some data set in the system’s memory, which is in contrast to Deikman’s (1996) notion that awareness is different from the content of the mind and Penrose’s (1994) assumption of awareness as an attribute of genuine intelligence. McCarthy (1995) further articulated that an IAS is capable of self-awareness through events and actions achieved by self-observation of mental content.
The fourth trait is that the system is modeled. Kurzweil (1999) noted a common method to embed intelligence in IASs is to build a procedure that models intelligence. The model enables the system to examine and represent itself. If the system reflects on itself, then it is conscious, and if it is conscious it must have understanding and awareness. Even though this set of self-reflectiveness is powerful, Penrose (1994) noted the subjective nature of genuine intelligence enforces its noncomputable physical laws.
The acceptance or rejection of MIQ hinges on theoretic postulates of machine intelligence (McCarthy, 1979, 1995; Paul, 1983; Wooldridge, 1963; Wooldridge & Jennings, 1995). According to Penrose (1994), four viewpoints or assumptions concerning the intelligence of artificial systems can be considered. The first viewpoint is that any type of intelligent thinking is computable through an evocation of appropriate algorithms, and that IAS self-reference is equivalent to human consciousness. The understanding is that any computer that seems to possess consciousness during a Turing test must be conscious, have a mind, and be intelligent. It is the computational configurations that determine its mental abilities. Accordingly, passing a Turing test is necessary and sufficient for such IASs to be considered intelligent and conscious. The artificial systems understand their engagements and instructions. Those who hold this stance of strong artificial intelligence hypothesize that algorithmic capabilities will supersede human intelligence.
The second viewpoint is that computational simulation cannot evoke awareness as long as awareness is a feature of the brain’s physical action. According to this school, an artificial system might behave consciously without possessing any of the mental qualities that could be used to test a conscious person. What differentiates the two viewpoints is that those who hold the first accept the Turing test as the criterion for determining intelligence; a Turing test suffices for that viewpoint because the attributes could be simulated independent of the physical laws. For the second viewpoint, the presence or absence of consciousness depends upon the physical object that is doing the thinking and the physical action being performed. Thus, an appropriate and sufficient simulation of mental activities is enough to assert that an artificial system has a mind or has reached a level of genuine intelligence. This view is called weak artificial intelligence. Because computers can be used to simulate behaviors, this school infers that computers can also be used to duplicate behaviors. Linstone (1984) disagreed, noting, “The reality created by the computer model in the mind of the programmer or user can never be a duplication of human or societal reality” (p. 13).
The third viewpoint is that physical actions of the brain evoke awareness and therefore cannot be simulated computationally. According to this school, the first and second viewpoints should be rejected on the grounds that it is impossible to adequately simulate the mental activities of humans. Its proponents argued that a lack of intelligence in IASs could be deduced after sufficiently long testing with a Turing test. The school purports there are external manifestations of the brain that differ from those of computers. Its hypothesis is that there are certain brain activities that cannot be reduced to algorithms. Penrose (1994) added that new physics is needed for achieving genuine intelligence. The fourth viewpoint is that intelligence is a mental manifestation that cannot be adequately explained by any type of scientific simulation. Irrespective of the knowledge that can be derived from computational intelligence, the third and fourth schools infer that computers are merely working platforms for demonstrations of experiments and should remain just that. Penrose (1994) also noted mental activities are far beyond modern computing devices and scientific inquiry.
Realizing the implication of strong artificial intelligence, Alan Turing warned that if a person is able to explain and predict a system’s behavior, there is little temptation to imagine intelligence (Kurzweil, 1999). With the same system, it is possible that one person would consider it intelligent and another would not; the second person would have found out the rules of the system’s behavior. Penrose (1994) and Wilber (1997) claimed quantum physics is needed to fully understand the external and internal manifestations of the human mind before it can be truthfully simulated or represented. Thus, the outlined manifestations are the result of the philosophical configurations of each perspective about the human mind and its representation.
According to Wang (1995), intelligence is not always better in applications because unintelligent systems can perform better with guaranteed solutions. Some artificial systems are capable of carrying out tasks beyond the capabilities of humans. The implication is that unintelligent systems are good at technical operations where the goal is to retrieve knowledge in a way that the procedures are reproducible by any Turing machine (Long, 2000). An ordinary calculator, for instance, outperforms most humans in most calculations, yet humans are more intelligent.
To separate unintelligent artificial systems (UASs) from intelligent ones, it is important to differentiate reasoning from intelligence (Wang, 1995). Reasoning machines possess the following qualities. First is a formal language that is well defined for the systems to communicate with their environment. Second is a semantical explanation apparatus for the meaning and truth-values of words and axiomatic sentences. Third is an inference set for mapping questions to the right solutions. Fourth is the memory or innate capacity that provides working and storage space for the inferences, solutions, and problems. Fifth is the control mechanisms that act as resource management.
These qualities give rise to additional features: static knowledge (new knowledge is unnecessary for the system to function as desired); correct postulates and axioms; right solutions to the axioms and postulates, which ensure valid answers; large innate capacity for processing the finite postulates and axioms and the intermediate solutions provided by the designers; algorithms and required inferences for all axioms and postulates; and quickness to satisfy time constraints. Thus, unintelligence arises when axiomatic machines lack the manifestation for solving phenomena with insufficient knowledge and resources.
The axiomatic approach to machine intelligence misleads some researchers into comparing human intelligence to the most advanced machine available. It is natural for them to assume that language is thinking and a sign of intelligence because humans express concepts and ideas through language (H. C. Anderson, 1987). Von Neumann machines operate on this premise. Such machines perform arithmetic operations more quickly than humans do, rarely make mistakes, and are able to memorize vast amounts of information immediately with perfect recall, yet a minor hardware failure can distort recall ability. In contrast, humans excel at pattern recognition; confuse, associate, and mix data; forget without being instructed; are poor at arithmetic operations; and do not recall data as perfectly as machines do.
Nonaxiomatic machines have the apparatus to coexist in environments where knowledge and resources are either sufficient or insufficient. It follows that the components of the system are irrelevant; the roles they play are what matter. Lloyd (1995) asserted von Neumann machines are capable of altering themselves in response to information they obtain about the environment in which they wish to exist.
To differentiate intelligent from unintelligent systems, a reasonable meaning of intelligence must be determined; otherwise, it is better to derive a new word or concept for the intended assumption (Wang, 1995). Intelligence is meaningless if applied to all things, such as speed or innate capacity. Doing so is erroneous because it could lead to ascribing intelligence to things such as ordinary chairs or the human stomach (Searle, 1983).
This study sought a different approach by analyzing and synthesizing machine intelligence based on the merits and validity of an IAS as an intelligent system. The study did not compare machine and human intelligence, for these are philosophical issues outside its scope. The synthesis and analysis of the compelling measurements of machine intelligence is the fundamental objective.
Human intelligence and its quotient are the precursors of machine intelligence and its quotient (Bien et al., 2002; Bosque, 2002; Lee, Bang, & Bien, 2000; Konerding, 2001; Long, 2000; Park, Kim, & Lim, 2001; Wolfram, 2002; Zadeh, 2000). Due to a great desire for IASs to behave like humans, optimization algorithms are very popular in the computational intelligence literature (Goldberg, 1989; Haupt & Haupt, 1998; John & Innocent, 1998; Minping & Guanglu, 1999; Mitchell, 1999; Sangalli, 1998). The human intelligence quotient (IQ), or g, began in 1904 when the French Ministry of Education commissioned Alfred Binet to design a test for identifying students with learning problems. H. H. Goddard introduced the test to the United States in 1910. The g is an acceptable working definition and measure of intelligence (Dewdney, 1997; Gottfredson, 1998). Whereas IQ is a measure of human intelligence, MIQ is for machinery that designers, developers, or consumers claim to be intelligent. IQ is more or less constant, whereas MIQ is machine specific and time dynamic. Certain dimensions of MIQ testing, such as speech recognition, vision, and hearing, are excluded from human IQ testing (Zadeh, 1972, 1973).
Kurzweil (1999) contended that IASs will supersede and control humans. According to such claims, any device with such algorithms would experience feelings and also have consciousness; it becomes a mind. Thinking, feeling, consciousness, intelligence, and understanding are considered mathematically reducible and programmable (Penrose, 1990); thus, any mental activity, including consciousness and intelligence, can be embedded in a machine in a well-defined manner through algorithms. Despite this strong assertion, Penrose (1994) concluded that true machine intelligence is impossible because of the limitations of current computing devices, the existence of noncomputable properties of the mind, and an unwillingness of scientists to seek a new physics of consciousness. Penrose (1990) argued that some aspects of intelligence are not computable, and that this noncomputability is integral to consciousness, understanding, and awareness. This leads researchers to ask what level of consciousness, understanding, or awareness affects the acceptance of machine intelligence (Boden, 1990).
Wilber (1997) noted there are conflicting variations in the meanings of consciousness attributed to fields of study. For cognition science, consciousness is a steady state and a functional schema of the brain or mind. It is a representational form of the computational mind or an emergence of hierarchically integrated neural networks. Artificial neural networks are the common models of human neurons used by the IAS computing community. Unlike cognition science, introspectionism infers that consciousness should be understood in terms of intentionality within the first-person perspective. This involves the immediate interpretation and inspection of awareness and life experiences. In contrast, neuropsychology contends that consciousness is central to the neural system, neurotransmitters, and organic brain mechanisms. This approach to consciousness is more biological than that of cognition science. Unlike both neuropsychology and cognition science, individual psychotherapy infers that consciousness is anchored in the individual organism’s adaptive capacities. Developmental psychology contends that consciousness is a developmental process with a different architecture at each stage of human development. Furthermore, the field of psychosomatic medicine insists that consciousness is interactiveness coupled with the processes of the organic body (Wilber, 1997).
These variations affect interpretations of machine intelligence. For consciousness to be present, there must be an understanding that possesses awareness; genuine understanding of any engagement is absent if there is no awareness of the engagement (Penrose, 1994). Thus, understanding a characterization of external behavior that is mentally obvious is a manifestation of consciousness. Free will and awareness are the prerequisites of consciousness, understanding, and intelligence (Penrose, 1990, 1994). Baars (1997) concluded that humans are aware only of the outer signs of consciousness; the inner aspect, of which intelligence is an integral part, is beyond human understanding. Baars indicated the inner aspect is made of inner speech and visual imagery. Although the postulation concerns the realization and representation of machine intelligence with respect to human intelligence, this study was grounded only in the measurement theory of machine intelligence. All types of machine intelligence measures were analyzed and synthesized, including those for software agents, which is the frame of reference for the postulation.
Genuine intelligence needs understanding, which requires awareness. Awareness is a formless, featureless, and subjective platform in which the content of the mind manifests, appears, and disappears (Deikman, 1996). It varies in intensity according to changes in total mental state, and it is different from the content of the mind. In principle it is beyond everything else, including emotion, sensation, thought, and memory.
Deikman (1996) asserted that experience, which some researchers claim is the major aspect of awareness, is nothing but a dualism of awareness and the content of a mind. Experience is observation based, whereas awareness is integral to an observer that is prior to mental thoughts (Deikman, 1996). McCarthy (1995) added that confusion between awareness and the content of the mind causes misinterpretation of the natural mind and intelligence, and it also contributes to confusion about how genuine intelligence can be modeled. Intelligence comprises both subjective and objective illusionary manifestations of human brain activities induced by the fuzziness of our language and notational systems (Kurzweil, 1999).
Little has been written on MIQ since 2003. However, Ulinwa (2006) characterized MIQ as a linguistic complex fuzzy numeral. The substantiation, as analyzed in chapters 4 and 5 of this study, showed that the significance of MIQ calculus is quite different from the significance of machine intelligence (the phenomenon that is calibrated). Notwithstanding, a number of experts instead debate how machine intelligence ought to be represented in machines. Johnson, Tewfik, Madhu, and Erdman (2007) developed a novel recognition algorithm to differentiate normal and abnormal (diseased) heart sounds. Similarly, Chechik, Heitz, Elidan, et al. (2008) proposed a max-margin classification method for data that lack some features. Huang, Lars, and Rasco (2007) articulated that artificial neural networks and other machine intelligence tools had been used in the food sector to model microbial growth for predicting food safety.
Garcia, Taylor, Manatunga, and Folks (2007) evaluated a reasoning-process explanatory inference engine of a diagnostic expert system for renal obstruction from diuresis renography, whereas Garcia, Taylor, Halkar, Folks, et al. (2006) posited a high-diagnostic-accuracy rule-based expert system that excluded renal obstruction from diuresis nephrograms. It suffices that
artificial intelligence has created conceptual frameworks, techniques, and tools that can be applied to construct computer programs that perform tasks such as diagnosis, explanation, and planning. However, a more detailed analysis of and research on expert systems reveal that the clinical acceptability of an expert system strongly depends on user acceptance (Porenta, 2007, p. 335).
As a specialized discipline, educational technology focuses on the design of technology-aided inductive and deductive curricula. The exclusion of machine intelligence measurement is also evidenced by the themes and interests of educational technologists in evaluating technology programs. Such evaluation is about the program, and it is quite different from measuring the machine intelligence that lurks in the background of the technology. For this reason, most educational technologists concentrate on how the technology ought to be used. Manton, Fernandez, Balch, and Meredith (2004) validated this position when they articulated that
eLearning development has long been divided between the world of training with defined job roles and management techniques and the academic world where development was first pioneered by enthusiasts and only more recently supported by teams dedicated to the use of artificial intelligence to support a university education. (p. 2)
The foregoing postulation was derived from the following educational technology experts. Brouns, Koper, Manderveld, Bruggen, Sloep, et al. (2005) emphasized that “one way to develop effective online courses is the use of artificial intelligence patterns, since patterns capture successful solutions” (p. 1). Conole and Fill (2005) described a learning design toolkit that guides practitioners in creating pedagogically informed learning activities. For an effective and efficient educational technology, G. Conole and K. Fill stated that “despite the plethora of information and communication technologies (ICT) tools and resources available, practitioners are still not making effective use of artificial intelligence to enrich the student experience” (p. 1). Thus, Timmis, O’Leary, Weedon, et al. (2004) investigated the online learning experiences of students from different disciplines and introduced the Students’ Online Learning Experiences (SOLE) methodology. S. Timmis, R. O’Leary, E. Weedon, et al. found that
the students described, [with the exception of the Education studies], appeared to be primarily concerned with the organizational aspects of [virtual learning environment] how it affected their ability to learn rather than the nature and quality of the learning experiences they had. They see the benefits in terms of how it will help them manage rather than learn. (p. 17)
Although humans acknowledge the existence of machine intelligence, they know little about it. Long (2000) claimed humans are merely competent with machine intelligence, as with other complex systems, because competency is about following a set of generally acceptable behavioral and thought rules, whereas understanding machine intelligence means knowing the theoretical why of its manifestation. Therefore, if the concept of intelligence is to be accorded to any artifact, that system must be aware of whatever it does. That is Searle’s premise for objecting to Turing’s elucidation that any machine that passes a certain behavioral test, the Turing test, is indeed intelligent.
A Turing test verifies and compares computational intelligence with human intelligence (Kurzweil, 1999). In a famous work on computing machinery and intelligence, Alan Turing set a standard for determining whether a given artificial device is intelligent (Arbib, 1965; Boden, 1990; Balkenius, 1995; Mershin et al., 2000; Penrose, 1994; Wolfram, 2002). The thought model involves a human interrogator, a person, and a machine. The interrogator interacts with the person and the machine through an input device such as a keyboard. The interrogator is not told which is the human and which is the machine; this minimizes or eliminates interviewing bias against the machine. The machine is then considered intelligent if, after a question-and-answer session, the interviewer cannot reliably identify it.
Searle (1989) described a Chinese room concept for testing machine awareness, in which a person who does not understand Chinese but speaks English is locked in a room where a series of Chinese symbolic stories are shown from the outside. Behavioral and response instructions, written in English, are provided for responding to questions about the stories in Chinese symbols (each instruction is mapped to the appropriate Chinese story symbol); no other form of information is allowed from the outside. The person is only allowed to manipulate the symbols and the appropriate story by referencing the instruction that directs each answer to a corresponding question and story. The person can answer yes or no to those outside the room through a type of opening, and the expected answer format makes it possible to map the procedure into a computer program.
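The rule-following character of Searle’s room can be caricatured as a lookup table. The sketch below is purely illustrative (the story and question symbols and the rule book are invented here, not drawn from Searle): a program that answers correctly by consulting rules it does not understand.

```python
# A toy "Chinese room": answers come from a rule book the occupant does not
# understand. All symbols and rules below are invented for illustration.

RULE_BOOK = {
    ("story_1", "did_the_man_eat"): "yes",
    ("story_1", "did_the_man_sleep"): "no",
    ("story_2", "did_it_rain"): "yes",
}

def room_occupant(story_symbol: str, question_symbol: str) -> str:
    """Return whatever answer the rule book dictates; no comprehension involved."""
    return RULE_BOOK.get((story_symbol, question_symbol), "no")

print(room_occupant("story_1", "did_the_man_eat"))  # yes
```

The occupant’s answers are indistinguishable from an understanding reader’s, which is exactly the gap between behavior and awareness that Searle exploits.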
Searle (1989) contended that machines lack awareness of what is going on, much like the person in the Chinese room, who could not correctly answer questions about the stories without understanding them. The person can only act on the given answer manual, in the same way machines act on algorithms: machines, without understanding, manipulate programs by acting on algorithms. The nature of Searle’s test therefore makes use of ideas about which factors of machine intelligence are to be measured (Penrose, 1990).
Determining measurable factors of machine intelligence is a complex task, but knowing the reasons for the measurement could minimize the complexity (Scacchi, 1995). Three such reasons are the following. The first is to develop a machine with a higher market value through more realistic, human-like intelligence. Another is to report error densities so that consumers can discriminate between intelligent and unintelligent machines. The third is for developers to identify the resources that give rise to certain degrees of machine intelligence.
Scacchi (1995) observed that internal attributes of machines, such as source code, do not necessarily lead to a measure of the manifestation of intelligence in terms of productivity. There are some inconsistencies in MIQ measures because of the characteristics of what is to be measured: a person cannot know ahead of time the characteristics of the factors that lead to MIQ. Both qualitative and quantitative factors of machine intelligence are expected to be taken into consideration when measuring MIQ. Martin and Davenport (1999) noted that the nature of information processing is grounded in algorithms composed of rules with unambiguous, finite instructions for carrying out intended tasks, such that the instructions can be mechanically traced regardless of the meaning of the manipulated symbols, and there always exists a mechanical method to examine whether the necessary instruction for any given task is successfully carried out. The method is constructed with the idea that an appropriate instruction has a stop condition that actualizes whether a given instruction is carried out or not.
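The stop-condition idea can be made concrete with a minimal sketch (the choice of Euclid’s algorithm is mine, not Martin and Davenport’s): each instruction is finite and unambiguous, the symbols are manipulated without regard to their meaning, and the stop condition is mechanically checkable at every step.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a sequence of finite, unambiguous instructions
    with a mechanically checkable stop condition."""
    while b != 0:          # stop condition: the loop halts when b reaches zero
        a, b = b, a % b    # symbols manipulated without regard to meaning
    return a               # stop condition actualized: the task is complete

print(gcd(48, 18))  # 6
```

When the stop condition (`b != 0` becoming false) actualizes, an external observer can mechanically confirm the task was carried out, which is the property Martin and Davenport describe.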
Stop conditions of machines, as in the mind, specify the rules that compose, validate, and establish an information-processing and intelligent system. Martin and Kleindorfer (1986) noted machine intelligence is a manifestation of stop-condition rules, and a lack of stop-condition rules invalidates a machine as computational and intelligent machinery.
Based on this axiomatic stance, Kant (1787) posited that general logic contains no precepts because intelligence is a rule-governed manifestation. There is a chain of hierarchically ordered judgments executable in ascending order; that is, a person could theoretically trace, in descending order, any manifestation of any given intelligent machine. Machine intelligence under this axiom is exclusively a rule-governed capacity.
Penrose (1994) asserted there is a possibility of superordered judgment given that the mind, unlike a machine, is noncomputational. Penrose, using a Turing analogy, demonstrated that analysts are not using sound judgment, for such judgment does not give wrong answers to questions. Analysts should therefore be capable of validating a sound judgment on the mathematical ground that if a series of inputs is given to a sound algorithm that can respond with the right answer about the rightness of the inputs, it will not err, because it is a sound algorithm. However, the algorithm is incapable of assessing whether it is sound when given a copy of itself as input, even though analysts know it is a sound algorithm. The implication is that knowing what an algorithm can do also indicates what it is not capable of doing. This makes the algorithm incapable of encapsulating the analysts’ understanding; Penrose’s axiom is that the algorithm cannot specify the analysts’ mathematical insight. It follows that artificial systems, like humans, as information-processing units are incapable of knowing the soundness of their own judgment.

Rene Descartes may have started the notion of measuring machine intelligence by articulating ideas for disproving machine intelligence (Wolfram, 2002). Even though the Turing test is a form of verifying machine intelligence (Boden, 1996; Wolfram, 2002), there are varying thoughts on measures of machine intelligence. Searle contended the Turing test fails to measure machine intelligence because machines lack consciousness, understanding, and awareness, which are the traits of intelligence (Boden, 1996). To Konar (2000), the Turing test is unsuitable when the measurement of performance is complex because it is not amenable to such complexity. Konar (2000) offered a number of Lockean quantitative methods for performance measures in which a system’s performance is evaluated alongside some experts.
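The self-application step in Penrose’s argument parallels Turing’s classical diagonal construction, which can be sketched as follows. The sketch is purely illustrative (the function names are invented here, and the string “loops” merely stands in for running forever): a program built to contradict any proposed decider makes that decider’s verdict about it wrong, so no algorithm can certify its own soundness.

```python
# Illustrative diagonal argument: suppose some decider predicts whether a
# program halts. `contrary` is built to do the opposite of any prediction.

def naive_decider(prog) -> bool:
    """A stand-in 'halting decider' (invented here): it always predicts 'halts'."""
    return True

def contrary(decider, prog) -> str:
    """Defeat the decider: 'loop' when it predicts halting, halt otherwise.
    (Returning 'loops' stands in for actually running forever.)"""
    if decider(prog):
        return "loops"
    return "halts"

# Whatever the decider predicts about `contrary`, `contrary` does the
# opposite, so the prediction is wrong -- the diagonal step of the argument.
print(naive_decider(contrary), contrary(naive_decider, contrary))  # True loops
```

Any replacement for `naive_decider` suffers the same fate, which is why the sound algorithm in Penrose’s account cannot assess a copy of itself.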
The measure of interobserver reliability, or the measure of consistency among the experts, must correlate to establish the acceptance of the machine’s intelligence. This measurement approach raises concern regarding the merit of experts’ consensus, because a lack of peer consensus affects the measure of machine intelligence (Konar, 2000). Some machine intelligence validation researchers hold that the Turing test is still a valid and reliable measurement. This literature, although based on extensions of the Turing test, shows various ways to extend the test both qualitatively and quantitatively (Bradford & Wollowski, 1994; Knauf & Gonzalez, 1997). Nwana and Azarmi (1991) believed an MIQ should be measured according to the level of control a computational intelligent system has over its autonomy. According to this approach, machines should be attributed different MIQ levels based on unique intellectual and contextual properties.
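The interobserver-reliability criterion can be sketched numerically. The ratings below are fabricated for illustration, and Konar’s actual procedure may differ; the point is only that two experts’ scores of the same machine must correlate highly before the measure is accepted.

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equally long lists of ratings."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Two experts rate the same machine on five tasks (invented numbers).
expert_1 = [7, 8, 6, 9, 5]
expert_2 = [6, 9, 6, 8, 5]
r = pearson(expert_1, expert_2)
print(round(r, 2))  # 0.87 -- high agreement lends credibility to the measure
```

A low correlation would signal the lack of peer consensus that, per Konar, undermines the measure of machine intelligence.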
To Deneen (2001), a good measure of machine intelligence is the number of people a machine can recognize. Other researchers recommend that intelligence be measured from a social perspective (Hunt, 1995; Person, Laaksolahti, & Lonnqvist, 2001); under this scheme, measuring the social traits of an intelligent machine is a good indicator. Presentations at the Workshop on Performance Metrics for Intelligent Systems (PerMIS) emphasized that measures of machine intelligence should be based exclusively on systems’ performance (Messina et al., 2001; Meystel, 2000). Other machine intelligence researchers, such as Lee (2000), offer measures that are specific to an engineering perspective. Still others offer information-theoretic metrics for machine intelligence (Ahmad, Alvaez, & Wah, 1992; Musto & Saridis, 1997; Shereshevsky, Ammari, Gradetsky, Mili, & Ammar, 2002). There are yet others who articulate that a meaningful measure of machine intelligence should be based on machine-human interaction or cooperation (Park et al., 2001). In contrast, Wang (1995) asserted that a working definition of intelligence differentiates artificial systems and should be a standard for judging systems; thus, MIQ should depend on operational choices among working definitions of machine intelligence. Zadeh (1994) proposed that MIQ measures should be relative to certain characteristics such as time. Finally, the general theory of problem solving holds that complexities such as measuring machine intelligence should not revolve around properties of machines; attention should instead focus on finding a set of universal characteristics or functions of machine intelligence that are simple to describe with simple statements (Ahmad et al., 1992; Andersen, 1994; Axelrod, 1984; Wolfram, 2002).
The most recent measures are from Park et al. (2001) and Kang and Seong (2002). Park et al. measured MIQ as the sum of task costs across an allocation matrix, considered the task quotient, minus the human intelligence quotient, which is based on the summation of the tasks allocated to the human operator. Although it is simple to recalculate the MIQ as the tasks change, the paradox is that applying the standard measurement methods, which include physiological variables, information measures, and eye movement, will not guarantee a uniform MIQ (Kantowitz, 2002). Repperger’s (2001) articulation of a multidimensional scheme for MIQ through a polytope convex hull failed to account for a very important aspect of the intelligence: the users. Repperger focused only on design and implementation. The fundamental implication is how to interface the unknown with the relevant properties of the known so as to limit the avalanche of infinite dimensions of MIQ that Repperger’s model may cause, which is the reason this study used the multiple perspective inquiring system.
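The Park et al. formula, as summarized above, can be sketched in a few lines. All numbers are invented for illustration, and the original paper’s cost model and normalization may differ: the task quotient is the total task cost, and subtracting the cost of tasks allocated to the human operator leaves what the machine contributes.

```python
# Hedged sketch of the Park et al. idea: MIQ = task quotient (cost of all
# tasks in the allocation matrix) minus the human intelligence quotient
# (cost of the tasks allocated to the human operator). Invented numbers.

task_costs = [4.0, 2.5, 3.0, 1.5]              # cost of each task
allocated_to_human = [False, True, False, True]

task_quotient = sum(task_costs)
human_iq = sum(c for c, h in zip(task_costs, allocated_to_human) if h)
miq = task_quotient - human_iq                 # the machine's contribution

print(task_quotient, human_iq, miq)            # 11.0 4.0 7.0
```

Reassigning a task between human and machine changes only one term, which is why the authors note the MIQ is simple to recalculate as tasks change.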
Some machine intelligence researchers and philosophers claimed that, given the possibility that artificial systems could be as intelligent as or perhaps more intelligent than humans, machine intelligence should be measured. For the strong artificial intelligence school, a device is intelligent and has a mind if and only if it passes a Turing test (Penrose, 1994). For such researchers, computers could be developed to perform amazing tasks beyond human imagination, limited only by the psychology and sociology of those involved in the conceptualization, development, and deployment (Linstone, 1984). Kurzweil (1999) noted, “The extent to which we regard something as behaving in an intelligent manner is determined as much by our own state of mind and training as by the properties of the object under consideration” (p. 71). Regardless, there exist certain natural laws that modern computers cannot realize (Sangalli, 1998). Modern computers are based on a Turing machine, which theoretically is limited only by time and memory space. In Turing machine terms, a function is computable if a Turing machine can compute it for each of its input values. Agreeing with Penrose (1990), Stapp (1995) noted that mental process, which is assumed to be governed by mathematical laws, cannot in principle be simulated to arbitrary accuracy by any given computer; in other words, the simulation of intelligence is far removed from genuine intelligence. Long (2000) added that the complexity of systems such as machine intelligence may be due to the absence of the right notational systems:
Notational systems that we use every day and think we understand—ranging from natural languages to mathematics . . .—are not merely passive tools like a pencil or pen that we can wield as we wish. Rather, they are a very rare type of tool that I call a cognitive lens. (Long, p. 6)
These cognitive lenses govern certain qualities of machine intelligence and affect how its manifestation has been seen, considered, and expressed. Note that, like other scientific instruments, notational systems are also scientific tools. Kurzweil (1999) added that the mathematical realization of genuine intelligence for the IAS requires the discovery of a complete set of unifying formulas that manifest intelligence. Referring to Max Planck, Kurzweil also noted that because humans are part of the mystery they are trying to solve, it is beyond the competency of modern scientific methods. Long (2000) suggested the reason for not understanding complex systems such as machine intelligence could be an object fixation inherited in Western culture, as opposed to a process orientation. What you see is what you get (WYSIWYG) has serious implications because of object fixation: WYSIWYG indicates that only visible features of machine intelligence are measurable, whereas invisible features are considered fictitious. Fixation raises some concerns about the character of universal computability and measurability of machine intelligence. Coupled with ideas about consciousness and awareness, it is possible to realize MIQ by discovering new notations or a new physics of computing (Long, 2000). Wolfram (2002) indicated that the conventional science for measuring machine intelligence is to find mathematically reduced methods that lower the amount of computational exertion involved in measuring machine intelligence or behavior. Performance measurement thus appears to be a shortcut for finding underlying rules of machine intelligence even if the phenomenon could be computationally irreducible. Knowing the underlying rules and initial conditions leads to measurable intelligence. Wolfram added that “even if in principle one has all the information one needs to work out how some particular systems will behave, it can still take an irreducible amount of computational work actually to do this” (p. 739).
Thus, when there is an irreducibility property in machine intelligence, there are no shortcuts or methods to determine its intelligence, except to monitor each state of the machine’s computational intelligence or behavior. Computationally reducible machine intelligence is a result of nested and repeatable behaviors, which are those of systems that lend themselves to being measured with traditional mathematical methods. Wolfram suggested that theoretical sciences, such as those for MIQ, lend themselves to defining shortcuts to avoid computationally irreducible phenomena. The irreducibility of certain machine intelligence prevents a shortcut notation for summarizing the behavior from being developed (Wolfram, 2002). Because such systems are by their very nature irreducible and each state involves a certain amount of computational effort, they must be measured completely from one state to the next. It follows that every effort to measure machine intelligence, qualitative and quantitative, visible or invisible, requires serious consideration. Wolfram noted there are behaviors, such as machine intelligence, that cannot be measured correctly with existing notation because of their computational irreducibility.
The foregoing review indicated that what is believed to be machine intelligence is the result of a concept contagion laid by Descartes (Wolfram, 2002) and advanced by Turing’s (1950) testing primer and Searle’s (1983) argument. It is an infectious concept from philosophical computer science. The meme generalized machine intelligence as a proxy of human intelligence that should be measurement worthy. The idea of the meme is validated by the following events.
Some time ago in England, a set of suicide cases was considered infectious because many of the victims had read the tragic tale of Werther, who took his own life, in The Sorrows of Young Werther. To stem the epidemic, the authorities banned the book (Marsden, 1998). Similarly, when scientific journals would not publish fuzzy logic theory, Zadeh published it in the journal Information and Control while serving as its editor; drawing on that work, Abe Mamdani and a student at the University of London controlled a steam engine. As a result, the following manifested:
They spent a weekend setting their steam engine up with the world’s first ever fuzzy-control system . . . and went directly into the history books by harnessing the power of a force in use by humans for . . . years, but never before defined and used for the control of machine. (Sowell, 1998, p. 16)
Also, one event nearly stopped research and funding for artificial neural networks. It began after Frank Rosenblatt could not provide the rigorous and necessary mathematical learning algorithms for a model of a neural network (Blum, 1992). Based on this result, Minsky and Papert publicized the drawback: the model could not learn the exclusive-OR (XOR) problem. The publication affected machine intelligence research, interest, and funding for a number of years. Another event that impacted machine intelligence occurred when Holland distinguished evolutionary biology from other physical sciences by manifesting genetic algorithms at Los Alamos in 1987 (Small, 1999). Since then several works, including Goldberg (1989), have shown the purposes and dynamics of genetic algorithms in optimization, searching, and machine learning. Concepts and thoughts, as well as suggestions, can spread like an infectious disease. The meme of machine intelligence, and the MIQ phenomenon in particular, is validated by the following:
If a scientist hears, or reads about a good idea, he or she passes it on to colleagues and students. He or she mentions it in articles and lectures. If the idea catches on, it can be said to propagate itself, spreading from brain to brain. (Bjarneskans, Gronnevik, & Sandberg, 1999, p. 7)
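Returning briefly to the perceptron episode: the XOR limitation Minsky and Papert publicized can be checked directly. The sketch below is a finite brute-force search, not a proof (though the impossibility holds for all real-valued weights): no single linear threshold unit in the searched grid classifies XOR correctly.

```python
from itertools import product

# Minsky and Papert's point: no single linear threshold unit computes XOR,
# because XOR is not linearly separable. Brute-force check over a weight grid.

xor_cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def perceptron(w1, w2, b, x1, x2):
    """A single linear threshold unit."""
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

grid = [i / 2 for i in range(-8, 9)]  # weights and bias in [-4, 4], step 0.5
solutions = [
    (w1, w2, b)
    for w1, w2, b in product(grid, repeat=3)
    if all(perceptron(w1, w2, b, *x) == y for x, y in xor_cases)
]
print(len(solutions))  # 0 -- no linear unit in the grid learns XOR
```

A multilayer network escapes this limit, but that remedy came years after the publication chilled funding, which is the contagion effect the passage describes.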
The idea is that reading, observing, or hearing that certain machines are intelligent propagates the acceptance or rejection of the notion from one person to another; it is then said to be infectious, or a meme. Bjarneskans et al. (1999) suggested a meme is like a virus because there is a continuing abundance of different elements and varieties, and the elements of the heredity or replication property involve self-recreation. The new elements vary depending on their features and environmental factors. According to the theory of memes, a person is infected with a meme when the person internalizes and/or acts on a concept or an idea (Brodie, 1996). A meme is a form of imitating an idea, a behavior, or information: a transmissible cognitive phenomenon that uses available host or vector channels, similar to a selfish gene that propagates to other hosts (Blackmore, 1999). A host or a vector is a required replicating or transmission mechanism. Unlike the host, a vector is a channel that lacks the capacity to alter or reflect on the meme; an e-mail program is a good example of a vector. The literature confirms a memetic axiom suggesting attitudes, beliefs, and behaviors are communicable because they spread and leap in and between populations. Using software agents, Gatherer (2002) explained that human behaviors are transmittable, and technology likewise infects quite easily across cultures (Langrish, 1999). Langrish added that the evolution of technology is about ideas concerning artifacts within a technological society that has the capability to select or reject competing technological ideas. Each meme then competes for approval by the people who control the resources necessary for converting a very small proportion of the ideas into actual artifacts, processes, or systems. Artifacts, which are also tools, are transmitted nonverbally and without invoking some cultural field (Langrish, 1999).
Although memes have been studied mainly at the social level, Langrish (1999) noted that memetics is also appropriate for technological studies. Although there is no study on the acceptance or rejection of machine intelligence or other MIQ hypotheses using memetic theory, the literature suggests researchers propagate the meme of machine intelligence. Thus researchers define and publicize their ideas about thinking machines. Moreover, readers of computing machinery literature, for example, are constantly reminded they are in an era of thinking machines.
Because memes are useful for guaranteeing conformity and societal knowledge, the acceptance or rejection of the notion that machines think rests on the acceptance or rejection of a pressing and emerging meme: MIQ is exclusively a measure of machine performance. Bien (2002), like other MIQ researchers, relied on performance to convince technology users and consumers to believe it is an exclusive measure of machine intelligence.
Moreover, computing organizations and those who regulate the industry share common knowledge, rules, roles, and scripts. They use memes to validate, distribute, and exploit machine intelligence values within their environment. Thus, memetic mechanisms are effective through systems thinking, which indicates that effective and efficient establishments should encourage personal mastery, teamwork, and shared vision (Senge, 1990). Systems thinking enhances interaction between human and technological society. Individuals who thirst for knowledge, have the necessary skills, and are able to accomplish tasks with these skills are said to be lifelong learners and, as such, are infected by a meme (Brodie, 1996; Senge, 1990). Thus, because of personal mastery, humans are more likely to expose themselves to infectious beliefs and behavior from peers.
When individuals learn, so does the organization they work for (Senge, 1990). Such learning requires a well-aligned team. Infected individuals must perform tasks as a team to uniformly accomplish things. An effective organization is one that is well aligned and focused on its vision as its team aligns. Thus, members become active and complement each other because a shared vision develops from each member’s vision. Shared vision develops from an idea that uses people to create something, such as believing in thinking machines (Brodie, 1996; Langrish, 1999; Senge, 1990). In essence, once such a force develops and is shared by more than one person, through a meme, it becomes real or a mental image that is common to all individuals in the organization. It is an acceptable vision if the mental image of each individual is similar and supports each other. That is, the meme must be identical to the source. Generally, vision is the mental image a person wants to create whereas systems thinking shows how to infect others with it appropriately and organizationally. It is only when these criteria are satisfied that organizations, through humans, could approve the development or the retirement of any machinery or propagate the view that MIQ is exclusively a measure of machine performance.
To appreciate fully our current state of information processing technology, we must examine some of the precursors to modern mechanics and electronics. The most obvious examples of intelligent systems are found in computers and mechanized applications. Calculating machines, used to perform mathematical operations such as addition (incrementing) and subtraction (decrementing), were the first machines to make a noticeable impact both socially and economically. Michael Williams (2002) describes six basic elements in the design of most mechanical calculating machines: a set-up mechanism, a selector mechanism, a registering mechanism, a carry mechanism, a control mechanism, and an erasing mechanism. The set-up mechanism allowed some sort of initial configuration or data input for the machine. The selector mechanism selected the appropriate mathematical function and instantiated the correct mechanical movements for that function. The other mechanisms controlled the various states of intermediate and final numbers within the device, eventually indicated an answer, and allowed the registering mechanism to be reset to zero (Williams 2002). Early computational devices did not require silicon or electricity, but functioned using some type of mechanical interface with moving parts such as beads, cogs, or elaborate mechanical gear systems.
The earliest computing devices represented numerical quantities in analog form. In analog mechanics, a continuous representation that does not depend on sampling is available for representing quantitative information. Because of the precise nature of such a representation, analog machines were useful in computing ballistic trajectories and other calculations, such as navigational and astronomical charting, which involved complex mathematical transformations of data using operations such as integration. The earliest analog device is probably the astrolabe, which is believed to have been in existence as early as 180 B.C. and was used for planetary calculations (Williams 194-195). Another highly sophisticated analog device, now known as the Antikythera device after the island near which it was found, was discovered in a Greek shipwreck in 1900 and is believed to be another mechanism used for astronomical computations. This device is believed to date back to Cicero and the Romans, and contained a “sophisticated set of gears called an epicyclic differential turntable” which would not again enter into mechanical devices until the middle 1500s (Williams 196-198). More modern examples of analog machines include Lord Kelvin’s tide predictor and Vannevar Bush’s differential analyzer, which used rotating disks to calculate the area underneath curves in integration problems (Williams 198-202). Another early computing device is the abacus, which was invented more than 5,000 years ago and was used to perform arithmetic functions using rows of beads (Kurzweil 262).
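What Bush’s rotating-disk integrator did mechanically can be mimicked numerically. The sketch below (the function and interval are chosen arbitrarily for illustration) uses the trapezoid rule to compute the area under a curve, the same class of problem the differential analyzer solved.

```python
def trapezoid_area(f, a, b, n=1000):
    """Approximate the area under f between a and b with the trapezoid rule,
    a discrete analogue of the analyzer's continuous mechanical integration."""
    h = (b - a) / n
    total = (f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n))
    return total * h

# Area under y = x^2 from 0 to 3 (exact value: 9).
print(round(trapezoid_area(lambda x: x * x, 0.0, 3.0), 3))  # 9.0
```

The analog machine performed this accumulation continuously, with no sampling step `h` at all, which is exactly the representational advantage the passage attributes to analog mechanics.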
More modern computing devices moved from beads to gears, with inventors such as Blaise Pascal (1623-1662), Gottfried Wilhelm Leibniz (1646-1716), and Charles Babbage (1792-1871) creating devices that “represented data through gear positioning, with data being input mechanically to establish gear positions” (Brookshear, 2000).
In 1642, Pascal invented the world’s first automatic calculating machine, called the Pascaline (Kurzweil 1999). The Pascaline was a fairly primitive machine in terms of mathematical capabilities, and was only able to add and subtract numbers. Fifty-two years later, in 1694, the Leibniz computer was created by G.W. Leibniz. Leibniz was a German mathematician, perhaps most famous for inventing calculus, who had grand ideas for the categorization of human knowledge. Martin Davis writes, “He dreamt of an encyclopedic compilation, of a universal artificial mathematical language in which each facet of knowledge could be expressed, of calculational rules which would reveal all the logical interrelationships among these propositions. Finally, he dreamed of machines capable of carrying out calculations, freeing the mind for creative thought” (Davis 1993). In the spirit of this endeavor Leibniz designed his symbols for fundamental operations in calculus to be closely related to their function. For example, the integral symbol was designed as a modified “S” to suggest “sum,” and the “d” symbol used for differentiation was chosen to reflect the idea of “difference” (Davis 1993). While his dream of creating an all-encompassing mathematical language for knowledge classification was never fully realized, he was able to further the development of the mathematical machine when he improved upon Pascal’s calculating machine by adding the features of multiplication and division. His machine, which depended on a device that eventually became known as the “Leibniz wheel,” was so well-received that this type of processing device continued to be used in calculating machines well into the twentieth century (Davis 1993). In addition, his method of algorithmically performing repetitive additions is the same type of process that is still used in our modern computers and computer software (Kurzweil).
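The principle of multiplication by repetitive addition can be sketched in a few lines. The digit-by-digit shifting below is a modern illustration of the idea (a stepped-drum machine shifted its carriage between digits); the function name and structure are assumptions made for this example.

```python
def multiply_by_repeated_addition(multiplicand, multiplier):
    """Multiply two non-negative integers using only addition.

    For each decimal digit of the multiplier, the multiplicand
    (shifted by the digit's place value) is added that many times --
    echoing the carriage shifts of a Leibniz-wheel machine.
    """
    total = 0
    shift = 0
    while multiplier > 0:
        digit = multiplier % 10          # current decimal digit
        for _ in range(digit):           # add the shifted multiplicand
            total += multiplicand * 10 ** shift
        multiplier //= 10                # move to the next digit
        shift += 1                       # shift the "carriage" one place
    return total
```

For instance, multiplying 123 by 45 performs five additions of 123 and four additions of 1230, arriving at 5535 without ever invoking a multiplication operator.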
Another important calculating machine, known as the Curta and named after its inventor Curt Herzstark, was a mechanical device that could perform basic mathematical operations and was accurate up to eleven digits (Stoll 2004). This device, invented in 1947, was the first true handheld calculating device and contained over 600 mechanical parts (Stoll 2004).
Finite-state operation suggests there may be some hardware-based limitations for computers wishing to express creativity or human-like behaviors. Unfortunately, hardware limitations are just the beginning. Many humanists, linguists, and philosophers assert that human-level intelligence is impossible using any type of computer or machinery. John Searle, for example, is one of the leading critics of the possibility of true strong artificial intelligence. Searle writes of the homunculus fallacy, in which “the idea is always to treat the brain as if there were some agent inside it using it to compute with” (“Is the Brain a Digital Computer” online). Researchers often fall back on this fallacy to sidestep Searle’s argument that a computation counts as a computation only when an observer interprets it as such.
Other philosophers such as Hubert Dreyfus have written multiple books explaining why strong artificial intelligence in computing devices is implausible and perhaps impossible to achieve. Dreyfus’ many objections to the current methodologies used by AI researchers include their unwillingness to acknowledge their failures, their ignorance about serious knowledge representation problems, and their tendency towards citing undeveloped technologies as reasons why current AI systems do not yet work as they should (1-47). For example, at one point Dreyfus writes of robotics researchers who are waiting for an adequate knowledge representation program to emerge while at the same time the knowledge representation researchers are waiting for a robotics model to help them figure out how to account for body-centered types of knowledge. Dreyfus explains, “… the field is in a loop—the computer world’s conception of a crisis” (46).
The immediate willingness of AI researchers to accept what is known as the metaphysical assumption is also cited by Dreyfus as a serious problem in AI research. This assumption, originally identified and dismantled by Martin Heidegger, states that “the background can be treated as just another object to be represented in the same sort of structured description in which everyday objects are represented …” (Dreyfus 56). He later extends this definition to also mean “that whatever is required for everyday intelligence can be objectified and represented in a belief system” (65). Dreyfus explains that such an assumption is in fact what Edmund Husserl would call an “infinite task” given our immense body of cultural and social conventions that influence how we see the world and how we draw our observations from the world (57). He explains, “Thus in the last analysis all intelligibility and all intelligent behavior must be traced back to our sense of what we are, which is, according to this argument, necessarily, on pain of regress, something we can never explicitly know” (57). Even prominent researchers with landmark AI applications have accepted the metaphysical assumption with no qualms. Joseph Weizenbaum, the creator of ELIZA, is one such example (Dreyfus 65).
Another barrier to achieving strong AI is found in the way we define commonsense knowledge. Commonsense knowledge is, of course, the knowledge we as humans consider to be common sense. Unfortunately, attempts to catalog this type of knowledge have revealed that commonsense facts and rules can extend to upwards of 10 million facts (Dreyfus xi). Dreyfus explains that the commonsense knowledge program actually emerged from researchers who were trying to program computers to understand simple children’s stories. He writes, “The programs lacked the common sense of a four-year old, and yet no one knew how to give them the background knowledge necessary for understanding even the simplest stories” (x). Thus a computer trying to understand the story of Goldilocks and the Three Bears might not understand why the three bowls of porridge situated on the table were adjacent to one another and within Goldilocks’ easy reach. For humans it is common sense that we eat together at a dinner table, but for a computer this would need to be explicitly defined and represented as knowledge. The common sense addressing the personification of the bears’ actions would also need to be programmed into the software. The problem here, obviously, is that there are also hundreds and perhaps thousands of other tiny details in this story alone that we consider to be common sense. The simple problem of showing a computer how to understand a child’s story has therefore grown exponentially more complex.
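The scale of the problem becomes apparent the moment commonsense facts are written down explicitly. In the toy fact base below, every fact and relation name is invented for illustration; the point is that each “obvious” detail must be entered by hand before a program can use it, which is why such catalogs balloon into millions of entries.

```python
# A toy commonsense fact base: (subject, relation) -> fact.
# Every entry is an invented example; a real catalog would need
# millions of such hand-encoded facts.
facts = {
    ("bowl", "is-a"): "container",
    ("porridge", "served-in"): "bowl",
    ("dinner-table", "used-for"): "eating together",
    ("bear-in-story", "can"): "act like a person",
}

def knows(subject, relation):
    """Return the stored fact, or expose a gap no human reader
    of the story would even notice was missing."""
    return facts.get((subject, relation), "unknown: fact never encoded")
```

Asking the fact base what a dinner table is used for succeeds only because someone typed that fact in; asking about any detail left out, such as what a chair is for, immediately fails, while a four-year-old would never notice the question.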
Dreyfus’ main argument, though, is that intelligence is both situated and context-dependent. He writes, “… since intelligence must be situated it cannot be separated from the rest of human life” (62). All of the other problems involved with intelligence and knowledge representation, then, including the common sense problem, the embodiment problem, and the inherent inconsistencies with accepting Martin Heidegger’s metaphysical assumption as truth, can be traced back to this idea (Heidegger 3-35). Without a cyborg or other bio-mechanical being available to house our information processing devices, the quest for strong AI may continue to be a fruitless endeavor.
An exploratory analysis of the implementation of, and barriers to, AI will be conducted through surveys and interviews with the management of the selected organization(s). The purpose of these surveys and interviews is to ascertain how the research population has implemented, or plans to implement, artificial intelligence in their routine work, and how they view its advantages and benefits. Further, potential barriers to the implementation of AI will be analyzed through semi-structured interviews with the management and IT experts of the organizations.
After the data from the surveys and interviews has been collected, it will be carefully analyzed to determine whether the results answer the research objectives of this study. The results will be illustrated through graphical presentation of the data.
Gay and Airasian (1999) define a reliable test instrument as one that provides consistent measurements. They explain that a valid test instrument collects measurement information about what it is supposed to measure. Kirakowski (n.d.) states that a reliable questionnaire is one that provides similar results throughout multiple testing sessions. He also asserts that a valid questionnaire elicits and reports the data it was designed to collect. To ensure the reliability and validity of the data, a pilot test will be conducted with colleagues at my university.
The following Gantt chart illustrates the course of research to be carried out once the proposal is accepted.
|Research plan – Master schedule expressed in weeks (assuming the project takes about 26 weeks to complete)|

|Task (dates are Mondays)|September 09|October 09|November 09|December 09|
|Prepare preliminary assessment| | | | |
|Revise assessment report| | | | |
|Prepare final research report| | | | |