Collecting Common Sense — MIT Open Mind and How We Got Here
The following is republished from my old blog https://tuhinanshu.blogspot.com/2011/07/collecting-common-sense-mit-open-mind.html. It is presented as is, with no modifications or changes. Any potentially controversial views presented herein are more than ten years old. Please do not cancel me for it.
This is a paper I wrote for my final submission to Dr. Frank Lee’s CS630 Cognitive Modeling class in Fall 2010 at Drexel University. I have tried to format it well for the web, but some links and DOIs are missing. Expect updates soon!
Introduction
There are many definitions of intelligence, but if they all agree on one thing it is this: the measure of intelligence varies from person to person. For an average person, in layman's terms, a general class-wide notion of intelligence is common sense. Common sense is distinct from wit and wisdom: it does not make one exceptionally smart or particularly successful at accomplishing any given task in an exemplary manner, nor does it give one the instincts, far-sightedness or imagination necessary for handling long and complex situations. It simply allows us to do everyday tasks in an everyday manner. Common sense does not separate us from animals and other creatures: it is our advanced conceptual, linguistic, mathematical and other such abilities which do that. Furthermore, animals can be said to have some amount of common sense, as they familiarize themselves with laws of nature such as gravity.
Common sense is different from specialized intelligence, which has proved far easier to program. While specialized intelligence functions on rules, logic, laws and procedures, common sense is more intuitive, fuzzy, generic and automatic. Common sense is hard to introduce to machines because of the following main reasons:
- Automatic Nature: Common sense is so automatic that in most cases it goes unnoticed. While the entrepreneurial suggestion of aggregating the web's data into a knowledge base and extracting common sense from it is appealing, Minsky states [13] that common sense is never documented precisely because it is common sense. First, because of its ubiquity, most people would never explicitly talk or write about it. Second, because common sense is ingrained in our minds, it is hard to figure out the processes we use to access, acquire and manipulate common sense information.
- Genericity: There is no goal for a common sense program, no objective direction to proceed in. Expanding on and amassing vast amounts of common sense might be called a direction (not a goal), but since there is nothing for the system to do with the gathered data, the activity becomes indiscriminate. Such a system has no way to tell bad data from good data because it has no way to test its data. And as soon as a testing mechanism is introduced, the system becomes specific to that field of tests (language, subject-related, real world, etcetera) and is no longer a generic system.
- Fuzziness: Humans are better at solving everyday problems because they have common sense. That common sense allows us to eliminate unlikely processes and arrive quickly at the right decision, but also allows us to step back when we fail, and review the problem from a new perspective. Programs are written to use the best procedure on the best data sets in the best configuration, and are thus unable to resort to alternative approaches when one fails. This is less a limitation of programming itself, and more a limitation of the way we have learned to program [13].
- Reflection: A common sense program will need to look at its own actions and constantly update itself. To learn from mistakes, experience and example, a program needs an understanding of its actions and their consequences. The system would need to think about thinking. Thus, common sense is in many ways a recursive problem.
Hence, common sense is hard to think about and hard to program, and until the mid-1980s it was never even attempted. However, as the limitations of existing AI techniques were encountered again and again, scientists and researchers began to question the success of their approach and started looking at alternative ideas.
Brittleness of Intelligence
The early Artificial Intelligence (AI) researchers grappled with issues ranging from control of search to organization of memory, from knowledge representation to logical reasoning. Yet their efforts were unsuccessful when applied to the real world, which had a myriad of variables that the AI programs had not (and possibly could not have) accounted for, despite many advanced and sophisticated algorithms having been developed. The field had reached a “brittleness bottleneck” [7] which made its programs ill-suited for the real world, regardless of their sophistication in their immediate applicative fields. According to Lenat [7], while these efforts succeeded in developing languages and sets of procedures to manipulate those languages, they failed at storing the knowledge gained and represented by them, thus operating in an almost time-agnostic manner in which no long-term experience was stored.
McCarthy [9] was one of the first to suggest the significance of knowledge representations in programs and initiated work to represent concepts such as time and agenthood. Developing along this vein came the knowledge base (KB) driven expert systems of the 1980s. They served as encouraging examples of how just knowledge limited to a particular field can help enhance the success of AI (and pseudo AI) programs by orders of magnitude. However, they exhibited the same brittleness that Lenat spoke of, simply because they had a limited number of ways (rules in the KB) to do things. Humans are not brittle because they have many ways to solve a problem: ask for advice (or read up), refer to common sense, compare to similar but unrelated past experiences, and so on. Lenat uses this point to argue that if we include the “common sense” part, and a large and wide KB which spans almost all of human knowledge of reality, it is possible for machines and programs to exhibit the kind of resilience seen only in human cognition and function.
This resilience is achieved when a system has multiple ways to tackle any given problem, and can choose which to apply while still being able to revert to others in the case of failure. The ability to revert in this manner requires a number of factors [13, 7]:
- Multiple Perspectives of Data: In his model of problems [12], Minsky describes problems with a large number of highly influential factors as intractable. He says that such problems are often solved by using a different representation of the data, one which reduces either the number of factors or their influence to a more manageable size.
- Semantic Web of Concepts: Concepts need to relate to each other in a semantic web [13, 6]. Whenever we think about a concept, such as bread, we instantly know a number of things about it: bread is a type of food, bread can be eaten, butter is spread on bread, you can buy bread at the supermarket, bread is a perishable item, and so on. This is common sense. Isolated concepts that do not relate to anything are alien to us, and as soon as we face one we try to learn more about it. Until we relate a new concept to existing ones, we cannot make use of it at all.
- Alternatives for Action: For any given task, we have a number of ways to execute it. Take cutting a piece of paper in half, for example: the simplest way is to tear it by hand; the cleanest way is to draw a line through the center and have a laser cut it; the quickest way is to hold down half the paper with a ruler and pull the other side. Apart from these cases with very clear qualities, there are a myriad of ways in between, including perhaps the most common way: using a sharp, blade-like object. The most common method offers neither the cleanliness of the laser, nor the simplicity of the hand, nor the speed of the ruler, yet in real life it is expected to perform better than all these other techniques. Why? Because it satisfies and / or approximates all the desirable qualities (simplicity, accuracy, low cost, speed) sufficiently. Yet, if the blade is blunt or notched or otherwise incapacitated, we can use another strategy: folding the paper in half and then pulling softly along the crease. The ability to fall back on other options is at the heart of resilience.
- Soft Failures: Taking the previous example of cutting paper, the final technique of folding and tearing along the crease has none of the clear benefits that all the other techniques have. Yet, it is a valuable technique to know for when we lack the resources (lasers, knives) for other techniques. A common sense system must be a fail-soft system [16], meaning that any failures which may occur must not shut down the entire system. In such a system, there can be two kinds of failures: one when a given technique does not work and the system must choose an alternative to proceed, and the second when a non-optimal technique is chosen. For example, I set out to cut the paper, and I wish to do so using a pair of scissors. I look for a pair of scissors and fail to find one, hence I am unable to proceed with the first approach and it has failed. Yet, the paper can still be cut in other ways, and the world has not ended. I simply choose to cut the paper using the folding method. Now consider the opposite example: a pair of scissors was nearby when I needed to cut the paper, but for whatever reason I chose to cut it using the folding method. Even now, the objective (cut the paper in half) has been achieved sufficiently, and the system and its environment continue to function. Both are failures, but soft failures. A hard failure is one that incapacitates the system. For example, in a limited world view, if I cut my fingers with the scissors then I am incapable of (and distracted from) cutting the paper. Of course, because I am a person (and have common sense), I am capable of finding medical aid either in the form of cotton or bandages or antiseptic, or in the form of assistance by people at the hospital or a nursing home.
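The strategy-with-fallback behaviour described in these points can be sketched in a few lines of Python; the strategy functions and the SoftFailure exception are illustrative assumptions, not part of any system discussed here:

```python
class SoftFailure(Exception):
    """A strategy could not proceed; others may still work."""

def cut_with_scissors(paper):
    # Resource missing (no scissors found): a soft failure, not a crash.
    raise SoftFailure("no scissors at hand")

def cut_by_folding(paper):
    # Slower and less clean, but requires no resources at all.
    return paper + " (torn along crease)"

def cut_paper(paper, strategies):
    """Try each strategy in order; a soft failure triggers the next one."""
    for strategy in strategies:
        try:
            return strategy(paper)
        except SoftFailure:
            continue                             # fall back to the next approach
    raise RuntimeError("all strategies failed")  # only now is the failure hard

result = cut_paper("sheet", [cut_with_scissors, cut_by_folding])
```

The point of the sketch is that failure of one strategy is caught locally and the system simply moves on, exactly the fail-soft property the text attributes to human problem solving.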
Cyc
The Cyc project, started by Lenat and others in late 1984 [7], was ambitious to say the least. It aimed to collect 10^8 axioms spanning human consensus knowledge. Having started before the coming of the Internet, the task of collecting so much data was itself daunting and bewildering. For almost a decade, the data collection was manual (in representational languages). Natural Language Understanding (NLU) itself would have required a KB, which wasn't available at the time.
While insisting that Cyc focused on developing a KB, Lenat observes that the project spurred progress in knowledge representation and procedure formulation as well (together with a KB, these make up the three ingredients of AI, according to Lenat [7]). Cyc itself had parts of all three. For knowledge representation, Lenat's team developed (iteratively over many years) a language called CycL, “a frame-based language embedded in a more expressive predicate calculus framework along with features for representing defaults, for reification, and for reflection.” CycL [6] was designed keeping in mind the need for simple and clear semantics, as it was expected to be used by a variety of other programs to query and leverage the Cyc KB. Another goal was to have CycL provide certain inferential capabilities with reasonable efficiency. Other goals included supporting default knowledge (as most common sense works that way), being as expressive as first-order predicate calculus, and remaining compatible with open-ended propositional attitudes like beliefs and goals.
Because some of these goals conflict, specifically simplicity against efficiency, they decided to provide two interfaces: an epistemological level (EL), which was simple and easy to access, and a heuristic level (HL), where the terms are used in the sense proposed by McCarthy and Hayes [10]. The EL had a simple interface and was meant to give an account of the KB in a form that is easy to use and communicate. The HL had specialized semantics and structures used to perform optimized, efficient tasks, and was needed whenever inferences were required. As both accessed the same information (although represented differently), it was suggested to think of the EL as the “real” information and the HL as an optimization of the interface.
Unlike other AI projects of the time, which needed logic to be interwoven with data for the purpose of reasoning, Cyc used the data itself for reasoning. While some of the assertions in the KB are monotonic, most (~90%) are not. In an environment where most information is non-monotonic, CycL employs argumentation axioms instead of circumscription axioms for reasoning. Most of common sense is not absolute truth, ranging from simplification to plain falsehood. An argument differs from a proof in its relativity, as it is based on the information known so far. Later information cannot invalidate a proof, but it may invalidate an argument.
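The difference between an argument and a proof can be illustrated with a small sketch in which a default conclusion is defeated by later information; the triples and the penguin example are invented here for illustration and are not actual CycL:

```python
# A minimal sketch of argument-based (non-monotonic) reasoning: a conclusion
# holds only relative to what is currently known, and later assertions can
# defeat it.
kb = set()

def tell(fact):
    kb.add(fact)

def argue_flies(animal):
    """Default argument: asserted birds fly, unless a more specific
    assertion (being a penguin) defeats the default."""
    if (animal, "isa", "Penguin") in kb:    # a defeater known so far
        return False
    return (animal, "isa", "Bird") in kb    # the default argument applies

tell(("Tweety", "isa", "Bird"))
assert argue_flies("Tweety") is True        # the argument holds...

tell(("Tweety", "isa", "Penguin"))
assert argue_flies("Tweety") is False       # ...until new information defeats it
```

A proof, by contrast, would remain valid no matter what was added to the KB afterwards.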
Reasoning was important for Cyc because even though common sense is “shallow”, that “shallowness” is still one or two links away from basic knowledge; consider questions like “Did Einstein's left big toe have a toenail?”. Such results are too numerous and too infrequently needed to be cached or remembered. Also, because one or two links away from existing knowledge can easily lie a space of trillions or quadrillions of conclusions, finding the right links is important. The HL, which included many optimizations, was used for inferring answers to such questions. This reasoning leads to the inference engine, the second large part of Cyc. The inference engine used constructs such as Tell, Ask and Deny to reason from the assumptions in the KB.
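The Tell / Ask / Deny constructs can be imagined roughly as follows; this toy triple store and its '?'-variable matching are assumptions for illustration, far removed from the optimized HL machinery described above:

```python
class MicroKB:
    """A toy rendition of a Tell / Ask / Deny interface over triples."""
    def __init__(self):
        self.facts = set()

    def tell(self, fact):
        self.facts.add(fact)

    def deny(self, fact):
        self.facts.discard(fact)

    def ask(self, pattern):
        """Yield variable bindings for every fact matching the pattern;
        elements starting with '?' are variables."""
        for fact in self.facts:
            bindings = {}
            for p, f in zip(pattern, fact):
                if p.startswith("?"):
                    bindings[p] = f
                elif p != f:
                    break
            else:
                yield bindings

kb = MicroKB()
kb.tell(("Einstein", "instanceOf", "Person"))
kb.tell(("Person", "has", "Toenails"))
answers = list(kb.ask(("?x", "instanceOf", "Person")))   # [{"?x": "Einstein"}]
kb.deny(("Einstein", "instanceOf", "Person"))
assert list(kb.ask(("?x", "instanceOf", "Person"))) == []
```

Answering the toenail question would then be a matter of chaining one or two such Ask steps, which is exactly the shallow inference the text describes.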
The ontology of the Cyc KB was a definition-instance-inheritance like scheme. The universal set was Thing, which was at the most fundamental level partitioned into InternalMachineThing and RepresentationThing. Another partition of Thing was into IndividualObject and Collection. Predicates (such as makeSenseFor, instanceOf, etcetera) were strongly typed in order to leverage the optimization capabilities of HL. Further classifications included Substances, Processes, Individuals, PersistentObjects, and Events.
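A rough sketch of how such a definition-instance-inheritance scheme resolves an instance's collections, using collection names from the text; the traversal helper and the BreadLoaf47 instance are invented for illustration and are not actual Cyc machinery:

```python
# "Generalization" links: each collection points to its parent collection,
# terminating at the universal set Thing.
genls = {
    "IndividualObject": "Thing",
    "Collection": "Thing",
    "PersistentObject": "IndividualObject",
    "Event": "IndividualObject",
}
# instanceOf links a concrete individual to its immediate collection.
instance_of = {"BreadLoaf47": "PersistentObject"}

def collections_of(term):
    """Walk the instanceOf link, then genls links, up to Thing."""
    result = []
    col = instance_of.get(term, term)
    while col is not None:
        result.append(col)
        col = genls.get(col)
    return result

print(collections_of("BreadLoaf47"))
# ['PersistentObject', 'IndividualObject', 'Thing']
```

Strongly typing predicates over such collections is what lets an optimized level rule out nonsensical queries (e.g. asking whether an Event has a toenail) before any search begins.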
In summary, Cyc was the first project to tackle the problem of commonsense reasoning. Its architecture and approach were largely rational and logical, which limited the flexibility of the system. It also employed specialists and experts to build its database, a costly and time-consuming effort with limited scope for parallelism, and one that required stringent quality assurance methods to validate the input. It was a large-scale investment in knowledge infrastructure development [5]. Today, the Cyc project continues under Cycorp, Inc., which provides releases of and APIs to the Cyc database. OpenCyc, a release debuted in 2002 and re-released as OpenCyc 2.0 in 2009, is a free release which includes all of Cyc's KB and the CycL and SubL interpreters, though only as binaries without source code. This package has taxonomic assertions relating terms to each other, but does not include the complex rules available in Cyc. ResearchCyc, released in 2006, also free of charge but only to the research community, includes everything in OpenCyc plus significantly more semantic knowledge, a larger lexicon, English parsing and generation tools, and Java-based interfaces for knowledge editing and querying.
Open Mind
Over the years, Cyc had proven to be an expensive affair, employing thousands of man-hours to slowly populate the KB. With the popularity of the Internet, the viability of an open-for-public common sense acquisition technique increased.
The Open Mind Initiative began in late 1999 at the MIT Media Lab in collaboration with the IBM Thomas J. Watson Research Center, and its first public product was Open Mind Common Sense (OMCS) - 1, a prototypical website meant to gather data from the public. Its fundamental tenet: every ordinary person has the common sense we want to give our machines [17]. Inspired by distributed human-effort projects such as IMDB and Yahoo!'s Open Directory Project, the website aimed to leverage the massive parallelism and distribution of effort provided by the Internet for gathering data, with the goal of studying whether a small investment in a good collaborative tool for knowledge acquisition could support the construction of a common sense database by many people in their free time. This open, public-participatory nature of Open Mind is the single most prominent differentiator between it and Cyc. The Open Mind project was spearheaded by Push Singh.
When it came to knowledge elicitation from the general public, the creators considered many input methods, including CycL. However, they found that CycL was far too complicated for a casual participant to learn. The existing methods for minimizing the cost of participation limit inputs in ways that make inference easier while still using natural language (primarily English): examples include pull-down menus with restricted options that can be used to construct sentences, or a subset of English which maps easily onto a first-order predicate logic space. Even these were considered too costly. In a bold move, they allowed users to express themselves in free-form natural language. Instead of limiting their inputs, they engaged the user in “activities” which attempted to elicit this knowledge in a form which would (in common cases) lead to simple answers. One such activity was to present the user with a story, and ask the user to provide knowledge that would be helpful in understanding the story.
When using free-form input, they shifted the burden of interpretation from the knowledge acquisition system to the methods for using the acquired knowledge. By using information extraction methods (which had progressed much since the time of Cyc's beginnings), they were successful in extracting meaningful information from a large percentage of the plain English input without manual supervision. Numerous extraction patterns were developed to mine hundreds of types of knowledge out of the database and into simple frame representations, such as:
[a / an / the] N1 (is / are) [a / an / the] [A1] N2
Dogs are mammals
Hurricanes are powerful storms
a person [does not] want[s] to V1 A1
A person wants to be attractive
A person does not want to be cold
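The first frame above ("N1 is/are N2", with an optional adjective A1) can be approximated with a single regular expression; the pattern here is a simplified stand-in for the far more numerous and more careful patterns OMCS actually used:

```python
import re

# Matches sentences of the shape "[a/an/the] N1 is/are [a/an/the] [A1] N2".
ISA_PATTERN = re.compile(
    r"^(?:an? |the )?(\w+) (?:is|are) (?:an? |the )?(?:(\w+) )?(\w+)$",
    re.IGNORECASE,
)

def extract_isa(sentence):
    """Return an (N1, A1, N2) frame, with A1 possibly None, or None on no match."""
    m = ISA_PATTERN.match(sentence.strip())
    if not m:
        return None
    return m.groups()

print(extract_isa("Dogs are mammals"))
# ('Dogs', None, 'mammals')
print(extract_isa("Hurricanes are powerful storms"))
# ('Hurricanes', 'powerful', 'storms')
```

Running many such patterns over free-form input is what lets the burden of interpretation sit on the consuming side rather than on the contributor.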
In their original paper [17], Singh et al. discuss the next generation of Open Mind, OMCS-2, and potential applications. The most obvious applications are in search, where a statement like “my cat is sick” can be combined with knowledge such as “cats are pets”, “people care about their pets”, “cats are animals” and “veterinarians cure animals” to return results featuring veterinarians. Another application is in helping tag people in photographs. For example, if the system knows that Jane is Mary's sister, and also knows that “in a wedding, bridesmaids are often the bride's sisters”, it can pull pictures of Mary's wedding and ask “which of these bridesmaids is Jane?”
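The search example can be sketched as a breadth-first traversal over commonsense links; the triples paraphrase the statements in the text, and the link names are assumptions for illustration:

```python
from collections import deque

# Commonsense links paraphrased from the "my cat is sick" example.
links = [
    ("cat", "IsA", "pet"),
    ("cat", "IsA", "animal"),
    ("pet", "CaredForBy", "owner"),
    ("veterinarian", "Cures", "animal"),
]

def related_concepts(start):
    """Collect every concept reachable from `start` along any link."""
    neighbours = {}
    for a, _, b in links:
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)   # links traversed in both directions
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in neighbours.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print("veterinarian" in related_concepts("cat"))
# True
```

A search engine with such a network could expand the query "my cat is sick" with concepts a few links away, which is how veterinarians end up in the results.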
One of the major changes in OMCS-2 was the realization of work flow distribution. OMCS was built targeting the general public for knowledge acquisition, as opposed to Cyc, which used experts and end-users. This difference in user base resulted in a new problem: the general public would leave the system as soon as they encountered something difficult. However, it is not possible to build a system which requires no effort at all. To counter this problem the creators built a distributed work flow model based on the fact that different people like to do different things. Some people like to create new content, while others would rather evaluate, and yet others would rather refine. They designed a model in which the different stages of knowledge acquisition, namely elicitation, refinement, and restructuring, may be performed separately by different participants, resulting in a finalized piece of knowledge with its senses tagged, clarified, validated and ready to participate in some inference.
The second major change was the use of templates instead of free-form English. Based on the data collected in OMCS-1, they formulated templates corresponding to the most common forms of description and asked participants to fill in those formats instead of writing free-form text. This improved the quality of data significantly. A set of template sentences could also be used as a story, for example:
?N1 is ?ADJ
Bob is hungry
?N1 ?V [a] ?N2
Bob eats a sandwich
?N1 is not ?ADJ
Bob is not hungry
The idea behind collecting stories is that they allow us to extract larger causal and temporal constraints between states and events, which are necessary ingredients of commonsense reasoning. In a survey, Mueller [14] discovered that most systems were collecting facts and rules instead of cases and stories against which analogical reasoning could be performed. OMCS-2 was one of the first systems to collect the latter.
OMCS vs. Cyc
The essential improvements that OMCS had over the Cyc project were:
- Crowd-sourcing: OMCS started in the era of Web 1.0. While the involvement of the public with the Internet today (in Web 2.0) is much greater than back in 1999, it was still a huge improvement over the absolute nothingness of 1984. Because OMCS used the general public as the source of its KB, it had the advantage of parallelism while at the same time lowering costs significantly.
- Flexible Data Structuring: The inference engine and knowledge acquisition parts of OMCS were more flexible than their corresponding elements in Cyc. The logical constraints of CycL were the reason OMCS chose to model its own structure in the first place. The flexibility gained allowed them to extract information from the KB in ways that had not been done before, while at the same time making it much easier to add new information to the KB.
- Analogical Reasoning: Most efforts before OMCS, including Cyc, focused on collecting facts and rules: blocks of information that can only support analytical reasoning. OMCS, however, proceeded in the direction of stories and cases, blocks of information that support analogical reasoning, a form of thought much more akin to human common sense. The value of collecting stories and experiences is further elaborated in later iterations of the Open Mind projects [16].
Common Sense Computing Initiative
Over time, it became apparent to the people at MIT that OMCS could not survive as an island. The speed at which it was evolving and expanding forced them to split off parts of the project into independent projects, and to develop new projects to serve the growing needs. The Common Sense Computing Initiative (CSCI) at the MIT Media Lab is an umbrella group which encompasses all of these projects. A brief summary of each project follows:
- ConceptNet [8, 19]: ConceptNet is a toolkit aimed at giving people access to commonsense knowledge. Its data is accumulated from various sources, the largest being Open Mind. A large semantic network with over 1.6 million links relating a wide variety of objects, events, actions, places and goals through 20 different link types, mined from the OMCS corpus, ConceptNet is the freely available interface to the OMCS KB. It is available as a download, but also (and more importantly) as a web service with a REST API that anyone can use to add commonsense to their application. The perpetually updating web-based service is preferable, although the Python-driven download could also be used in some cases.
- Divisi [21]: A library for reasoning by analogy and association over semantic networks such as ConceptNet. It provides frequently needed data structures (such as Tensors and Views) and methods for measuring similarity and nearness across the network.
- Analogy Space [20]: A way to represent a KB of common sense in a multidimensional vector space. It uses dimensionality reduction to automatically discover large-scale patterns in the data collected by the common-sense knowledge resource ConceptNet. These patterns, called "eigenconcepts" or "axes", help to classify the knowledge and predict new knowledge by filling in the gaps. AnalogySpace is built on functionality provided by Divisi. A small project named Luminoso visualizes the concepts and their relation to others in the AnalogySpace.
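A minimal sketch of AnalogySpace's dimensionality reduction, assuming a tiny invented concept-by-feature matrix: a truncated SVD recovers the large-scale pattern and fills in a deliberately missing entry. The matrix, concept names and feature names are illustrative assumptions, not data from ConceptNet:

```python
import numpy as np

concepts = ["dog", "cat", "car"]
features = ["IsA/animal", "CapableOf/eat", "HasA/wheels"]
M = np.array([
    [1.0, 1.0, 0.0],   # dog
    [1.0, 0.0, 0.0],   # cat: "cat CapableOf eat" deliberately left unknown
    [0.0, 0.0, 1.0],   # car
])

# Factor the matrix and keep only the k strongest "axes" (eigenconcepts).
U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 2
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The low-rank reconstruction predicts the gap: cats probably can eat,
# cars cannot, because cats pattern with dogs along the animal axis.
assert approx[1, 1] > approx[2, 1]
```

The real AnalogySpace works the same way on a matrix with tens of thousands of concepts and features, which is why the recovered axes capture broad regularities such as animacy or desirability.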
Conclusion
During the early years of Artificial Intelligence research, encouraged by progress in specialized thinking such as playing Chess and Checkers, the optimistic researchers of the era predicted an artificially intelligent system (if not an intelligent being) within the next 20 years. However, as time dragged on and their efforts repeatedly proved futile, the hapless researchers were forced to reconsider their understanding and their strategy. AI faced a slump in enthusiasm and funding, and most research towards intelligent beings was all but given up.
Then came the era of Expert Systems: systems that, while not truly intelligent, had a repository of information on a particular domain that could be accessed almost instantaneously. Users were allowed to draw their own conclusions from that information, but the information itself was made available easily, along with some fundamental ways to manipulate it, such as statistical conclusion extraction. While the idea of an intelligent being still seemed impossible, this novel way of modeling “stunted but possible” intelligence became appealing to businesses and governments who found immediate uses for them. Funding came back, but was largely targeted towards these expert systems and knowledge base oriented projects.
Given the hindsight of today, it seems quite obvious that someone would find a way to channel all that funding towards an application which was more ambitious and reminiscent of the old-school optimism of AI, yet near enough to the general area of KBs that the sponsors would still be willing to pay. In true Black Swan fashion [22], one might be led to believe that using a KB to build a database that may facilitate sentient AI was a direct conclusion of the constraints of the time. At the time, however, the idea of using commonsense to improve AI was completely unheard of. Of course, if Lenat's intention was ever to build a truly sentient being based solely on commonsense, he has kept it to himself. He always pushed for more intelligent systems, but always for weak AI, where the term is used in the sense introduced by Searle.
Around the same time that Lenat started Cyc, Minsky published his book The Society of Mind [11]. Minsky, in association with Seymour Papert, had been developing the theory since the early 1970s. The idea had revealed itself through Minsky's work on a machine that used a robotic arm, a camera and a computer to build structures out of children's blocks. The concept was simple: the mind we perceive in ourselves and other “intelligent” beings is simply an illusion that emerges from a number of smaller and independent processes happening simultaneously inside the brain. Minsky's work has been seminal to many common sense projects.
While this paper has focused on OMCS, Open Mind was not the only commonsense project inspired (at least in part) by Marvin Minsky. Chris McKinstry, an independent computer scientist who wanted to build an intelligent system using a large database, emailed Minsky in the mid 1990s asking if it were possible “to train a neural network into something resembling human using a database of binary propositions.” Minsky's reply was affirmative, with the caveat that “the training corpus would have to be enormous.” McKinstry then decided to “spend the rest of my life building and validating the most enormous corpus I could.” But McKinstry was alone, and had neither the guidance nor the resources Singh had at MIT. After developing a number of independent theories, McKinstry started the MindPixel project in 2000, aimed at creating a KB of millions of human-validated true / false statements, or probabilistic propositions. MindPixel was the front-end of McKinstry's conception of MISTIC (Minimum Intelligent Signal Test Item Corpus), a larger initiative aimed at building the aforementioned corpus. The database and its software were known as GAC (Generic Artificial Consciousness, pronounced Jak). McKinstry held the belief that the database, if used with a neural net, would produce a body of commonsense with market value. The project never matured to that extent, and lost its free server in 2005. McKinstry was rewriting the project for a relaunch on a server in France, but that day never came. McKinstry committed suicide on January 23, 2006. The future of the project and the integrity of its data remain uncertain.
During their lifetimes, Singh and McKinstry interacted a number of times, mostly through email. McKinstry, who was known to pick fights on online discussion boards, was at times defended by Singh. Singh respected McKinstry's idea, but thought that he was going about it the wrong way. The crucial difference between the two approaches is that MindPixel used true / false assertions, which are closer to rules and facts, whereas OMCS collected stories and cases, things that Singh believed were necessary for true commonsense reasoning. Singh's own project was much more successful, both in its reception by the public (OMCS had more participation than MindPixel) and by academia, and in the backing he received from MIT. Singh was moving on to an evolution of OMCS named OMEX (Open Mind Experiences) [16], a project which asked users to write short, simple stories and then give commonsense explanations for the events in the story. At the same time, other people and projects had begun to gather and develop around Open Mind. Unfortunately, the Open Mind project received a significant blow when its fountainhead Push Singh, who was slated to become a professor in 2007 and lead the Common Sense Computing group, committed suicide on February 28, 2006. The project is currently run by the Software Agents Group at the MIT Media Lab, under the supervision of Marvin Minsky, Catherine Havasi and Henry Lieberman. For a detailed account of the lives and work of Push Singh and Chris McKinstry, I highly recommend the article “Two AI Pioneers. Two Bizarre Suicides. What Really Happened?” by David Kushner [4].
Other commonsense projects, however, are blooming. Metaweb, an American software company, launched Freebase in 2007. Described by the company as “an open shared database of the world's knowledge”, Freebase relied less on common sense and more on common knowledge. It harvested data from sources such as Wikipedia, ChefMoz, NNDB, and MusicBrainz, as well as compiling data from its users. The idea of Freebase developed from Danny Hillis' idea of Aristotle [3]. Metaweb was purchased by Google in July 2010. DBPedia [1] is another similar project that aims to extract structured semantic information from Wikipedia. This structured information is then made available to the World Wide Web, allowing users to query relationships and properties associated with Wikipedia resources, including links to other data sets.
We already see Google searches using common sense to deliver better results. When combined with the wealth of information collectible by modern smart-phones, such as location, orientation, audio and video information and movement, extremely intelligent systems can be (and are bound to be) designed. However, we still need more research into forms of data representation and information retrieval that maintain both flexibility and standards (two often opposing attributes) for such diverse and distributed systems to arise. The role of common sense systems in weak AI is abundantly clear. The question of being able to build a collective consciousness from a large database of human knowledge, however, remains unanswered. Whether the goal is even attainable remains a romantic thought, but one thing is for sure: if it is attainable, common sense will play a crucial role in achieving it.
References
- AUER, S., AND LEHMANN, J. What have Innsbruck and Leipzig in common? Extracting semantics from wiki content. In Proceedings of the European Semantic Web Conference (Innsbruck, Tyrol, Austria, 2007), A. Franconi, Ed., vol. LNCS 4519, Springer.
- GOULD, S. J. The Mismeasure of Man. W W Norton & Co, 1981.
- HILLIS, D. Aristotle: The knowledge web. Paper. www.edge.org/3rd_culture/hillis04/hillis04_index.html, May 2004.
- KUSHNER, D. Two AI Pioneers. Two bizarre suicides. What really happened? Wired Magazine. www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all, January 2008.
- LENAT, D. B. Cyc: a large-scale investment in knowledge infrastructure. Commun. ACM 38 (November 1995), 33–38.
- LENAT, D. B., AND GUHA, R. V. The evolution of CycL, the Cyc representation language. SIGART Bull. 2 (June 1991), 84–87.
- LENAT, D. B., GUHA, R. V., PITTMAN, K., PRATT, D., AND SHEPHERD, M. Cyc: toward programs with common sense. Commun. ACM 33 (August 1990), 30–49.
- LIU, H., AND SINGH, P. ConceptNet: A practical commonsense reasoning tool-kit. BT Technology Journal 22 (October 2004), 211–226.
- MCCARTHY, J. Programs with common sense. In Readings in Knowledge Representation, H. Levesque and R. Brachman, Eds. Morgan Kaufman, Los Altos, CA, 1986.
- MCCARTHY, J., AND HAYES, P. J. Some philosophical problems from the standpoint of artificial intelligence. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1987, pp. 26–45.
- MINSKY, M. The Society of Mind. Simon & Schuster, 1988.
- MINSKY, M. Future of ai technology. Toshiba Review 47 (July 1992), 7. Also at media.mit.edu/people/minsky/papers/CausalDiversity.txt.
- MINSKY, M. Commonsense-based interfaces. Communications of the ACM 43(8) (2000), 67–73.
- MUELLER, E. T. A database and lexicon of scripts for ThoughtTreasure, December 1999.
- SCHONEMANN, P. H. Psychometrics of intelligence. Encyclopedia of Social Measurement 3 (2005), 193–201.
- SINGH, P., AND BARRY, B. Collecting commonsense experiences. In Proceedings of the 2nd international conference on Knowledge capture (New York, NY, USA, 2003), K-CAP ’03, ACM, pp. 154–161.
- SINGH, P., LIN, T., MUELLER, E. T., LIM, G., PERKINS, T., AND ZHU, W. L. Open mind common sense: Knowledge acquisition from the general public. In Proceedings of the First International Conference on Ontologies, Databases, and Applications of Semantics for Large Scale Information Systems. Lecture Notes in Computer Science (Heidelberg, 2002), vol. 2519, Springer-Verlag.
- SPEARMAN, C. General intelligence objectively determined and measured. American Journal of Psychology 15 (1904), 201–293.
- SPEER, R., ARNOLD, K., AND HAVASI, C. Conceptnet. Web Site. csc.media.mit.edu/conceptnet.
- SPEER, R., HAVASI, C., ALONSO, J., ARNOLD, K., AND LIEBERMAN, H. Analogyspace. Web Site. csc.media.mit.edu/analogyspace.
- SPEER, R., HAVASI, C., ARNOLD, K., ALONSO, K., AND KRISHNAMURTHY, J. Divisi. Web Site. csc.media.mit.edu/divisi.
- TALEB, N. N. The Black Swan: The Impact of the Highly Improbable. Random House, 2007.