Indexing Commonsense Knowledge
Jonathan Gordon (Computer Science)
To enable human-level artificial intelligence, machines need access to the same kind of commonsense knowledge about the world that people have. This “knowledge acquisition bottleneck” is apparent in hard problems like question answering, coreference resolution, and syntactic parsing. To address it, computational methods have been devised to extract commonsense knowledge implicit in large collections of text. However, given many thousands of these “factoids” about the world, how do we know which are most relevant? For example, if we’re told “Sylvie is a cat,” it’s more likely to be useful to know that cats like treats than that cats have femurs. In this project, we will develop methods to index large-scale collections of world knowledge and rank the most relevant axioms for particular queries. An additional question to investigate is how best to abstract claims to an appropriate level of generality: e.g., while it’s true that an accountant can drive a Prius, this is usefully subsumed by the claim that people can drive motor vehicles.
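To make the indexing-and-ranking task concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the factoids, the support counts, and the idea of using extraction frequency as a relevance proxy are illustrative assumptions, not the project's actual method):

```python
from collections import defaultdict

# Hypothetical toy data: (concept, claim, support) triples, where
# "support" is how many text extractions back the claim.
factoids = [
    ("cat", "cats like treats", 120),
    ("cat", "cats have femurs", 3),
    ("cat", "cats chase mice", 85),
]

# Build an inverted index from each concept to its factoids.
index = defaultdict(list)
for concept, claim, support in factoids:
    index[concept].append((claim, support))

def most_relevant(concept, k=2):
    """Rank a concept's factoids by support count, a crude relevance proxy."""
    ranked = sorted(index[concept], key=lambda pair: -pair[1])
    return [claim for claim, _ in ranked[:k]]

print(most_relevant("cat"))  # well-supported claims rank first
```

A real system would replace the frequency heuristic with a learned or query-sensitive relevance score, but the index-then-rank shape of the problem is the same.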
Required: CMPU 101; good programming skills; interest in artificial intelligence, information extraction, or logic.
Preferred: CMPU 102, CMPU 145, CMPU 203. Experience with Python, Linux, databases, HTML, and Flask.
How should students express interest in this project?
Interested students should contact me by email (firstname.lastname@example.org) to arrange a brief appointment to discuss the project.