Joshi, Aravind K
Publications (showing 1-10 of 21 results)
Publication
Centering: A Framework for Modelling the Local Coherence of Discourse (1995)
Joshi, Aravind K.; Grosz, Barbara J.; Weinstein, Scott
This paper concerns relationships among focus of attention, choice of referring expression, and perceived coherence of utterances within a discourse segment. It presents a framework and initial theory of centering which are intended to model the local component of attentional state. The paper examines interactions between local coherence and choice of referring expressions; it argues that differences in coherence correspond in part to the inference demands made by different types of referring expressions given a particular attentional state. It demonstrates that the attentional state properties modelled by centering can account for these differences.
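
To make the notion of a centering transition concrete, here is a minimal sketch of how consecutive utterances can be classified once their forward-looking centers are given. The entity lists, their rankings, and the simple Continue/Retain/Shift labels are illustrative assumptions, not the paper's formal definitions.

```python
def backward_center(cf_prev, entities_curr):
    """Cb(Un): the highest-ranked element of Cf(Un-1) realized in Un."""
    for entity in cf_prev:               # cf_prev is ranked, most salient first
        if entity in entities_curr:
            return entity
    return None

def classify_transition(cf_prev, cb_prev, cf_curr):
    """Classify the centering transition between two adjacent utterances.

    cf_prev, cf_curr: ranked lists of forward-looking centers (Cf).
    cb_prev: backward-looking center (Cb) of the previous utterance, or None.
    """
    cb_curr = backward_center(cf_prev, set(cf_curr))
    cp_curr = cf_curr[0] if cf_curr else None    # preferred center Cp(Un)
    if cb_prev is None or cb_curr == cb_prev:
        return "CONTINUE" if cb_curr == cp_curr else "RETAIN"
    return "SHIFT"

# Toy discourse: "John saw Mary. He waved to her. She smiled at Bill."
cf_lists = [["John", "Mary"], ["John", "Mary"], ["Mary", "Bill"]]
cb = None
for prev, curr in zip(cf_lists, cf_lists[1:]):
    print(classify_transition(prev, cb, curr))
    cb = backward_center(prev, set(curr))
# prints CONTINUE, then SHIFT (given these hypothetical Cf rankings)
```
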

Publication
The Linguistic Relevance of Tree Adjoining Grammar (1985-04-01)
Kroch, Anthony S; Joshi, Aravind K
In this paper we apply a new notation for the writing of natural language grammars to some classical problems in the description of English. The formalism is the Tree Adjoining Grammar (TAG) of Joshi, Levy and Takahashi 1975, which was studied initially only for its mathematical properties but which now turns out to be an interesting candidate for the proper notation of meta-grammar, that is, for the universal grammar of contemporary linguistics. Interest in the application of the TAG formalism to the writing of natural language grammars arises out of recent work on the possibility of writing grammars for natural languages in a metatheory of restricted generative capacity (for example, Gazdar 1982 and Gazdar et al. 1985). There have also been several recent attempts to examine the linguistic metatheory of restricted grammatical formalisms, in particular, context-free grammars. The inadequacies of context-free grammars have been discussed both from the point of view of strong generative capacity (Bresnan et al. 1982) and weak generative capacity (Shieber 1984, Postal and Langendoen 1984, Higginbotham 1984, the empirical claims of the last two having been disputed by Pullum (Pullum 1984)). At this point TAG becomes interesting because, while it is more powerful than context-free grammar, it is only "mildly" so. This extra power of TAG is a direct corollary of the way TAG factors recursion and dependencies, and it can provide reasonable structural descriptions for constructions like Dutch verb raising where context-free grammar apparently fails. These properties of TAG and some of its mathematical properties were discussed by Joshi 1983.

Publication
A Default Temporal Logic for Regulatory Conformance Checking (2008-04-05)
Dinesh, Nikhil; Joshi, Aravind K; Lee, Insup; Sokolsky, Oleg
This paper considers the problem of checking whether an organization conforms to a body of regulation. Conformance is cast as a trace-checking question: the regulation is represented in a logic that is evaluated against an abstract trace or run representing the operations of an organization. We focus on a problem in designing a logic to represent regulation. A common phenomenon in regulatory texts is for sentences to refer to others for conditions or exceptions. We motivate the need for a formal representation of regulation to accommodate such references between statements. We then extend linear temporal logic to allow statements to refer to others. The semantics of the resulting logic is defined via a combination of techniques from Reiter's default logic and Kripke's theory of truth. This paper is an expanded version of [1].

Publication
Parsing Strategies With 'Lexicalized' Grammars: Application to Tree Adjoining Grammars (1988-08-01)
Schabes, Yves; Abeillé, Anne; Joshi, Aravind K
In this paper, we present a parsing strategy that arose from the development of an Earley-type parsing algorithm for TAGs (Schabes and Joshi 1988) and from some recent linguistic work in TAGs (Abeillé 1988a). In our approach, each elementary structure is systematically associated with a lexical head. These structures specify extended domains of locality (as compared to a context-free grammar) over which constraints can be stated. These constraints either hold within the elementary structure itself or specify what other structures can be composed with a given elementary structure. The 'grammar' consists of a lexicon where each lexical item is associated with a finite number of structures for which that item is the head. There are no separate grammar rules. There are, of course, 'rules' which tell us how these structures are composed. A grammar of this form will be said to be 'lexicalized'. We show that in general context-free grammars cannot be 'lexicalized'. We then show how a 'lexicalized' grammar naturally follows from the extended domain of locality of TAGs and examine briefly some of the linguistic implications of our approach. A general parsing strategy for 'lexicalized' grammars is discussed. In the first stage, the parser selects a set of elementary structures associated with the lexical items in the input sentence, and in the second stage the sentence is parsed with respect to this set. The strategy is independent of the nature of the elementary structures in the underlying grammar. However, we focus our attention on TAGs. Since the set of trees selected at the end of the first stage is finite, the parser can, in principle, use any search strategy. Thus, in particular, a top-down strategy can be used, since problems due to recursive structures are eliminated. We then explain how the Earley-type parser for TAGs can be modified to take advantage of this approach.
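
The two-stage strategy described in the last abstract can be sketched roughly as follows. The toy lexicon, the tree names, and the stubbed-out second stage are hypothetical; a real system would pair each word with actual elementary trees and run an Earley-style TAG parser over the selected set.

```python
from itertools import chain

# Toy 'lexicalized' grammar: each lexical item anchors a finite set of
# elementary structures (here just tree names standing in for the real trees).
LEXICON = {
    "John":   ["NP_John"],
    "Mary":   ["NP_Mary"],
    "loves":  ["S_loves_transitive"],
    "really": ["VP_really_adjunct"],      # an auxiliary (adjoining) tree
}

def select_structures(sentence):
    """Stage 1: select the elementary structures anchored by the input words."""
    return list(chain.from_iterable(LEXICON.get(word, []) for word in sentence.split()))

def parse(sentence):
    """Stage 2 (stub): parse using only the structures selected in stage 1.

    A real implementation would run an Earley-style TAG parser over the
    selected trees; here we only show the two-stage control flow.
    """
    selected = select_structures(sentence)
    print("selected structures:", selected)
    # ... combine the selected trees by substitution and adjunction ...

parse("John really loves Mary")
# selected structures: ['NP_John', 'VP_really_adjunct', 'S_loves_transitive', 'NP_Mary']
```
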

Publication
Unification-Based Tree Adjoining Grammars (1991-03-01)
Vijay-Shanker, K.; Joshi, Aravind K
Many current grammar formalisms used in computational linguistics take a unification-based approach that uses structures (called feature structures) containing sets of feature-value pairs. In this paper, we describe a unification-based approach to Tree Adjoining Grammars (TAG). The resulting formalism (UTAG) retains the principle of factoring dependencies and recursion that is fundamental to TAGs. We also extend the definition of UTAG to include the lexicalized approach to TAGs (see [Schabes et al., 1988]). We give some linguistic examples using UTAG and informally discuss the descriptive capacity of UTAG, comparing it with other unification-based formalisms. Finally, based on the linguistic theory underlying TAGs, we propose some stipulations that can be placed on UTAG grammars. In particular, we stipulate that the feature structures associated with the nodes in an elementary tree are bounded (there is an analogous stipulation in GPSG). Grammars that satisfy these stipulations are equivalent to TAG. Thus, even with these stipulations, UTAGs have more power than CFG-based unification grammars with the same stipulations.

Publication
Processing Crossed and Nested Dependencies: An Automaton Perspective on the Psycholinguistic Results (1989-09-01)
Joshi, Aravind K
The clause-final verbal clusters in Dutch and German (and, in general, in West Germanic languages) have been studied extensively in different syntactic theories. Standard Dutch prefers crossed dependencies (between verbs and their arguments) while Standard German prefers nested dependencies. Recently Bach, Brown, and Marslen-Wilson (1986) have investigated the consequences of these differences between Dutch and German for the processing complexity of sentences containing either crossed or nested dependencies. Stated very simply, their results show that Dutch is 'easier' than German, thus showing that the push-down automaton (PDA) cannot be the universal basis for the human parsing mechanism. They provide an explanation for the inadequacy of the PDA in terms of the kinds of partial interpretations the dependencies allow the listener to construct. Motivated by their results and their discussion of these results, we introduce a principle of partial interpretation (PPI) and present an automaton, the embedded push-down automaton (EPDA), which permits processing of crossed and nested dependencies consistent with PPI. We show that there are appropriate complexity measures (motivated by the discussion in Bach, Brown, and Marslen-Wilson (1986)) according to which the processing of crossed dependencies is easier than the processing of nested dependencies. We also discuss a case of mixed dependencies. This EPDA characterization of the processing of crossed and nested dependencies is significant because EPDAs are known to be exactly equivalent to Tree Adjoining Grammars (TAG), which are also capable of providing a linguistically motivated analysis for the crossed dependencies of Dutch (Kroch and Santorini 1988). This significance is further enhanced by the fact that two other grammatical formalisms, Head Grammars (Pollard, 1984) and Combinatory Grammars (Steedman, 1987), also capable of providing an analysis for the crossed dependencies of Dutch, have recently been shown to be equivalent to TAGs in their generative power. We have also discussed briefly some issues concerning the EPDAs and their associated grammars, and the relationship between these associated grammars and the corresponding 'linguistic' grammars.
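
The contrast between nested and crossed dependencies that motivates the EPDA can be illustrated with a toy matcher. This is not a faithful EPDA (whose store is a stack of stacks); it only shows why a single push-down stack handles the German (nested) order but not the Dutch (crossed) order, and the verb-argument pairing is invented for the example.

```python
def match_nested(nouns, verbs, selects):
    """German-style nesting (n1 n2 n3 v3 v2 v1): a single push-down stack suffices."""
    stack = list(nouns)                        # push the arguments left to right
    return all(stack and selects[v] == stack.pop() for v in verbs)

def match_crossed(nouns, verbs, selects):
    """Dutch-style crossing (n1 n2 n3 v1 v2 v3): each verb needs the *oldest*
    pending argument, which a single stack cannot expose. An EPDA-like store
    (a stack of stacks) can set pending arguments aside; this toy collapses
    that idea into first-in-first-out access."""
    pending = list(nouns)
    return all(pending and selects[v] == pending.pop(0) for v in verbs)

SELECTS = {"v1": "n1", "v2": "n2", "v3": "n3"}     # which verb selects which argument
print(match_nested(["n1", "n2", "n3"], ["v3", "v2", "v1"], SELECTS))   # True
print(match_crossed(["n1", "n2", "n3"], ["v1", "v2", "v3"], SELECTS))  # True
print(match_nested(["n1", "n2", "n3"], ["v1", "v2", "v3"], SELECTS))   # False: wrong order for a stack
```
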

Publication
Processing Crossed and Nested Dependencies: An Automaton Perspective on the Psycholinguistic Results (1988-09-01)
Joshi, Aravind K
The clause-final verbal clusters in Dutch and German (and, in general, in West Germanic languages) have been studied extensively in different syntactic theories. Standard Dutch prefers crossed dependencies (between verbs and their arguments) while Standard German prefers nested dependencies. Recently Bach, Brown, and Marslen-Wilson (1986) have investigated the consequences of these differences between Dutch and German for the processing complexity of sentences containing either crossed or nested dependencies. Stated very simply, their results show that Dutch is 'easier' than German, thus showing that the push-down automaton (PDA) cannot be the universal basis for the human parsing mechanism. They provide an explanation for the inadequacy of the PDA in terms of the kinds of partial interpretations the dependencies allow the listener to construct. Motivated by their results and their discussion of these results, we introduce a principle of partial interpretation (PPI) and present an automaton, the embedded push-down automaton (EPDA), which permits processing of crossed and nested dependencies consistent with PPI. We show that there are appropriate complexity measures (motivated by the discussion in Bach, Brown, and Marslen-Wilson (1986)) according to which the processing of crossed dependencies is easier than the processing of nested dependencies. This EPDA characterization of the processing of crossed and nested dependencies is significant because EPDAs are known to be exactly equivalent to Tree Adjoining Grammars (TAG), which are also capable of providing a linguistically motivated analysis for the crossed dependencies of Dutch (Kroch and Santorini 1988). This significance is further enhanced by the fact that two other grammatical formalisms, Head Grammars (Pollard, 1984) and Combinatory Grammars (Steedman, 1987), also capable of providing an analysis for the crossed dependencies of Dutch, have recently been shown to be equivalent to TAGs in their generative power. We have also briefly discussed some issues concerning the degree to which grammars directly encode the processing by automata, in accordance with PPI.

Publication
Relative compositionality of multi-word expressions: a study of verb-noun (V-N) collocations (2005-10-11)
Venkatapathy, Sriram; Joshi, Aravind K
Recognition of Multi-word Expressions (MWEs) and their relative compositionality are crucial to Natural Language Processing. Various statistical techniques have been proposed to recognize MWEs. In this paper, we integrate all the existing statistical features and investigate a range of classifiers for their suitability for recognizing non-compositional Verb-Noun (V-N) collocations. In the task of ranking the V-N collocations based on their relative compositionality, we show that the correlation between the ranks computed by the classifier and human ranking is significantly better than the correlation between the ranking of individual features and human ranking. We also show that the properties 'Distributed frequency of object' (as defined in [27]) and 'Nearest Mutual Information' (as adapted from [18]) contribute greatly to the recognition of the non-compositional MWEs of the V-N type and to the ranking of the V-N collocations based on their relative compositionality.
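
As a rough illustration of ranking collocations by an association statistic and comparing against a human ranking, here is a sketch using pointwise mutual information and Spearman rank correlation. The corpus counts, the collocations, and the 'human' ranking are all made up; the paper itself combines many more features and a trained classifier.

```python
import math

def pmi(count_vn, count_v, count_n, total):
    """Pointwise mutual information of a verb-noun pair (one of several
    association statistics one might use; the paper combines many features)."""
    p_vn = count_vn / total
    p_v, p_n = count_v / total, count_n / total
    return math.log2(p_vn / (p_v * p_n))

def spearman(rank_a, rank_b):
    """Spearman rank correlation (no-ties formula) between two rankings."""
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Toy V-N collocations with invented corpus counts (count_vn, count_v, count_n).
TOTAL = 1_000_000
collocations = {
    "take place":  (900, 20_000, 5_000),
    "kick bucket": (60, 3_000, 1_200),
    "eat apple":   (40, 8_000, 6_000),
}
scores = {c: pmi(*counts, TOTAL) for c, counts in collocations.items()}
feature_rank = sorted(scores, key=scores.get, reverse=True)

# Hypothetical human ranking, most idiomatic (least compositional) first.
human_rank = ["kick bucket", "take place", "eat apple"]
human_pos = {c: i + 1 for i, c in enumerate(human_rank)}
print(feature_rank)
print(spearman([human_pos[c] for c in feature_rank], [1, 2, 3]))   # 1.0 for this toy data
```
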

Publication
Living Up to Expectations: Computing Expert Responses (2007-01-01)
Joshi, Aravind K; Webber, Bonnie L; Weischedel, Ralph
In cooperative man-machine interaction, it is necessary but not sufficient for a system to respond truthfully and informatively to a user's question. In particular, if the system has reason to believe that its planned response might mislead the user, then it must block that conclusion by modifying its response. This paper focuses on identifying and avoiding potentially misleading responses by acknowledging the types of 'informing behavior' usually expected of an expert. We attempt to give a formal account of several types of assertions that should be included in response to questions concerning the achievement of some goal (in addition to the simple answer), lest the questioner otherwise be misled.

Publication
A Processing Model for Free Word Order Languages (1995-04-01)
Rambow, Owen; Joshi, Aravind K
Like many verb-final languages, German displays considerable word-order freedom: there is no syntactic constraint on the ordering of the nominal arguments of a verb, as long as the verb remains in final position. This effect is referred to as "scrambling", and is interpreted in transformational frameworks as leftward movement of the arguments. Furthermore, arguments from an embedded clause may move out of their clause; this effect is referred to as "long-distance scrambling". While scrambling has recently received considerable attention in the syntactic literature, the status of long-distance scrambling has only rarely been addressed. The reason for this is the problematic status of the data: not only is long-distance scrambling highly dependent on pragmatic context, it is also strongly subject to degradation due to processing constraints. As in the case of center-embedding, it is not immediately clear whether the observed unacceptability of highly complex sentences is due to grammatical restrictions, or whether the competence grammar places no restrictions on scrambling (so that all such sentences are in fact grammatical) and the unacceptability of some (or most) of the grammatically possible word orders is due to processing limitations. In this paper, we will argue for the second view by presenting a processing model for German.

