CSS-LM

train.json_8 · 1 line · 14.4 KB
[{"sentiment": "COMPARE", "sentence": "Our experiments with IARPA-Babel languages show that << bottleneck features >> trained on the most similar source language perform better than [[ those ]] trained on all available source languages .", "aspect": "scii"}, {"sentiment": "COMPARE", "sentence": "Since product analysis is a generalization of factor analysis , [[ product analysis ]] always finds a higher data likelihood than << factor analysis >> .", "aspect": "scii"}, {"sentiment": "COMPARE", "sentence": "We compare two wide-coverage lexicalized grammars of English , [[ LEXSYS ]] and << XTAG >> , finding that the two grammars exploit EDOL in different ways .", "aspect": "scii"}, {"sentiment": "COMPARE", "sentence": "Specifically , this system is designed to deterministically choose between pronominalization , [[ superordinate substitution ]] , and << definite noun phrase reiteration >> .", "aspect": "scii"}, {"sentiment": "COMPARE", "sentence": "In experiments using the Penn WSJ corpus , our automatically trained [[ model ]] gave a performance of 86.6 % -LRB- F1 , sentences < 40 words -RRB- , which is comparable to that of an << unlexicalized PCFG parser >> created using extensive manual feature selection .", "aspect": "scii"}, {"sentiment": "COMPARE", "sentence": "In contrast to existing [[ methods ]] that consider only the guidance image , our << method >> can selectively transfer salient structures that are consistent in both guidance and target images .", "aspect": "scii"}, {"sentiment": "COMPARE", "sentence": "Unlike the [[ quantitative prior ]] , the << qualitative prior >> is often ignored due to the difficulty of incorporating them into the model learning process .", "aspect": "scii"}, {"sentiment": "COMPARE", "sentence": "Our experiments also show that current technology for extracting subcategorization frames initially designed for [[ written texts ]] works equally well for << spoken language >> .", "aspect": "scii"}, {"sentiment": "USED-FOR", "sentence": "This paper investigates some computational problems associated with [[ probabilistic translation models ]] that have recently been adopted in the literature on << machine translation >> .", "aspect": "scii"}, {"sentiment": "USED-FOR", "sentence": "The << recognizer >> makes use of continuous density HMM with Gaussian mixture for acoustic modeling and [[ n-gram statistics ]] estimated on the newspaper texts for language modeling .", "aspect": "scii"}, {"sentiment": "USED-FOR", "sentence": "To further scale beyond this dataset , we propose a semi-supervised learning framework to expand the pool of labeled data with << high confidence predictions >> obtained from [[ unlabeled data ]] .", "aspect": "scii"}, {"sentiment": "USED-FOR", "sentence": "In this paper we formulate story link detection and << new event detection >> as [[ information retrieval task ]] and hypothesize on the impact of precision and recall on both systems .", "aspect": "scii"}, {"sentiment": "USED-FOR", "sentence": "Previous studies have shown that allowing the [[ parser ]] to resolve << pos tag ambiguity >> does not improve performance .", "aspect": "scii"}, {"sentiment": "USED-FOR", "sentence": "We train our models on a dataset of urban aerial imagery consisting of ` same ' and ` different ' pairs , collected for this purpose , and characterize the problem via a << human study >> with [[ annotations from Amazon Mechanical Turk ]] .", "aspect": "scii"}, {"sentiment": "USED-FOR", "sentence": "We propose a solution to the challenge of the CoNLL 2008 shared task 
that uses a [[ generative history-based latent variable model ]] to predict the most likely derivation of a << synchronous dependency parser >> for both syntactic and semantic dependencies .", "aspect": "scii"}, {"sentiment": "USED-FOR", "sentence": "The << class structures >> of original samples can be characterized and deformed by [[ local metrics of the semi-Riemannian space ]] .", "aspect": "scii"}, {"sentiment": "FEATURE-OF", "sentence": "We describe both the [[ syntax ]] and semantics of a general << propositional language of context >> , and give a Hilbert style proof system for this language .", "aspect": "scii"}, {"sentiment": "FEATURE-OF", "sentence": "The base parser produces a set of candidate parses for each input sentence , with associated probabilities that define an initial [[ ranking ]] of these << parses >> .", "aspect": "scii"}, {"sentiment": "FEATURE-OF", "sentence": "The format of the << corpus >> adopts the [[ Child Language Data Exchange System -LRB- CHILDES -RRB- ]] .", "aspect": "scii"}, {"sentiment": "FEATURE-OF", "sentence": "We present an << operable definition >> of focus which is argued to be of a [[ cognito-pragmatic nature ]] and explore how it is determined in discourse in a formalized manner .", "aspect": "scii"}, {"sentiment": "FEATURE-OF", "sentence": "However , due to the [[ linearity ]] of << PCA >> , non-linearities like rotations or independently moving sub-parts in the data can deteriorate the resulting model considerably .", "aspect": "scii"}, {"sentiment": "FEATURE-OF", "sentence": "This approach is sufficient for << languages >> with little [[ inflection ]] such as English , but fails for highly inflective languages such as Czech , Russian , Slovak or other Slavonic languages .", "aspect": "scii"}, {"sentiment": "FEATURE-OF", "sentence": "We present the computational model for POS learning , and present results for applying it to << Bulgarian >> , a Slavic language with relatively [[ free word order ]] and rich morphology .", "aspect": "scii"}, {"sentiment": "FEATURE-OF", "sentence": "This paper describes the understanding process of the << spatial descriptions >> in [[ Japanese ]] .", "aspect": "scii"}, {"sentiment": "EVALUATE-FOR", "sentence": "These applications require high [[ accuracy ]] for the << estimation of the motion field >> since the most interesting parameters of the dynamical processes studied are contained in first-order derivatives of the motion field or in dynamical changes of the moving objects .", "aspect": "scii"}, {"sentiment": "EVALUATE-FOR", "sentence": "We show that there is unambiguous association between visual content and natural language descriptions in our dataset , making [[ it ]] an ideal benchmark for the << visual content captioning task >> .", "aspect": "scii"}, {"sentiment": "EVALUATE-FOR", "sentence": "[[ Speech recognition ]] experiments in simulated and real reverberant environments show the efficiency of our approach which outperforms standard << channel normaliza-tion techniques >> .", "aspect": "scii"}, {"sentiment": "EVALUATE-FOR", "sentence": "In experiments using the Penn WSJ corpus , our automatically trained model gave a performance of 86.6 % -LRB- [[ F1 ]] , sentences < 40 words -RRB- , which is comparable to that of an << unlexicalized PCFG parser >> created using extensive manual feature selection .", "aspect": "scii"}, {"sentiment": "EVALUATE-FOR", "sentence": "Experiments evaluating the effectiveness of our answer resolution algorithm show a 35.0 % relative improvement over our << baseline 
system >> in the number of questions correctly answered , and a 32.8 % improvement according to the [[ average precision metric ]] .", "aspect": "scii"}, {"sentiment": "EVALUATE-FOR", "sentence": "However , for << grammar formalisms >> which use more fine-grained grammatical categories , for example tag and ccg , [[ tagging accuracy ]] is much lower .", "aspect": "scii"}, {"sentiment": "EVALUATE-FOR", "sentence": "Following recent developments in the [[ automatic evaluation ]] of << machine translation >> and document summarization , we present a similar approach , implemented in a measure called POURPRE , for automatically evaluating answers to definition questions .", "aspect": "scii"}, {"sentiment": "EVALUATE-FOR", "sentence": "To implement the two speech enhancement systems based on real-time VC , one from NAM to a whispered voice and the other from electrolaryngeal speech to a natural voice , we propose several << methods >> for reducing computational cost while preserving [[ conversion accuracy ]] .", "aspect": "scii"}, {"sentiment": "PART-OF", "sentence": "We address appropriate [[ user modeling ]] in order to generate cooperative responses to each user in << spoken dialogue systems >> .", "aspect": "scii"}, {"sentiment": "PART-OF", "sentence": "The model acts as an interlingua within a new multi-pathway MT architecture design that also incorporates transfer and [[ direct approaches ]] into a single << system >> .", "aspect": "scii"}, {"sentiment": "PART-OF", "sentence": "Our << algorithm >> considers [[ chordal QCNs ]] and a new form of partial consistency which we define as \u25c6 G-consistency .", "aspect": "scii"}, {"sentiment": "PART-OF", "sentence": "The << co-occurrence pattern >> , a combination of [[ binary or local features ]] , is more discriminative than individual features and has shown its advantages in object , scene , and action recognition .", "aspect": "scii"}, {"sentiment": "PART-OF", "sentence": "We argue that the method is an appealing alternative - in terms of both simplicity and efficiency - to work on [[ feature selection methods ]] within << log-linear -LRB- maximum-entropy -RRB- models >> .", "aspect": "scii"}, {"sentiment": "PART-OF", "sentence": "This << system >> consists of one or more reference times and temporal perspective times , the speech time and the [[ location time ]] .", "aspect": "scii"}, {"sentiment": "PART-OF", "sentence": "This paper proposes that sentence analysis should be treated as defeasible reasoning , and presents such a treatment for Japanese sentence analyses using an argumentation system by Konolige , which is a << formalization of defeasible reasoning >> , that includes arguments and [[ defeat rules ]] that capture defeasibility .", "aspect": "scii"}, {"sentiment": "PART-OF", "sentence": "We identified two tasks : First , how [[ linguistic concepts ]] are acquired from training examples and organized in a << hierarchy >> ; this task was discussed in previous papers -LSB- Zernik87 -RSB- .", "aspect": "scii"}, {"sentiment": "HYPONYM-OF", "sentence": "The compact description of a video sequence through a single image map and a dominant motion has applications in several << domains >> , including video browsing and retrieval , compression , [[ mosaicing ]] , and visual summarization .", "aspect": "scii"}, {"sentiment": "HYPONYM-OF", "sentence": "We use a maximum likelihood criterion to train a log-linear block bigram model which uses << real-valued features >> -LRB- e.g. 
a [[ language model score ]] -RRB- as well as binary features based on the block identities themselves , e.g. block bigram features .", "aspect": "scii"}, {"sentiment": "HYPONYM-OF", "sentence": "We study the question of how to make loss-aware predictions in image segmentation settings where the << evaluation function >> is the [[ Intersection-over-Union -LRB- IoU -RRB- measure ]] that is used widely in evaluating image segmentation systems .", "aspect": "scii"}, {"sentiment": "HYPONYM-OF", "sentence": "The Interval Algebra -LRB- IA -RRB- and a subset of the << Region Connection Calculus -LRB- RCC -RRB- >> , namely [[ RCC-8 ]] , are the dominant Artificial Intelligence approaches for representing and reasoning about qualitative temporal and topological relations respectively .", "aspect": "scii"}, {"sentiment": "HYPONYM-OF", "sentence": "In cross-domain learning , there is a more challenging problem that the domain divergence involves more than one << dominant factors >> , e.g. , different [[ viewpoints ]] , various resolutions and changing illuminations .", "aspect": "scii"}, {"sentiment": "HYPONYM-OF", "sentence": "Experiments were done for two << ag-glutinative and morphologically rich languages >> : [[ Finnish ]] and Turk-ish .", "aspect": "scii"}, {"sentiment": "HYPONYM-OF", "sentence": "[[ Synchronous dependency insertion grammars ]] are a version of << synchronous grammars >> defined on dependency trees .", "aspect": "scii"}, {"sentiment": "HYPONYM-OF", "sentence": "The system participated in all the tracks of the << segmentation bakeoff >> -- PK-open , [[ PK-closed ]] , AS-open , AS-closed , HK-open , HK-closed , MSR-open and MSR - closed -- and achieved the state-of-the-art performance in MSR-open , MSR-close and PK-open tracks .", "aspect": "scii"}, {"sentiment": "CONJUNCTION", "sentence": "A Bayesian framework is used to probabilistically model : people 's trajectories and intents , [[ constraint map of the scene ]] , and << locations of functional objects >> .", "aspect": "scii"}, {"sentiment": "CONJUNCTION", "sentence": "We show that there is unambiguous association between [[ visual content ]] and << natural language descriptions >> in our dataset , making it an ideal benchmark for the visual content captioning task .", "aspect": "scii"}, {"sentiment": "CONJUNCTION", "sentence": "Each of these parsing strategies exploits different types of knowledge ; and their combination provides a strong framework in which to process [[ conjunctions ]] , << fragmentary input >> , and ungrammatical structures , as well as less exotic , grammatically correct input .", "aspect": "scii"}, {"sentiment": "CONJUNCTION", "sentence": "The compact description of a video sequence through a single image map and a dominant motion has applications in several domains , including video browsing and retrieval , [[ compression ]] , << mosaicing >> , and visual summarization .", "aspect": "scii"}, {"sentiment": "CONJUNCTION", "sentence": "The research effort focusses on developing advanced acoustic modelling , [[ rapid search ]] , and << recognition-time adaptation techniques >> for robust large-vocabulary CSR , and on applying these techniques to the new ARPA large-vocabulary CSR corpora and to military application tasks .", "aspect": "scii"}, {"sentiment": "CONJUNCTION", "sentence": "The application of the techniques to the analysis of plant growth , to [[ ocean surface microturbulence in IR image sequences ]] , and to << sediment transport >> is demonstrated .", "aspect": "scii"}, {"sentiment": 
"CONJUNCTION", "sentence": "We primarily focus on the description of the syntactically motivated relations in discourse , basing our findings on the theoretical background of the [[ Prague Dependency Treebank 2.0 ]] and the << Penn Discourse Treebank 2 >> .", "aspect": "scii"}, {"sentiment": "CONJUNCTION", "sentence": "Each of these parsing strategies exploits different types of knowledge ; and their combination provides a strong framework in which to process conjunctions , [[ fragmentary input ]] , and << ungrammatical structures >> , as well as less exotic , grammatically correct input .", "aspect": "scii"}]
