Characterizing Political Fake News in Twitter by its Meta-Data. Julio Amador Díaz López, Axel Oehmichen, Miguel Molina-Solana ( j.amador, axelfrancois.oehmichen11, [email protected] ) Imperial College London. This article presents a preliminary approach towards characterizing political fake news on Twitter through the analysis of their meta-data. In particular, we focus on more than 1.5M tweets collected on the day of the election of Donald Trump as 45th president of the United States of America. We use the meta-data embedded within those tweets in order to look for differences between tweets containing fake news and tweets not containing them. Specifically, we perform our analysis only on tweets that went viral, by studying proxies for users' exposure to the tweets, by characterizing accounts spreading fake news, and by looking at their polarization. We found significant differences in the distribution of followers, the number of URLs on tweets, and the verification of the users. Introduction While fake news, understood as deliberately misleading pieces of information, have existed since long ago (e.g. it is not unusual to receive news falsely claiming the death of a celebrity), the term reached the mainstream, particularly so in politics, during the 2016 presidential election in the United States BIBREF0 . Since then, governments and corporations alike (e.g. Google BIBREF1 and Facebook BIBREF2 ) have begun efforts to tackle fake news as they can affect political decisions BIBREF3 . Yet, the ability to define, identify and stop fake news from spreading is limited. Since the Obama campaign in 2008, social media has been pervasive in the political arena in the United States. Studies report that up to 62% of American adults receive their news from social media BIBREF4 . The wide use of platforms such as Twitter and Facebook has facilitated the diffusion of fake news by simplifying the process of receiving content with no significant third-party filtering, fact-checking or editorial judgement. Such characteristics make these platforms suitable means for sharing news that, disguised as legitimate ones, try to confuse readers. Such use and their prominent rise have been confirmed by Craig Silverman, a Canadian journalist who is a prominent figure in fake news reporting BIBREF5 : “In the final three months of the US presidential campaign, the top-performing fake election news stories on Facebook generated more engagement than the top stories from major news outlets”. Our current research hence departs from the assumption that social media is a conduit for fake news and asks the question of whether fake news (as spam was some years ago) can be identified, modelled and eventually blocked. In order to do so, we use a sample of more than 1.5M tweets collected on November 8th 2016 —election day in the United States— with the goal of identifying features that tweets containing fake news are likely to have. As such, our paper aims to provide a preliminary characterization of fake news in Twitter by looking into meta-data embedded in tweets. Considering meta-data as a relevant factor of analysis is in line with findings reported by Morris et al. BIBREF6 . We argue that understanding differences between tweets containing fake news and regular tweets will allow researchers to design mechanisms to block fake news in Twitter. 
Specifically, our goals are: 1) compare the characteristics of tweets labelled as containing fake news to tweets labelled as not containing them, 2) characterize, through their meta-data, viral tweets containing fake news and the accounts from which they originated, and 3) determine the extent to which tweets containing fake news expressed polarized political views. For our study, we used the number of retweets to single out those that went viral within our sample. Tweets within that subset (viral tweets hereafter) are varied and relate to different topics. We consider that a tweet contains fake news if its text falls within any of the following categories described by Rubin et al. BIBREF7 (see next section for the details of such categories): serious fabrication, large-scale hoaxes, jokes taken at face value, slanted reporting of real facts and stories where the truth is contentious. The dataset BIBREF8 , manually labelled by an expert, has been publicly released and is available to researchers and interested parties. From our results, the following main observations can be made: Our findings resonate with similar work done on fake news such as the one from Allcott and Gentzkow BIBREF9 . Therefore, even if our study is a preliminary attempt at characterizing fake news on Twitter using only their meta-data, our results provide external validity to previous research. Moreover, our work not only stresses the importance of using meta-data, but also underscores which parameters may be useful to identify fake news on Twitter. The rest of the paper is organized as follows. The next section briefly discusses where this work is located within the literature on fake news and contextualizes the type of fake news we are studying. Then, we present our hypotheses, the data, and the methodology we follow. Finally, we present our findings, conclusions of this study, and future lines of work. Defining Fake news Our research is connected to different strands of academic knowledge related to the phenomenon of fake news. In relation to Computer Science, a recent survey by Conroy and colleagues BIBREF10 identifies two popular approaches to single out fake news. On the one hand, the authors pointed to linguistic approaches consisting of using text, its linguistic characteristics and machine learning techniques to automatically flag fake news. On the other, these researchers underscored the use of network approaches, which make use of network characteristics and meta-data, to identify fake news. With respect to social sciences, efforts from psychology, political science and sociology have been dedicated to understanding why people consume and/or believe misinformation BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . Most of these studies consistently reported that psychological biases such as priming effects and confirmation bias play an important role in people's ability to discern misinformation. In relation to the production and distribution of fake news, a recent paper in the field of Economics BIBREF9 found that most fake news sites use names that resemble those of legitimate organizations, and that sites supplying fake news tend to be short-lived. These authors also noticed that fake news items are more likely to be shared than legitimate articles coming from trusted sources, and that they tend to exhibit a larger level of polarization. How to define fake news conceptually is a serious and unresolved issue. 
As the focus of our work is not to shed light on this issue, we will rely on work by other authors to describe what we consider as fake news. In particular, we use the categorization provided by Rubin et al. BIBREF7 . The five categories they described, together with illustrative examples from our dataset, are as follows: Research Hypotheses Previous works in the area (presented in the section above) suggest that there may be important determinants for the adoption and diffusion of fake news. Our hypotheses build on them and identify three important dimensions that may help distinguish fake news from legitimate information: Taking those three dimensions into account, we propose the following hypotheses about the features that we believe can help to distinguish tweets containing fake news from those not containing them. They will later be tested over our collected dataset. Exposure. Characterization. Polarization. Data and Methodology For this study, we collected publicly available tweets using Twitter's public API. Given the nature of the data, it is important to emphasize that such tweets are subject to Twitter's terms and conditions which indicate that users consent to the collection, transfer, manipulation, storage, and disclosure of data. Therefore, we do not expect ethical, legal, or social implications from the usage of the tweets. Our data was collected using search terms related to the presidential election held in the United States on November 8th 2016. Particularly, we queried Twitter's streaming API, more precisely the filter endpoint of the streaming API, using the following hashtags and user handles: #MyVote2016, #ElectionDay, #electionnight, @realDonaldTrump and @HillaryClinton. The data collection ran for just one day (Nov 8th 2016). One straightforward way of sharing information on Twitter is by using the retweet functionality, which enables a user to share an exact copy of a tweet with their followers. Among the reasons for retweeting, Boyd et al. BIBREF15 reported the will to: 1) spread tweets to a new audience, 2) show one's role as a listener, and 3) agree with someone or validate the thoughts of others. As indicated, our initial interest is to characterize tweets containing fake news that went viral (as they are the most harmful ones, reaching a wider audience), and to understand how they differ from other viral tweets (that do not contain fake news). For our study, we consider that a tweet went viral if it was retweeted more than 1000 times. Once we had the dataset of viral tweets, we eliminated duplicates (some of the tweets were collected several times because they had several handles), and an expert manually inspected the text field within the tweets to label them as containing fake news, or not containing them (according to the characterization presented before). This annotated dataset BIBREF8 is publicly available and can be freely reused. Finally, we use the following fields within tweets (from the ones returned by Twitter's API) to compare their distributions and look for differences between viral tweets containing fake news and viral tweets not containing fake news: In the following section, we provide graphical descriptions of the distribution of each of the identified attributes for the two sets of tweets (those labelled as containing fake news and those labelled as not containing them). Where appropriate, we normalized and/or took logarithms of the data for better representation. 
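To make this comparison concrete, the sketch below illustrates how viral tweets could be singled out and how the distribution of a given meta-data field could be compared across the two labelled groups, using the two-sample Kolmogorov-Smirnov test introduced in the next section. It assumes the labelled tweets are available in a pandas DataFrame; the column names (retweet_count, has_fake_news, and the meta-data fields) are illustrative and do not necessarily match the schema of the released dataset.

```python
# Minimal sketch: filter viral tweets and compare a meta-data field across
# the two labelled groups. Column names are assumptions for illustration.
import pandas as pd
from scipy.stats import ks_2samp

VIRAL_THRESHOLD = 1000  # retweet count above which a tweet is considered viral


def compare_metadata(tweets: pd.DataFrame, fields):
    """Two-sample Kolmogorov-Smirnov test per meta-data field, comparing
    viral tweets labelled as fake news against the remaining viral tweets."""
    viral = tweets[tweets["retweet_count"] > VIRAL_THRESHOLD]
    fake = viral[viral["has_fake_news"] == 1]
    other = viral[viral["has_fake_news"] == 0]
    results = {}
    for field in fields:
        statistic, p_value = ks_2samp(fake[field].dropna(), other[field].dropna())
        results[field] = (statistic, p_value)  # small p-value: distributions differ
    return results


# Example usage with some of the fields discussed later in the paper:
# compare_metadata(tweets_df, ["followers_count", "friends_count",
#                              "num_urls", "num_hashtags"])
```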
To gain a better understanding of the significance of those differences, we use the Kolmogorov-Smirnov test with the null hypothesis that both distributions are equal. Results The sample collected consisted of 1 785 855 tweets published by 848 196 different users. Within our sample, we identified 1327 tweets that went viral (retweeted more than 1000 times by the 8th of November 2016) produced by 643 users. This small subset of viral tweets was retweeted on 290 841 occasions in the observed time window. The 1327 `viral' tweets were manually annotated as containing fake news or not. The annotation was carried out by a single person in order to obtain a consistent annotation throughout the dataset. Out of those 1327 tweets, we identified 136 as potentially containing fake news (according to the categories previously described), and the rest were classified as `not containing fake news'. Note that the categorization is far from being perfect given the ambiguity of fake news themselves and the human judgement involved in the process of categorization. Because of this, we do not claim that this dataset can be considered a ground truth. The following results detail characteristics of these tweets along the previously mentioned dimensions. Table TABREF23 reports the actual differences (together with their associated p-values) of the distributions of viral tweets containing fake news and viral tweets not containing them for every variable considered. Exposure Figure FIGREF24 shows that, in contrast to other kinds of viral tweets, those containing fake news were created more recently. As such, Twitter users were exposed to fake news related to the election for a shorter period of time. However, in terms of retweets, Figure FIGREF25 shows no apparent difference between tweets containing fake news and those not containing them. That is confirmed by the Kolmogorov-Smirnov test, which does not reject the hypothesis that the associated distributions are equal. In relation to the number of favourites, users that generated at least one viral tweet containing fake news appear to have, on average, fewer favourites than users that do not generate them. Figure FIGREF26 shows the distribution of favourites. Despite the apparent visual differences, the differences are not statistically significant. Finally, the number of hashtags used in viral fake news appears to be larger than that in other viral tweets. Figure FIGREF27 shows the density distribution of the number of hashtags used. However, once again, we were not able to find any statistical difference between the average number of hashtags in a viral tweet and the average number of hashtags in viral fake news. Characterization We found that 82 users within our sample were spreading fake news (i.e. they produced at least one tweet which was labelled as fake news). Out of those, 34 had verified accounts, and the rest were unverified. Of the 48 unverified accounts, 6 have been suspended by Twitter at the time of writing, 3 tried to imitate legitimate accounts of others, and 4 have already been deleted. Figure FIGREF28 shows the proportion of verified accounts to unverified accounts for viral tweets (containing fake news vs. not containing fake news). From the chart, it is clear that there is a higher chance of fake news coming from unverified accounts. Turning to friends, accounts distributing fake news appear to have, on average, the same number of friends as those distributing tweets with no fake news. 
However, the density distribution of friends from the accounts (Figure FIGREF29 ) shows that there is indeed a statistically significant difference in their distributions. If we take into consideration the number of followers, accounts generating viral tweets with fake news do have a very different distribution on this dimension, compared to those accounts generating viral tweets with no fake news (see Figure FIGREF30 ). In fact, such differences are statistically significant. A useful representation for friends and followers is the friends/followers ratio. Figures FIGREF31 and FIGREF32 show this representation. Notice that accounts spreading viral tweets with fake news have, on average, a larger friends/followers ratio, whereas the corresponding values for accounts not generating fake news are more evenly distributed. With respect to the number of mentions, Figure FIGREF33 shows that viral tweets labelled as containing fake news appear to use mentions to other users less frequently than viral tweets not containing fake news. In other words, tweets containing fake news mostly contain one mention, whereas other tweets tend to have two. Such differences are statistically significant. The analysis (Figure FIGREF34 ) of the presence of media in the tweets in our dataset shows that tweets labelled as not containing fake news appear to present more media elements than those labelled as fake news. However, the difference is not statistically significant. On the other hand, Figure FIGREF35 shows that viral tweets containing fake news appear to include more URLs to other sites than viral tweets that do not contain fake news. In fact, the difference between the two distributions is statistically significant (assuming INLINEFORM0 ). Polarization Finally, manual inspection of the text field of those viral tweets labelled as containing fake news shows that 117 such tweets expressed support for Donald Trump, while only 8 supported Hillary Clinton. The remaining tweets contained fake news related to other topics, not expressing support for either of the candidates. Discussion As a summary, and constrained by our existing dataset, we made the following observations regarding differences between viral tweets labelled as containing fake news and viral tweets labelled as not containing them: These findings (related to our initial hypotheses in Table TABREF44 ) clearly suggest that there are specific pieces of meta-data about tweets that may allow the identification of fake news. One such parameter is the time of exposure. Viral tweets containing fake news are shorter-lived than those containing other types of content. This notion seems to resonate with our findings showing that a number of accounts spreading fake news had already been deleted or suspended by Twitter by the time of writing. If one considers that researchers using different data have found similar results BIBREF9 , it appears that the lifetime of accounts, together with the age of the questioned viral content, could be useful to identify fake news. In the light of this finding, newly created accounts should probably be put under higher scrutiny than older ones. This, in fact, would be a nice a priori bias for a Bayesian classifier. Accounts spreading fake news appear to have a larger friends/followers ratio (i.e. they have, on average, the same number of friends but a smaller number of followers) than those spreading viral content only. 
Together with the fact that, on average, tweets containing fake news have more URLs than those spreading viral content, it is possible to hypothesize that both the friends/followers ratio of the account producing a viral tweet and the number of URLs contained in such a tweet could be useful to single out fake news on Twitter. Not only that, but our finding related to the number of URLs is in line with intuitions behind the incentives to create fake news commonly found in the literature BIBREF9 (in particular that of obtaining revenue through click-through advertising). Finally, it is interesting to notice that the content of viral fake news was highly polarized. This finding is also in line with those of Allcott and Gentzkow BIBREF9 . This feature suggests that textual sentiment analysis of the content of tweets (as most researchers do), together with the above-mentioned parameters from meta-data, may prove useful for identifying fake news. Conclusions With the election of Donald Trump as President of the United States, the concept of fake news has become a broadly-known phenomenon that is getting tremendous attention from governments and media companies. We have presented a preliminary study on the meta-data of a publicly available dataset of tweets that became viral during the day of the 2016 US presidential election. Our aim is to advance the understanding of which features might be characteristic of viral tweets containing fake news in comparison with viral tweets without fake news. We believe that the only way to automatically identify those deceitful tweets (i.e. containing fake news) is by actually understanding and modelling them. Only then can the processes of tagging and blocking these tweets be successfully automated. In the same way that spam was fought, we anticipate fake news will suffer a similar evolution, with social platforms implementing tools to deal with them. With most works so far focusing on the actual content of the tweets, ours is a novel attempt from a different, but also complementary, angle. Within the dataset used, we found there are differences around exposure, characteristics of accounts spreading fake news and the tone of the content. Those findings suggest that it is indeed possible to model and automatically detect fake news. We plan to replicate and validate our experiments on an extended sample of tweets (covering up to 4 months after the US election), and to test the predictive power of the features we found relevant within our sample. Author Disclosure Statement No competing financial interests exist.
How is the ground truth for fake news established?
Ground truth is not established in the paper
3,141
qasper
4k
Introduction Recently, deep learning algorithms have successfully addressed problems in various fields, such as image classification, machine translation, speech recognition, text-to-speech generation and other machine learning related areas BIBREF0 , BIBREF1 , BIBREF2 . Similarly, substantial improvements in performance have been obtained when deep learning algorithms have been applied to statistical speech processing BIBREF3 . These fundamental improvements have led researchers to investigate additional topics related to human nature, which have long been objects of study. One such topic involves understanding human emotions and reflecting them through machine intelligence, such as emotional dialogue models BIBREF4 , BIBREF5 . In developing emotionally aware intelligence, the very first step is building robust emotion classifiers that display good performance regardless of the application; this outcome is considered to be one of the fundamental research goals in affective computing BIBREF6 . In particular, the speech emotion recognition task is one of the most important problems in the field of paralinguistics. This field has recently broadened its applications, as it is a crucial factor in optimal human-computer interactions, including dialog systems. The goal of speech emotion recognition is to predict the emotional content of speech and to classify speech according to one of several labels (i.e., happy, sad, neutral, and angry). Various types of deep learning methods have been applied to increase the performance of emotion classifiers; however, this task is still considered to be challenging for several reasons. First, insufficient data for training complex neural network-based models are available, due to the costs associated with human involvement. Second, the characteristics of emotions must be learned from low-level speech signals. Feature-based models display limited performance when applied to this problem. To overcome these limitations, we propose a model that uses high-level text transcription, as well as low-level audio signals, to utilize the information contained within low-resource datasets to a greater degree. Given recent improvements in automatic speech recognition (ASR) technology BIBREF7 , BIBREF2 , BIBREF8 , BIBREF9 , speech transcription can be carried out using audio signals with considerable accuracy. The emotional content of speech is clearly indicated by the emotion words contained in a sentence BIBREF10 , such as “lovely” and “awesome,” which carry strong emotions compared to generic (non-emotion) words, such as “person” and “day.” Thus, we hypothesize that the speech emotion recognition model will benefit from the incorporation of high-level textual input. In this paper, we propose a novel deep dual recurrent encoder model that simultaneously utilizes audio and text data in recognizing emotions from speech. Extensive experiments are conducted to investigate the efficacy and properties of the proposed model. Our proposed model outperforms previous state-of-the-art methods, achieving accuracies ranging from 68.8% to 71.8% when applied to the IEMOCAP dataset, which is one of the most well-studied datasets. Based on an error analysis of the models, we show that our proposed model accurately identifies emotion classes. Moreover, the neutral class misclassification bias frequently exhibited by previous models, which focus on audio features, is less pronounced in our model. 
Related work Classical machine learning algorithms, such as hidden Markov models (HMMs), support vector machines (SVMs), and decision tree-based methods, have been employed in speech emotion recognition problems BIBREF11 , BIBREF12 , BIBREF13 . Recently, researchers have proposed various neural network-based architectures to improve the performance of speech emotion recognition. An initial study utilized deep neural networks (DNNs) to extract high-level features from raw audio data and demonstrated its effectiveness in speech emotion recognition BIBREF14 . With the advancement of deep learning methods, more complex neural-based architectures have been proposed. Convolutional neural network (CNN)-based models have been trained on information derived from raw audio signals using spectrograms or audio features such as Mel-frequency cepstral coefficients (MFCCs) and low-level descriptors (LLDs) BIBREF15 , BIBREF16 , BIBREF17 . These neural network-based models are combined to produce higher-complexity models BIBREF18 , BIBREF19 , and these models achieved the best-recorded performance when applied to the IEMOCAP dataset. Another line of research has focused on adopting variant machine learning techniques combined with neural network-based models. One researcher utilized the multiobject learning approach and used gender and naturalness as auxiliary tasks so that the neural network-based model learned more features from a given dataset BIBREF20 . Another researcher investigated transfer learning methods, leveraging external data from related domains BIBREF21 . As emotional dialogue is composed of sound and spoken content, researchers have also investigated the combination of acoustic features and language information, built belief network-based methods of identifying emotional key phrases, and assessed the emotional salience of verbal cues from both phoneme sequences and words BIBREF22 , BIBREF23 . However, none of these studies have utilized information from speech signals and text sequences simultaneously in an end-to-end learning neural network-based model to classify emotions. Model This section describes the methodologies that are applied to the speech emotion recognition task. We start by introducing the recurrent encoder model for the audio and text modalities individually. We then propose a multimodal approach that encodes both audio and textual information simultaneously via a dual recurrent encoder. Audio Recurrent Encoder (ARE) Motivated by the architecture used in BIBREF24 , BIBREF25 , we build an audio recurrent encoder (ARE) to predict the class of a given audio signal. Once MFCC features have been extracted from an audio signal, a subset of the sequential features is fed into the RNN (i.e., gated recurrent units (GRUs)), which leads to the formation of the network's internal hidden state INLINEFORM0 to model the time series patterns. This internal hidden state is updated at each time step with the input data INLINEFORM1 and the hidden state of the previous time step INLINEFORM2 as follows: DISPLAYFORM0 where INLINEFORM0 is the RNN function with weight parameter INLINEFORM1 , INLINEFORM2 represents the hidden state at t- INLINEFORM3 time step, and INLINEFORM4 represents the t- INLINEFORM5 MFCC features in INLINEFORM6 . After encoding the audio signal INLINEFORM7 with the RNN, the last hidden state of the RNN, INLINEFORM8 , is considered to be the representative vector that contains all of the sequential audio data. 
This vector is then concatenated with another prosodic feature vector, INLINEFORM9 , to generate a more informative vector representation of the signal, INLINEFORM10 . The MFCC and the prosodic features are extracted from the audio signal using the openSMILE toolkit BIBREF26 , INLINEFORM11 , respectively. Finally, the emotion class is predicted by applying the softmax function to the vector INLINEFORM12 . For a given audio sample INLINEFORM13 , we assume that INLINEFORM14 is the true label vector, which contains all zeros except for a one at the position of the correct class, and INLINEFORM15 is the predicted probability distribution from the softmax layer. The training objective then takes the following form: DISPLAYFORM0 where INLINEFORM0 is the calculated representative vector of the audio signal with dimensionality INLINEFORM1 . The INLINEFORM2 and the bias INLINEFORM3 are learned model parameters. C is the total number of classes, and N is the total number of samples used in training. The upper part of Figure shows the architecture of the ARE model. Text Recurrent Encoder (TRE) We assume that speech transcripts can be extracted from audio signals with high accuracy, given the advancement of ASR technologies BIBREF7 . We attempt to use the processed textual information as another modality in predicting the emotion class of a given signal. To use textual information, a speech transcript is tokenized and indexed into a sequence of tokens using the Natural Language Toolkit (NLTK) BIBREF27 . Each token is then passed through a word-embedding layer that converts a word index to a corresponding 300-dimensional vector that contains additional contextual meaning between words. The sequence of embedded tokens is fed into a text recurrent encoder (TRE) in the same way that the audio MFCC features are encoded using the ARE represented by equation EQREF2 . In this case, INLINEFORM0 is the t- INLINEFORM1 embedded token from the text input. Finally, the emotion class is predicted from the last hidden state of the text-RNN using the softmax function. We use the same training objective as the ARE model, and the predicted probability distribution for the target class is as follows: DISPLAYFORM0 where INLINEFORM0 is the last hidden state of the text-RNN, INLINEFORM1 , and the INLINEFORM2 and bias INLINEFORM3 are learned model parameters. The lower part of Figure shows the architecture of the TRE model. Multimodal Dual Recurrent Encoder (MDRE) We present a novel architecture called the multimodal dual recurrent encoder (MDRE) to overcome the limitations of existing approaches. In this study, we consider multiple modalities, such as MFCC features, prosodic features and transcripts, which contain sequential audio information, statistical audio information and textual information, respectively. These types of data are the same as those used in the ARE and TRE cases. The MDRE model employs two RNNs to encode data from the audio signal and textual inputs independently. The audio-RNN encodes MFCC features from the audio signal using equation EQREF2 . The last hidden state of the audio-RNN is concatenated with the prosodic features to form the final vector representation INLINEFORM0 , and this vector is then passed through a fully connected neural network layer to form the audio encoding vector A. On the other hand, the text-RNN encodes the word sequence of the transcript using equation EQREF2 . 
The final hidden states of the text-RNN are also passed through another fully connected neural network layer to form a textual encoding vector T. Finally, the emotion class is predicted by applying the softmax function to the concatenation of the vectors A and T. We use the same training objective as the ARE model, and the predicted probability distribution for the target class is as follows: DISPLAYFORM0 where INLINEFORM0 is the feed-forward neural network with weight parameter INLINEFORM1 , and INLINEFORM2 , INLINEFORM3 are the final encoding vectors from the audio-RNN and text-RNN, respectively. INLINEFORM4 and the bias INLINEFORM5 are learned model parameters. Multimodal Dual Recurrent Encoder with Attention (MDREA) Inspired by the concept of the attention mechanism used in neural machine translation BIBREF28 , we propose a novel multimodal attention method to focus on the specific parts of a transcript that contain strong emotional information, conditioning on the audio information. Figure shows the architecture of the MDREA model. First, the audio data and text data are encoded with the audio-RNN and text-RNN using equation EQREF2 . We then consider the final audio encoding vector INLINEFORM0 as a context vector. As seen in equation EQREF9 , at each time step t, the dot product between the context vector e and the hidden state of the text-RNN at the t-th step, INLINEFORM1 , is evaluated to calculate a similarity score INLINEFORM2 . Using this score INLINEFORM3 as a weight parameter, the weighted sum of the sequence of hidden states of the text-RNN, INLINEFORM4 , is calculated to generate an attention-application vector Z. This attention-application vector is concatenated with the final encoding vector of the audio-RNN INLINEFORM5 (equation EQREF7 ), which will be passed through the softmax function to predict the emotion class. We use the same training objective as the ARE model, and the predicted probability distribution for the target class is as follows: DISPLAYFORM0 where INLINEFORM0 and the bias INLINEFORM1 are learned model parameters. Dataset We evaluate our model using the Interactive Emotional Dyadic Motion Capture (IEMOCAP) BIBREF18 dataset. This dataset was collected following theatrical theory in order to simulate natural dyadic interactions between actors. We use categorical evaluations with majority agreement. We use only four emotional categories (happy, sad, angry, and neutral) to compare the performance of our model with other research using the same categories. The IEMOCAP dataset includes five sessions, and each session contains utterances from two speakers (one male and one female). This data collection process resulted in 10 unique speakers. For consistent comparison with previous work, we merge the excitement dataset with the happiness dataset. The final dataset contains a total of 5531 utterances (1636 happy, 1084 sad, 1103 angry, 1708 neutral). Feature extraction To extract speech information from audio signals, we use MFCC values, which are widely used in analyzing audio signals. The MFCC feature set contains a total of 39 features, which include 12 MFCC parameters (1-12) from the 26 Mel-frequency bands and a log-energy parameter, plus 13 delta and 13 acceleration coefficients. The frame size is set to 25 ms at a rate of 10 ms with the Hamming function. The number of sequential MFCC steps varies according to the length of each wave file. 
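As a rough illustration of this 39-dimensional feature set, the sketch below computes 13 cepstral coefficients (with the 0th coefficient standing in for the log-energy term), plus their deltas and accelerations, over 25 ms frames with a 10 ms step and a Hamming window. The paper extracts these features with the openSMILE toolkit; librosa is used here only as an illustrative stand-in, so the exact values will differ.

```python
# Sketch of 39-dimensional MFCC features (13 cepstral + 13 delta + 13
# acceleration coefficients) under the assumptions stated above.
import numpy as np
import librosa


def extract_mfcc_features(wav_path: str, sr: int = 16000) -> np.ndarray:
    y, sr = librosa.load(wav_path, sr=sr)
    n_fft = int(0.025 * sr)        # 25 ms analysis window
    hop_length = int(0.010 * sr)   # 10 ms frame step
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_mels=26,
                                n_fft=n_fft, hop_length=hop_length,
                                window="hamming")
    delta = librosa.feature.delta(mfcc)           # 13 delta coefficients
    accel = librosa.feature.delta(mfcc, order=2)  # 13 acceleration coefficients
    features = np.vstack([mfcc, delta, accel])    # shape: (39, num_frames)
    return features.T                             # (num_frames, 39), one row per frame
```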
To extract additional information from the data, we also use prosodic features, which have shown effectiveness in affective computing. The prosodic features are composed of 35 features, which include the F0 frequency, the voicing probability, and the loudness contours. All of these MFCC and prosodic features are extracted from the data using the openSMILE toolkit BIBREF26 . Implementation details Among the variants of the RNN function, we use GRUs as they yield comparable performance to that of the LSTM and include a smaller number of weight parameters BIBREF29 . We use a max encoder step of 750 for the audio input, based on the implementation choices presented in BIBREF30 , and 128 for the text input because it covers the maximum length of the transcripts. The vocabulary size of the dataset is 3,747, including the “_UNK_” token, which represents unknown words, and the “_PAD_” token, which is used to indicate padding information added while preparing mini-batch data. The number of hidden units and the number of layers in the RNN for each model (ARE, TRE, MDRE and MDREA) are selected based on extensive hyperparameter search experiments. The weights of the hidden units are initialized using orthogonal weights BIBREF31 , and the text embedding layer is initialized from pretrained word-embedding vectors BIBREF32 . In preparing the textual dataset, we first use the released transcripts of the IEMOCAP dataset for simplicity. To investigate the practical performance, we then process all of the IEMOCAP audio data using an ASR system (the Google Cloud Speech API) and retrieve the transcripts. The performance of the Google ASR system is reflected by its word error rate (WER) of 5.53%. Performance evaluation As the dataset is not explicitly split beforehand into training, development, and testing sets, we perform 5-fold cross validation to determine the overall performance of the model. The data in each fold are split into training, development, and testing datasets (8:0.5:1.5, respectively). After training the model, we measure the weighted average precision (WAP) over the 5-fold dataset. We train and evaluate the model 10 times per fold, and the model performance is assessed in terms of the mean score and standard deviation. We examine the WAP values, which are shown in Table 1. First, our ARE model shows the baseline performance because we use minimal audio features, such as the MFCC and prosodic features, with simple architectures. On the other hand, the TRE model shows a higher performance gain compared to the ARE. From this result, we note that textual data are informative in emotion prediction tasks, and the recurrent encoder model is effective in understanding these types of sequential data. Second, the newly proposed model, MDRE, shows a substantial performance gain. It thus achieves the state-of-the-art performance with a WAP value of 0.718. This result shows that multimodal information is a key factor in affective computing. Lastly, the attention model, MDREA, also outperforms the best existing research results (WAP 0.690 to 0.688) BIBREF19 . However, the MDREA model does not match the performance of the MDRE model, even though it utilizes a more complex architecture. We believe that this result arises because insufficient data are available to properly determine the complex model parameters in the MDREA model. Moreover, we presume that this model will show better performance when the audio signals are aligned with the textual sequence while applying the attention mechanism. 
We leave the implementation of this point as a future research direction. To investigate the practical performance of the proposed models, we conduct further experiments with the ASR-processed transcript data (see “-ASR” models in Table ). The processed transcripts have a word error rate of 5.53%. The TRE-ASR, MDRE-ASR and MDREA-ASR models show degraded performance compared to that of the TRE, MDRE and MDREA models. However, the performance of these models is still competitive; in particular, the MDRE-ASR model outperforms the previous best-performing model, 3CNN-LSTM10H (WAP 0.691 to 0.688). Error analysis We analyze the predictions of the ARE, TRE, and MDRE models. Figure shows the confusion matrix of each model. The ARE model (Fig. ) incorrectly classifies most instances of happy as neutral (43.51%); thus, it shows reduced accuracy (35.15%) in predicting the happy class. Overall, most of the emotion classes are frequently confused with the neutral class. This observation is in line with the findings of BIBREF30 , who noted that the neutral class is located in the center of the activation-valence space, complicating its discrimination from the other classes. Interestingly, the TRE model (Fig. ) shows a much greater gain in predicting the happy class when compared to the ARE model (35.15% to 75.73%). This result seems plausible because the model can benefit from the differences among the distributions of words in happy and neutral expressions, which gives more emotional information to the model than the audio signal data does. On the other hand, it is striking that the TRE model incorrectly predicts instances of the sad class as the happy class 16.20% of the time, even though these emotional states are opposites of one another. The MDRE model (Fig. ) compensates for the weaknesses of the previous two models (ARE and TRE) and benefits from their strengths to a surprising degree. The values arranged along the diagonal axis show that the accuracies of all correctly predicted classes have increased. Furthermore, the occurrence of the incorrect “sad-to-happy” cases in the TRE model is reduced from 16.20% to 9.15%. Conclusions In this paper, we propose a novel multimodal dual recurrent encoder model that simultaneously utilizes text data, as well as audio signals, to permit a better understanding of speech data. Our model encodes the information from audio and text sequences using dual RNNs and then combines the information from these sources using a feed-forward neural model to predict the emotion class. Extensive experiments show that our proposed model outperforms other state-of-the-art methods in classifying the four emotion categories, and accuracies ranging from 68.8% to 71.8% are obtained when the model is applied to the IEMOCAP dataset. In particular, it resolves the issue in which predictions frequently incorrectly yield the neutral class, as occurs in previous models that focus on audio features. In future work, we aim to extend the modalities to audio, text and video inputs. Furthermore, we plan to investigate the application of the attention mechanism to data derived from multiple modalities. This approach seems likely to uncover enhanced learning schemes that will increase performance in both speech emotion recognition and other multimodal classification tasks. Acknowledgments K. Jung is with the Department of Electrical and Computer Engineering, ASRI, Seoul National University, Seoul, Korea. 
This work was supported by the Ministry of Trade, Industry & Energy (MOTIE, Korea) under Industrial Technology Innovation Program (No.10073144).
By how much does their model outperform the state of the art results?
the attention model, MDREA, also outperforms the best existing research results (WAP 0.690 to 0.688)
3,207
qasper
4k
Introduction This work is licenced under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/ In the spirit of the brevity of social media's messages and reactions, people have got used to expressing feelings minimally and symbolically, as with hashtags on Twitter and Instagram. On Facebook, people tend to be more wordy, but posts normally receive more simple “likes” than longer comments. Since February 2016, Facebook users can express specific emotions in response to a post thanks to the newly introduced reaction feature (see Section SECREF2 ), so that now a post can be wordlessly marked with an expression of, say, “joy” or “surprise” rather than a generic “like”. It has been observed that this new feature helps Facebook to know much more about their users and exploit this information for targeted advertising BIBREF0 , but interest in people's opinions and how they feel isn't limited to commercial reasons, as it extends to social monitoring, too, including health care and education BIBREF1 . However, emotions and opinions are not always expressed this explicitly, so that there is high interest in developing systems towards their automatic detection. Creating manually annotated datasets large enough to train supervised models is not only costly, but also—especially in the case of opinions and emotions—difficult, due to the intrinsic subjectivity of the task BIBREF2 , BIBREF3 . Therefore, research has focused on unsupervised methods enriched with information derived from lexica, which are manually created BIBREF3 , BIBREF4 . Since go2009twitter have shown that happy and sad emoticons can be successfully used as signals for sentiment labels, distant supervision, i.e. using some reasonably safe signals as proxies for automatically labelling training data BIBREF5 , has been used also for emotion recognition, for example exploiting both emoticons and Twitter hashtags BIBREF6 , but mainly towards creating emotion lexica. mohammad2015using use hashtags, experimenting also with highly fine-grained emotion sets (up to almost 600 emotion labels), to create the large Hashtag Emotion Lexicon. Emoticons are used as proxies also by hallsmarmulti, who use distributed vector representations to find which words are interchangeable with emoticons but also which emoticons are used in a similar context. We take advantage of distant supervision by using Facebook reactions as proxies for emotion labels, which to the best of our knowledge hasn't been done yet, and we train a set of Support Vector Machine models for emotion recognition. Our models, differently from existing ones, exploit information which is acquired entirely automatically, and achieve competitive or even state-of-the-art results for some of the emotion labels on existing, standard evaluation datasets. For explanatory purposes, related work is discussed further and in more detail when we describe the benchmarks for evaluation (Section SECREF3 ) and when we compare our models to existing ones (Section SECREF5 ). We also explore and discuss how choosing different sets of Facebook pages as training data provides an intrinsic domain-adaptation method. Facebook reactions as labels For years, on Facebook people could leave comments on posts, and also “like” them, by using a thumbs-up feature to explicitly express a generic, rather underspecified, approval. 
A “like” could thus mean “I like what you said", but also “I like that you bring up such topic (though I find the content of the article you linked annoying)". In February 2016, after a short trial, Facebook made a more explicit reaction feature available world-wide. Rather than allowing for the underspecified “like” as the only wordless response to a post, a set of six more specific reactions was introduced, as shown in Figure FIGREF1 : Like, Love, Haha, Wow, Sad and Angry. We use such reactions as proxies for emotion labels associated to posts. We collected Facebook posts and their corresponding reactions from public pages using the Facebook API, which we accessed via the Facebook-sdk python library. We chose different pages (and therefore domains and stances), aiming at a balanced and varied dataset, but we did so mainly based on intuition (see Section SECREF4 ) and with an eye to the nature of the datasets available for evaluation (see Section SECREF5 ). The choice of which pages to select posts from is far from trivial, and we believe this is actually an interesting aspect of our approach, as by using different Facebook pages one can intrinsically tackle the domain-adaptation problem (See Section SECREF6 for further discussion on this). The final collection of Facebook pages for the experiments described in this paper is as follows: FoxNews, CNN, ESPN, New York Times, Time magazine, Huffington Post Weird News, The Guardian, Cartoon Network, Cooking Light, Home Cooking Adventure, Justin Bieber, Nickelodeon, Spongebob, Disney. Note that thankful was only available during specific time spans related to certain events, as Mother's Day in May 2016. For each page, we downloaded the latest 1000 posts, or the maximum available if there are fewer, from February 2016, retrieving the counts of reactions for each post. The output is a JSON file containing a list of dictionaries with a timestamp, the post and a reaction vector with frequency values, which indicate how many users used that reaction in response to the post (Figure FIGREF3 ). The resulting emotion vectors must then be turned into an emotion label. In the context of this experiment, we made the simple decision of associating to each post the emotion with the highest count, ignoring like as it is the default and most generic reaction people tend to use. Therefore, for example, to the first post in Figure FIGREF3 , we would associate the label sad, as it has the highest score (284) among the meaningful emotions we consider, though it also has non-zero scores for other emotions. At this stage, we didn't perform any other entropy-based selection of posts, to be investigated in future work. Emotion datasets Three datasets annotated with emotions are commonly used for the development and evaluation of emotion detection systems, namely the Affective Text dataset, the Fairy Tales dataset, and the ISEAR dataset. In order to compare our performance to state-of-the-art results, we have used them as well. In this Section, in addition to a description of each dataset, we provide an overview of the emotions used, their distribution, and how we mapped them to those we obtained from Facebook posts in Section SECREF7 . A summary is provided in Table TABREF8 , which also shows, in the bottom row, what role each dataset has in our experiments: apart from the development portion of the Affective Text, which we used to develop our models (Section SECREF4 ), all three have been used as benchmarks for our evaluation. 
Affective Text dataset Task 14 at SemEval 2007 BIBREF7 was concerned with the classification of emotions and valence in news headlines. The headlines were collected from several news websites including Google News, The New York Times, BBC News and CNN. The emotion labels used were Anger, Disgust, Fear, Joy, Sadness and Surprise, in line with the six basic emotions of Ekman's standard model BIBREF8 . Valence was to be determined as positive or negative. Classification of emotion and valence were treated as separate tasks. Emotion labels were not considered as mutually exclusive, and each emotion was assigned a score from 0 to 100. Training/developing data amounted to 250 annotated headlines (Affective development), while systems were evaluated on another 1000 (Affective test). Evaluation was done using two different methods: a fine-grained evaluation using Pearson's r to measure the correlation between the system scores and the gold standard; and a coarse-grained method where each emotion score was converted to a binary label, and precision, recall, and f-score were computed to assess performance. As is done in most works that use this dataset BIBREF3 , BIBREF4 , BIBREF9 , we also treat this as a classification problem (coarse-grained). This dataset has been extensively used for the evaluation of various unsupervised methods BIBREF2 , but also for testing different supervised learning techniques and feature portability BIBREF10 . Fairy Tales dataset This is a dataset collected by alm2008affect, where about 1,000 sentences from fairy tales (by B. Potter, H.C. Andersen and Grimm) were annotated with the same six emotions of the Affective Text dataset, though with different names: Angry, Disgusted, Fearful, Happy, Sad, and Surprised. In most works that use this dataset BIBREF3 , BIBREF4 , BIBREF9 , only sentences where all annotators agreed are used, and the labels angry and disgusted are merged. We adopt the same choices. ISEAR The ISEAR (International Survey on Emotion Antecedents and Reactions BIBREF11 , BIBREF12 ) is a dataset created in the context of a psychology project of the 1990s, by collecting questionnaires answered by people with different cultural backgrounds. The main aim of this project was to gather insights into cross-cultural aspects of emotional reactions. Student respondents, both psychologists and non-psychologists, were asked to report situations in which they had experienced all of the seven major emotions (joy, fear, anger, sadness, disgust, shame and guilt). In each case, the questions covered the way they had appraised a given situation and how they reacted. The final dataset contains reports by approximately 3000 respondents from all over the world, for a total of 7665 sentences labelled with an emotion, making this the largest dataset out of the three we use. Overview of datasets and emotions We summarise datasets and emotion distribution from two viewpoints. First, because there are different sets of emotion labels in the datasets and Facebook data, we need to provide a mapping and derive a subset of emotions that we are going to use for the experiments. This is shown in Table TABREF8 , where in the “Mapped” column we report the final emotions we use in this paper: anger, joy, sadness, surprise. All labels in each dataset are mapped to these final emotions, which are therefore the labels we use for training and testing our models. Second, the distribution of the emotions for each dataset is different, as can be seen in Figure FIGREF9 . 
In Figure FIGREF9 we also provide the distribution of the emotions anger, joy, sadness, surprise per Facebook page, in terms of number of posts (recall that we assign to a post the label corresponding to the majority emotion associated to it, see Section SECREF2 ). We can observe that, for example, pages about news tend to have more sadness and anger posts, while pages about cooking and tv-shows have a high percentage of joy posts. We will use this information to find the best set of pages for a given target domain (see Section SECREF5 ). Model There are two main decisions to be taken in developing our model: (i) which Facebook pages to select as training data, and (ii) which features to use to train the model, which we discuss below. Specifically, we first settle on a subset of pages and then experiment with features. Further exploration of the interaction between choice of pages and choice of features is left to future work, and partly discussed in Section SECREF6 . For development, we use a small portion of the Affective Text dataset described in Section SECREF4 , that is, the portion that had been released as the development set for SemEval's 2007 Task 14 BIBREF7 , which contains 250 annotated sentences (Affective development, Section SECREF4 ). All results reported in this section are on this dataset. The test set of Task 14 as well as the other two datasets described in Section SECREF3 will be used to evaluate the final models (Section SECREF4 ). Selecting Facebook pages Although page selection is a crucial ingredient of this approach, which we believe calls for further and deeper, dedicated investigation, for the experiments described here we took a rather simple approach. First, we selected the pages that would provide training data based on intuition and availability, then chose different combinations according to results of a basic model run on development data, and eventually tested feature combinations, still on the development set. For the sake of simplicity and transparency, we first trained an SVM with a simple bag-of-words model and default parameters as per the Scikit-learn implementation BIBREF13 on different combinations of pages. Based on results of the attempted combinations as well as on the distribution of emotions in the development dataset (Figure FIGREF9 ), we selected a best model (B-M), namely the combined set of Time, The Guardian and Disney, which yields the highest results on development data. Time and The Guardian perform well on most emotions, but Disney helps to boost the performance for the Joy class. Features In selecting appropriate features, we mainly relied on previous work and intuition. We experimented with different combinations, and all tests were still done on Affective development, using the pages for the best model (B-M) described above as training data. Results are in Table TABREF20 . Future work will further explore the simultaneous selection of features and page combinations. We use a set of basic text-based features to capture the emotion class. These include a tf-idf bag-of-words feature, word (2-3) and character (2-5) ngrams, and features related to the presence of negation words and to the usage of punctuation. 
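A minimal sketch of this SVM baseline with the basic text features is shown below, using scikit-learn. The text above only states that default parameters were used, so the choice of LinearSVC and the exact vectorizer settings here are assumptions for illustration.

```python
# Sketch: tf-idf bag of words plus word (2-3) and character (2-5) n-grams,
# fed to a linear SVM. Settings beyond the n-gram ranges are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.svm import LinearSVC

features = FeatureUnion([
    ("bow", TfidfVectorizer()),                                # tf-idf bag of words
    ("word_ngrams", TfidfVectorizer(ngram_range=(2, 3))),      # word 2-3 grams
    ("char_ngrams", TfidfVectorizer(analyzer="char",
                                    ngram_range=(2, 5))),      # character 2-5 grams
])

model = Pipeline([("features", features), ("svm", LinearSVC())])

# train_posts: list of Facebook post texts; train_labels: the majority-reaction
# emotion assigned to each post (anger, joy, sadness or surprise).
# model.fit(train_posts, train_labels)
# predictions = model.predict(dev_headlines)  # e.g. the Affective development set
```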
The lexicon feature is used in all unsupervised models as a source of information, and we mainly include it to assess its contribution, but we eventually do not use it in our final model. Specifically, we used the NRC10 Lexicon, which performed best in the experiments by BIBREF10 and is built around the emotions anger, anticipation, disgust, fear, joy, sadness, and surprise, and the valence values positive and negative. For each word in the lexicon, a boolean value indicating presence or absence is associated to each emotion. For a whole sentence, a global score per emotion can be obtained by summing the vectors for all content words of that sentence included in the lexicon, and used as a feature. As an additional feature, we also included word embeddings, namely distributed representations of words in a vector space, which have been exceptionally successful in boosting performance in a plethora of NLP tasks. We use three different embeddings: Google embeddings: pre-trained embeddings trained on Google News and obtained with the skip-gram architecture described in BIBREF14 . This model contains 300-dimensional vectors for 3 million words and phrases. Facebook embeddings: embeddings that we trained on our scraped Facebook pages for a total of 20,000 sentences. Using the gensim library BIBREF15 , we trained the embeddings with the following parameters: window size of 5, learning rate of 0.01 and dimensionality of 100. We filtered out words with frequency lower than 2 occurrences. Retrofitted embeddings: Retrofitting BIBREF16 has been shown to be a simple but efficient way of informing trained embeddings with additional information derived from some lexical resource, rather than including it directly at the training stage, as is done for example to create sense-aware BIBREF17 or sentiment-aware BIBREF18 embeddings. In this work, we retrofit general embeddings to include information about emotions, so that emotion-similar words can get closer in space. Both the Google and our Facebook embeddings were retrofitted with lexical information obtained from the NRC10 Lexicon mentioned above, which provides emotion-similarity for each token. Note that, differently from the previous two types of embeddings, the retrofitted ones do rely on handcrafted information in the form of a lexical resource. Results on development set We report precision, recall, and f-score on the development set. The average f-score is reported as micro-average, to better account for the skewed distribution of the classes as well as in accordance with what is usually reported for this task BIBREF19 . From Table TABREF20 we draw three main observations. First, a simple tf-idf bag-of-words model already works very well, to the point that the other textual and lexicon-based features don't seem to contribute to the overall f-score (0.368), although there is a rather substantial variation of scores per class. Second, Google embeddings perform a lot better than Facebook embeddings, and this is likely due to the size of the corpus used for training. Retrofitting doesn't seem to help at all for the Google embeddings, but it does boost the Facebook embeddings, leading us to think that with little data, more accurate task-related information helps, but corpus size matters most. Third, in combination with embeddings, all features work better than just using tf-idf, but removing the Lexicon feature, which is the only one based on hand-crafted resources, yields even better results. Hence, our best model (B-M) on development data relies entirely on automatically obtained information, both in terms of training data and features. 
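To make the embedding feature concrete, the sketch below turns the pretrained Google News skip-gram vectors into a sentence-level representation. The text above does not spell out how word vectors are aggregated per sentence, so the mean pooling here (and the file path) are assumptions for illustration.

```python
# Sketch: sentence features from pretrained 300-dimensional Google News
# vectors via gensim; mean pooling over in-vocabulary tokens is an assumption.
import numpy as np
from gensim.models import KeyedVectors

word_vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)


def sentence_embedding(tokens, kv=word_vectors, dim=300):
    """Average the vectors of the tokens found in the embedding vocabulary."""
    vecs = [kv[t] for t in tokens if t in kv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)


# features = np.vstack([sentence_embedding(post.split()) for post in train_posts])
```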
Results In Table TABREF26 we report the results of our model on the three datasets standardly used for the evaluation of emotion classification, which we have described in Section SECREF3 . Our B-M model relies on subsets of Facebook pages for training, which were chosen according to their performance on the development set as well as on the observation of emotions distribution on different pages and in the different datasets, as described in Section SECREF4 . The feature set we use is our best on the development set, namely all the features plus Google-based embeddings, but excluding the lexicon. This makes our approach completely independent of any manual annotation or handcrafted resource. Our model's performance is compared to the following systems, for which results are reported in the referred literature. Please note that no other existing model was re-implemented, and results are those reported in the respective papers. Discussion, conclusions and future work We have explored the potential of using Facebook reactions in a distant supervised setting to perform emotion classification. The evaluation on standard benchmarks shows that models trained as such, especially when enhanced with continuous vector representations, can achieve competitive results without relying on any handcrafted resource. An interesting aspect of our approach is the view to domain adaptation via the selection of Facebook pages to be used as training data. We believe that this approach has a lot of potential, and we see the following directions for improvement. Feature-wise, we want to train emotion-aware embeddings, in the vein of work by tang:14, and iacobacci2015sensembed. Retrofitting FB-embeddings trained on a larger corpus might also be successful, but would rely on an external lexicon. The largest room for yielding not only better results but also interesting insights on extensions of this approach lies in the choice of training instances, both in terms of Facebook pages to get posts from, as well as in which posts to select from the given pages. For the latter, one could for example only select posts that have a certain length, ignore posts that are only quotes or captions to images, or expand posts by including content from linked html pages, which might provide larger and better contexts BIBREF23 . Additionally, and most importantly, one could use an entropy-based measure to select only posts that have a strong emotion rather than just considering the majority emotion as training label. For the former, namely the choice of Facebook pages, which we believe deserves the most investigation, one could explore several avenues, especially in relation to stance-based issues BIBREF24 . In our dataset, for example, a post about Chile beating Colombia in a football match during the Copa America had very contradictory reactions, depending on which side readers would cheer for. Similarly, the very same political event, for example, would get very different reactions from readers if it was posted on Fox News or The Late Night Show, as the target audience is likely to feel very differently about the same issue. This also brings up theoretical issues related more generally to the definition of the emotion detection task, as it's strongly dependent on personal traits of the audience. Also, in this work, pages initially selected on availability and intuition were further grouped into sets to make training data according to performance on development data, and label distribution. 
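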
Another criterion to be exploited would be vocabulary overlap between the pages and the datasets. Lastly, we could develop single models for each emotion, treating the problem as a multi-label task. This would even better reflect the ambiguity and subjectivity intrinsic to assigning emotions to text, where content could be at same time joyful or sad, depending on the reader. Acknowledgements In addition to the anonymous reviewers, we want to thank Lucia Passaro and Barbara Plank for insightful discussions, and for providing comments on draft versions of this paper.
Introduction A hashtag is a keyphrase represented as a sequence of alphanumeric characters plus underscore, preceded by the # symbol. Hashtags play a central role in online communication by providing a tool to categorize the millions of posts generated daily on Twitter, Instagram, etc. They are useful in search, tracking content about a certain topic BIBREF0 , BIBREF1 , or discovering emerging trends BIBREF2 . Hashtags often carry very important information, such as emotion BIBREF3 , sentiment BIBREF4 , sarcasm BIBREF5 , and named entities BIBREF6 , BIBREF7 . However, inferring the semantics of hashtags is non-trivial since many hashtags contain multiple tokens joined together, which frequently leads to multiple potential interpretations (e.g., lion head vs. lionhead). Table TABREF3 shows several examples of single- and multi-token hashtags. While most hashtags represent a mix of standard tokens, named entities and event names are prevalent and pose challenges to both human and automatic comprehension, as these are more likely to be rare tokens. Hashtags also tend to be shorter to allow fast typing, to attract attention or to satisfy length limitations imposed by some social media platforms. Thus, they tend to contain a large number of abbreviations or non-standard spelling variations (e.g., #iloveu4eva) BIBREF8 , BIBREF9 , which hinders their understanding. The goal of our study is to build efficient methods for automatically splitting a hashtag into a meaningful word sequence. Our contributions are: Our new dataset includes segmentation for 12,594 unique hashtags and their associated tweets annotated in a multi-step process for higher quality than the previous dataset of 1,108 hashtags BIBREF10 . We frame the segmentation task as a pairwise ranking problem, given a set of candidate segmentations. We build several neural architectures using this problem formulation which use corpus-based, linguistic and thesaurus based features. We further propose a multi-task learning approach which jointly learns segment ranking and single- vs. multi-token hashtag classification. The latter leads to an error reduction of 24.6% over the current state-of-the-art. Finally, we demonstrate the utility of our method by using hashtag segmentation in the downstream task of sentiment analysis. Feeding the automatically segmented hashtags to a state-of-the-art sentiment analysis method on the SemEval 2017 benchmark dataset results in a 2.6% increase in the official metric for the task. Background and Preliminaries Current approaches for hashtag segmentation can be broadly divided into three categories: (a) gazeteer and rule based BIBREF11 , BIBREF12 , BIBREF13 , (b) word boundary detection BIBREF14 , BIBREF15 , and (c) ranking with language model and other features BIBREF16 , BIBREF10 , BIBREF0 , BIBREF17 , BIBREF18 . Hashtag segmentation approaches draw upon work on compound splitting for languages such as German or Finnish BIBREF19 and word segmentation BIBREF20 for languages with no spaces between words such as Chinese BIBREF21 , BIBREF22 . Similar to our work, BIBREF10 BansalBV15 extract an initial set of candidate segmentations using a sliding window, then rerank them using a linear regression model trained on lexical, bigram and other corpus-based features. The current state-of-the-art approach BIBREF14 , BIBREF15 uses maximum entropy and CRF models with a combination of language model and hand-crafted features to predict if each character in the hashtag is the beginning of a new word. 
Generating Candidate Segmentations. Microsoft Word Breaker BIBREF16 is, among the existing methods, a strong baseline for hashtag segmentation, as reported in BIBREF14 and BIBREF10 . It employs a beam search algorithm to extract INLINEFORM0 best segmentations as ranked by the n-gram language model probability: INLINEFORM1 where INLINEFORM0 is the word sequence of segmentation INLINEFORM1 and INLINEFORM2 is the window size. More sophisticated ranking strategies, such as Binomial and word length distribution based ranking, did not lead to a further improvement in performance BIBREF16 . The original Word Breaker was designed for segmenting URLs using language models trained on web data. In this paper, we reimplemented and tailored this approach to segmenting hashtags by using a language model specifically trained on Twitter data (implementation details in § SECREF26 ). The performance of this method itself is competitive with state-of-the-art methods (evaluation results in § SECREF46 ). Our proposed pairwise ranking method will effectively take the top INLINEFORM3 segmentations generated by this baseline as candidates for reranking. However, in prior work, the ranking scores of each segmentation were calculated independently, ignoring the relative order among the top INLINEFORM0 candidate segmentations. To address this limitation, we utilize a pairwise ranking strategy for the first time for this task and propose neural architectures to model this. Multi-task Pairwise Neural Ranking We propose a multi-task pairwise neural ranking approach to better incorporate and distinguish the relative order between the candidate segmentations of a given hashtag. Our model adapts to address single- and multi-token hashtags differently via a multi-task learning strategy without requiring additional annotations. In this section, we describe the task setup and three variants of pairwise neural ranking models (Figure FIGREF11 ). Segmentation as Pairwise Ranking The goal of hashtag segmentation is to divide a given hashtag INLINEFORM0 into a sequence of meaningful words INLINEFORM1 . For a hashtag of INLINEFORM2 characters, there are a total of INLINEFORM3 possible segmentations but only one, or occasionally two, of them ( INLINEFORM4 ) are considered correct (Table TABREF9 ). We transform this task into a pairwise ranking problem: given INLINEFORM0 candidate segmentations { INLINEFORM1 }, we rank them by comparing each with the rest in a pairwise manner. More specifically, we train a model to predict a real number INLINEFORM2 for any two candidate segmentations INLINEFORM3 and INLINEFORM4 of hashtag INLINEFORM5 , which indicates INLINEFORM6 is a better segmentation than INLINEFORM7 if positive, and vice versa. To quantify the quality of a segmentation in training, we define a gold scoring function INLINEFORM8 based on the similarities with the ground-truth segmentation INLINEFORM9 : INLINEFORM10 We use the Levenshtein distance (minimum number of single-character edits) in this paper, although it is possible to use other similarity measurements as alternatives. We use the top INLINEFORM0 segmentations generated by Microsoft Word Breaker (§ SECREF2 ) as initial candidates. Pairwise Neural Ranking Model For an input candidate segmentation pair INLINEFORM0 , we concatenate their feature vectors INLINEFORM1 and INLINEFORM2 , and feed them into a feedforward network which emits a comparison score INLINEFORM3 . 
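A small sketch of the training signal just described is given below, assuming the gold scoring function is instantiated as a negated character-level Levenshtein distance (the exact formulation is not reproduced here) and that candidate pairs are enumerated exhaustively.

```python
# Sketch of the pairwise training signal: the gold score of a candidate is
# assumed here to be the negated Levenshtein distance to the ground-truth
# segmentation, so that better segmentations receive higher scores. The
# difference of gold scores is used as the regression target for f(a, b);
# this is one plausible reading of the objective described above.
from itertools import permutations

def levenshtein(a, b):
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                    # deletion
                           cur[j - 1] + 1,                 # insertion
                           prev[j - 1] + (ca != cb)))      # substitution
        prev = cur
    return prev[-1]

def gold_score(candidate, ground_truth):
    return -levenshtein(candidate, ground_truth)

ground_truth = "lion head"
candidates = ["lionhead", "lion head", "li on head"]

pairs = [(a, b, gold_score(a, ground_truth) - gold_score(b, ground_truth))
         for a, b in permutations(candidates, 2)]
for a, b, target in pairs:
    print(f"f({a!r}, {b!r}) target: {target}")
```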
The feature vector INLINEFORM4 or INLINEFORM5 consists of language model probabilities using Good-Turing BIBREF23 and modified Kneser-Ney smoothing BIBREF24 , BIBREF25 , lexical and linguistic features (more details in § SECREF23 ). For training, we use all the possible pairs INLINEFORM6 of the INLINEFORM7 candidates as the input and their gold scores INLINEFORM8 as the target. The training objective is to minimize the Mean Squared Error (MSE): DISPLAYFORM0 where INLINEFORM0 is the number of training examples. To aggregate the pairwise comparisons, we follow a greedy algorithm proposed by BIBREF26 cohen1998learning and used for preference ranking BIBREF27 . For each segmentation INLINEFORM0 in the candidate set INLINEFORM1 , we calculate a single score INLINEFORM2 , and find the segmentation INLINEFORM3 corresponding to the highest score. We repeat the same procedure after removing INLINEFORM4 from INLINEFORM5 , and continue until INLINEFORM6 reduces to an empty set. Figure FIGREF11 (a) shows the architecture of this model. Margin Ranking (MR) Loss As an alternative to the pairwise ranker (§ SECREF15 ), we propose a pairwise model which learns from candidate pairs INLINEFORM0 but ranks each individual candidate directly rather than relatively. We define a new scoring function INLINEFORM1 which assigns a higher score to the better candidate, i.e., INLINEFORM2 , if INLINEFORM3 is a better candidate than INLINEFORM4 and vice-versa. Instead of concatenating the features vectors INLINEFORM5 and INLINEFORM6 , we feed them separately into two identical feedforward networks with shared parameters. During testing, we use only one of the networks to rank the candidates based on the INLINEFORM7 scores. For training, we add a ranking layer on top of the networks to measure the violations in the ranking order and minimize the Margin Ranking Loss (MR): DISPLAYFORM0 where INLINEFORM0 is the number of training samples. The architecture of this model is presented in Figure FIGREF11 (b). Adaptive Multi-task Learning Both models in § SECREF15 and § SECREF17 treat all the hashtags uniformly. However, different features address different types of hashtags. By design, the linguistic features capture named entities and multi-word hashtags that exhibit word shape patterns, such as camel case. The ngram probabilities with Good-Turing smoothing gravitate towards multi-word segmentations with known words, as its estimate for unseen ngrams depends on the fraction of ngrams seen once which can be very low BIBREF28 . The modified Kneser-Ney smoothing is more likely to favor segmentations that contain rare words, and single-word segmentations in particular. Please refer to § SECREF46 for a more detailed quantitative and qualitative analysis. To leverage this intuition, we introduce a binary classification task to help the model differentiate single-word from multi-word hashtags. The binary classifier takes hashtag features INLINEFORM0 as the input and outputs INLINEFORM1 , which represents the probability of INLINEFORM2 being a multi-word hashtag. INLINEFORM3 is used as an adaptive gating value in our multi-task learning setup. The gold labels for this task are obtained at no extra cost by simply verifying whether the ground-truth segmentation has multiple words. We train the pairwise segmentation ranker and the binary single- vs. 
multi-token hashtag classifier jointly, by minimizing INLINEFORM4 for the pairwise ranker and the Binary Cross Entropy Error ( INLINEFORM5 ) for the classifier: DISPLAYFORM0 where INLINEFORM0 is the adaptive gating value, INLINEFORM1 indicates if INLINEFORM2 is actually a multi-word hashtag and INLINEFORM3 is the number of training examples. INLINEFORM4 and INLINEFORM5 are the weights for each loss. For our experiments, we apply equal weights. More specifically, we divide the segmentation feature vector INLINEFORM0 into two subsets: (a) INLINEFORM1 with modified Kneser-Ney smoothing features, and (b) INLINEFORM2 with Good-Turing smoothing and linguistic features. For an input candidate segmentation pair INLINEFORM3 , we construct two pairwise vectors INLINEFORM4 and INLINEFORM5 by concatenation, then combine them based on the adaptive gating value INLINEFORM6 before feeding them into the feedforward network INLINEFORM7 for pairwise ranking: DISPLAYFORM0 We use summation with padding, as we find this simple ensemble method achieves similar performance in our experiments as the more complex multi-column networks BIBREF29 . Figure FIGREF11 (c) shows the architecture of this model. An analogue multi-task formulation can also be used for the Margin Ranking loss as: DISPLAYFORM0 Features We use a combination of corpus-based and linguistic features to rank the segmentations. For a candidate segmentation INLINEFORM0 , its feature vector INLINEFORM1 includes the number of words in the candidate, the length of each word, the proportion of words in an English dictionary or Urban Dictionary BIBREF30 , ngram counts from Google Web 1TB corpus BIBREF31 , and ngram probabilities from trigram language models trained on the Gigaword corpus BIBREF32 and 1.1 billion English tweets from 2010, respectively. We train two language models on each corpus: one with Good-Turing smoothing using SRILM BIBREF33 and the other with modified Kneser-Ney smoothing using KenLM BIBREF34 . We also add boolean features, such as if the candidate is a named-entity present in the list of Wikipedia titles, and if the candidate segmentation INLINEFORM2 and its corresponding hashtag INLINEFORM3 satisfy certain word-shapes (more details in appendix SECREF61 ). Similarly, for hashtag INLINEFORM0 , we extract the feature vector INLINEFORM1 consisting of hashtag length, ngram count of the hashtag in Google 1TB corpus BIBREF31 , and boolean features indicating if the hashtag is in an English dictionary or Urban Dictionary, is a named-entity, is in camel case, ends with a number, and has all the letters as consonants. We also include features of the best-ranked candidate by the Word Breaker model. Implementation Details We use the PyTorch framework to implement our multi-task pairwise ranking model. The pairwise ranker consists of an input layer, three hidden layers with eight nodes in each layer and hyperbolic tangent ( INLINEFORM0 ) activation, and a single linear output node. The auxiliary classifier consists of an input layer, one hidden layer with eight nodes and one output node with sigmoid activation. We use the Adam algorithm BIBREF35 for optimization and apply a dropout of 0.5 to prevent overfitting. We set the learning rate to 0.01 and 0.05 for the pairwise ranker and auxiliary classifier respectively. For each experiment, we report results obtained after 100 epochs. 
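A compact sketch of this architecture is shown below, assuming PyTorch; the layer sizes follow the implementation details above, while the padded, gate-weighted combination of the two feature subsets is one plausible reading of the description rather than a verbatim reimplementation.

```python
# Sketch of the adaptive multi-task ranker: three 8-unit tanh layers for the
# pairwise ranker, one 8-unit hidden layer with a sigmoid output for the
# single- vs. multi-token classifier, and a gated "summation with padding"
# of the two pairwise feature subsets (a plausible reading, not the exact code).
import torch
import torch.nn as nn

class MultiTaskPairwiseRanker(nn.Module):
    def __init__(self, dim_kn, dim_gt, hashtag_dim):
        super().__init__()
        self.pair_dim = 2 * max(dim_kn, dim_gt)   # pairwise vectors padded to equal length
        self.ranker = nn.Sequential(
            nn.Linear(self.pair_dim, 8), nn.Tanh(), nn.Dropout(0.5),
            nn.Linear(8, 8), nn.Tanh(), nn.Dropout(0.5),
            nn.Linear(8, 8), nn.Tanh(),
            nn.Linear(8, 1),
        )
        self.classifier = nn.Sequential(           # single- vs. multi-token hashtag
            nn.Linear(hashtag_dim, 8), nn.Tanh(),
            nn.Linear(8, 1), nn.Sigmoid(),
        )

    @staticmethod
    def _pad(x, width):
        return nn.functional.pad(x, (0, width - x.size(-1)))

    def forward(self, kn_a, kn_b, gt_a, gt_b, hashtag_feats):
        gate = self.classifier(hashtag_feats)                      # P(multi-token)
        pair_kn = self._pad(torch.cat([kn_a, kn_b], dim=-1), self.pair_dim)
        pair_gt = self._pad(torch.cat([gt_a, gt_b], dim=-1), self.pair_dim)
        combined = (1 - gate) * pair_kn + gate * pair_gt           # gated summation
        return self.ranker(combined).squeeze(-1), gate.squeeze(-1)

# Joint loss: MSE for the pairwise scores, BCE for the multi-token classifier.
model = MultiTaskPairwiseRanker(dim_kn=4, dim_gt=10, hashtag_dim=6)
score, p_multi = model(torch.randn(3, 4), torch.randn(3, 4),
                       torch.randn(3, 10), torch.randn(3, 10), torch.randn(3, 6))
loss = nn.MSELoss()(score, torch.randn(3)) + nn.BCELoss()(p_multi, torch.ones(3))
loss.backward()
```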
For the baseline model used to extract the INLINEFORM0 initial candidates, we reimplementated the Word Breaker BIBREF16 as described in § SECREF2 and adapted it to use a language model trained on 1.1 billion tweets with Good-Turing smoothing using SRILM BIBREF33 to give a better performance in segmenting hashtags (§ SECREF46 ). For all our experiments, we set INLINEFORM1 . Hashtag Segmentation Data We use two datasets for experiments (Table TABREF29 ): (a) STAN INLINEFORM0 , created by BIBREF10 BansalBV15, which consists of 1,108 unique English hashtags from 1,268 randomly selected tweets in the Stanford Sentiment Analysis Dataset BIBREF36 along with their crowdsourced segmentations and our additional corrections; and (b) STAN INLINEFORM1 , our new expert curated dataset, which includes all 12,594 unique English hashtags and their associated tweets from the same Stanford dataset. Experiments In this section, we present experimental results that compare our proposed method with the other state-of-the-art approaches on hashtag segmentation datasets. The next section will show experiments of applying hashtag segmentation to the popular task of sentiment analysis. Existing Methods We compare our pairwise neural ranker with the following baseline and state-of-the-art approaches: The original hashtag as a single token; A rule-based segmenter, which employs a set of word-shape rules with an English dictionary BIBREF13 ; A Viterbi model which uses word frequencies from a book corpus BIBREF0 ; The specially developed GATE Hashtag Tokenizer from the open source toolkit, which combines dictionaries and gazetteers in a Viterbi-like algorithm BIBREF11 ; A maximum entropy classifier (MaxEnt) trained on the STAN INLINEFORM0 training dataset. It predicts whether a space should be inserted at each position in the hashtag and is the current state-of-the-art BIBREF14 ; Our reimplementation of the Word Breaker algorithm which uses beam search and a Twitter ngram language model BIBREF16 ; A pairwise linear ranker which we implemented for comparison purposes with the same features as our neural model, but using perceptron as the underlying classifier BIBREF38 and minimizing the hinge loss between INLINEFORM0 and a scoring function similar to INLINEFORM1 . It is trained on the STAN INLINEFORM2 dataset. Evaluation Metrics We evaluate the performance by the top INLINEFORM0 ( INLINEFORM1 ) accuracy (A@1, A@2), average token-level F INLINEFORM2 score (F INLINEFORM3 @1), and mean reciprocal rank (MRR). In particular, the accuracy and MRR are calculated at the segmentation-level, which means that an output segmentation is considered correct if and only if it fully matches the human segmentation. Average token-level F INLINEFORM4 score accounts for partially correct segmentation in the multi-token hashtag cases. Results Tables TABREF32 and TABREF33 show the results on the STAN INLINEFORM0 and STAN INLINEFORM1 datasets, respectively. All of our pairwise neural rankers are trained on the 2,518 manually segmented hashtags in the training set of STAN INLINEFORM2 and perform favorably against other state-of-the-art approaches. Our best model (MSE+multitask) that utilizes different features adaptively via a multi-task learning procedure is shown to perform better than simply combining all the features together (MR and MSE). 
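For reference, the metrics reported in these tables can be computed roughly as follows; the predictions below are toy examples.

```python
# Sketch of the evaluation metrics: accuracy@k and MRR are segmentation-level
# (exact match with the human segmentation), while token-level F1 gives partial
# credit for multi-token hashtags.
from collections import Counter

def accuracy_at_k(ranked_preds, gold, k):
    hits = sum(g in preds[:k] for preds, g in zip(ranked_preds, gold))
    return hits / len(gold)

def mean_reciprocal_rank(ranked_preds, gold):
    total = 0.0
    for preds, g in zip(ranked_preds, gold):
        if g in preds:
            total += 1.0 / (preds.index(g) + 1)
    return total / len(gold)

def token_f1(pred, gold):
    pred_tokens, gold_tokens = Counter(pred.split()), Counter(gold.split())
    overlap = sum((pred_tokens & gold_tokens).values())
    if overlap == 0:
        return 0.0
    p = overlap / sum(pred_tokens.values())
    r = overlap / sum(gold_tokens.values())
    return 2 * p * r / (p + r)

gold = ["lion head", "now playing"]
ranked = [["lionhead", "lion head"], ["now playing", "nowplaying"]]
print(accuracy_at_k(ranked, gold, 1), accuracy_at_k(ranked, gold, 2))
print(mean_reciprocal_rank(ranked, gold))
print(sum(token_f1(preds[0], g) for preds, g in zip(ranked, gold)) / len(gold))
```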
We highlight the 24.6% error reduction on STAN INLINEFORM3 and 16.5% on STAN INLINEFORM4 of our approach over the previous SOTA BIBREF14 on multi-token hashtags, and the importance of a separate evaluation of multi-word cases, as it is trivial to obtain 100% accuracy for single-token hashtags. While our hashtag segmentation model achieves a very high accuracy@2, getting the top prediction exactly correct remains a challenge for practical use. Some hashtags are very difficult to interpret, e.g., #BTVSMB refers to the Social Media Breakfast (SMB) in Burlington, Vermont (BTV). The improved Word Breaker with our addition of a Twitter-specific language model is a very strong baseline, which echoes the findings of the original Word Breaker paper BIBREF16 that a large in-domain language model is extremely helpful for word segmentation tasks. It is worth noting that the other state-of-the-art system BIBREF14 also utilized a 4-gram language model trained on 476 million tweets from 2009. Analysis and Discussion To empirically illustrate the effectiveness of different features on different types of hashtags, we show the results for models using individual feature sets in pairwise ranking models (MSE) in Table TABREF45 . Language models with modified Kneser-Ney smoothing perform best on single-token hashtags, while Good-Turing and linguistic features work best on multi-token hashtags, confirming our intuition about their usefulness in a multi-task learning approach. Table TABREF47 shows a qualitative analysis, with the first column ( INLINEFORM0 INLINEFORM1 INLINEFORM2 ) indicating which features lead to correct or wrong segmentations, their count in our data, and illustrative examples with human segmentation. As expected, longer hashtags with more than three tokens pose greater challenges, and the segmentation-level accuracy of our best model (MSE+multitask) drops to 82.1%. For many error cases, our model predicts a close-to-correct segmentation, e.g., for hashtags such as #youknowyouupttooearly and #iseelondoniseefrance, which is also reflected by the higher token-level F INLINEFORM0 scores across hashtags of different lengths (Figure FIGREF51 ). Since our approach heavily relies on building a Twitter language model, we experimented with its size and show the results in Figure FIGREF52 . Our approach performs well even with access to a smaller amount of tweets: the drop in F INLINEFORM0 score for our pairwise neural ranker is only 1.4% and 3.9% when using the language models trained on 10% and 1% of the total 1.1 billion tweets, respectively. Language use in Twitter changes with time BIBREF9 . Our pairwise ranker uses language models trained on tweets from the year 2010. We tested our approach on a set of 500 random English hashtags posted in tweets from the year 2019 and show the results in Table TABREF55 . With a segmentation-level accuracy of 94.6% and an average token-level F INLINEFORM0 score of 95.6%, our approach performs favorably on 2019 hashtags. Extrinsic Evaluation: Twitter Sentiment Analysis We demonstrate the effectiveness of our hashtag segmentation system by studying its impact on the task of sentiment analysis in Twitter BIBREF39 , BIBREF40 , BIBREF41 . We use our best model (MSE+multitask), under the name HashtagMaster, in the following experiments.
Experimental Setup We compare the performance of the BiLSTM+Lex BIBREF42 sentiment analysis model under three configurations: (a) tweets with hashtags removed, (b) tweets with hashtags as single tokens excluding the # symbol, and (c) tweets with hashtags segmented by our system, HashtagMaster. BiLSTM+Lex is a state-of-the-art open source system for predicting tweet-level sentiment BIBREF43 . It learns a context-sensitive sentiment intensity score by leveraging a Twitter-based sentiment lexicon BIBREF44 . We use the same settings as described by BIBREF42 to train the model. We use the dataset from the Sentiment Analysis in Twitter shared task (subtask A) at SemEval 2017 BIBREF41 . Given a tweet, the goal is to predict whether it expresses POSITIVE, NEGATIVE or NEUTRAL sentiment. The training and development sets consist of 49,669 tweets, of which we use 40,000 for training and the rest for development. There are a total of 12,284 tweets containing 12,128 hashtags in the SemEval 2017 test set, and our hashtag segmenter ended up splitting 6,975 of those hashtags, present in 3,384 tweets. Results and Analysis In Table TABREF59 , we report the results based on the 3,384 tweets where HashtagMaster predicted a split, since for the remaining tweets in the test set the hashtag segmenter would neither improve nor worsen the sentiment prediction. Our hashtag segmenter improved the sentiment analysis performance by 2% on average recall and F INLINEFORM0 compared to leaving hashtags unsegmented. This improvement is seemingly small but decidedly important for tweets where sentiment-related information is embedded in multi-word hashtags and sentiment prediction would be incorrect based only on the text (see Table TABREF60 for examples). In fact, 2,605 out of the 3,384 tweets have multi-word hashtags that contain words in the Twitter-based sentiment lexicon BIBREF44 , and 125 tweets contain sentiment words only in the hashtags but not in the rest of the tweet. On the entire test set of 12,284 tweets, the increase in the average recall is 0.5%. Other Related Work Automatic hashtag segmentation can improve the performance of many applications besides sentiment analysis, such as text classification BIBREF13 , named entity linking BIBREF10 and modeling user interests for recommendations BIBREF45 . It can also help in collecting data of higher volume and quality by providing a more nuanced interpretation of its content, as shown for emotion analysis BIBREF46 and sarcasm and irony detection BIBREF11 , BIBREF47 . Better semantic analysis of hashtags can also potentially be applied to hashtag annotation BIBREF48 , to improve distant supervision labels in training classifiers for tasks such as sarcasm BIBREF5 , sentiment BIBREF4 , and emotions BIBREF3 ; and, more generally, as labels for pre-training representations of words BIBREF49 , sentences BIBREF50 , and images BIBREF51 . Conclusion We proposed a new pairwise neural ranking model for hashtag segmentation and showed significant performance improvements over the state of the art. We also constructed a larger and more curated dataset for analyzing and benchmarking hashtag segmentation methods. We demonstrated that hashtag segmentation helps with downstream tasks such as sentiment analysis. Although we focused on English hashtags, our pairwise ranking approach is language-independent and we intend to extend our toolkit to languages other than English as future work.
Acknowledgments We thank Ohio Supercomputer Center BIBREF52 for computing resources and the NVIDIA for providing GPU hardware. We thank Alan Ritter, Quanze Chen, Wang Ling, Pravar Mahajan, and Dushyanta Dhyani for valuable discussions. We also thank the annotators: Sarah Flanagan, Kaushik Mani, and Aswathnarayan Radhakrishnan. This material is based in part on research sponsored by the NSF under grants IIS-1822754 and IIS-1755898, DARPA through the ARO under agreement number W911NF-17-C-0095, through a Figure-Eight (CrowdFlower) AI for Everyone Award and a Criteo Faculty Research Award to Wei Xu. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of the U.S. Government. Word-shape rules Our model uses the following word shape rules as boolean features. If the candidate segmentation INLINEFORM0 and its corresponding hashtag INLINEFORM1 satisfies a word shape rule, then the boolean feature is set to True.
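The concrete rules are listed in a table that is not reproduced here; the snippet below only illustrates, with invented rules, how such boolean word-shape features can be derived from a hashtag and a candidate segmentation.

```python
# Illustrative stand-ins for word-shape rules (the actual rules in the paper's
# appendix are not reproduced here): each rule yields a boolean feature that is
# True when the hashtag and the candidate segmentation satisfy it.
import re

def word_shape_features(hashtag, candidate):
    tokens = candidate.split()
    return {
        "hashtag_is_camel_case": bool(re.fullmatch(r"(?:[A-Z][a-z]+)+", hashtag)),
        "camel_case_matches_split": re.findall(r"[A-Z][a-z]+", hashtag) == tokens,
        "hashtag_ends_with_digits": bool(re.search(r"\d+$", hashtag)),
        "candidate_splits_off_digits": bool(tokens) and tokens[-1].isdigit(),
        "all_tokens_capitalized": all(t[:1].isupper() for t in tokens),
    }

print(word_shape_features("LionHead", "Lion Head"))
print(word_shape_features("euro2016", "euro 2016"))
```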
Introduction Emotion detection has long been a topic of interest to scholars in natural language processing (NLP) domain. Researchers aim to recognize the emotion behind the text and distribute similar ones into the same group. Establishing an emotion classifier can not only understand each user's feeling but also be extended to various application, for example, the motivation behind a user's interests BIBREF0. Based on releasing of large text corpus on social media and the emotion categories proposed by BIBREF1, BIBREF2, numerous models have provided and achieved fabulous precision so far. For example, DeepMoji BIBREF3 which utilized transfer learning concept to enhance emotions and sarcasm understanding behind the target sentence. CARER BIBREF4 learned contextualized affect representations to make itself more sensitive to rare words and the scenario behind the texts. As methods become mature, text-based emotion detecting applications can be extended from a single utterance to a dialogue contributed by a series of utterances. Table TABREF2 illustrates the difference between single utterance and dialogue emotion recognition. The same utterances in Table TABREF2, even the same person said the same sentence, the emotion it convey may be various, which may depend on different background of the conversation, tone of speaking or personality. Therefore, for emotion detection, the information from preceding utterances in a conversation is relatively critical. In SocialNLP 2019 EmotionX, the challenge is to recognize emotions for all utterances in EmotionLines dataset, a dataset consists of dialogues. According to the needs for considering context at the same time, we develop two classification models, inspired by bidirectional encoder representations from transformers (BERT) BIBREF5, FriendsBERT and ChatBERT. In this paper, we introduce our approaches including causal utterance modeling, model pre-training, and fine-turning. Dataset EmotionLines BIBREF6 is a dialogue dataset composed of two subsets, Friends and EmotionPush, according to the source of the dialogues. The former comes from the scripts of the Friends TV sitcom. The other is made up of Facebook messenger chats. Each subset includes $1,000$ English dialogues, and each dialogue can be further divided into a few consecutive utterances. All the utterances are annotated by five annotators on a crowd-sourcing platform (Amazon Mechanical Turk), and the labeling work is only based on the textual content. Annotator votes for one of the seven emotions, namely Ekman’s six basic emotions BIBREF1, plus the neutral. If none of the emotion gets more than three votes, the utterance will be marked as “non-neutral”. For the datasets, there are properties worth additional mentioning. Although Friends and EmotionPush share the same data format, they are quite different in nature. Friends is a speech-based dataset which is annotated dialogues from the TV sitcom. It means most of the utterances are generated by the a few main characters. The personality of a character often affects the way of speaking, and therefore “who is the speaker" might provide extra clues for emotion prediction. In contrast, EmotionPush does not have this trait due to the anonymous mechanism. In addition, features such as typo, hyperlink, and emoji that only appear in chat-based data will need some domain-specific techniques to process. Incidentally, the objective of the challenge is to predict the emotion for each utterance. 
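The annotation rule described above can be summarized in a short sketch (the vote patterns are illustrative).

```python
# Sketch of the labelling rule: five annotators vote over Ekman's six emotions
# plus neutral, and an utterance keeps an emotion label only if some emotion
# receives more than three of the five votes; otherwise it is "non-neutral".
from collections import Counter

EMOTIONS = {"anger", "disgust", "fear", "joy", "sadness", "surprise", "neutral"}

def label_utterance(votes):
    assert len(votes) == 5 and all(v in EMOTIONS for v in votes)
    emotion, count = Counter(votes).most_common(1)[0]
    return emotion if count > 3 else "non-neutral"

print(label_utterance(["joy", "joy", "joy", "joy", "neutral"]))     # joy
print(label_utterance(["joy", "anger", "neutral", "joy", "fear"]))  # non-neutral
```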
However, according to the EmotionX 2019 specification, only four emotions are selected as label candidates, namely Joy, Sadness, Anger, and Neutral. Only these emotions are considered during the performance evaluation. The technical details are introduced and discussed in Section SECREF13 and Section SECREF26. Model Description For this challenge, we adapt BERT, proposed by BIBREF5, to help the model understand the conversational context. Technically, BERT is an end-to-end, deep pre-trained transformer encoder that dynamically provides language representations and has already achieved multiple state-of-the-art results on the GLUE benchmark BIBREF7 and many other tasks. A quick recap of BERT's architecture and its pre-training tasks is given in the following subsections. Model Description ::: Model Architecture BERT, the Bidirectional Encoder Representations from Transformers, consists of several transformer encoder layers that enable the model to extract very deep language features at both the token level and the sentence level. Each transformer encoder contains multi-head self-attention layers that learn multiple attention features for each word from its bidirectional context. The transformer and its self-attention mechanism were proposed by BIBREF8. The self-attention mechanism can be interpreted as a key-value mapping given a query. Given the embedding vectors of the input tokens, the query ($Q$), key ($K$) and value ($V$) are produced by projections with three parameter matrices $W^Q \in \mathbb {R}^{d_{{\rm model}} \times d_{k}}, W^K \in \mathbb {R}^{d_{\rm model} \times d_{k}}$ and $W^V \in \mathbb {R}^{d_{\rm model} \times d_{v}}$. The self-attention BIBREF8 is formally represented as ${\rm Attention}(Q, K, V) = {\rm softmax}\left(\frac{QK^{\top }}{\sqrt{d_k}}\right)V$, with $d_k = d_v = d_{\rm model}$, where $d_{\rm model} = 1024$ in the BERT large version and 768 in the BERT base version. Once the model can extract an attention feature, a single self-attention can be extended into multi-head self-attention, which allows sub-space features to be extracted at the same time. Overall, the multi-head attention mechanism is adopted in each transformer encoder, and several encoder layers are stacked together to form a deep transformer encoder. As model input, BERT accepts either one sentence or two sentences concatenated together as one input sequence, and the maximum length of the input sequence is 512 tokens. BERT was designed this way to give the model both sentence-level and token-level understanding. In the two-sentence case, a special token ([SEP]) is inserted between the two sentences. In addition, the first input token is a special token ([CLS]), and its corresponding output serves as the vector used for classification during fine-tuning. The outputs of the last encoder layer corresponding to each input token can be treated as word representations, and the representation of the first token ([CLS]) is considered the classification (output) representation for further fine-tuning tasks. In BERT, this vector is denoted as $C \in \mathbb {R}^{d_{\rm model}} $, a classification layer is denoted as $ W \in \mathbb {R}^{K \times d_{\rm model}}$, where $K$ is the number of classification labels, and the prediction $P$ of BERT is represented as $P = {\rm softmax}(CW^T)$. Model Description ::: Pre-training Tasks In pre-training, instead of using unidirectional language models, BERT uses two pre-training tasks: (1) Masked LM (cloze test) and (2) Next Sentence Prediction.
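Before turning to the pre-training tasks, the self-attention computation above can be written out as a minimal single-head sketch (toy dimensions, assuming PyTorch).

```python
# Minimal single-head self-attention sketch matching the description above:
# queries, keys and values are projections of the token embeddings, and the
# attention output is softmax(QK^T / sqrt(d_k)) V. Dimensions are toy-sized.
import math
import torch

def self_attention(x, w_q, w_k, w_v):
    q, k, v = x @ w_q, x @ w_k, x @ w_v            # project embeddings to Q, K, V
    d_k = q.size(-1)
    weights = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(d_k), dim=-1)
    return weights @ v                              # one attention "head"

d_model, d_k = 16, 8                                 # 768 / 1024 in BERT base / large
x = torch.randn(5, d_model)                          # 5 input tokens
w_q, w_k, w_v = (torch.randn(d_model, d_k) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)                                     # torch.Size([5, 8])
```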
At the first pre-training task, bidirectional language modeling can be done at this cloze-like pre-training. In detail, 15% tokens of input sequence will be masked at random and model need to predict those masked tokens. The encoder will try to learn contextual representations from every given tokens due to masking tokens at random. Model will not know which part of the input is going to be masked, so that the information of each masked tokens should be inferred by remaining tokens. At Next Sentence Prediction, two sentences concatenated together will be considered as model input. In order to give model a good nature language understanding, knowing relationship between sentence is one of important abilities. When generating input sequences, 50% of time the sentence B is actually followed by sentence A, and rest 50% of the time the sentence B will be picked randomly from dataset, and model need to predict if the sentence B is next sentence of sentence A. That is, the attention information will be shared between sentences. Such sentence-level understanding may have difficulties to be learned at first pre-training task (Masked LM), therefore, the pre-training task (NSP) is developed as second training goal to capture the cross sentence relationship. In this competition, limited by the size of dataset and the challenge in contextual emotion recognition, we consider BERT with both two pre-training tasks can give a good starting point to extract emotion changing during dialogue-like conversation. Especially the second pre-training task, it might be more important for dialogue-like conversation where the emotion may various by the context of continuous utterances. That is, given a set of continues conversations, the emotion of current utterance might be influenced by previous utterance. By this assumption and with supporting from the experiment results of BERT, we can take sentence A as one-sentence context and consider sentence B as the target sentence for emotion prediction. The detail will be described in Section SECREF4. Methodology The main goal of the present work is to predict the emotion of utterance within the dialogue. Following are four major difficulties we concern about: The emotion of the utterances depends not only on the text but also on the interaction happened earlier. The source of the two datasets are different. Friends is speech-based dialogues and EmotionPush is chat-based dialogues. It makes datasets possess different characteristics. There are only $1,000$ dialogues in both training datasets which are not large enough for the stability of training a complex neural-based model. The prediction targets (emotion labels) are highly unbalanced. The proposed approach is summarized in Figure FIGREF3, which aims to overcome these challenges. The framework could be separated into three steps and described as follow: Methodology ::: Causal Utterance Modeling Given a dialogue $D^{(i)}$ which includes sequence of utterances denoted as $D^{(i)}=(u^{(i)}_{1}, u^{(i)}_{2}, ..., u^{(i)}_{n})$, where $i$ is the index in dataset and $n$ is the number of utterances in the given dialogue. In order to conserve the emotional information of both utterance and conversation, we rearrange each two consecutive utterances $u_{t}, u_{t-1}$ into a single sentence representation $x_{t}$ as The corresponding sentence representation corpus $X^{(i)}$ are denoted as $X^{(i)}=(x^{(i)}_{1}, x^{(i)}_{2}, ..., x^{(i)}_{n})$. 
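A minimal sketch of this pairing step is given below; the placeholder used when no previous utterance exists and the exact special-token layout are assumptions for illustration.

```python
# Sketch of causal utterance modelling: each utterance is paired with the
# utterance that precedes it, so BERT sees "previous utterance [SEP] target
# utterance" as one input sequence. The "[None]" placeholder and the special
# tokens shown in the print-out are illustrative.
def build_sentence_representations(dialogue):
    """dialogue: list of utterance strings, in order."""
    pairs = []
    for t, utterance in enumerate(dialogue):
        previous = dialogue[t - 1] if t > 0 else "[None]"
        pairs.append((previous, utterance))          # (sentence A, sentence B)
    return pairs

dialogue = ["I got the job!", "That's amazing.", "Let's celebrate tonight."]
for prev, target in build_sentence_representations(dialogue):
    print(f"[CLS] {prev} [SEP] {target} [SEP]")
```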
Note that the first utterance within a conversation does not have its causal utterance (previous sentence), therefore, the causal utterance will be set as [None]. A practical example of sentence representation is shown in Table TABREF11. Since the characteristics of two datasets are not identical, we customize different causal utterance modeling strategies to refine the information in text. For Friends, there are two specific properties. The first one is that most dialogues are surrounding with the six main characters, including Rachel, Monica, Phoebe, Joey, Chandler, and Ross. The utterance ratio of given by the six roles is up to $83.4\%$. Second, the personal characteristics of the six characters are very clear. Each leading role has its own emotion undulated rule. To make use of these features, we introduce the personality tokenization which help learning the personality of the six characters. Personality tokenization concatenate the speaker and says tokens before the input utterance if the speaker is one of the six characters. The example is shown in Table TABREF12. For EmotionPush, the text are informal chats which including like slang, acronym, typo, hyperlink, and emoji. Another characteristic is that the specific name entities are tokenized with random index. (e.g. “organization_80”, “person_01”, and “time_12”). We consider some of these informal text are related to expressing emotion such as repeated typing, purposed capitalization, and emoji (e.g. “:D”, “:(”, and “<3”)). Therefore, we keep most informal expressions but only process hyperlinks, empty utterance, and name entities by unifying the tokens. Methodology ::: Model Pre-training Since the size of both datasets are not large enough for complex neural-based model training as well as BERT model is only pre-train on formal text datasets, the issues of overfitting and domain bias are important considerations for design the pre-training process. To avoid our model overfitting on the training data and increase the understanding of informal text, we adapted BERT and derived two models, namely FriendsBERT and ChatBERT, with different pre-training tasks before the formal training process for Friends and EmotionPush dataset, respectively. The pre-training strategies are described below. For pre-training FriendsBERT, we collect the completed scripts of all ten seasons of Friends TV shows from emorynlp which includes 3,107 scenes within 61,309 utterances. All the utterances are followed the preprocessing methods mentions above to compose the corpus for Masked language model pre-training task. The consequent utterances in the same scenes are considered as the consequent sentences to pre-train the Next Sentence Prediction task. In the pre-training process, the training loss is the sum of the mean likelihood of two pre-train tasks. For pre-training ChatBERT, we pre-train our model on the Twitter dataset, since the text and writing style on Twitter are close to the chat text where both may involved with many informal words or emoticons as well. The Twitter emotion dataset, 8 basic emotions from emotion wheel BIBREF1, was collected by twitter streaming API with specific emotion-related hashtags, such as #anger, #joy, #cry, #sad and etc. The hashtags in tweets are treated as emotion label for model fine-tuning. The tweets were fine-grined processing followed the rules in BIBREF9, BIBREF4, including duplicate tweets removing, the emotion hashtags must appearing in the last position of a tweet, and etc. 
The statis of tweets were summarized in Table TABREF17. Each tweet and corresponding emotion label composes an emotion classification dataset for pre-training. Methodology ::: Fine-tuning Since our emotion recognition task is treated as a sequence-level classification task, the model would be fine-tuned on the processed training data. Following the BERT construction, we take the first embedding vector which corresponds to the special token [CLS] from the final hidden state of the Transformer encoder. This vector represents the embedding vector of the corresponding conversation utterances which is denoted as $\mathbf {C} \in \mathbb {R}^{H}$, where $H$ is the embedding size. A dense neural layer is treated as a classification layer which consists of parameters $\mathbf {W} \in \mathbb {R}^{K\times H}$ and $\mathbf {b} \in \mathbb {R}^{K}$, where $K$ is the number of emotion class. The emotion prediction probabilities $\mathbf {P} \in \mathbb {R}^{K}$ are computed by a softmax activation function as All the parameters in BERT and the classification layer would be fine-turned together to minimize the Negative Log Likelihood (NLL) loss function, as Equation (DISPLAY_FORM22), based on the ground truth emotion label $c$. In order to tackle the problem of highly unbalanced emotion labels, we apply weighted balanced warming on NLL loss function, as Equation (DISPLAY_FORM23), in the first epoch of fine-tuning procedure. where $\mathbf {w}$ are the weights of corresponding emotion label $c$ which are computed and normalize by the frequency as By adding the weighted balanced warming on NLL loss, the model could learn to predict the minor emotions (e.g. anger and sadness) earlier and make the training process more stable. Since the major evaluation metrics micro F1-score is effect by the number of each label, we only apply the weighted balanced warming in first epoch to optimize the performance. Experiments Since the EmotionX challenge only provided the gold labels in training data, we pick the best performance model (weights) to predict the testing data. In this section, we present the experiment and evaluation results. Experiments ::: Experimental Setup The EmotionX challenge consists of $1,000$ dialogues for both Friends and EmotionPush. In all of our experiments, each dataset is separated into top 800 dialogues for training and last 200 dialogues for validation. Since the EmotionX challenge considers only the four emotions (anger, joy, neutral, and sadness) in the evaluation stage, we ignore all the data point corresponding to other emotions directly. The details of emotions distribution are shown in Table TABREF18. The hyperparameters and training setup of our models (FriendsBERT and ChatBERT) are shown in Table TABREF25. Some common and easily implemented methods are selected as the baselines embedding methods and classification models. The baseline embedding methods are including bag-of-words (BOW), term frequency–inverse document frequency (TFIDF), and neural-based word embedding. The classification models are including Logistic Regression (LR), Random Forest (RF), TextCNN BIBREF10 with initial word embedding as GloVe BIBREF11, and our proposed model. All the experiment results are based on the best performances of validation results. Experiments ::: Performance The experiment results of validation on Friends are shown in Table TABREF19. The proposed model and baselines are evaluated based on the Precision (P.), Recall (R.), and F1-measure (F1). 
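Before discussing the individual baselines, the class-weighted warm-up described in the fine-tuning section can be sketched as follows, assuming PyTorch; the normalization of the inverse-frequency weights is a plausible reading rather than the exact formula.

```python
# Sketch of the weighted balanced warming: label weights are derived from
# inverse label frequency (the normalisation shown is an assumption) and
# applied to the NLL loss only during the first epoch.
import torch
import torch.nn as nn

def label_weights(train_labels, num_classes):
    counts = torch.bincount(torch.tensor(train_labels), minlength=num_classes).float()
    inv = 1.0 / counts.clamp(min=1)
    return inv / inv.sum()                           # normalised inverse frequency

labels = [0, 0, 0, 0, 1, 2, 3, 3]                    # e.g. neutral, joy, sadness, anger
weights = label_weights(labels, num_classes=4)

log_probs = torch.log_softmax(torch.randn(8, 4), dim=-1)   # stand-in model outputs
targets = torch.tensor(labels)

for epoch in range(3):
    criterion = nn.NLLLoss(weight=weights) if epoch == 0 else nn.NLLLoss()
    loss = criterion(log_probs, targets)
    print(epoch, float(loss))
```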
For the traditional baselines, namely BOW and TFIDF, we observe surprisingly high F1 scores of around $0.81$; however, the scores for Anger and Sadness are lower. This shows that the traditional approaches tend to predict the labels with large sample sizes, such as Joy and Neutral, but fail to handle the scarce classes, even when an ensemble random forest classifier is adopted. To counter this unbalanced learning, we use the weighted loss mechanism for both TextCNN and causal modeling TextCNN (C-TextCNN); these models suffer less than the traditional baselines and achieve a somewhat more balanced performance, with around 15% and 7% improvement on Anger and Sadness, respectively. We then add causal utterance modeling to the original TextCNN, feeding the previous utterance as well as the target utterance into the model. Causal utterance modeling improves C-TextCNN over TextCNN by 6%, 2% and 1% on Anger, Joy and overall F1 score, respectively. Motivated by these preliminary experiments, the proposed FriendsBERT also adopts both the weighted loss and causal utterance modeling. Compared to the original single-sentence BERT (FriendsBERT-base-s), the proposed FriendsBERT-base improves by 1% for Joy and overall F1, and by 2% for Sadness. For the final validation performance, our proposed approach achieves the highest scores, namely $0.85$ and $0.86$ for FriendsBERT-base and FriendsBERT-large, respectively. Overall, the proposed FriendsBERT successfully captures sentence-level contextual information and outperforms all the baselines, achieving high performance not only on the frequent labels but also on the scarce ones. Similar settings are also adapted to the EmotionPush dataset for the final evaluation. Experiments ::: Evaluation Results The testing dataset consists of 240 dialogues, including $3,296$ and $3,536$ utterances in Friends and EmotionPush respectively. We re-train our FriendsBERT and ChatBERT with the top 920 training dialogues and predict the evaluation results using the model with the best validation performance. The results are shown in Table TABREF29 and Table TABREF30. The present method achieves $81.5\%$ and $88.5\%$ micro F1-score on the testing datasets of Friends and EmotionPush, respectively. Conclusion and Future work In the present work, we propose FriendsBERT and ChatBERT for the multi-utterance emotion recognition task on the EmotionLines dataset. The proposed models are adapted from BERT BIBREF5 with three main improvements to the model training procedure: the causal utterance modeling mechanism, task-specific model pre-training, and an adaptive weighted loss. Causal utterance modeling takes advantage of sentence-level context information during model inference. The task-specific pre-training helps counteract the bias across different text domains. The weighted loss prevents our model from predicting only the large-sample classes. The effectiveness and generalizability of the proposed methods are demonstrated in the experiments. In future work, we consider including the conditional probabilistic constraint $P ({\rm Emo}_{B} | \hat{\rm Emo}_{A})$, so that the model predicts the emotion based on an understanding of the context emotions. This might be a more reasonable way of guiding the model than predicting the emotion of ${\rm Sentence}_B$ directly. In addition, due to the limitation of the BERT input format, supporting a flexible number of input sentences is an important design requirement for our future work.
Developing personality embeddings is another direction for future work in emotion recognition. Such personality embeddings would act as a sentence-level embedding injected into the word embeddings, and this additional information could potentially yield further improvements.
Introduction How humans process language has become increasingly relevant in natural language processing since physiological data during language understanding is more accessible and recorded with less effort. In this work, we focus on eye-tracking and electroencephalography (EEG) recordings to capture the reading process. On one hand, eye movement data provides millisecond-accurate records about where humans look when they are reading, and is highly correlated with the cognitive load associated with different stages of text processing. On the other hand, EEG records electrical brain activity across the scalp and is a direct measure of physiological processes, including language processing. The combination of both measurement methods enables us to study the language understanding process in a more natural setting, where participants read full sentences at a time, in their own speed. Eye-tracking then permits us to define exact word boundaries in the timeline of a subject reading a sentence, allowing the extraction of brain activity signals for each word. Human cognitive language processing data is immensely useful for NLP: Not only can it be leveraged to improve NLP applications (e.g. barrett2016weakly for part-of-speech tagging or klerke2016improving for sentence compression), but also to evaluate state-of-the-art machine learning systems. For example, hollenstein2019cognival evaluate word embeddings, or schwartz2019inducing fine-tune language models with brain-relevant bias. Additionally, the availability of labelled data plays a crucial role in all supervised machine learning applications. Physiological data can be used to understand and improve the labelling process (e.g. tokunaga2017eye), and, for instance, to build cost models for active learning scenarios BIBREF0. Is it possible to replace this expensive manual work with models trained on physiological activity data recorded from humans while reading? That is to say, can we find and extract relevant aspects of text understanding and annotation directly from the source, i.e. eye-tracking and brain activity signals during reading? Motivated by these questions and our previously released dataset, ZuCo 1.0 BIBREF1, we developed this new corpus, where we specifically aim to collect recordings during natural reading as well as during annotation. We provide the first dataset of simultaneous eye movement and brain activity recordings to analyze and compare normal reading to task-specific reading during annotation. The Zurich Cognitive Language Processing Corpus (ZuCo) 2.0, including raw and preprocessed eye-tracking and electroencephalography (EEG) data of 18 subjects, as well as the recording and preprocessing scripts, is publicly available at https://osf.io/2urht/. It contains physiological data of each subject reading 739 English sentences from Wikipedia (see example in Figure FIGREF1). We want to highlight the re-use potential of this data. In addition to the psycholinguistic motivation, this corpus is especially tailored for training and evaluating machine learning algorithms for NLP purposes. We conduct a detailed technical validation of the data as proof of the quality of the recordings. Related Work Some eye-tracking corpora of natural reading (e.g. the Dundee BIBREF2, Provo BIBREF3 and GECO corpus BIBREF4), and a few EEG corpora (for example, the UCL corpus BIBREF5) are available. It has been shown that this type of cognitive processing data is useful for improving and evaluating NLP methods (e.g. 
barrett2018sequence,hollenstein2019cognival, hale2018finding). However, before the Zurich Cognitive Language Processing Corpus (ZuCo 1.0), there was no available data for simultaneous eye-tracking and EEG recordings of natural reading. dimigen2011coregistration studied the linguistic effects of eye movements and EEG co-registration in natural reading and showed that they accurately represent lexical processing. Moreover, the simultaneous recordings are crucial to extract word-level brain activity signals. While the above mentioned studies analyze and leverage natural reading, some NLP work has used eye-tracking during annotation (but, as of yet, not EEG data). mishra2016predicting and joshi2014measuring recorded eye-tracking during binary sentiment annotation (positive/negative). This data was used to determine the annotation complexity of the text passages based on eye movement metrics and for sarcasm detection BIBREF6. Moreover, eye-tracking has been used to analyze the word sense annotation process in Hindi BIBREF7, named entity annotation in Japanese BIBREF8, and to leverage annotator gaze behaviour for coreference resolution BIBREF9. Finally, tomanek2010cognitive used eye-tracking data during entity annotation to build a cost model for active learning. However, until now there is no available data or research that analyzes the differences in the human processing of normal reading versus annotation. Related Work ::: ZuCo1.0 In previous work, we recorded a first dataset of simultaneous eye-tracking and EEG during natural reading BIBREF1. ZuCo 1.0 consists of three reading tasks, two of which contain very similar reading material and experiments as presented in the current work. However, the main difference and reason for recording ZuCo 2.0, consists in the experiment procedure. For ZuCo 1.0 the normal reading and task-specific reading paradigms were recorded in different sessions on different days. Therefore, the recorded data is not appropriate as a means of comparison between natural reading and annotation, since the differences in the brain activity data might result mostly from the different sessions due to the sensitivity of EEG. This, and extending the dataset with more sentences and more subjects, were the main factors for recording the current corpus. We purposefully maintained an overlap of some sentences between both datasets to allow additional analyses (details are described in Section SECREF7). Corpus Construction In this section we describe the contents and experimental design of the ZuCo 2.0 corpus. Corpus Construction ::: Participants We recorded data from 19 participants and discarded the data of one of them due to technical difficulties with the eye-tracking calibration. Hence, we share the data of 18 participants. All participants are healthy adults (mean age = 34 (SD=8.3), 10 females). Their native language is English, originating from Australia, Canada, UK, USA or South Africa. Two participants are left-handed and three participants wear glasses for reading. Details on subject demographics can be found in Table TABREF4. All participants gave written consent for their participation and the re-use of the data prior to the start of the experiments. The study was approved by the Ethics Commission of the University of Zurich. Corpus Construction ::: Reading materials During the recording session, the participants read 739 sentences that were selected from the Wikipedia corpus provided by culotta2006integrating. 
This corpus was chosen because it provides annotations of semantic relations. We included seven of the originally defined relation types: political_affiliation, education, founder, wife/husband, job_title, nationality, and employer. The sentences were chosen in the same length range as ZuCo 1.0, and with similar Flesch reading ease scores. The dataset statistics are shown in Table TABREF2. Of the 739 sentences, the participants read 349 sentences in a normal reading paradigm, and 390 sentences in a task-specific reading paradigm, in which they had to determine whether a certain relation type occurred in the sentence or not. Table TABREF3 shows the distribution of the different relation types in the sentences of the task-specific annotation paradigm. Purposefully, there are 63 duplicates between the normal reading and the task-specific sentences (8% of all sentences). The intention of these duplicate sentences is to provide a set of sentences read twice by all participants with a different task in mind. Hence, this enables the comparison of eye-tracking and brain activity data when reading normally and when annotating specific relations (see examples in Section SECREF4). Furthermore, there is also an overlap in the sentences between ZuCo 1.0 and ZuCo 2.0: 100 normal reading and 85 task-specific sentences recorded for this dataset were already recorded in ZuCo 1.0. This allows for comparisons between the different recording procedures (i.e. session-specific effects) and between more participants (subject-specific effects). Corpus Construction ::: Experimental design As mentioned above, we recorded two different reading tasks for the ZuCo 2.0 dataset. During both tasks the participants were able to read at their own speed, using a control pad to move to the next sentence and to answer the control questions, which allowed for natural reading. Since each subject reads at their own pace, the reading speed varies between subjects. Table TABREF4 shows the average reading speed for each task, i.e. the average number of seconds a subject spends per sentence before switching to the next one. All 739 sentences were recorded in a single session for each participant. The duration of the recording sessions was between 100 and 180 minutes, depending on the time required to set up and calibrate the devices, and the personal reading speed of the participants. We recorded 14 blocks of approx. 50 sentences, alternating between tasks: 50 sentences of normal reading, followed by 50 sentences of task-specific reading. The order of blocks and sentences within blocks was identical for all subjects. Each sentence block was preceded by a practice round of three sentences. Corpus Construction ::: Experimental design ::: Normal reading (NR) In the first task, participants were instructed to read the sentences naturally, for comprehension only, without any further instructions. Figure FIGREF8 (left) shows an example sentence as it was depicted on the screen during recording. As shown in Figure FIGREF8 (middle), the control condition for this task consisted of single-choice questions about the content of the previous sentence. 12% of the sentences, selected at random, were followed by such a comprehension question with three answer options on a separate screen. 
Corpus Construction ::: Experimental design ::: Task-specific reading (TSR) In the second task, the participants were instructed to search for a specific relation in each sentence they read. Instead of comprehension questions, the participants had to decide for each sentence whether it contains the relation or not, i.e. they were actively annotating each sentence. Figure FIGREF8 (right) shows an example screen for this task. 17% of the sentences did not include the relation type and were used as control conditions. All sentences within one block involved the same relation type. The blocks started with a practice round, which described the relation and was followed by three sample sentences, so that the participants would be familiar with the respective relation type. Corpus Construction ::: Linguistic assessment As a linguistic assessment, the vocabulary and language proficiency of the participants was tested with the LexTALE test (Lexical Test for Advanced Learners of English, lemhofer2012introducing). This is an unspeeded lexical decision task designed for intermediate to highly proficient language users. The average LexTALE score over all participants was 88.54%. Moreover, we also report the scores the participants achieved with their answers to the reading comprehension control questions and their relation annotations. The detailed scores for all participants are also presented in Table TABREF4. Corpus Construction ::: Data acquisition Data acquisition took place in a sound-attenuated and dark experiment room. Participants were seated at a distance of 68cm from a 24-inch monitor with a resolution of 800x600 pixels. A stable head position was ensured via a chin rest. Participants were instructed to stay as still as possible during the tasks to avoid motor EEG artifacts. Participants were also offered snacks and water during the breaks and were encouraged to rest. All sentences were presented at the same position on the screen and could span multiple lines. The sentences were presented in black on a light grey background with font size 20-point Arial, resulting in a letter height of 0.8 mm. The experiment was programmed in MATLAB 2016b BIBREF10, using PsychToolbox BIBREF11. Participants completed the tasks sitting alone in the room, while two research assistants were monitoring their progress in the adjoining room. All recording scripts including detailed participant instructions are available alongside the data. Corpus Construction ::: Data acquisition ::: Eye-tracking acquisition Eye position and pupil size were recorded with an infrared video-based eye tracker (EyeLink 1000 Plus, SR Research) at a sampling rate of 500 Hz. The eye tracker was calibrated with a 9-point grid at the beginning of the session and re-validated before each block of sentences. Corpus Construction ::: Data acquisition ::: EEG acquisition High-density EEG data were recorded at a sampling rate of 500 Hz with a bandpass of 0.1 to 100 Hz, using a 128-channel EEG Geodesic Hydrocel system (Electrical Geodesics). The recording reference was set at electrode Cz. The head circumference of each participant was measured to select an appropriately sized EEG net. To ensure good contact, the impedance of each electrode was checked prior to recording, and was kept below 40 kOhm. Electrode impedance levels were checked after every third block of 50 sentences (approx. every 30 mins) and reduced if necessary. 
Corpus Construction ::: Preprocessing and feature extraction ::: Eye-tracking The eye-tracking data consists of (x,y) gaze location entries for all individual fixations (Figure FIGREF1b). Coordinates were given in pixels with respect to the monitor coordinates (the upper left corner of the screen was (0,0) and down/right was positive). We provide this raw data as well as various engineered eye-tracking features. For this feature extraction only fixations within the boundaries of each displayed word were extracted. Data points distinctly not associated with reading (minimum distance of 50 pixels to the text) were excluded. Additionally, fixations shorter than 100 ms were excluded from the analyses, because these are unlikely to reflect fixations relevant for reading BIBREF12. On the basis of the GECO and ZuCo 1.0 corpora, we extracted the following features: (i) gaze duration (GD), the sum of all fixations on the current word in the first-pass reading before the eye moves out of the word; (ii) total reading time (TRT), the sum of all fixation durations on the current word, including regressions; (iii) first fixation duration (FFD), the duration of the first fixation on the current word; (iv) single fixation duration (SFD), the duration of the first and only fixation on the current word; and (v) go-past time (GPT), the sum of all fixations prior to progressing to the right of the current word, including regressions to previous words that originated from the current word. For each of these eye-tracking features we additionally computed the pupil size. Furthermore, we extracted the number of fixations and mean pupil size for each word and sentence. Corpus Construction ::: Preprocessing and feature extraction ::: EEG The EEG data shared in this project are available as raw data, but also preprocessed with Automagic (version 1.4.6, pedroni2019automagic), a tool for automatic EEG data cleaning and validation. 105 EEG channels (i.e. electrodes) were used from the scalp recordings. 9 EOG channels were used for artifact removal and an additional 14 channels lying mainly on the neck and face were discarded before data analysis. Bad channels were identified and interpolated. We used the Multiple Artifact Rejection Algorithm (MARA), a supervised machine learning algorithm that evaluates ICA components, for automatic artifact rejection. MARA has been trained on manual component classifications, and thus captures a wide range of artifacts. MARA is especially effective at detecting and removing eye and muscle artifact components. The effect of this preprocessing can be seen in Figure FIGREF1d. After preprocessing, we synchronized the EEG and eye-tracking data to enable EEG analyses time-locked to the onsets of fixations. To compute oscillatory power measures, we band-pass filtered the continuous EEG signals across an entire reading task for eight different frequency bands, resulting in a time series for each frequency band. The frequency bands were defined as follows: theta$_1$ (4–6 Hz), theta$_2$ (6.5–8 Hz), alpha$_1$ (8.5–10 Hz), alpha$_2$ (10.5–13 Hz), beta$_1$ (13.5–18 Hz), beta$_2$ (18.5–30 Hz), gamma$_1$ (30.5–40 Hz), and gamma$_2$ (40–49.5 Hz). We then applied a Hilbert transformation to each of these time series. We specifically chose the Hilbert transformation to maintain the temporal information of the amplitude of the frequency bands, which allows extracting the power of the different frequencies for time segments defined through the fixations from the eye-tracking recording. 
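The two feature families just described can be illustrated with a short sketch. This is a simplified illustration under stated assumptions rather than the released preprocessing scripts: fixations are assumed to be already mapped to word indices and filtered as above, the cleaned EEG is assumed to share a 500 Hz time base with the eye tracker, and all function names are illustrative.

```python
# Illustrative sketch (not the released preprocessing scripts) of the two
# feature families described above, under the stated assumptions:
#  - fixations are given per word as (word_idx, duration_ms), already filtered
#    (within word boundaries, >= 100 ms), in temporal order;
#  - the cleaned EEG (n_channels, n_samples, in microvolts) is synchronized
#    with the eye tracker on a common 500 Hz time base.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def reading_time_features(fixations, n_words):
    """Word-level GD, TRT, FFD, SFD, GPT and fixation counts."""
    feats = {w: {"FFD": 0, "SFD": 0, "GD": 0, "TRT": 0, "GPT": 0, "nFix": 0}
             for w in range(n_words)}
    first_pass, go_past = set(), set()
    for i, (w, dur) in enumerate(fixations):
        f = feats[w]
        f["nFix"] += 1
        f["TRT"] += dur                                   # includes regressions
        if f["nFix"] == 1:
            f["FFD"] = dur
            first_pass.add(w)
            go_past.add(w)
        go_past -= {u for u in go_past if w > u}          # words now passed to the right
        for u in go_past:                                 # go-past time accumulation
            feats[u]["GPT"] += dur
        if w in first_pass:                               # gaze duration: first pass only
            f["GD"] += dur
            if i + 1 >= len(fixations) or fixations[i + 1][0] != w:
                first_pass.discard(w)
    for f in feats.values():
        if f["nFix"] == 1:                                # single fixation duration
            f["SFD"] = f["FFD"]
    return feats

FS = 500  # Hz
BANDS = {"theta1": (4, 6), "theta2": (6.5, 8), "alpha1": (8.5, 10),
         "alpha2": (10.5, 13), "beta1": (13.5, 18), "beta2": (18.5, 30),
         "gamma1": (30.5, 40), "gamma2": (40, 49.5)}

def band_power(eeg, low, high, fs=FS, order=4):
    """Instantaneous band power of the continuous task EEG via the Hilbert envelope."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, eeg, axis=-1), axis=-1)) ** 2

def fixation_eeg_features(eeg, fixation_windows, reject_uv=90.0):
    """Mean band power per fixation window; trials exceeding +/-90 microvolts are rejected."""
    power = {name: band_power(eeg, lo, hi) for name, (lo, hi) in BANDS.items()}
    out = []
    for start, end in fixation_windows:
        if np.any(np.abs(eeg[:, start:end]) > reject_uv):
            out.append(None)                              # transient-noise trial
            continue
        out.append({name: power[name][:, start:end].mean(axis=-1)
                    for name in BANDS})
    return out
```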
Thus, for each eye-tracking feature we computed the corresponding EEG feature in each frequency band. Furthermore, we extracted sentence-level EEG features by calculating the power in each frequency band, and additionally, the difference of the power spectra between frontal left and right homologue electrode pairs. For each eye-tracking based EEG feature, all channels were subject to an artifact rejection criterion of $90\mu V$ to exclude trials with transient noise. Data Validation The aim of the technical validation of the data is to guarantee good recording quality and to replicate findings of previous studies investigating co-registration of EEG and eye movement data during natural reading tasks (e.g. dimigen2011coregistration). We also compare the results to ZuCo 1.0 BIBREF1, which allows a more direct comparison due to the analogous recording procedure. Data Validation ::: Eye-tracking We validated the recorded eye-tracking data by analyzing the fixations made by all subjects through their reading speed and omission rate on sentence level. The omission rate is defined as the percentage of words that is not fixated in a sentence. Figure FIGREF10 (middle) shows the mean reading speed over all subjects, measured in seconds per sentence, and Figure FIGREF10 (right) shows the mean omission rates aggregated over all subjects for each task. Clearly, the participants made fewer fixations during the task-specific reading, which led to faster reading. Moreover, we corroborated these sentence-level metrics by visualizing the skipping proportion on word level (Figure FIGREF13). The skipping proportion is the average rate of words being skipped (i.e. not being fixated) in a sentence. As expected, this also increases in the task-specific reading. Although the reading material is from the same source and of the same length range (see Figure FIGREF10 (left)), in the first task (NR) passive reading was recorded, while in the second task (TSR) the subjects had to annotate a specific relation type in each sentence. Thus, the task-specific annotation reading led to shorter reading passes because the goal was merely to recognize a relation in the text, but not necessarily to process every word in each sentence. This distinct reading behavior is shown in Figure FIGREF15, where fixations occur until the end of the sentence during normal reading, while during task-specific reading the fixations stop after the decisive words needed to detect a given relation type. Finally, we also analyzed the average reading times for each of the extracted eye-tracking features. The means and distributions for both tasks are shown in Figure FIGREF21. These results are in line with the recorded data in ZuCo 1.0, as well as with the features extracted in the GECO corpus BIBREF4. Data Validation ::: EEG As a first validation step, we extracted fixation-related potentials (FRPs), where the EEG signal during all fixations of one task is averaged. Figure FIGREF24 shows the time series of the resulting FRPs for two electrodes (PO8 and Cz), as well as topographies of the voltage distributions across the scalp at selected points in time. The five components (for which the scalp topographies are plotted) are highly similar in the time course of the chosen electrodes to dimigen2011coregistration as well as to ZuCo 1.0. Moreover, these previous studies were able to show an effect of fixation duration on the resulting FRPs. To show this dependency we followed two approaches. 
First, for each reading task, all single-trial FRPs were ordered by fixation duration and a vertical sliding time-window was used to smooth the data BIBREF13. Figure FIGREF25 (bottom) shows the resulting plots. In line with this previous work, a first positive peak can be identified at 100 ms post-fixation onset. A second positive peak is located dependent on the duration of the fixation, which can be explained by the time-jittered succeeding fixation. The second approach is based on henderson2013co, in which single-trial EEG segments are clustered by the duration of the current fixation. As shown in Figure FIGREF25 (top), we chose four clusters and averaged the data within each cluster to four distinct FRPs, depending on the fixation duration. Again, the same positive peaks become apparent. Both findings are consistent with the previous work mentioned and with our findings from ZuCo 1.0. Conclusion We presented a new, freely available corpus of eye movement and electrical brain activity recordings during natural reading as well as during annotation. This is the first dataset that allows for the comparison between these two reading paradigms. We described the materials and experiment design in detail and conducted an extensive validation to ensure the quality of the recorded data. Since this corpus is tailored to cognitively-inspired NLP, the applications and re-use potential of this data are extensive. The provided word-level and sentence-level eye-tracking and EEG features can be used to improve and evaluate NLP and machine learning methods, for instance, to evaluate linguistic phenomena in neural models via psycholinguistic data. In addition, because the sentences contain semantic relation labels and the annotations of all participants, the corpus can also be widely used for relation extraction and classification. Finally, the two carefully constructed reading paradigms allow for the comparison between normal reading and reading during annotation, which can be relevant for improving the manual labelling process as well as the quality of the annotations for supervised machine learning.
Introduction Ancient Chinese is the written language of ancient China. It is a treasure of Chinese culture which brings together the wisdom and ideas of the Chinese nation and chronicles the ancient cultural heritage of China. Learning ancient Chinese not only helps people to understand and inherit the wisdom of the ancients, but also helps them to absorb and develop Chinese culture. However, it is difficult for modern people to read ancient Chinese. Firstly, compared with modern Chinese, ancient Chinese is more concise and shorter. The grammatical order of modern Chinese is also quite different from that of ancient Chinese. Secondly, most modern Chinese words are disyllabic, while most ancient Chinese words are monosyllabic. Thirdly, polysemy is pervasive in ancient Chinese. In addition, manual translation has a high cost. Therefore, it is meaningful and useful to study the automatic translation from ancient Chinese to modern Chinese. Through ancient-modern Chinese translation, the wisdom, talent and accumulated experience of the predecessors can be passed on to more people. Neural machine translation (NMT) BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 has achieved remarkable performance on many bilingual translation tasks. It is an end-to-end learning approach for machine translation, with the potential to show great advantages over statistical machine translation (SMT) systems. However, the NMT approach has not been widely applied to the ancient-modern Chinese translation task. One of the main reasons is the limited high-quality parallel data resource. The most popular method of acquiring translation examples is bilingual text alignment BIBREF5 . This kind of method can be classified into two types: lexical-based and statistical-based. The lexical-based approaches BIBREF6 , BIBREF7 focus on lexical information, which utilize the bilingual dictionary BIBREF8 , BIBREF9 or lexical features. Meanwhile, the statistical-based approaches BIBREF10 , BIBREF11 rely on statistical information, such as sentence length ratio in two languages and align mode probability. However, these methods are designed for other bilingual language pairs that are written in different language characters (e.g. English-French, Chinese-Japanese). The ancient-modern Chinese pair has some characteristics that are quite different from other language pairs. For example, ancient and modern Chinese are both written in Chinese characters, but ancient Chinese is highly concise and its syntactical structure is different from modern Chinese. The traditional methods do not take these characteristics into account. In this paper, we propose an effective ancient-modern Chinese text alignment method at the level of clause based on the characteristics of these two languages. The proposed method combines both lexical-based information and statistical-based information, which achieves 94.2 F1-score on the Test set. Recently, a simple longest common subsequence based approach for ancient-modern Chinese sentence alignment was proposed in BIBREF12 . Our experiments showed that our proposed alignment approach performs much better than their method. We apply the proposed method to create a large translation parallel corpus which contains INLINEFORM0 1.24M bilingual sentence pairs. To our best knowledge, this is the first large high-quality ancient-modern Chinese dataset. Furthermore, we test SMT models and various NMT models on the created dataset and provide a strong baseline for this task. 
Overview There are four steps to build the ancient-modern Chinese translation dataset: (i) The parallel corpus crawling and cleaning. (ii) The paragraph alignment. (iii) The clause alignment based on aligned paragraphs. (iv) Augmenting data by merging aligned adjacent clauses. The most critical step is the third step. Clause Alignment In the clause alignment step, we combine both statistical-based and lexical-based information to measure the score for each possible clause alignment between ancient and modern Chinese strings. The dynamic programming is employed to further find overall optimal alignment paragraph by paragraph. According to the characteristics of the ancient and modern Chinese languages, we consider the following factors to measure the alignment score INLINEFORM0 between a bilingual clause pair: Lexical Matching. The lexical matching score is used to calculate the matching coverage of the ancient clause INLINEFORM0 . It contains two parts: exact matching and dictionary matching. An ancient Chinese character usually corresponds to one or more modern Chinese words. In the first part, we carry out Chinese Word segmentation to the modern Chinese clause INLINEFORM1 . Then we match the ancient characters and modern words in the order from left to right. In further matching, the words that have been matched will be deleted from the original clauses. However, some ancient characters do not appear in its corresponding modern Chinese words. An ancient Chinese dictionary is employed to address this issue. We preprocess the ancient Chinese dictionary and remove the stop words. In this dictionary matching step, we retrieve the dictionary definition of each unmatched ancient character and use it to match the remaining modern Chinese words. To reduce the impact of universal word matching, we use Inverse Document Frequency (IDF) to weight the matching words. The lexical matching score is calculated as: DISPLAYFORM0 The above equation is used to calculate the matching coverage of the ancient clause INLINEFORM0 . The first term of equation ( EQREF8 ) represents exact matching score. INLINEFORM1 denotes the length of INLINEFORM2 , INLINEFORM3 denotes each ancient character in INLINEFORM4 , and the indicator function INLINEFORM5 indicates whether the character INLINEFORM6 can match the words in the clause INLINEFORM7 . The second term is dictionary matching score. Here INLINEFORM8 and INLINEFORM9 represent the remaining unmatched strings of INLINEFORM10 and INLINEFORM11 , respectively. INLINEFORM12 denotes the INLINEFORM13 -th character in the dictionary definition of the INLINEFORM14 and its IDF score is denoted as INLINEFORM15 . The INLINEFORM16 is a predefined parameter which is used to normalize the IDF score. We tuned the value of this parameter on the Dev set. Statistical Information. Similar to BIBREF11 and BIBREF6 , the statistical information contains alignment mode and length information. There are many alignment modes between ancient and modern Chinese languages. If one ancient Chinese clause aligns two adjacent modern Chinese clauses, we call this alignment as 1-2 alignment mode. We show some examples of different alignment modes in Figure FIGREF9 . In this paper, we only consider 1-0, 0-1, 1-1, 1-2, 2-1 and 2-2 alignment modes which account for INLINEFORM0 of the Dev set. We estimate the probability Pr INLINEFORM1 n-m INLINEFORM2 of each alignment mode n-m on the Dev set. To utilize length information, we make an investigation on length correlation between these two languages. 
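Before the statistical and edit-distance scores are introduced below, the lexical component just described can be sketched as follows. This is a simplified illustration rather than the exact scoring function; the ancient-Chinese dictionary, the IDF table and the normalizer `gamma` are assumed inputs.

```python
# Sketch of the clause-level lexical matching score: exact character matching
# followed by dictionary-definition matching weighted by IDF. The dictionary,
# the IDF table and the normalizer `gamma` are assumptions for illustration.
def lexical_match_score(ancient, modern_words, dictionary, idf, gamma=10.0):
    """ancient: ancient clause as a string of characters.
    modern_words: segmented modern clause (e.g. from Jieba).
    dictionary: ancient char -> definition string (stop words removed).
    idf: modern word -> inverse document frequency."""
    remaining = list(modern_words)
    matched = 0.0
    unmatched_chars = []

    # 1) exact matching, left to right; matched modern words are consumed
    for ch in ancient:
        hit = next((w for w in remaining if ch in w), None)
        if hit is not None:
            matched += 1.0
            remaining.remove(hit)
        else:
            unmatched_chars.append(ch)

    # 2) dictionary matching on the remaining material, IDF-weighted
    for ch in unmatched_chars:
        definition = dictionary.get(ch, "")
        for w in list(remaining):
            if any(c in definition for c in w):
                matched += min(idf.get(w, 0.0) / gamma, 1.0)
                remaining.remove(w)
                break

    return matched / max(len(ancient), 1)   # coverage of the ancient clause
```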
Based on the assumption of BIBREF11 that each character in one language gives rise to a random number of characters in the other language and those random variables INLINEFORM3 are independent and identically distributed with a normal distribution, we estimate the mean INLINEFORM4 and standard deviation INLINEFORM5 from the paragraph aligned parallel corpus. Given a clause pair INLINEFORM6 , the statistical information score can be calculated by: DISPLAYFORM0 where INLINEFORM0 denotes the normal distribution probability density function. Edit Distance. Because ancient and modern Chinese are both written in Chinese characters, we also consider using the edit distance. It is a way of quantifying the dissimilarity between two strings by counting the minimum number of operations (insertion, deletion, and substitution) required to transform one string into the other. Here we define the edit distance score as: DISPLAYFORM0 Dynamic Programming. The overall alignment score for each possible clause alignment is as follows: DISPLAYFORM0 Here INLINEFORM0 and INLINEFORM1 are pre-defined interpolation factors. We use dynamic programming to find the overall optimal alignment paragraph by paragraph. Let INLINEFORM2 be the total alignment score of aligning the first to INLINEFORM3 -th ancient Chinese clauses with the first to INLINEFORM4 -th modern Chinese clauses; the recurrence can then be described as follows: DISPLAYFORM0 where INLINEFORM0 denotes concatenating clause INLINEFORM1 to clause INLINEFORM2 . As we discussed above, here we only consider the 1-0, 0-1, 1-1, 1-2, 2-1 and 2-2 alignment modes. Ancient-Modern Chinese Dataset Data Collection. To build the large ancient-modern Chinese dataset, we collected 1.7K bilingual ancient-modern Chinese articles from the internet. More specifically, a large part of the ancient Chinese data we used comes from ancient Chinese history records in several dynasties (about 1000BC-200BC) and articles written by celebrities of that era. They used plain and accurate words to express what happened at that time, thus ensuring the generality of the translated materials. Paragraph Alignment. To further ensure the quality of the new dataset, the work of paragraph alignment was completed manually. After data cleaning and manual paragraph alignment, we obtained 35K aligned bilingual paragraphs. Clause Alignment. We applied our clause alignment algorithm on the 35K aligned bilingual paragraphs and obtained 517K aligned bilingual clause pairs. The reason we use a clause alignment algorithm instead of sentence alignment is that we can construct aligned sentences more flexibly and conveniently. To be specific, we can get multiple additional sentence-level bilingual pairs by “data augmentation”. Data Augmentation. We augmented the data in the following way: given an aligned clause pair, we merged it with its adjacent clause pairs to form new sample pairs, as sketched below. For example, suppose we have three adjacent clause-level bilingual pairs: ( INLINEFORM0 , INLINEFORM1 ), ( INLINEFORM2 , INLINEFORM3 ), and ( INLINEFORM4 , INLINEFORM5 ). We can get some additional sentence-level bilingual pairs, such as: ( INLINEFORM6 , INLINEFORM7 ) and ( INLINEFORM8 , INLINEFORM9 ). Here INLINEFORM10 , INLINEFORM11 , and INLINEFORM12 are adjacent clauses in the original paragraph, and INLINEFORM13 denotes concatenating clause INLINEFORM14 to clause INLINEFORM15 . 
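A minimal sketch of this merging step is given below, assuming an aligned paragraph is available as an ordered list of (ancient, modern) clause pairs; the limits of at most four clause pairs and a merged length of at most 50 (applied to both sides here) anticipate the filtering rule described in the next paragraph.

```python
# Sketch of the clause-merging augmentation: from an aligned paragraph given as
# an ordered list of (ancient_clause, modern_clause) pairs, emit every
# contiguous span of adjacent pairs within the assumed limits. Spans of length
# one reproduce the original (unaugmented) clause pairs.
def augment_paragraph(pairs, max_clauses=4, max_len=50):
    samples = []
    for start in range(len(pairs)):
        for end in range(start + 1, min(start + max_clauses, len(pairs)) + 1):
            src = "".join(a for a, _ in pairs[start:end])
            tgt = "".join(m for _, m in pairs[start:end])
            if len(src) <= max_len and len(tgt) <= max_len:
                samples.append((src, tgt))
    return samples

# Example: three adjacent clause pairs (C1, M1), (C2, M2), (C3, M3) additionally
# yield (C1C2, M1M2), (C2C3, M2M3) and (C1C2C3, M1M2M3), subject to the limits.
```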
The advantage of using this data augmentation method is that compared with only using ( INLINEFORM16 , INLINEFORM17 ) as the training data, we can also use ( INLINEFORM18 , INLINEFORM19 ) and ( INLINEFORM20 , INLINEFORM21 ) as the training data, which can provide richer supervision information for the model and make the model learn the align information between the source language and the target language better. After the data augmentation, we filtered the sentences which are longer than 50 or contain more than four clause pairs. Dataset Creation. Finally, we split the dataset into three sets: training (Train), development (Dev) and testing (Test). Note that the unaugmented dataset contains 517K aligned bilingual clause pairs from 35K aligned bilingual paragraphs. To keep all the sentences in different sets come from different articles, we split the 35K aligned bilingual paragraphs into Train, Dev and Test sets following these ratios respectively: 80%, 10%, 10%. Before data augmentation, the unaugmented Train set contains INLINEFORM0 aligned bilingual clause pairs from 28K aligned bilingual paragraphs. Then we augmented the Train, Dev and Test sets respectively. Note that the augmented Train, Dev and Test sets also contain the unaugmented data. The statistical information of the three data sets is shown in Table TABREF17 . We show some examples of data in Figure FIGREF14 . RNN-based NMT model We first briefly introduce the RNN based Neural Machine Translation (RNN-based NMT) model. The RNN-based NMT with attention mechanism BIBREF0 has achieved remarkable performance on many translation tasks. It consists of encoder and decoder part. We firstly introduce the encoder part. The input word sequence of source language are individually mapped into a INLINEFORM0 -dimensional vector space INLINEFORM1 . Then a bi-directional RNN BIBREF15 with GRU BIBREF16 or LSTM BIBREF17 cell converts these vectors into a sequences of hidden states INLINEFORM2 . For the decoder part, another RNN is used to generate target sequence INLINEFORM0 . The attention mechanism BIBREF0 , BIBREF18 is employed to allow the decoder to refer back to the hidden state sequence and focus on a particular segment. The INLINEFORM1 -th hidden state INLINEFORM2 of decoder part is calculated as: DISPLAYFORM0 Here g INLINEFORM0 is a linear combination of attended context vector c INLINEFORM1 and INLINEFORM2 is the word embedding of (i-1)-th target word: DISPLAYFORM0 The attended context vector c INLINEFORM0 is computed as a weighted sum of the hidden states of the encoder: DISPLAYFORM0 The probability distribution vector of the next word INLINEFORM0 is generated according to the following: DISPLAYFORM0 We take this model as the basic RNN-based NMT model in the following experiments. Transformer-NMT Recently, the Transformer model BIBREF4 has made remarkable progress in machine translation. This model contains a multi-head self-attention encoder and a multi-head self-attention decoder. As proposed by BIBREF4 , an attention function maps a query and a set of key-value pairs to an output, where the queries INLINEFORM0 , keys INLINEFORM1 , and values INLINEFORM2 are all vectors. The input consists of queries and keys of dimension INLINEFORM3 , and values of dimension INLINEFORM4 . The attention function is given by: DISPLAYFORM0 Multi-head attention mechanism projects queries, keys and values to INLINEFORM0 different representation subspaces and calculates corresponding attention. 
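For reference, the scaled dot-product and multi-head attention of BIBREF4 described here can be sketched in NumPy as follows; this is the standard textbook formulation, not the training code used for the experiments reported below.

```python
# Standard scaled dot-product and multi-head attention (reference sketch only).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v)."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V

def multi_head_attention(Q, K, V, W_q, W_k, W_v, W_o):
    """Q, K, V: (n, d_model). W_q/W_k/W_v: one projection matrix per head;
    W_o: output projection applied after concatenating the heads."""
    heads = [scaled_dot_product_attention(Q @ wq, K @ wk, V @ wv)
             for wq, wk, wv in zip(W_q, W_k, W_v)]
    return np.concatenate(heads, axis=-1) @ W_o
```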
The attention function outputs are concatenated and projected again before giving the final output. Multi-head attention allows the model to attend to multiple features at different positions. The encoder is composed of a stack of INLINEFORM0 identical layers. Each layer has two sub-layers: multi-head self-attention mechanism and position-wise fully connected feed-forward network. Similarly, the decoder is also composed of a stack of INLINEFORM1 identical layers. In addition to the two sub-layers in each encoder layer, the decoder contains a third sub-layer which performs multi-head attention over the output of the encoder stack (see more details in BIBREF4 ). Experiments Our experiments revolve around the following questions: Q1: As we consider three factors for clause alignment, do all these factors help? How does our method compare with previous methods? Q2: How does the NMT and SMT models perform on this new dataset we build? Clause Alignment Results (Q1) In order to evaluate our clause alignment algorithm, we manually aligned bilingual clauses from 37 bilingual ancient-modern Chinese articles, and finally got 4K aligned bilingual clauses as the Test set and 2K clauses as the Dev set. Metrics. We used F1-score and precision score as the evaluation metrics. Suppose that we get INLINEFORM0 bilingual clause pairs after running the algorithm on the Test set, and there are INLINEFORM1 bilingual clause pairs of these INLINEFORM2 pairs are in the ground truth of the Test set, the precision score is defined as INLINEFORM3 (the algorithm gives INLINEFORM4 outputs, INLINEFORM5 of which are correct). And suppose that the ground truth of the Test set contains INLINEFORM6 bilingual clause pairs, the recall score is INLINEFORM7 (there are INLINEFORM8 ground truth samples, INLINEFORM9 of which are output by the algorithm), then the F1-score is INLINEFORM10 . Baselines. Since the related work BIBREF10 , BIBREF11 can be seen as the ablation cases of our method (only statistical score INLINEFORM0 with dynamic programming), we compared the full proposed method with its variants on the Test set for ablation study. In addition, we also compared our method with the longest common subsequence (LCS) based approach proposed by BIBREF12 . To the best of our knowledge, BIBREF12 is the latest related work which are designed for Ancient-Modern Chinese alignment. Hyper-parameters. For the proposed method, we estimated INLINEFORM0 and INLINEFORM1 on all aligned paragraphs. The probability Pr INLINEFORM2 n-m INLINEFORM3 of each alignment mode n-m was estimated on the Dev set. For the hyper-parameters INLINEFORM4 , INLINEFORM5 and INLINEFORM6 , the grid search was applied to tune them on the Dev set. In order to show the effect of hyper-parameters INLINEFORM7 , INLINEFORM8 , and INLINEFORM9 , we reported the results of various hyper-parameters on the Dev set in Table TABREF26 . Based on the results of grid search on the Dev set, we set INLINEFORM10 , INLINEFORM11 , and INLINEFORM12 in the following experiment. The Jieba Chinese text segmentation is employed for modern Chinese word segmentation. Results. The results on the Test set are shown in Table TABREF28 , the abbreviation w/o means removing a particular part from the setting. From the results, we can see that the lexical matching score is the most important among these three factors, and statistical information score is more important than edit distance score. Moreover, the dictionary term in lexical matching score significantly improves the performance. 
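For completeness, the precision, recall and F1 computation defined above amounts to the following sketch, in which alignments are compared as exact sets of clause-index pairs (an assumption about the bookkeeping):

```python
# Sketch of the alignment evaluation: precision, recall and F1 over clause pairs.
def alignment_f1(predicted_pairs, gold_pairs):
    predicted, gold = set(predicted_pairs), set(gold_pairs)
    correct = len(predicted & gold)                  # pairs present in both sets
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    return precision, recall, 2 * precision * recall / (precision + recall)
```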
From these results, we obtain the best setting that involves all these three factors. We used this setting for dataset creation. Furthermore, the proposed method performs much better than LCS BIBREF12 . Translation Results (Q2) In this experiment, we analyzed and compared the performance of the SMT and various NMT models on our built dataset. To verify the effectiveness of our data augmentation method, we trained the NMT and SMT models on both the unaugmented dataset (including 0.46M training pairs) and the augmented dataset, and tested all the models on the same Test set, which is augmented. The models to be tested and their configurations are as follows: SMT. The state-of-the-art Moses toolkit BIBREF19 was used to train the SMT model. We used KenLM BIBREF20 to train a 5-gram language model, and the GIZA++ toolkit to align the data. RNN-based NMT. The basic RNN-based NMT model is based on BIBREF0 which is introduced above. Both the encoder and decoder used a 2-layer RNN with 1024 LSTM cells, and the encoder is a bi-directional RNN. The batch size, threshold of element-wise gradient clipping and initial learning rate of the Adam optimizer BIBREF21 were set to 128, 5.0 and 0.001. When training the model on the augmented dataset, we used a 4-layer RNN. Several techniques were investigated to train the model, including layer-normalization BIBREF22 , RNN-dropout BIBREF23 , and learning rate decay BIBREF1 . The hyper-parameters were chosen empirically and adjusted on the Dev set. Furthermore, we tested the basic NMT model with several techniques, such as target language reversal BIBREF24 (reversing the order of the words in all target sentences, but not source sentences), residual connections BIBREF25 and pre-trained word2vec BIBREF26 . For word embedding pre-training, we collected an external ancient corpus which contains INLINEFORM0 134M tokens. Transformer-NMT. We also trained the Transformer model BIBREF4 , which is a strong NMT baseline, on both the augmented and unaugmented parallel corpora. The training configuration of the Transformer model is shown in Table TABREF32 . The hyper-parameters are set based on the settings in the paper BIBREF4 and the sizes of our training sets. For the evaluation, we used the average of 1 to 4 gram BLEUs multiplied by a brevity penalty BIBREF27 , as computed by multi-bleu.perl in Moses, as the metric. The results are reported in Table TABREF34 . For RNN-based NMT, we can see that target language reversal, residual connections, and word2vec can further improve the performance of the basic RNN-based NMT model. However, we find that the word2vec and reversal tricks bring no obvious improvement when the RNN-based NMT and Transformer models are trained on the augmented parallel corpus. For SMT, it performs better than the NMT models when they were trained on the unaugmented dataset. Nevertheless, when trained on the augmented dataset, both the RNN-based NMT model and the Transformer-based NMT model outperform the SMT model. In addition, as with other translation tasks BIBREF4 , the Transformer also performs better than the RNN-based NMT. Because the Test set contains both augmented and unaugmented data, it is not surprising that the RNN-based NMT model and the Transformer-based NMT model trained on unaugmented data perform poorly. In order to further verify the effect of data augmentation, we report the test results of the models on only the unaugmented test data (including 48K test pairs) in Table TABREF35 . From the results, it can be seen that data augmentation can still improve the models. 
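The reported scores correspond to corpus-level BLEU with up to 4-grams and a brevity penalty; an approximately equivalent computation (not the multi-bleu.perl script itself) can be obtained, for example, with NLTK:

```python
# Sketch: corpus-level BLEU (1-4 grams with brevity penalty), equivalent in
# spirit to multi-bleu.perl. Hypotheses and references are assumed to be
# tokenized consistently (e.g. into characters or words).
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def evaluate_bleu(hypotheses, references):
    """hypotheses: list of token lists; references: list of token lists."""
    refs = [[r] for r in references]          # one reference per sentence here
    return corpus_bleu(refs, hypotheses,
                       weights=(0.25, 0.25, 0.25, 0.25),
                       smoothing_function=SmoothingFunction().method1)
```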
Analysis The generated samples of various models are shown in Figure FIGREF36 . Besides BLEU scores, we analyze these examples from a human perspective and draw some conclusions. At the same time, we design different metrics and evaluate on the whole Test set to support our conclusions as follows: On the one hand, we further compare the translation results from the perspective of people. We find that although the original meaning can be basically translated by SMT, its translation results are less smooth when compared with the other two NMT models (RNN-based NMT and Transformer). For example, the translations of SMT are usually lack of auxiliary words, conjunctions and function words, which is not consistent with human translation habits. To further confirm this conclusion, the average length of the translation results of the three models are measured (RNN-based NMT:17.12, SMT:15.50, Transformer:16.78, Reference:16.47). We can see that the average length of the SMT outputs is shortest, and the length gaps between the SMT outputs and the references are largest. Meanwhile, the average length of the sentences translated by Transformer is closest to the average length of references. These results indirectly verify our point of view, and show that the NMT models perform better than SMT in this task. On the other hand, there still exists some problems to be solved. We observe that translating proper nouns and personal pronouns (such as names, place names and ancient-specific appellations) is very difficult for all of these models. For instance, the ancient Chinese appellation `Zhen' should be translated into `Wo' in modern Chinese. Unfortunately, we calculate the accurate rate of some special words (such as `Zhen',`Chen' and `Gua'), and find that this rate is very low (the accurate rate of translating `Zhen' are: RNN-based NMT:0.14, SMT:0.16, Transformer:0.05). We will focus on this issue in the future. Conclusion and Future Work We propose an effective ancient-modern Chinese clause alignment method which achieves 94.2 F1-score on Test set. Based on it, we build a large scale parallel corpus which contains INLINEFORM0 1.24M bilingual sentence pairs. To our best knowledge, this is the first large high-quality ancient-modern Chinese dataset. In addition, we test the performance of the SMT and various NMT models on our built dataset and provide a strong NMT baseline for this task which achieves 27.16 BLEU score (4-gram). We further analyze the performance of the SMT and various NMT models and summarize some specific problems that machine translation models will encounter when translating ancient Chinese. For the future work, firstly, we are going to expand the dataset using the proposed method continually. Secondly, we will focus on solving the problem of proper noun translation and improve the translation system according to the features of ancient Chinese translation. Finally, we plan to introduce some techniques of statistical translation into neural machine translation to improve the performance. This work is supported by National Natural Science Fund for Distinguished Young Scholar (Grant No. 61625204) and partially supported by the State Key Program of National Science Foundation of China (Grant Nos. 61836006 and 61432014).
Introduction We posses a wealth of prior knowledge about many natural language processing tasks. For example, in text categorization, we know that words such as NBA, player, and basketball are strong indicators of the sports category BIBREF0 , and words like terrible, boring, and messing indicate a negative polarity while words like perfect, exciting, and moving suggest a positive polarity in sentiment classification. A key problem arisen here, is how to leverage such knowledge to guide the learning process, an interesting problem for both NLP and machine learning communities. Previous studies addressing the problem fall into several lines. First, to leverage prior knowledge to label data BIBREF1 , BIBREF2 . Second, to encode prior knowledge with a prior on parameters, which can be commonly seen in many Bayesian approaches BIBREF3 , BIBREF4 . Third, to formalise prior knowledge with additional variables and dependencies BIBREF5 . Last, to use prior knowledge to control the distributions over latent output variables BIBREF6 , BIBREF7 , BIBREF8 , which makes the output variables easily interpretable. However, a crucial problem, which has rarely been addressed, is the bias in the prior knowledge that we supply to the learning model. Would the model be robust or sensitive to the prior knowledge? Or, which kind of knowledge is appropriate for the task? Let's see an example: we may be a baseball fan but unfamiliar with hockey so that we can provide a few number of feature words of baseball, but much less of hockey for a baseball-hockey classification task. Such prior knowledge may mislead the model with heavy bias to baseball. If the model cannot handle this situation appropriately, the performance may be undesirable. In this paper, we investigate into the problem in the framework of Generalized Expectation Criteria BIBREF7 . The study aims to reveal the factors of reducing the sensibility of the prior knowledge and therefore to make the model more robust and practical. To this end, we introduce auxiliary regularization terms in which our prior knowledge is formalized as distribution over output variables. Recall the example just mentioned, though we do not have enough knowledge to provide features for class hockey, it is easy for us to provide some neutral words, namely words that are not strong indicators of any class, like player here. As one of the factors revealed in this paper, supplying neutral feature words can boost the performance remarkably, making the model more robust. More attractively, we do not need manual annotation to label these neutral feature words in our proposed approach. More specifically, we explore three regularization terms to address the problem: (1) a regularization term associated with neutral features; (2) the maximum entropy of class distribution regularization term; and (3) the KL divergence between reference and predicted class distribution. For the first manner, we simply use the most common features as neutral features and assume the neutral features are distributed uniformly over class labels. For the second and third one, we assume we have some knowledge about the class distribution which will be detailed soon later. To summarize, the main contributions of this work are as follows: The rest of the paper is structured as follows: In Section 2, we briefly describe the generalized expectation criteria and present the proposed regularization terms. In Section 3, we conduct extensive experiments to justify the proposed methods. 
We survey related work in Section 4, and summarize our work in Section 5. Method We address the robustness problem on top of GE-FL BIBREF0 , a GE method which leverages labeled features as prior knowledge. A labeled feature is a strong indicator of a specific class and is manually provided to the classifier. For example, words like amazing, exciting can be labeled features for class positive in sentiment classification. Generalized Expectation Criteria Generalized expectation (GE) criteria BIBREF7 provides us a natural way to directly constrain the model in the preferred direction. For example, when we know the proportion of each class of the dataset in a classification task, we can guide the model to predict out a pre-specified class distribution. Formally, in a parameter estimation objective function, a GE term expresses preferences on the value of some constraint functions about the model's expectation. Given a constraint function $G({\rm x}, y)$ , a conditional model distribution $p_\theta (y|\rm x)$ , an empirical distribution $\tilde{p}({\rm x})$ over input samples and a score function $S$ , a GE term can be expressed as follows: $$S(E_{\tilde{p}({\rm x})}[E_{p_\theta (y|{\rm x})}[G({\rm x}, y)]])$$ (Eq. 4) Learning from Labeled Features Druck et al. ge-fl proposed GE-FL to learn from labeled features using generalized expectation criteria. When given a set of labeled features $K$ , the reference distribution over classes of these features is denoted by $\hat{p}(y| x_k), k \in K$ . GE-FL introduces the divergence between this reference distribution and the model predicted distribution $p_\theta (y | x_k)$ , as a term of the objective function: $$\mathcal {O} = \sum _{k \in K} KL(\hat{p}(y|x_k) || p_\theta (y | x_k)) + \sum _{y,i} \frac{\theta _{yi}^2}{2 \sigma ^2}$$ (Eq. 6) where $\theta _{yi}$ is the model parameter which indicates the importance of word $i$ to class $y$ . The predicted distribution $p_\theta (y | x_k)$ can be expressed as follows: $ p_\theta (y | x_k) = \frac{1}{C_k} \sum _{\rm x} p_\theta (y|{\rm x})I(x_k) $ in which $I(x_k)$ is 1 if feature $k$ occurs in instance ${\rm x}$ and 0 otherwise, $C_k = \sum _{\rm x} I(x_k)$ is the number of instances with a non-zero value of feature $k$ , and $p_\theta (y|{\rm x})$ takes a softmax form as follows: $ p_\theta (y|{\rm x}) = \frac{1}{Z(\rm x)}\exp (\sum _i \theta _{yi}x_i). $ To solve the optimization problem, L-BFGS can be used for parameter estimation. In the framework of GE, this term can be obtained by setting the constraint function $G({\rm x}, y) = \frac{1}{C_k} \vec{I} (y)I(x_k)$ , where $\vec{I}(y)$ is an indicator vector with 1 at the index corresponding to label $y$ and 0 elsewhere. Regularization Terms GE-FL reduces the heavy load of instance annotation and performs well when we provide prior knowledge with no bias. In our experiments, we observe that comparable numbers of labeled features for each class have to be supplied. But as mentioned before, it is often the case that we are not able to provide enough knowledge for some of the classes. For the baseball-hockey classification task, as shown before, GE-FL will predict most of the instances as baseball. In this section, we will show three terms to make the model more robust. Neutral features are features that are not informative indicator of any classes, for instance, word player to the baseball-hockey classification task. Such features are usually frequent words across all categories. 
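Before describing how these regularizers are added, the GE-FL building blocks defined above, the conditional model, the feature-conditioned prediction $p_\theta (y|x_k)$ and the KL objective with its Gaussian prior, can be sketched as follows; the dense NumPy formulation and all names are illustrative simplifications, not the original implementation.

```python
# Sketch of the GE-FL objective described above. X is a dense (n_docs, n_feats)
# indicator/count matrix, theta is (n_classes, n_feats); labeled_ref maps a
# feature index k to its reference distribution p_hat(y|x_k).
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def predict(theta, X):
    """p_theta(y|x) for every document: shape (n_docs, n_classes)."""
    return softmax(X @ theta.T)

def feature_class_dist(theta, X, k):
    """p_theta(y|x_k): average prediction over documents containing feature k."""
    has_k = X[:, k] > 0
    return predict(theta, X[has_k]).mean(axis=0)

def ge_fl_objective(theta, X, labeled_ref, sigma=1.0):
    """Sum of KL(p_hat(y|x_k) || p_theta(y|x_k)) plus the Gaussian prior term."""
    obj = (theta ** 2).sum() / (2 * sigma ** 2)
    for k, p_hat in labeled_ref.items():
        p = feature_class_dist(theta, X, k)
        obj += np.sum(p_hat * np.log(p_hat / np.clip(p, 1e-12, None)))
    return obj   # minimized with L-BFGS in practice
```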
When we set the preference distribution of the neutral features to be uniform distributed, these neutral features will prevent the model from biasing to the class that has a dominate number of labeled features. Formally, given a set of neutral features $K^{^{\prime }}$ , the uniform distribution is $\hat{p}_u(y|x_k) = \frac{1}{|C|}, k \in K^{^{\prime }}$ , where $|C|$ is the number of classes. The objective function with the new term becomes $$\mathcal {O}_{NE} = \mathcal {O} + \sum _{k \in K^{^{\prime }}} KL(\hat{p}_u(y|x_k) || p_\theta (y | x_k)).$$ (Eq. 9) Note that we do not need manual annotation to provide neutral features. One simple way is to take the most common features as neutral features. Experimental results show that this strategy works successfully. Another way to prevent the model from drifting from the desired direction is to constrain the predicted class distribution on unlabeled data. When lacking knowledge about the class distribution of the data, one feasible way is to take maximum entropy principle, as below: $$\mathcal {O}_{ME} = \mathcal {O} + \lambda \sum _{y} p(y) \log p(y)$$ (Eq. 11) where $p(y)$ is the predicted class distribution, given by $ p(y) = \frac{1}{|X|} \sum _{\rm x} p_\theta (y | \rm x). $ To control the influence of this term on the overall objective function, we can tune $\lambda $ according to the difference in the number of labeled features of each class. In this paper, we simply set $\lambda $ to be proportional to the total number of labeled features, say $\lambda = \beta |K|$ . This maximum entropy term can be derived by setting the constraint function to $G({\rm x}, y) = \vec{I}(y)$ . Therefore, $E_{p_\theta (y|{\rm x})}[G({\rm x}, y)]$ is just the model distribution $p_\theta (y|{\rm x})$ and its expectation with the empirical distribution $\tilde{p}(\rm x)$ is simply the average over input samples, namely $p(y)$ . When $S$ takes the maximum entropy form, we can derive the objective function as above. Sometimes, we have already had much knowledge about the corpus, and can estimate the class distribution roughly without labeling instances. Therefore, we introduce the KL divergence between the predicted and reference class distributions into the objective function. Given the preference class distribution $\hat{p}(y)$ , we modify the objective function as follows: $$\mathcal {O}_{KL} &= \mathcal {O} + \lambda KL(\hat{p}(y) || p(y))$$ (Eq. 13) Similarly, we set $\lambda = \beta |K|$ . This divergence term can be derived by setting the constraint function to $G({\rm x}, y) = \vec{I}(y)$ and setting the score function to $S(\hat{p}, p) = \sum _i \hat{p}_i \log \frac{\hat{p}_i}{p_i}$ , where $p$ and $\hat{p}$ are distributions. Note that this regularization term involves the reference class distribution which will be discussed later. Experiments In this section, we first justify the approach when there exists unbalance in the number of labeled features or in class distribution. Then, to test the influence of $\lambda $ , we conduct some experiments with the method which incorporates the KL divergence of class distribution. Last, we evaluate our approaches in 9 commonly used text classification datasets. We set $\lambda = 5|K|$ by default in all experiments unless there is explicit declaration. The baseline we choose here is GE-FL BIBREF0 , a method based on generalization expectation criteria. Data Preparation We evaluate our methods on several commonly used datasets whose themes range from sentiment, web-page, science to medical and healthcare. 
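Building on the previous snippet, the three regularization terms described above can be added to the same objective roughly as follows; this is a hedged sketch in which the $\lambda = \beta |K|$ weighting and the sign conventions follow the text, while the helper names are assumptions.

```python
# Sketch: the three auxiliary regularization terms added to the GE-FL objective.
# Reuses softmax/predict/feature_class_dist/ge_fl_objective from the previous
# snippet; all names are illustrative.
import numpy as np

def kl(p, q):
    return np.sum(p * np.log(np.clip(p, 1e-12, None) / np.clip(q, 1e-12, None)))

def regularized_objective(theta, X, labeled_ref, neutral_feats=None,
                          use_max_ent=False, p_hat_class=None, beta=5.0, sigma=1.0):
    n_classes = theta.shape[0]
    lam = beta * len(labeled_ref)                  # lambda = beta * |K|
    obj = ge_fl_objective(theta, X, labeled_ref, sigma)

    # (1) neutral features: push p_theta(y|x_k) towards the uniform distribution
    if neutral_feats:
        uniform = np.full(n_classes, 1.0 / n_classes)
        for k in neutral_feats:
            obj += kl(uniform, feature_class_dist(theta, X, k))

    p_y = predict(theta, X).mean(axis=0)           # predicted class distribution

    # (2) maximum entropy of p(y): the negative entropy is added, so minimizing
    #     the objective favors a uniform predicted class distribution
    if use_max_ent:
        obj += lam * np.sum(p_y * np.log(np.clip(p_y, 1e-12, None)))

    # (3) KL divergence to a reference class distribution, when it is known
    if p_hat_class is not None:
        obj += lam * kl(p_hat_class, p_y)

    return obj
```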
We use bag-of-words features and remove stopwords in the preprocessing stage. Though we have labels for all documents, we do not use them during the learning process; instead, we use the labels of features. The movie dataset, in which the task is to classify the movie reviews as positive or negative, is used for testing the proposed approaches with unbalanced labeled features, unbalanced datasets or different $\lambda $ parameters. All unbalanced datasets are constructed based on the movie dataset by randomly removing documents of the positive class. For each experiment, we conduct 10-fold cross validation. As described in BIBREF0 , there are two ways to obtain labeled features. The first way is to use information gain. We first calculate the mutual information of all features according to the labels of the documents and select the top 20 as labeled features for each class as a feature pool. Note that using information gain requires the document labels, but this is only to simulate how we humans provide prior knowledge to the model. The second way is to use LDA BIBREF9 to select features. We use the same selection process as BIBREF0 , where they first train an LDA on the dataset, and then select the most probable features of each topic (sorted by $P(w_i|t_j)$ , the probability of word $w_i$ given topic $t_j$ ). Similar to BIBREF10 , BIBREF0 , we estimate the reference distribution of the labeled features using a heuristic strategy. If there are $|C|$ classes in total, and $n$ classes are associated with a feature $k$ , the probability that feature $k$ is related with any one of the $n$ classes is $\frac{0.9}{n}$ and with any other class is $\frac{0.1}{|C| - n}$ . Neutral features are the most frequent words after removing stop words, and their reference distributions are uniform. We use the top 10 frequent words as neutral features in all experiments. With Unbalanced Labeled Features In this section, we evaluate our approach when there is unbalanced knowledge on the categories to be classified. The labeled features are obtained through information gain. Two settings are chosen: (a) We randomly select $t \in [1, 20]$ features from the feature pool for one class, and only one feature for the other. The original balanced movie dataset is used (positive:negative=1:1). (b) Similar to (a), but the dataset is unbalanced, obtained by randomly removing 75% positive documents (positive:negative=1:4). As shown in Figure 1 , the maximum entropy principle shows improvement only in the balanced case. An obvious reason is that maximum entropy only favors a uniform distribution. Incorporating neutral features performs similarly to maximum entropy since we assume that neutral words are uniformly distributed. Its accuracy decreases slowly when the number of labeled features becomes larger ( $t>4$ ) (Figure 1 (a)), suggesting that the model gradually becomes biased towards the class with more labeled features, just like GE-FL. Incorporating the KL divergence of class distribution performs much better than GE-FL on both balanced and unbalanced datasets. This shows that it is effective in controlling the unbalance in labeled features and in the dataset. With Balanced Labeled Features We also compare with the baseline when the labeled features are balanced. Similar to the experiment above, the labeled features are obtained by information gain. 
Two settings are experimented with: (a) We randomly select $t \in [1, 20]$ features from the feature pool for each class, and conduct comparisons on the original balanced movie dataset (positive:negative=1:1). (b) Similar to (a), but the class distribution is unbalanced, by randomly removing 75% positive documents (positive:negative=1:4). Results are shown in Figure 2 . When the dataset is balanced (Figure 2 (a)), there is little difference between GE-FL and our methods. The reason is that the proposed regularization terms provide no additional knowledge to the model and there is no bias in the labeled features. On the unbalanced dataset (Figure 2 (b)), incorporating KL divergence is much better than GE-FL since we provide additional knowledge (the true class distribution), but maximum entropy and neutral features are much worse because forcing the model to approach the uniform distribution misleads it. With Unbalanced Class Distributions Our methods are also evaluated on datasets with different unbalanced class distributions. We manually construct several movie datasets with class distributions of 1:2, 1:3, 1:4 by randomly removing 50%, 67%, 75% positive documents. The original balanced movie dataset is used as a control group. We test with both balanced and unbalanced labeled features. For the balanced case, we randomly select 10 features from the feature pool for each class, and for the unbalanced case, we select 10 features for one class, and 1 feature for the other. Results are shown in Figure 3 . Figure 3 (a) shows that when the dataset and the labeled features are both balanced, there is little difference between our methods and GE-FL (also see Figure 2 (a)). But when the class distribution becomes more unbalanced, the difference becomes more remarkable. The performance of neutral features and maximum entropy decreases significantly, but that of incorporating KL divergence increases remarkably. This suggests that if we have more accurate knowledge about the class distribution, KL divergence can guide the model in the right direction. Figure 3 (b) shows that when the labeled features are unbalanced, our methods significantly outperform GE-FL. Incorporating KL divergence is robust enough to control unbalance both in the dataset and in the labeled features, while the other three methods are not so competitive. The Influence of $\lambda $ We present the influence of $\lambda $ on the method that incorporates KL divergence in this section. Since we simply set $\lambda = \beta |K|$ , we just tune $\beta $ here. Note that when $\beta = 0$ , the newly introduced regularization term disappears, and thus the model is actually GE-FL. Again, we test the method with different $\lambda $ in two settings: (a) We randomly select $t \in [1, 20]$ features from the feature pool for one class, and only one feature for the other class. The original balanced movie dataset is used (positive:negative=1:1). (b) Similar to (a), but the dataset is unbalanced, obtained by randomly removing 75% positive documents (positive:negative=1:4). Results are shown in Figure 4 . As expected, $\lambda $ reflects how strong the regularization is. The model tends to be closer to our preferences with the increase of $\lambda $ in both cases. Using LDA Selected Features We compare our methods with GE-FL on all the 9 datasets in this section. Instead of using features obtained by information gain, we use LDA to select labeled features. Unlike information gain, LDA does not employ any instance labels to find labeled features. 
In this setting, we can build classification models without any instance annotation, using only labeled features. Table 1 shows that our three methods significantly outperform GE-FL. Incorporating neutral features performs better than GE-FL on 7 of the 9 datasets, maximum entropy is better on 8 datasets, and the KL divergence is better on 7 datasets. LDA selects the most predictive features as labeled features without considering the balance among classes. GE-FL does not exert any control over this issue, so its performance suffers severely. Our methods introduce auxiliary regularization terms to control this bias problem and thus improve the model significantly. Related Work There has been much work on incorporating prior knowledge into learning, and two related lines are surveyed here. One is to use prior knowledge to label unlabeled instances and then apply a standard learning algorithm. The other is to constrain the model directly with prior knowledge. Liu et al. manually labeled features which are highly predictive of unsupervised clustering assignments and used them to label unlabeled data. Chang et al. proposed constraint-driven learning: they first used constraints and the learned model to annotate unlabeled instances, and then updated the model with the newly labeled data. Daumé (2008) proposed a self-training method in which several models are trained on the same dataset, and only unlabeled instances that satisfy the cross-task knowledge constraints are used in the self-training process. McCallum et al. proposed the generalized expectation (GE) criteria, which formalize the knowledge as constraint terms on the expectations of the model within the objective function. Graça et al. proposed the posterior regularization (PR) framework, which projects the model's posterior onto a set of distributions that satisfy the auxiliary constraints. Druck et al. explored constraints on labeled features in the GE framework by forcing the model's predicted feature distribution to approach the reference distribution. Andrzejewski et al. (2011) proposed a framework in which general domain knowledge can easily be incorporated into LDA. Altendorf et al. (2012) explored monotonicity constraints to improve accuracy when learning from sparse data. Chen et al. (2013) learned comprehensible topic models by leveraging multi-domain knowledge. Mann and McCallum incorporated not only labeled features but also other knowledge, such as the class distribution, into the objective function of GE-FL. However, they discussed this only from the semi-supervised perspective and did not investigate the robustness problem addressed in this paper. There are also active learning methods that try to use prior knowledge. Raghavan et al. proposed to interleave feedback on instances and features, and demonstrated that feedback on features boosts the model considerably. Druck et al. proposed an active learning method which solicits labels on features rather than on instances and then uses GE-FL to train the model. Conclusion and Discussions This paper investigates the problem of how to leverage prior knowledge robustly in learning models. We propose three regularization terms on top of the generalized expectation criteria. As demonstrated by the experimental results, performance can be considerably improved when these factors are taken into account.
Comparative results show that our proposed methods are more effective and work more robustly than the baselines. To the best of our knowledge, this is the first work to address the robustness problem of leveraging knowledge, and it may inspire other research. We now present a more detailed discussion of the three regularization methods. Incorporating neutral features is the simplest form of regularization: it does not require any modification of GE-FL, only the identification of some common features. But as Figure 1(a) shows, using neutral features alone is not strong enough to handle extremely unbalanced labeled features. The maximum entropy regularization term shows a strong ability to control unbalance. This method does not need any extra knowledge, and is thus suitable when we know nothing about the corpus. However, it assumes that the categories are uniformly distributed, which may not be the case in practice, and its performance degrades when this assumption is violated (see Figure 1(b), Figure 2(b), Figure 3(a)). The KL divergence performs much better on unbalanced corpora than the other methods. The reason is that the KL divergence utilizes the reference class distribution and does not make any assumptions. This suggests that additional knowledge does benefit the model. However, the KL divergence term requires the true class distribution to be provided. Sometimes we have exact knowledge of the true distribution, but sometimes we do not. Fortunately, the model is insensitive to the exact values, and a rough estimate of the true distribution is sufficient. In our experiments, when the true class distribution is 1:2 and the reference class distribution is set to 1:1.5, 1:2, or 1:2.5, the accuracy is 0.755, 0.756, and 0.760 respectively. This makes it possible to obtain the distribution in practice with simple counting on the corpus, or to set it roughly based on domain expertise.
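To make the reference-distribution heuristic and the class-distribution regularizer discussed above concrete, here is a minimal Python sketch. It is illustrative only: the function names are ours, the 0.9/0.1 split, the KL direction (reference against predicted), and the $\lambda = \beta |K|$ scaling follow the description in this paper, and the snippet is not the actual GE-FL implementation.

```python
import numpy as np

def feature_reference_distribution(num_classes, associated_classes):
    """Heuristic reference distribution for a labeled feature:
    mass 0.9 is split among its associated classes, 0.1 among the rest
    (assumes the feature is not associated with every class)."""
    n = len(associated_classes)
    ref = np.full(num_classes, 0.1 / (num_classes - n))
    ref[list(associated_classes)] = 0.9 / n
    return ref

def kl_regularizer(reference_class_dist, predicted_class_dist, beta, num_labeled_features):
    """KL(reference || predicted), scaled by lambda = beta * |K|."""
    lam = beta * num_labeled_features
    eps = 1e-12
    kl = np.sum(reference_class_dist *
                np.log((reference_class_dist + eps) / (predicted_class_dist + eps)))
    return lam * kl

# A sentiment feature associated only with the positive class (class 0):
print(feature_reference_distribution(2, {0}))   # [0.9, 0.1]
# A rough 1:2 reference class distribution vs. the model's current prediction:
print(kl_regularizer(np.array([1/3, 2/3]), np.array([0.5, 0.5]),
                     beta=0.1, num_labeled_features=11))
```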
Question: What are the three regularization terms?
Answer: a regularization term associated with neutral features, the maximum entropy of class distribution regularization term, and the KL divergence between reference and predicted class distribution
Introduction Machine translation has made remarkable progress, and studies claiming it to reach a human parity are starting to appear BIBREF0. However, when evaluating translations of the whole documents rather than isolated sentences, human raters show a stronger preference for human over machine translation BIBREF1. These findings emphasize the need to shift towards context-aware machine translation both from modeling and evaluation perspective. Most previous work on context-aware NMT assumed that either all the bilingual data is available at the document level BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10 or at least its fraction BIBREF11. But in practical scenarios, document-level parallel data is often scarce, which is one of the challenges when building a context-aware system. We introduce an approach to context-aware machine translation using only monolingual document-level data. In our setting, a separate monolingual sequence-to-sequence model (DocRepair) is used to correct sentence-level translations of adjacent sentences. The key idea is to use monolingual data to imitate typical inconsistencies between context-agnostic translations of isolated sentences. The DocRepair model is trained to map inconsistent groups of sentences into consistent ones. The consistent groups come from the original training data; the inconsistent groups are obtained by sampling round-trip translations for each isolated sentence. To validate the performance of our model, we use three kinds of evaluation: the BLEU score, contrastive evaluation of translation of several discourse phenomena BIBREF11, and human evaluation. We show strong improvements for all metrics. We analyze which discourse phenomena are hard to capture using monolingual data only. Using contrastive test sets for targeted evaluation of several contextual phenomena, we compare the performance of the models trained on round-trip translations and genuine document-level parallel data. Among the four phenomena in the test sets we use (deixis, lexical cohesion, VP ellipsis and ellipsis which affects NP inflection) we find VP ellipsis to be the hardest phenomenon to be captured using round-trip translations. Our key contributions are as follows: we introduce the first approach to context-aware machine translation using only monolingual document-level data; our approach shows substantial improvements in translation quality as measured by BLEU, targeted contrastive evaluation of several discourse phenomena and human evaluation; we show which discourse phenomena are hard to capture using monolingual data only. Our Approach: Document-level Repair We propose a monolingual DocRepair model to correct inconsistencies between sentence-level translations of a context-agnostic MT system. It does not use any states of a trained MT model whose outputs it corrects and therefore can in principle be trained to correct translations from any black-box MT system. The DocRepair model requires only monolingual document-level data in the target language. It is a monolingual sequence-to-sequence model that maps inconsistent groups of sentences into consistent ones. Consistent groups come from monolingual document-level data. To obtain inconsistent groups, each sentence in a group is replaced with its round-trip translation produced in isolation from context. 
More formally, forming a training minibatch for the DocRepair model involves the following steps (see also Figure FIGREF9): sample several groups of sentences from the monolingual data; for each sentence in a group, (i) translate it using a target-to-source MT model, (ii) sample a translation of this back-translated sentence in the source language using a source-to-target MT model; using these round-trip translations of isolated sentences, form an inconsistent version of the initial groups; use inconsistent groups as input for the DocRepair model, consistent ones as output. At test time, the process of getting document-level translations is two-step (Figure FIGREF10): produce translations of isolated sentences using a context-agnostic MT model; apply the DocRepair model to a sequence of context-agnostic translations to correct inconsistencies between translations. In the scope of the current work, the DocRepair model is the standard sequence-to-sequence Transformer. Sentences in a group are concatenated using a reserved token-separator between sentences. The Transformer is trained to correct these long inconsistent pseudo-sentences into consistent ones. The token-separator is then removed from corrected translations. Evaluation of Contextual Phenomena We use contrastive test sets for evaluation of discourse phenomena for English-Russian by BIBREF11. These test sets allow for testing different kinds of phenomena which, as we show, can be captured from monolingual data with varying success. In this section, we provide test sets statistics and briefly describe the tested phenomena. For more details, the reader is referred to BIBREF11. Evaluation of Contextual Phenomena ::: Test sets There are four test sets in the suite. Each test set contains contrastive examples. It is specifically designed to test the ability of a system to adapt to contextual information and handle the phenomenon under consideration. Each test instance consists of a true example (a sequence of sentences and their reference translation from the data) and several contrastive translations which differ from the true one only in one specific aspect. All contrastive translations are correct and plausible translations at the sentence level, and only context reveals the inconsistencies between them. The system is asked to score each candidate translation, and we compute the system accuracy as the proportion of times the true translation is preferred to the contrastive ones. Test set statistics are shown in Table TABREF15. The suites for deixis and lexical cohesion are split into development and test sets, with 500 examples from each used for validation purposes and the rest for testing. Convergence of both consistency scores on these development sets and BLEU score on a general development set are used as early stopping criteria in models training. For ellipsis, there is no dedicated development set, so we evaluate on all the ellipsis data and do not use it for development. Evaluation of Contextual Phenomena ::: Phenomena overview Deixis Deictic words or phrases, are referential expressions whose denotation depends on context. This includes personal deixis (“I”, “you”), place deixis (“here”, “there”), and discourse deixis, where parts of the discourse are referenced (“that's a good question”). The test set examples are all related to person deixis, specifically the T-V distinction between informal and formal you (Latin “tu” and “vos”) in the Russian translations, and test for consistency in this respect. 
Ellipsis Ellipsis is the omission from a clause of one or more words that are nevertheless understood in the context of the remaining elements. In machine translation, elliptical constructions in the source language pose a problem in two situations. First, if the target language does not allow the same types of ellipsis, requiring the elided material to be predicted from context. Second, if the elided material affects the syntax of the sentence. For example, in Russian the grammatical function of a noun phrase, and thus its inflection, may depend on the elided verb, or, conversely, the verb inflection may depend on the elided subject. There are two different test sets for ellipsis. One contains examples where a morphological form of a noun group in the last sentence can not be understood without context beyond the sentence level (“ellipsis (infl.)” in Table TABREF15). Another includes cases of verb phrase ellipsis in English, which does not exist in Russian, thus requires predicting the verb when translating into Russian (“ellipsis (VP)” in Table TABREF15). Lexical cohesion The test set focuses on reiteration of named entities. Where several translations of a named entity are possible, a model has to prefer consistent translations over inconsistent ones. Experimental Setup ::: Data preprocessing We use the publicly available OpenSubtitles2018 corpus BIBREF12 for English and Russian. For a fair comparison with previous work, we train the baseline MT system on the data released by BIBREF11. Namely, our MT system is trained on 6m instances. These are sentence pairs with a relative time overlap of subtitle frames between source and target language subtitles of at least $0.9$. We gathered 30m groups of 4 consecutive sentences as our monolingual data. We used only documents not containing groups of sentences from general development and test sets as well as from contrastive test sets. The main results we report are for the model trained on all 30m fragments. We use the tokenization provided by the corpus and use multi-bleu.perl on lowercased data to compute BLEU score. We use beam search with a beam of 4. Sentences were encoded using byte-pair encoding BIBREF13, with source and target vocabularies of about 32000 tokens. Translation pairs were batched together by approximate sequence length. Each training batch contained a set of translation pairs containing approximately 15000 source tokens. It has been shown that Transformer's performance depends heavily on batch size BIBREF14, and we chose a large batch size to ensure the best performance. In training context-aware models, for early stopping we use both convergence in BLEU score on the general development set and scores on the consistency development sets. After training, we average the 5 latest checkpoints. Experimental Setup ::: Models The baseline model, the model used for back-translation, and the DocRepair model are all Transformer base models BIBREF15. More precisely, the number of layers is $N=6$ with $h = 8$ parallel attention layers, or heads. The dimensionality of input and output is $d_{model} = 512$, and the inner-layer of a feed-forward networks has dimensionality $d_{ff}=2048$. We use regularization as described in BIBREF15. As a second baseline, we use the two-pass CADec model BIBREF11. The first pass produces sentence-level translations. The second pass takes both the first-pass translation and representations of the context sentences as input and returns contextualized translations. 
CADec requires document-level parallel training data, while DocRepair only needs monolingual training data. Experimental Setup ::: Generating round-trip translations On the selected 6m instances we train sentence-level translation models in both directions. To create training data for DocRepair, we proceed as follows. The Russian monolingual data is first translated into English, using the Russian$\rightarrow $English model and beam search with beam size of 4. Then, we use the English$\rightarrow $Russian model to sample translations with temperature of $0{.}5$. For each sentence, we precompute 20 sampled translations and randomly choose one of them when forming a training minibatch for DocRepair. Also, in training, we replace each token in the input with a random one with the probability of $10\%$. Experimental Setup ::: Optimizer As in BIBREF15, we use the Adam optimizer BIBREF16, the parameters are $\beta _1 = 0{.}9$, $\beta _2 = 0{.}98$ and $\varepsilon = 10^{-9}$. We vary the learning rate over the course of training using the formula: where $warmup\_steps = 16000$ and $scale=4$. Results ::: General results The BLEU scores are provided in Table TABREF24 (we evaluate translations of 4-sentence fragments). To see which part of the improvement is due to fixing agreement between sentences rather than simply sentence-level post-editing, we train the same repair model at the sentence level. Each sentence in a group is now corrected separately, then they are put back together in a group. One can see that most of the improvement comes from accounting for extra-sentential dependencies. DocRepair outperforms the baseline and CADec by 0.7 BLEU, and its sentence-level repair version by 0.5 BLEU. Results ::: Consistency results Scores on the phenomena test sets are provided in Table TABREF26. For deixis, lexical cohesion and ellipsis (infl.) we see substantial improvements over both the baseline and CADec. The largest improvement over CADec (22.5 percentage points) is for lexical cohesion. However, there is a drop of almost 5 percentage points for VP ellipsis. We hypothesize that this is because it is hard to learn to correct inconsistencies in translations caused by VP ellipsis relying on monolingual data alone. Figure FIGREF27(a) shows an example of inconsistency caused by VP ellipsis in English. There is no VP ellipsis in Russian, and when translating auxiliary “did” the model has to guess the main verb. Figure FIGREF27(b) shows steps of generating round-trip translations for the target side of the previous example. When translating from Russian, main verbs are unlikely to be translated as the auxiliary “do” in English, and hence the VP ellipsis is rarely present on the English side. This implies the model trained using the round-trip translations will not be exposed to many VP ellipsis examples in training. We discuss this further in Section SECREF34. Table TABREF28 provides scores for deixis and lexical cohesion separately for different distances between sentences requiring consistency. It can be seen, that the performance of DocRepair degrades less than that of CADec when the distance between sentences requiring consistency gets larger. Results ::: Human evaluation We conduct a human evaluation on random 700 examples from our general test set. We picked only examples where a DocRepair translation is not a full copy of the baseline one. 
The annotators were provided an original group of sentences in English and two translations: baseline context-agnostic one and the one corrected by the DocRepair model. Translations were presented in random order with no indication which model they came from. The task is to pick one of the three options: (1) the first translation is better, (2) the second translation is better, (3) the translations are of equal quality. The annotators were asked to avoid the third answer if they are able to give preference to one of the translations. No other guidelines were given. The results are provided in Table TABREF30. In about $52\%$ of the cases annotators marked translations as having equal quality. Among the cases where one of the translations was marked better than the other, the DocRepair translation was marked better in $73\%$ of the cases. This shows a strong preference of the annotators for corrected translations over the baseline ones. Varying Training Data In this section, we discuss the influence of the training data chosen for document-level models. In all experiments, we used the DocRepair model. Varying Training Data ::: The amount of training data Table TABREF33 provides BLEU and consistency scores for the DocRepair model trained on different amount of data. We see that even when using a dataset of moderate size (e.g., 5m fragments) we can achieve performance comparable to the model trained on a large amount of data (30m fragments). Moreover, we notice that deixis scores are less sensitive to the amount of training data than lexical cohesion and ellipsis scores. The reason might be that, as we observed in our previous work BIBREF11, inconsistencies in translations due to the presence of deictic words and phrases are more frequent in this dataset than other types of inconsistencies. Also, as we show in Section SECREF7, this is the phenomenon the model learns faster in training. Varying Training Data ::: One-way vs round-trip translations In this section, we discuss the limitations of using only monolingual data to model inconsistencies between sentence-level translations. In Section SECREF25 we observed a drop in performance on VP ellipsis for DocRepair compared to CADec, which was trained on parallel data. We hypothesized that this is due to the differences between one-way and round-trip translations, and now we test this hypothesis. To do so, we fix the dataset and vary the way in which the input for DocRepair is generated: round-trip or one-way translations. The latter assumes that document-level data is parallel, and translations are sampled from the source side of the sentences in a group rather than from their back-translations. For parallel data, we take 1.5m parallel instances which were used for CADec training and add 1m instances from our monolingual data. For segments in the parallel part, we either sample translations from the source side or use round-trip translations. The results are provided in Table TABREF35. The model trained on one-way translations is slightly better than the one trained on round-trip translations. As expected, VP ellipsis is the hardest phenomena to be captured using round-trip translations, and the DocRepair model trained on one-way translated data gains 6% accuracy on this test set. This shows that the DocRepair model benefits from having access to non-synthetic English data. This results in exposing DocRepair at training time to Russian translations which suffer from the same inconsistencies as the ones it will have to correct at test time. 
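As an illustration of how round-trip training examples for DocRepair could be assembled from monolingual groups, here is a short Python sketch. The MT systems are represented by placeholder callables (ru2en, en2ru_sample); the separator token, the noise probability, and the function name are ours, following the procedure described above, and this is not the authors' actual implementation.

```python
import random

SEP = "<sep>"  # reserved token-separator between sentences in a group

def make_docrepair_example(group_ru, ru2en, en2ru_sample, vocab, noise_prob=0.1):
    """Form one (inconsistent -> consistent) training pair from a group of
    consecutive target-language sentences via round-trip translation.
    ru2en and en2ru_sample stand in for the sentence-level MT systems
    (beam search and temperature sampling, respectively)."""
    noisy_sentences = []
    for sent in group_ru:
        en = ru2en(sent)                  # target -> source, beam search
        round_trip = en2ru_sample(en)     # source -> target, sampled (e.g. T=0.5)
        tokens = round_trip.split()
        # replace each input token with a random one with probability noise_prob
        tokens = [random.choice(vocab) if random.random() < noise_prob else t
                  for t in tokens]
        noisy_sentences.append(" ".join(tokens))
    source = f" {SEP} ".join(noisy_sentences)   # inconsistent pseudo-sentence (input)
    target = f" {SEP} ".join(group_ru)          # consistent group (output)
    return source, target
```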
Varying Training Data ::: Filtering: monolingual (no filtering) or parallel Note that the scores of the DocRepair model trained on 2.5m instances randomly chosen from monolingual data (Table TABREF33) are different from the ones for the model trained on 2.5m instances combined from parallel and monolingual data (Table TABREF35). For convenience, we show these two in Table TABREF36. The domain, the dataset these two data samples were gathered from, and the way we generated training data for DocRepair (round-trip translations) are all the same. The only difference lies in how the data was filtered. For parallel data, as in the previous work BIBREF6, we picked only sentence pairs with large relative time overlap of subtitle frames between source-language and target-language subtitles. This is necessary to ensure the quality of translation data: one needs groups of consecutive sentences in the target language where every sentence has a reliable translation. Table TABREF36 shows that the quality of the model trained on data which came from the parallel part is worse than the one trained on monolingual data. This indicates that requiring each sentence in a group to have a reliable translation changes the distribution of the data, which might be not beneficial for translation quality and provides extra motivation for using monolingual data. Learning Dynamics Let us now look into how the process of DocRepair training progresses. Figure FIGREF38 shows how the BLEU scores with the reference translation and with the baseline context-agnostic translation (i.e. the input for the DocRepair model) are changing during training. First, the model quickly learns to copy baseline translations: the BLEU score with the baseline is very high. Then it gradually learns to change them, which leads to an improvement in BLEU with the reference translation and a drop in BLEU with the baseline. Importantly, the model is reluctant to make changes: the BLEU score between translations of the converged model and the baseline is 82.5. We count the number of changed sentences in every 4-sentence fragment in the test set and plot the histogram in Figure FIGREF38. In over than 20$\%$ of the cases the model has not changed base translations at all. In almost $40\%$, it modified only one sentence and left the remaining 3 sentences unchanged. The model changed more than half sentences in a group in only $14\%$ of the cases. Several examples of the DocRepair translations are shown in Figure FIGREF43. Figure FIGREF42 shows how consistency scores are changing in training. For deixis, the model achieves the final quality quite quickly; for the rest, it needs a large number of training steps to converge. Related Work Our work is most closely related to two lines of research: automatic post-editing (APE) and document-level machine translation. Related Work ::: Automatic post-editing Our model can be regarded as an automatic post-editing system – a system designed to fix systematic MT errors that is decoupled from the main MT system. Automatic post-editing has a long history, including rule-based BIBREF17, statistical BIBREF18 and neural approaches BIBREF19, BIBREF20, BIBREF21. In terms of architectures, modern approaches use neural sequence-to-sequence models, either multi-source architectures that consider both the original source and the baseline translation BIBREF19, BIBREF20, or monolingual repair systems, as in BIBREF21, which is concurrent work to ours. 
True post-editing datasets are typically small and expensive to create BIBREF22, hence synthetic training data has been created that uses original monolingual data as output for the sequence-to-sequence model, paired with an automatic back-translation BIBREF23 and/or round-trip translation as its input(s) BIBREF19, BIBREF21. While previous work on automatic post-editing operated on the sentence level, the main novelty of this work is that our DocRepair model operates on groups of sentences and is thus able to fix consistency errors caused by the context-agnostic baseline MT system. We consider this strategy of sentence-level baseline translation and context-aware monolingual repair attractive when parallel document-level data is scarce. For training, the DocRepair model only requires monolingual document-level data. While we create synthetic training data via round-trip translation similarly to earlier work BIBREF19, BIBREF21, note that we purposefully use sentence-level MT systems for this to create the types of consistency errors that we aim to fix with the context-aware DocRepair model. Not all types of consistency errors that we want to fix emerge from a round-trip translation, so access to parallel document-level data can be useful (Section SECREF34). Related Work ::: Document-level NMT Neural models of MT that go beyond the sentence-level are an active research area BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF10, BIBREF9, BIBREF11. Typically, the main MT system is modified to take additional context as its input. One limitation of these approaches is that they assume that parallel document-level training data is available. Closest to our work are two-pass models for document-level NMT BIBREF24, BIBREF11, where a second, context-aware model takes the translation and hidden representations of the sentence-level first-pass model as its input. The second-pass model can in principle be trained on a subset of the parallel training data BIBREF11, somewhat relaxing the assumption that all training data is at the document level. Our work is different from this previous work in two main respects. Firstly, we show that consistency can be improved with only monolingual document-level training data. Secondly, the DocRepair model is decoupled from the first-pass MT system, which improves its portability. Conclusions We introduce the first approach to context-aware machine translation using only monolingual document-level data. We propose a monolingual DocRepair model to correct inconsistencies between sentence-level translations. The model performs automatic post-editing on a sequence of sentence-level translations, refining translations of sentences in context of each other. Our approach results in substantial improvements in translation quality as measured by BLEU, targeted contrastive evaluation of several discourse phenomena and human evaluation. Moreover, we perform error analysis and detect which discourse phenomena are hard to capture using only monolingual document-level data. While in the current work we used text fragments of 4 sentences, in future work we would like to consider longer contexts. Acknowledgments We would like to thank the anonymous reviewers for their comments. The authors also thank David Talbot and Yandex Machine Translation team for helpful discussions and inspiration. Ivan Titov acknowledges support of the European Research Council (ERC StG BroadSem 678254) and the Dutch National Science Foundation (NWO VIDI 639.022.518). 
Rico Sennrich acknowledges support from the Swiss National Science Foundation (105212_169888), the European Union’s Horizon 2020 research and innovation programme (grant agreement no 825460), and the Royal Society (NAF\R1\180122).
Question: What was the baseline?
Answer: the MT system trained on the data released by BIBREF11
Introduction Pre-trained models BIBREF0, BIBREF1 have received much attention recently thanks to their impressive results in many downstream NLP tasks. Additionally, multilingual pre-trained models enable many NLP applications for other languages via zero-shot cross-lingual transfer. Zero-shot cross-lingual transfer has shown promising results for rapidly building applications for low-resource languages. BIBREF2 show the potential of multilingual BERT BIBREF0 in zero-shot transfer for a large number of languages from different language families on five NLP tasks, namely natural language inference, document classification, named entity recognition, part-of-speech tagging, and dependency parsing. Although multilingual models are an important ingredient for enhancing language technology in many languages, recent research on improving pre-trained models puts much emphasis on English BIBREF3, BIBREF4, BIBREF5. The current state of affairs makes it difficult to translate advancements in pre-training from English to non-English languages. To the best of our knowledge, there are only three multilingual pre-trained models available to date: (1) multilingual BERT (mBERT), which supports 104 languages, (2) the cross-lingual language model BIBREF6, which supports 100 languages, and (3) Language-Agnostic SEntence Representations (LASER) BIBREF7, which supports 93 languages. Among the three models, LASER is based on a neural machine translation approach and strictly requires parallel data for training. Do multilingual models always need to be trained from scratch? Can we transfer linguistic knowledge learned by English pre-trained models to other languages? In this work, we develop a technique to rapidly transfer an existing pre-trained model from English to other languages in an energy-efficient way BIBREF8. As a first step, we focus on building a bilingual language model (LM) of English and a target language. Starting from a pre-trained English LM, we learn the target-language-specific parameters (i.e., word embeddings), while keeping the encoder layers of the pre-trained English LM fixed. We then fine-tune both the English and the target model to obtain the bilingual LM. We apply our approach to autoencoding language models with the masked language model objective and show the advantage of the proposed approach in zero-shot transfer. Our main contributions in this work are: We propose a fast adaptation method for obtaining a bilingual BERT$_{\textsc {base}}$ of English and a target language within a day using one Tesla V100 16GB GPU. We evaluate our bilingual LMs for six languages on two zero-shot cross-lingual transfer tasks, namely natural language inference BIBREF9 and universal dependency parsing. We show that our models offer performance competitive with, or even better than, mBERT. We illustrate that our bilingual LMs can serve as an excellent feature extractor for the supervised dependency parsing task. Bilingual Pre-trained LMs We first provide some background on pre-trained language models. Let $\mathbf{E}_e$ be the English word embeddings and $\Psi (\theta )$ be the Transformer BIBREF10 encoder with parameters $\theta $. Let $\mathbf{e}_{w_i}$ denote the embedding of word $w_i$ (i.e., $\mathbf{e}_{w_i} = \mathbf{E}_e[w_i]$). We omit positional embeddings and biases for clarity. A pre-trained LM typically performs the following computations: (i) transform a sequence of input tokens into contextualized representations $[\mathbf{h}_{w_1},\dots ,\mathbf{h}_{w_n}] = \Psi (\mathbf{e}_{w_1}, \dots , \mathbf{e}_{w_n}; \theta )$, and (ii) predict an output word $y_i$ at the $i^{\text{th}}$ position with $p(y_i \mid \mathbf{h}_{w_i}) \propto \exp (\mathbf{h}_{w_i}^\top \mathbf{e}_{y_i})$.
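To illustrate these two computations in runnable form, here is a toy NumPy sketch; the identity "encoder", the dimensions, and all names are placeholders for the real Transformer encoder and embedding matrices, not the actual pre-trained model.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

# Toy dimensions: vocabulary of 5 words, embeddings of size 4.
V, d = 5, 4
rng = np.random.default_rng(0)
E = rng.normal(size=(V, d))          # word-embedding matrix (also used as output embeddings)

def encoder(embeddings):
    """Stand-in for the Transformer encoder Psi(.; theta): it would map the
    input embeddings to contextualized representations of the same shape."""
    return embeddings                 # identity, for illustration only

word_ids = [2, 0, 3]                  # a toy input sequence w_1 .. w_n
H = encoder(E[word_ids])              # contextualized representations h_{w_1} .. h_{w_n}

# Predict the output word at position i: p(y_i | h_{w_i}) proportional to exp(h_{w_i}^T e_{y_i})
i = 1
p_y = softmax(H[i] @ E.T)
print(p_y, p_y.sum())                 # a distribution over the 5-word vocabulary
```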
An autoencoding LM BIBREF0 corrupts some input tokens $w_i$ by replacing them with a special token [MASK]. It then predicts the original tokens $y_i = w_i$ from the corrupted tokens. An autoregressive LM BIBREF3 predicts the next token ($y_i = w_{i+1}$) given all the previous tokens. The recently proposed XLNet model BIBREF5 is an autoregressive LM that factorizes the output over all possible permutations, which shows empirical performance improvements over GPT-2 due to its ability to capture bidirectional context. Here we assume that the encoder performs the necessary masking with respect to each training objective. Given an English pre-trained LM, we wish to learn a bilingual LM for English and a given target language $f$ under a limited computational budget. To quickly build a bilingual LM, we directly adapt the English pre-trained model to the target model. Our approach consists of three steps. First, we initialize target-language word embeddings $\mathbf{E}_f$ in the English embedding space such that the embeddings of a target word and its English equivalents are close together (§SECREF8). Next, we create a target LM from the target embeddings and the English encoder $\Psi (\theta )$. We then fine-tune the target embeddings while keeping $\Psi (\theta )$ fixed (§SECREF14). Finally, we construct a bilingual LM of $\mathbf{E}_e$, $\mathbf{E}_f$, and $\Psi (\theta )$ and fine-tune all the parameters (§SECREF15). Figure FIGREF7 illustrates the last two steps in our approach. Bilingual Pre-trained LMs ::: Initializing Target Embeddings Our approach to learning the initial foreign word embeddings $\mathbf{E}_f \in \mathbb{R}^{|V_f| \times d}$ is based on the idea of mapping the trained English word embeddings $\mathbf{E}_e \in \mathbb{R}^{|V_e| \times d}$ to $\mathbf{E}_f$ such that if a foreign word and an English word are similar in meaning then their embeddings are similar. Borrowing the idea of universal lexical sharing from BIBREF11, we represent each foreign word embedding $\mathbf{E}_f[i] \in \mathbb{R}^d$ as a linear combination of English word embeddings $\mathbf{E}_e[j] \in \mathbb{R}^d$, where $\mathbf {\alpha }_i \in \mathbb{R}^{|V_e|}$ is a sparse weight vector with $\sum _j^{|V_e|} \alpha _{ij} = 1$. In this step of initializing foreign embeddings, having a good estimate of $\mathbf {\alpha }$ can speed up convergence when tuning the foreign model and enable zero-shot transfer (§SECREF5). In the following, we discuss how to estimate $\mathbf {\alpha }_i\;\forall i\in \lbrace 1,2, \dots , |V_f|\rbrace $ under two scenarios: (i) we have English-foreign parallel data, and (ii) we only rely on English and foreign monolingual data. Bilingual Pre-trained LMs ::: Initializing Target Embeddings ::: Learning from Parallel Corpus Given an English-foreign parallel corpus, we can estimate the word translation probability $p(e\,|\,f)$ for any (English, foreign) pair $(e, f)$ using popular word-alignment BIBREF12 toolkits such as fast-align BIBREF13. We then assign each $\alpha _{ij}$ the estimated translation probability of English word $j$ given foreign word $i$. Since $\mathbf {\alpha }_i$ is estimated from word alignment, it is a sparse vector. Bilingual Pre-trained LMs ::: Initializing Target Embeddings ::: Learning from Monolingual Corpus For low-resource languages, parallel data may not be available. In this case, we rely only on monolingual data (e.g., Wikipedias). We estimate word translation probabilities from the word embeddings of the two languages. Word vectors of these languages can be learned using fastText BIBREF14 and then aligned into a shared space with English BIBREF15, BIBREF16. Unlike learning contextualized representations, learning word vectors is fast and computationally cheap.
Given the aligned vectors $\bar{\mathbf{E}}_f$ of the foreign language and $\bar{\mathbf{E}}_e$ of English, we calculate the word translation matrix $\mathbf {\alpha } \in \mathbb{R}^{|V_f|\times |V_e|}$ by applying $\mathrm {sparsemax}$ BIBREF17, instead of softmax, to the similarity scores between the aligned vectors. Sparsemax is a sparse version of softmax: it puts zero probability on most of the words in the English vocabulary except for a few English words that are similar to a given foreign word. This property is desirable in our approach since it leads to a better initialization of the foreign embeddings. Bilingual Pre-trained LMs ::: Fine-tuning Target Embeddings After initializing foreign word embeddings, we replace the English word embeddings in the English pre-trained LM with the foreign word embeddings to obtain the foreign LM. We then fine-tune only the foreign word embeddings on monolingual data. The training objective is the same as the training objective of the English pre-trained LM (i.e., masked LM for BERT). Since the trained encoder $\Psi (\theta )$ is good at capturing association, the purpose of this step is to further optimize the target embeddings such that the target LM can utilize the trained encoder for the association task. For example, if the words Albert Camus are present in a French input sequence, the self-attention in the encoder is more likely to attend to the words absurde and existentialisme once their embeddings are tuned. Bilingual Pre-trained LMs ::: Fine-tuning Bilingual LM We create a bilingual LM by plugging the foreign-language-specific parameters into the pre-trained English LM (Figure FIGREF7). The new model has two separate embedding layers and output layers, one for English and one for the foreign language. The encoder layer in between is shared. We then fine-tune this model using English and foreign monolingual data. Here, we keep tuning the model on English to ensure that it does not forget what it has learned in English and that we can use the resulting model for zero-shot transfer (§SECREF3). In this step, the encoder parameters are also updated so that it can learn syntactic aspects (i.e., word order, morphological agreement) of the target languages. Zero-shot Experiments We build our bilingual LMs, named RAMEN, starting from the BERT$_{\textsc {base}}$, BERT$_{\textsc {large}}$, RoBERTa$_{\textsc {base}}$, and RoBERTa$_{\textsc {large}}$ pre-trained models. Using BERT$_{\textsc {base}}$ allows us to compare the results with the mBERT model. Using BERT$_{\textsc {large}}$ and RoBERTa allows us to investigate whether the performance of the target LM correlates with the performance of the source LM. We evaluate our models on two cross-lingual zero-shot tasks: (1) Cross-lingual Natural Language Inference (XNLI) and (2) dependency parsing. Zero-shot Experiments ::: Data We evaluate our approach for six target languages: French (fr), Russian (ru), Arabic (ar), Chinese (zh), Hindi (hi), and Vietnamese (vi). These languages belong to four different language families. French, Russian, and Hindi are Indo-European languages, similar to English. Arabic, Chinese, and Vietnamese belong to the Afro-Asiatic, Sino-Tibetan, and Austro-Asiatic families, respectively. The choice of the six languages also reflects different training conditions depending on the amount of monolingual data. French, Russian, and Arabic can be regarded as high-resource languages, whereas Hindi has far less data and can be considered low-resource. For experiments that use parallel data to initialize foreign-specific parameters, we use the same datasets as in the work of BIBREF6.
Specifically, we use the United Nations Parallel Corpus BIBREF18 for en-ru, en-ar, en-zh, and en-fr. We collect en-hi parallel data from the IIT Bombay corpus BIBREF19 and en-vi data from OpenSubtitles 2018. For experiments that use only monolingual data to initialize foreign parameters, instead of training word vectors from scratch, we use the pre-trained word vectors from fastText BIBREF14 to estimate word translation probabilities (Eq. DISPLAY_FORM13). We align these vectors into a common space using orthogonal Procrustes BIBREF20, BIBREF15, BIBREF16. We only use identical words between the two languages as the supervised signal. We use WikiExtractor to extract raw sentences from Wikipedias as monolingual data for fine-tuning target embeddings and bilingual LMs (§SECREF15). We do not lowercase or remove accents in our data preprocessing pipeline. We tokenize English using the tokenizer provided with the pre-trained models. For target languages, we use fastBPE to learn 30,000 BPE codes when transferring from BERT and 50,000 codes when transferring from RoBERTa. We truncate the BPE vocabulary of the foreign languages to match the size of the English vocabulary in the source models. Precisely, the size of the foreign vocabulary is set to 32,000 when transferring from BERT and 50,000 when transferring from RoBERTa. We use the XNLI dataset BIBREF9 for the classification task and Universal Dependencies v2.4 BIBREF21 for the parsing task. Since a language might have more than one treebank in Universal Dependencies, we use the following treebanks: en_ewt (English), fr_gsd (French), ru_syntagrus (Russian), ar_padt (Arabic), vi_vtb (Vietnamese), hi_hdtb (Hindi), and zh_gsd (Chinese). Zero-shot Experiments ::: Data ::: Remark on BPE BIBREF22 show that sharing subwords between languages improves alignments between embedding spaces. BIBREF2 observe a strong correlation between the percentage of overlapping subwords and mBERT's performance for cross-lingual zero-shot transfer. However, in our current approach, subwords between source and target are not shared. A subword that appears in both the English and the foreign vocabulary has two different embeddings. Zero-shot Experiments ::: Estimating translation probabilities Since pre-trained models operate at the subword level, we need to estimate subword translation probabilities. Therefore, we subsample 2M sentence pairs from each parallel corpus and tokenize the data into subwords before running fast-align BIBREF13. Estimating subword translation probabilities from aligned word vectors requires an additional processing step, since the vectors provided by fastText are not at the subword level. We use the following approximation to obtain subword vectors: the vector $\mathbf{e}_s$ of subword $s$ is the weighted average of all the aligned word vectors $\mathbf{e}_{w_j}$ that have $s$ as a subword, $\mathbf{e}_s = \frac{1}{n_s}\sum _{w_j:\, s\in w_j} p(w_j)\,\mathbf{e}_{w_j}$, where $p(w_j)$ is the unigram probability of word $w_j$ and $n_s = \sum _{w_j:\, s\in w_j} p(w_j)$. We take the top 50,000 words in each set of aligned word vectors to compute subword vectors. In both cases, not all the words in the foreign vocabulary can be initialized from the English word embeddings. Those words are initialized randomly from a Gaussian $\mathcal {N}(0, \frac{1}{d^2})$. Zero-shot Experiments ::: Hyper-parameters In all the experiments, we tune RAMEN$_{\textsc {base}}$ for 175,000 updates and RAMEN$_{\textsc {large}}$ for 275,000 updates, where the first 25,000 updates are for language-specific parameters. The sequence length is set to 256.
The mini-batch sizes are 64 and 24 when tuning language-specific parameters using RAMEN$_{\textsc {base}}$ and RAMEN$_{\textsc {large}}$, respectively. For tuning bilingual LMs, we use a mini-batch size of 64 for RAMEN$_{\textsc {base}}$ and 24 for RAMEN$_{\textsc {large}}$, where half of the batch consists of English sequences and the other half of foreign sequences. This strategy of balancing mini-batches has been used in multilingual neural machine translation BIBREF23, BIBREF24. We optimize RAMEN$_{\textsc {base}}$ using the Lookahead optimizer BIBREF25 wrapped around Adam, with a learning rate of $10^{-4}$, $k=5$ fast weight updates, and interpolation parameter $\alpha =0.5$. We choose the Lookahead optimizer because it has been shown to be robust to the initial parameters of the base optimizer (Adam). For the Adam optimizer, we linearly increase the learning rate from $10^{-7}$ to $10^{-4}$ in the first 4000 updates and then follow an inverse square root decay. All RAMEN$_{\textsc {large}}$ models are optimized with Adam due to memory limits. When fine-tuning RAMEN on XNLI and UD, we use a mini-batch size of 32 and an Adam learning rate of $10^{-5}$. The number of epochs is set to 4 and 50 for the XNLI and UD tasks, respectively. All experiments are carried out on a single Tesla V100 16GB GPU. Each RAMEN$_{\textsc {base}}$ model is trained within a day and each RAMEN$_{\textsc {large}}$ within two days. Results In this section, we present the results of our models for two zero-shot cross-lingual transfer tasks: XNLI and dependency parsing. Results ::: Cross-lingual Natural Language Inference Table TABREF32 shows the XNLI test accuracy. For reference, we also include the scores from previous work, notably the state-of-the-art system XLM BIBREF6. Before discussing the results, we spell out that the fairest comparison in this experiment is the comparison between mBERT and RAMEN$_{\textsc {base}}$+BERT trained with monolingual data only. We first discuss the transfer results from BERT. Initialized from fastText vectors, RAMEN$_{\textsc {base}}$ slightly outperforms mBERT by 1.9 points on average and widens the gap to 3.3 points on Arabic. RAMEN$_{\textsc {base}}$ gains an extra 0.8 points on average when initialized from parallel data. With triple the number of parameters, RAMEN$_{\textsc {large}}$ offers an additional boost in terms of accuracy, and initialization with parallel data consistently improves performance. It has been shown that BERT$_{\textsc {large}}$ significantly outperforms BERT$_{\textsc {base}}$ on 11 English NLP tasks BIBREF0; the strength of BERT$_{\textsc {large}}$ also shows up when adapted to foreign languages. Transferring from RoBERTa leads to better zero-shot accuracies. Under the same initialization condition, RAMEN$_{\textsc {base}}$+RoBERTa outperforms RAMEN$_{\textsc {base}}$+BERT on average by 2.9 and 2.3 points when initializing from monolingual and parallel data, respectively. This result shows that, with a similar number of parameters, our approach benefits from a better English pre-trained model. When transferring from RoBERTa$_{\textsc {large}}$, we obtain state-of-the-art results for five languages. Currently, RAMEN only uses parallel data to initialize foreign embeddings. RAMEN could also exploit parallel data through the translation objective proposed in XLM. We believe that utilizing parallel data during the fine-tuning of RAMEN would bring additional benefits for zero-shot tasks. We leave this exploration to future work.
In summary, starting from BERT$_{\textsc {base}}$, our approach obtains bilingual LMs competitive with mBERT for zero-shot XNLI. Our approach shows accuracy gains when adapting from a better pre-trained model. Results ::: Universal Dependency Parsing We build a graph-based dependency parser BIBREF27 on top of RAMEN. For the purpose of evaluating the contextual representations learned by our model, we do not use part-of-speech tags. Contextualized representations are directly fed into Deep-Biaffine layers to predict arc and label scores. Table TABREF34 presents the Labeled Attachment Scores (LAS) for zero-shot dependency parsing. We first look at the fairest comparison, between mBERT and monolingually initialized RAMEN$_{\textsc {base}}$+BERT. The latter outperforms the former on five languages, the exception being Arabic. We observe the largest gain of +5.2 LAS for French. Chinese enjoys +3.1 LAS from our approach. With a similar architecture (12 or 24 layers) and initialization (using monolingual or parallel data), RAMEN+RoBERTa performs better than RAMEN+BERT for most of the languages. Arabic and Hindi benefit the most from bigger models. For the other four languages, RAMEN$_{\textsc {large}}$ renders a modest improvement over RAMEN$_{\textsc {base}}$. Analysis ::: Impact of initialization Initializing foreign embeddings is the backbone of our approach. A good initialization leads to better zero-shot transfer results and enables fast adaptation. To verify the importance of a good initialization, we train a RAMEN$_{\textsc {base}}$+RoBERTa model whose foreign word embeddings are initialized randomly from $\mathcal {N}(0, \frac{1}{d^2})$. For a fair comparison, we use the same hyper-parameters as in §SECREF27. Table TABREF36 shows the XNLI and UD parsing results with random initialization. In comparison to the initialization using aligned fastText vectors, random initialization decreases the zero-shot performance of RAMEN$_{\textsc {base}}$ by 15.9% for XNLI and 27.8 points for UD parsing on average. We also see that zero-shot parsing of SOV languages (Arabic and Hindi) suffers from random initialization. Analysis ::: Are contextual representations from RAMEN also good for supervised parsing? All the RAMEN models are built from English and tuned on English for zero-shot cross-lingual tasks. It is reasonable to expect RAMEN to do well on those tasks, as we have shown in our experiments. But is it also a good feature extractor for supervised tasks? We offer a partial answer to this question by evaluating our model for supervised dependency parsing on UD datasets. We use the train/dev/test splits provided in UD to train and evaluate our RAMEN-based parser. Table TABREF38 summarizes the results (LAS) of our supervised parser. For a fair comparison, we choose mBERT as the baseline, and all the RAMEN models are initialized from aligned fastText vectors. With the same architecture of 12 Transformer layers, RAMEN$_{\textsc {base}}$+BERT performs competitively with mBERT and outshines it by +1.2 points for Vietnamese. The best LAS results are obtained by RAMEN$_{\textsc {large}}$+RoBERTa with 24 Transformer layers. Overall, our results indicate the potential of using contextual representations from RAMEN for supervised tasks. Analysis ::: How does linguistic knowledge transfer happen through each training stage?
We evaluate the performance of RAMEN+RoBERTa$_{\textsc {base}}$ (initialized from monolingual data) at each training stage: initialization of word embeddings (0K updates), fine-tuning target embeddings (25K), and fine-tuning the model on both English and the target language (at every subsequent 25K updates). The results are presented in Figure FIGREF40. Without fine-tuning, the average accuracy on XNLI is 39.7% for a three-way classification task, and the average LAS score is 3.6 for dependency parsing. We see the biggest leap in performance after 50K updates. While the semantic task profits significantly from the 25K updates of the target embeddings, the syntactic task benefits from further fine-tuning of the encoder. This is expected, since the target languages might exhibit syntactic structures different from English, and fine-tuning the encoder helps capture language-specific structures. We observe a substantial gain of 19-30 LAS for all languages except French after 50K updates. Language similarities have more impact on transferring syntax than semantics. Without tuning the English encoder, French reaches 50.3 LAS, being closely related to English, whereas Arabic and Hindi, SOV languages, modestly reach 4.2 and 6.4 points using the SVO encoder. Although Chinese has SVO order, it is often seen as head-final while English is strongly head-initial. Perhaps this explains the poor performance for Chinese. Limitations While we have successfully adapted autoencoding pre-trained LMs from English to other languages, the question of whether our approach can also be applied to autoregressive LMs such as XLNet remains open. We leave this investigation to future work. Conclusions In this work, we have presented a simple and effective approach for rapidly building a bilingual LM under a limited computational budget. Using BERT as the starting point, we demonstrate that our approach produces models that perform better than mBERT on two cross-lingual zero-shot tasks: sentence classification and dependency parsing. We find that the performance of our bilingual LM, RAMEN, correlates with the performance of the original pre-trained English models. We also find that RAMEN is a powerful feature extractor in supervised dependency parsing. Finally, we hope that our work sparks interest in developing fast and effective methods for transferring pre-trained English models to other languages.
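As a recap of the embedding-initialization step at the core of this approach, the following sketch expresses the foreign embeddings as sparse linear combinations of English embeddings. The toy numbers and the function name are ours; in practice the weight matrix would come from fast-align translation probabilities or from a sparsemax over aligned fastText vectors, as described earlier, so this is only an illustrative sketch, not the released implementation.

```python
import numpy as np

def init_foreign_embeddings(alpha, english_embeddings):
    """Initialize foreign word embeddings as sparse linear combinations of
    English embeddings: E_f[i] = sum_j alpha[i, j] * E_e[j].
    alpha has shape (|V_f|, |V_e|); each row is a (sparse) translation
    distribution summing to 1."""
    return alpha @ english_embeddings

# Toy example: 3 foreign words, 4 English words, embeddings of size 2.
E_e = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [1.0, 1.0],
                [2.0, 0.0]])
alpha = np.array([[1.0, 0.0, 0.0, 0.0],    # foreign word 0 maps to English word 0
                  [0.0, 0.5, 0.5, 0.0],    # foreign word 1 splits its mass
                  [0.0, 0.0, 0.0, 1.0]])   # foreign word 2 maps to English word 3
print(init_foreign_embeddings(alpha, E_e))
```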
Question: What metrics are used for evaluation?
Answer: translation probabilities, Labeled Attachment Scores (LAS)
Introduction Sarcasm is an intensive, indirect and complex construct that is often intended to express contempt or ridicule. Sarcasm, in speech, is multi-modal, involving tone, body language and gestures along with the linguistic artifacts used in speech. Sarcasm in text, on the other hand, is more restrictive when it comes to such non-linguistic modalities. This makes recognizing textual sarcasm more challenging for both humans and machines. Sarcasm detection plays an indispensable role in applications like online review summarizers, dialog systems, recommendation systems and sentiment analyzers. This makes automatic detection of sarcasm an important problem. However, it has been quite difficult to solve such a problem with traditional NLP tools and techniques. This is apparent from the results reported in the survey by Joshi et al. (2016). The following discussion brings more insight into this. Consider a scenario where an online reviewer gives a negative opinion about a movie through sarcasm: “This is the kind of movie you see because the theater has air conditioning”. It is difficult for an automatic sentiment analyzer to assign a rating to the movie and, in the absence of any other information, such a system may not be able to comprehend that prioritizing the air-conditioning facilities of the theater over the movie experience indicates a negative sentiment towards the movie. This gives an intuition as to why, for sarcasm detection, it is necessary to go beyond textual analysis. We aim to address this problem by exploiting the psycholinguistic side of sarcasm detection, using cognitive features extracted with the help of eye-tracking. A motivation to consider cognitive features comes from analyzing human eye-movement trajectories, which support the conjecture that reading sarcastic texts induces distinctive eye movement patterns compared to literal texts. The cognitive features, derived from human eye movement patterns observed during reading, include two primary feature types. The cognitive features, along with textual features used in the best available sarcasm detectors, are used to train binary classifiers against given sarcasm labels. Our experiments show a significant improvement in classification accuracy over the state of the art by performing such augmentation. Related Work Sarcasm, in general, has been the focus of research for quite some time. In one of the pioneering works, Jorgensen et al. (1984) explained how sarcasm arises when a figurative meaning is used opposite to the literal meaning of the utterance. In the words of Clark and Gerrig (1984), sarcasm processing involves canceling the indirectly negated message and replacing it with the implicated one. Giora (1995), on the other hand, defines sarcasm as a mode of indirect negation that requires processing of both negated and implicated messages. Ivanko and Pexman (2003) define sarcasm as a six-tuple entity consisting of a speaker, a listener, Context, Utterance, Literal Proposition and Intended Proposition, and study the cognitive aspects of sarcasm processing. Computational linguists have previously addressed this problem using rule-based and statistical techniques that make use of: (a) unigrams and pragmatic features BIBREF0, BIBREF1, BIBREF2, BIBREF3; (b) stylistic patterns BIBREF4 and patterns related to situational disparity BIBREF5; and (c) hashtag interpretations BIBREF6, BIBREF7.
Most previous work on sarcasm detection uses distant-supervision-based techniques (e.g., leveraging hashtags) and stylistic/pragmatic features (emoticons, laughter expressions such as “lol”, etc.). However, detecting sarcasm in linguistically well-formed structures, in the absence of explicit cues or information (like emoticons), proves to be hard using such linguistic/stylistic features alone. With the advent of sophisticated eye-trackers and electro/magneto-encephalographic (EEG/MEG) devices, it has become possible to delve deep into the cognitive underpinnings of sarcasm understanding. Filik2014, using a series of eye-tracking and EEG experiments, try to show that for unfamiliar ironies, the literal interpretation would be computed first. They also show that a mismatch with context would lead to a re-interpretation of the statement as being ironic. Camblin2007103 show that in multi-sentence passages, discourse congruence has robust effects on eye movements. This also implies that disrupted processing occurs for discourse-incongruent words, even though they are perfectly congruous at the sentence level. In our previous work BIBREF8, we augment cognitive features, derived from eye-movement patterns of readers, with textual features to detect whether a human reader has realized the presence of sarcasm in text or not. The recent advancements in the literature discussed above motivate us to explore gaze-based cognition for sarcasm detection. As far as we know, our work is the first of its kind. Eye-tracking Database for Sarcasm Analysis Sarcasm often emanates from incongruity BIBREF9, which forces the brain to reanalyze it BIBREF10. This, in turn, affects the way eyes move through the text. Hence, distinctive eye-movement patterns may be observed in the case of successful processing of sarcasm in text, in contrast to literal texts. This hypothesis forms the crux of our method for sarcasm detection, and we validate it using our previously released, freely available sarcasm dataset BIBREF8, enriched with gaze information. Document Description The database consists of 1,000 short texts, each having 10-40 words. Out of these, 350 are sarcastic and are collected as follows: (a) 103 sentences are from two popular sarcastic quote websites, (b) 76 sarcastic short movie reviews are manually extracted from the Amazon Movie Corpus BIBREF11 by two linguists, and (c) 171 tweets are downloaded using the hashtag #sarcasm from Twitter. The 650 non-sarcastic texts are either downloaded from Twitter or extracted from the Amazon Movie Review corpus. The sentences do not contain words/phrases that are highly topic or culture specific. The tweets were normalized to make them linguistically well formed, to avoid difficulty in interpreting social media lingo. Every sentence in our dataset carries a positive or negative opinion about a specific “aspect”. For example, the sentence “The movie is extremely well cast” has positive sentiment about the aspect “cast”. The annotators were seven graduate students with a science and engineering background, possessing good English proficiency. They were given a set of instructions beforehand and were advised to seek clarifications before proceeding. The instructions mentioned the nature of the task, the annotation input method, and the necessity of minimizing head movement during the experiment. Task Description The task assigned to annotators was to read sentences one at a time and label them with binary labels indicating the polarity (i.e., positive/negative).
Note that the participants were not instructed to annotate whether a sentence is sarcastic or not, in order to rule out the priming effect (i.e., if sarcasm is expected beforehand, processing incongruity becomes relatively easier BIBREF12). The setup ensures “ecological validity” in two ways: (1) Readers are not given any clue that they have to treat sarcasm with special attention. This is done by setting the task to polarity annotation (instead of sarcasm detection). (2) Sarcastic sentences are mixed with non-sarcastic text, which gives no prior knowledge about whether the forthcoming text will be sarcastic or not. The eye-tracking experiment is conducted following the standard norms of eye-movement research BIBREF13. One sentence is displayed to the reader at a time, along with the “aspect” with respect to which the annotation has to be provided. While reading, an SR-Research Eyelink-1000 eye-tracker (monocular remote mode, sampling rate 500 Hz) records several eye-movement parameters such as fixations (a long stay of gaze), saccades (quick jumps of gaze between two positions of rest) and pupil size. The accuracy of polarity annotation varies between 72%-91% for sarcastic texts and 75%-91% for non-sarcastic texts, showing the inherent difficulty of sentiment annotation when sarcasm is present in the text under consideration. Annotation errors may be attributed to: (a) lack of patience/attention while reading, (b) issues related to text comprehension, and (c) confusion/indecisiveness caused by lack of context. For our analysis, we do not discard the incorrect annotations present in the database. Since our system eventually aims to involve online readers for sarcasm detection, it will be hard to segregate readers who misinterpret the text. We make the rational assumption that, for a particular text, most readers from a fairly large population will be able to identify sarcasm. Under this assumption, the eye-movement parameters, averaged across all readers in our setting, may not be significantly distorted by a few readers who fail to identify sarcasm. This assumption applies to both the regular and multi-instance based classifiers explained in section SECREF6. Analysis of Eye-movement Data We observe distinct behavior during sarcasm reading by analyzing the “fixation duration on the text” (also referred to as “dwell time” in the literature) and “scanpaths” of the readers. Variation in the Average Fixation Duration per Word Since sarcasm in text can be expected to induce cognitive load, it is reasonable to believe that it would require more processing time BIBREF14. Hence, fixation duration normalized by total word count should usually be higher for a sarcastic text than for a non-sarcastic one. We observe this for all participants in our dataset, with the average fixation duration per word for sarcastic texts being at least 1.5 times that of non-sarcastic texts. To test the statistical significance, we conduct a two-tailed t-test (assuming unequal variance) to compare the average fixation duration per word for sarcastic and non-sarcastic texts. The hypothesized mean difference is set to 0 and the error tolerance limit ($\alpha$) is set to 0.05. The t-test analysis, presented in Table TABREF11, shows that for all participants, a statistically significant difference exists between the average fixation duration per word for sarcasm (higher average fixation duration) and non-sarcasm (lower average fixation duration).
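For concreteness, the per-participant comparison just described could be carried out with a few lines of Python; this is a minimal sketch, and the numbers below are placeholders rather than values from our data.

```python
from scipy import stats

# Hypothetical per-sentence values of average fixation duration per word
# (in milliseconds) for one participant; real values would come from the
# eye-tracking database described above.
sarcastic = [312.4, 289.0, 350.7, 301.2, 298.5]
non_sarcastic = [201.3, 215.8, 190.2, 208.9, 199.4]

# Two-tailed t-test assuming unequal variances (Welch's t-test),
# with the error tolerance limit alpha = 0.05 used in the analysis.
t_stat, p_value = stats.ttest_ind(sarcastic, non_sarcastic, equal_var=False)

alpha = 0.05
print(f"t = {t_stat:.3f}, p = {p_value:.4f}, significant: {p_value < alpha}")
```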
This affirms that the presence of sarcasm affects the duration of fixation on words. It is important to note that longer fixations may also be caused by other linguistic subtleties (such as difficult words, ambiguity and syntactically complex structures) that delay comprehension, or by oculomotor control problems forcing readers to spend time adjusting eye muscles. So, an elevated average fixation duration per word may not sufficiently indicate the presence of sarcasm. We would also like to note that, for our dataset, when we considered readability (Flesch reading-ease score BIBREF15), number of words in a sentence and average characters per word, along with the sarcasm label, as predictors of average fixation duration in a linear mixed-effects model BIBREF16, the sarcasm label turned out to be the most significant predictor, with the largest slope. This indicates that average fixation duration per word has a strong connection with the text being sarcastic, at least in our dataset. We now analyze scanpaths to gain more insight into the sarcasm comprehension process. Analysis of Scanpaths Scanpaths are line graphs that contain fixations as nodes and saccades as edges; the radii of the nodes represent the fixation duration. A scanpath corresponds to a participant's eye-movement pattern while reading a particular sentence. Figure FIGREF14 presents scanpaths of three participants for the sarcastic sentence S1 and the non-sarcastic sentence S2. The x-axis of the graph represents the sequence of words a reader reads, and the y-axis represents a temporal sequence in milliseconds. Consider a sarcastic text containing incongruous phrases A and B. Our qualitative scanpath analysis reveals that scanpaths with respect to sarcasm processing have two typical characteristics. Often, a long regression - a saccade that goes back to a previously visited segment - is observed when a reader starts reading B after skimming through A. In a few cases, the fixation durations on A and B are significantly higher than the average fixation duration per word. In sentence S1, we see long and multiple regressions from the two incongruous phrases “misconception” and “cherish”, and a few instances where the phrases “always cherish” and “original misconception” are fixated longer than usual. Such eye-movement behaviors are not seen for S2. Though sarcasm induces distinctive scanpaths like the ones depicted in Figure FIGREF14 in the observed examples, the presence of such patterns is not sufficient to guarantee sarcasm; such patterns may also arise from literal texts. We believe that a combination of linguistic features, readability of text and features derived from scanpaths would help discriminative machine learning models learn sarcasm better. Features for Sarcasm Detection We describe the features used for sarcasm detection in Table . The features listed under lexical, implicit incongruity and explicit incongruity are borrowed from the literature (predominantly from joshi2015harnessing). These features are essential to separate sarcasm from other forms of semantic incongruity in text (for example, incongruity arising from semantic ambiguity or from metaphors). Two additional textual features, viz. readability and word count of the text, are also taken into consideration. These features are used to reduce the effect of text hardness and text length on the eye-movement patterns.
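As a rough illustration of the mixed-effects analysis mentioned above, the following sketch uses statsmodels; the column names and values are our own assumptions, and a real analysis would use one row per (participant, sentence) pair from the full dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per (participant, sentence) pair.
# Column names are illustrative and not taken from the paper.
df = pd.DataFrame({
    "avg_fix_dur": [250.0, 180.0, 310.0, 200.0, 295.0, 210.0, 330.0, 190.0],
    "sarcasm":     [1, 0, 1, 0, 1, 0, 1, 0],        # sarcasm label of the sentence
    "flesch":      [72.1, 65.3, 80.4, 70.0, 68.2, 75.5, 71.9, 66.4],
    "n_words":     [14, 22, 11, 18, 16, 20, 12, 19],
    "chars_per_w": [4.8, 5.1, 4.2, 4.9, 4.6, 5.0, 4.4, 4.7],
    "participant": ["P1", "P1", "P1", "P1", "P2", "P2", "P2", "P2"],
})

# Linear mixed-effects model with a random intercept per participant,
# mirroring the kind of analysis described above.
model = smf.mixedlm("avg_fix_dur ~ sarcasm + flesch + n_words + chars_per_w",
                    df, groups=df["participant"])
result = model.fit()
print(result.summary())
```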
Simple Gaze Based Features Readers' eye-movement behavior, characterized by fixations, forward saccades, skips and regressions, can be directly quantified by simple statistical aggregation (i.e., either computing features for individual participants and then averaging, or performing multi-instance based learning as explained in section SECREF6). Since these eye-movement attributes relate to the cognitive process of reading BIBREF17, we consider them as features in our model. Some of these features have been reported by sarcasmunderstandability for modeling sarcasm understandability of readers. However, as far as we know, these features are being introduced to NLP tasks like textual sarcasm detection for the first time. The values of these features are believed to increase with the degree of surprisal caused by incongruity in text (except skip count, which will decrease). Complex Gaze Based Features For these features, we rely on a graph structure, namely “saliency graphs”, derived from eye-gaze information and word sequences in the text. For each reader and each sentence, we construct a “saliency graph”, representing the reader's attention characteristics. A saliency graph for a sentence INLINEFORM0 and a reader INLINEFORM1, represented as INLINEFORM2, is a graph with vertices (INLINEFORM3) and edges (INLINEFORM4) where each vertex INLINEFORM5 corresponds to a word in INLINEFORM6 (not necessarily unique) and there exists an edge INLINEFORM7 between vertices INLINEFORM8 and INLINEFORM9 if the reader performs at least one saccade between the words corresponding to INLINEFORM10 and INLINEFORM11. Figure FIGREF15 shows an example of a saliency graph. A saliency graph may be weighted, but not necessarily connected, for a given text (as there may be words in the given text with no fixation on them). The “complex” gaze features derived from saliency graphs are also motivated by the theory of incongruity. For instance, the edge density of a saliency graph increases with the number of distinct saccades, which could arise from the complexity caused by the presence of sarcasm. Similarly, the highest weighted degree of a graph is expected to be higher if a node corresponds to a phrase that is incongruous with some other phrase in the text. The Sarcasm Classifier We interpret sarcasm detection as a binary classification problem. The training data consists of 994 examples created using our eye-movement database for sarcasm detection. To check the effectiveness of our feature set, we observe the performance of multiple classification techniques on our dataset through stratified 10-fold cross validation. We also compare the classification accuracy of our system with the best available systems proposed by riloff2013sarcasm and joshi2015harnessing on our dataset. Using the Weka BIBREF18 and LibSVM BIBREF19 APIs, we implement several classifiers, including an SVM and a multi-instance classifier (MILR), discussed below. Results Table TABREF17 shows the classification results considering various feature combinations for different classifiers and other systems. These are: Unigram (with principal components of unigram feature vectors), Sarcasm (the feature set reported by joshi2015harnessing, subsuming unigram features and features from other reported systems), Gaze (the simple and complex cognitive features we introduce, along with readability and word count features), and Gaze+Sarcasm (the complete set of features). For all regular classifiers, the gaze features are averaged across participants and augmented with linguistic and sarcasm-related features.
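Before turning to the multi-instance setup below, here is a minimal sketch of how such a saliency graph and two of the complex features could be computed from a reader's fixation sequence; the input format and function names are our own assumptions, not the original implementation.

```python
import networkx as nx

def build_saliency_graph(words, fixation_sequence):
    """Builds a weighted saliency graph for one reader and one sentence.

    `words` is the token list of the sentence; `fixation_sequence` is the
    ordered list of word indices the reader fixated on, so consecutive
    entries correspond to saccades (a hypothetical input format)."""
    g = nx.Graph()
    g.add_nodes_from(range(len(words)))
    for src, dst in zip(fixation_sequence, fixation_sequence[1:]):
        if src == dst:
            continue  # refixation on the same word, not a saccade between words
        if g.has_edge(src, dst):
            g[src][dst]["weight"] += 1
        else:
            g.add_edge(src, dst, weight=1)
    return g

def edge_density(g):
    n = g.number_of_nodes()
    return 0.0 if n < 2 else 2.0 * g.number_of_edges() / (n * (n - 1))

def highest_weighted_degree(g):
    degrees = dict(g.degree(weight="weight"))
    return max(degrees.values()) if degrees else 0

sentence = "I will always cherish the original misconception I had of you".split()
fixations = [0, 2, 3, 5, 6, 2, 3, 6, 9]   # hypothetical fixation order
graph = build_saliency_graph(sentence, fixations)
print(edge_density(graph), highest_weighted_degree(graph))
```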
For the MILR classifier, the gaze features derived from each participant are augmented with linguistic features and thus a multi-instance “bag” of features is formed for each sentence in the training data. This multi-instance dataset is given to an MILR classifier, which follows the standard multi-instance assumption to derive class labels for each bag. For all the classifiers, our feature combination outperforms the baselines (considering only unigram features) as well as BIBREF3, with the MILR classifier getting an F-score improvement of 3.7% and a Kappa difference of 0.08. We also achieve an improvement of 2% over the baseline using the SVM classifier when we employ our feature set. We also observe that the gaze features alone capture the differences between the sarcasm and non-sarcasm classes with high precision but low recall. To see if the improvement obtained is statistically significant over the state-of-the-art system with textual sarcasm features alone, we perform a McNemar test. The output of the SVM classifier using only the linguistic features used for sarcasm detection by joshi2015harnessing and the output of the MILR classifier with the complete set of features are compared, setting the threshold INLINEFORM0. There was a significant difference in the classifiers' accuracy, with p(two-tailed) = 0.02 and an odds ratio of 1.43, showing that the classification accuracy improvement is unlikely to be observed by chance at the 95% confidence level. Considering Reading Time as a Cognitive Feature along with Sarcasm Features One may argue that considering a simple measure of reading effort such as “reading time” as the cognitive feature, instead of the expensive eye-tracking features, may be a cost-effective solution for sarcasm detection. To examine this, we repeated our experiments with “reading time” as the only cognitive feature, augmented with the textual features. The F-scores of all the classifiers turn out to be close to those of the classifiers considering the sarcasm features alone, and the difference in the improvement is not statistically significant (INLINEFORM0). On the other hand, F-scores with gaze features are superior to the F-scores when reading time is considered as a cognitive feature. How Effective are the Cognitive Features We examine the effect of cognitive features on classification accuracy by varying the input training data size. For this, we create a stratified (keeping the class ratio constant) random train-test split of 80%:20%. We train our classifier with 100%, 90%, 80% and 70% of the training data, with our whole feature set and with the feature combination from joshi2015harnessing. The goodness of our system is demonstrated by improvements in F-score and Kappa statistics, shown in Figure FIGREF22. We further analyze the importance of features by ranking them based on (a) the Chi-squared test and (b) the Information Gain test, using Weka's attribute selection module. Figure FIGREF23 shows the top 20 ranked features produced by both tests. In both cases, we observe 16 out of the top 20 features to be gaze features. Further, in each case, Average Fixation Duration per Word and Largest Regression Position are seen to be the two most significant features. Example Cases Table TABREF21 shows a few example cases from the experiment with the stratified 80%-20% train-test split. Example sentence 1 is sarcastic, and requires extra-linguistic knowledge (about poor living conditions at Manchester).
Hence, a sarcasm detector relying only on textual features is unable to detect the underlying incongruity. However, our system predicts the label successfully, possibly helped by the gaze features. Similarly, for sentence 2, the false sense of incongruity (due to phrases like “Helped me” and “Can't stop”) affects the system with only linguistic features. Our system, though, performs well in this case as well. Sentence 3 presents a false-negative case where it was hard even for humans to get the sarcasm. This is why our gaze features (and subsequently the complete set of features) lead to an erroneous prediction. In sentence 4, gaze features alone falsely indicate the presence of incongruity, whereas the system predicts correctly when gaze and linguistic features are taken together. From these examples, it can be inferred that gaze features alone would not have sufficed to rule out other forms of incongruity that do not result in sarcasm. Error Analysis Errors committed by our system arise from multiple factors, ranging from limitations of the eye-tracker hardware to errors committed by linguistic tools and resources. Also, aggregating various eye-tracking parameters to extract the cognitive features may have caused information loss in the regular classification setting. Conclusion In the current work, we created a novel framework to detect sarcasm that derives insights from human cognition, which manifests in eye-movement patterns. We hypothesized that distinctive eye-movement patterns associated with reading sarcastic text enable improved detection of sarcasm. We augmented traditional linguistic features with cognitive features obtained from readers' eye-movement data, in the form of simple gaze-based features and complex features derived from a graph structure. This extended feature set improved the success rate of the sarcasm detector by 3.7% over the best available system. Using cognitive features in an NLP system like ours is the first proposal of its kind. Our general approach may be useful in other NLP sub-areas like sentiment and emotion analysis, text summarization and question answering, where considering textual clues alone does not prove to be sufficient. We propose to extend this work in the future by exploring deeper graph and gaze features. We also propose to develop models for learning complex gaze-feature representations that account for the power of individual eye-movement patterns along with the aggregated patterns of eye movements. Acknowledgments We thank the members of the CFILT Lab, especially Jaya Jha and Meghna Singh, and the students of IIT Bombay for their help and support.
What kind of stylistic features are obtained?
Unanswerable
3,543
qasper
4k
Problem Statement Spoken conversations remain the most natural and effortless means of human communication. Thus, a lot of valuable information is conveyed and exchanged in such an unstructured form. In telehealth settings, nurses might call discharged patients who have returned home to continue to monitor their health status. Human language technology that can efficiently and effectively extract key information from such conversations is clinically useful, as it can help streamline workflow processes and digitally document patient medical information to increase staff productivity. In this work, we design and prototype a dialogue comprehension system in a question-answering manner, which is able to comprehend spoken conversations between nurses and patients to extract clinical information. Motivation of Approach Machine comprehension of written passages has made tremendous progress recently. Large quantities of supervised training data for reading comprehension (e.g., SQuAD BIBREF0), the wide adoption of and intense experimentation with neural modeling BIBREF1, BIBREF2, and the advancements in vector representations of word embeddings BIBREF3, BIBREF4 all contribute significantly to the achievements obtained so far. The first factor, the availability of large-scale datasets, empowers the latter two factors. To date, there is still very limited well-annotated large-scale data suitable for modeling human-human spoken dialogues. Therefore, it is not straightforward to directly port over the recent endeavors in reading comprehension to dialogue comprehension tasks. In healthcare, conversation data is even scarcer due to privacy issues. Crowd-sourcing is an efficient way to annotate large quantities of data, but it is less suitable for healthcare scenarios, where domain knowledge is required to guarantee data quality. To demonstrate the feasibility of a dialogue comprehension system used for extracting key clinical information from symptom monitoring conversations, we developed a framework to construct a simulated human-human dialogue dataset to bootstrap such a prototype. Similar efforts have been conducted for human-machine dialogues for restaurant or movie reservations BIBREF5. To the best of our knowledge, no one to date has done so for human-human conversations in healthcare. Human-human Spoken Conversations Human-human spoken conversations are a dynamic and interactive flow of information exchange. While developing technology to comprehend such spoken conversations presents similar technical challenges as machine comprehension of written passages BIBREF6, the challenges are further complicated by the interactive nature of human-human spoken conversations: (1) Zero anaphora is more common: Co-reference resolution of spoken utterances from multiple speakers is needed. For example, in Figure FIGREF5 (a), headaches, the pain, it, and head bulging all refer to the patient's headache symptom, but they were uttered by different speakers and across multiple utterances and turns. In addition, anaphors are more likely to be omitted (see Figure FIGREF5 (a) A4), as this does not affect the human listener’s understanding, but it might be challenging for computational models. (2) Thinking aloud occurs more commonly: Since it is more effortless to speak than to type, one is more likely to reveal one's running thoughts when talking.
In addition, one cannot retract what has been uttered, while in text communications, one is more likely to confirm the accuracy of the information in a written response and revise it if necessary before sending it out. Thinking aloud can lead to self-contradiction, requiring more context to fully understand the dialogue; e.g., in A6 in Figure FIGREF5 (a), the patient at first says he has none of the symptoms asked about, but later revises his response saying that he does get dizzy after running. (3) Topic drift is more common and harder to detect in spoken conversations: An example is shown in Figure FIGREF5 (a) in A3, where No is actually referring to cough in the previous question, and then the topic is shifted to headache. In spoken conversations, utterances are often incomplete sentences, so traditional linguistic features used in written passages, such as punctuation marks indicating syntactic boundaries or conjunction words suggesting discourse relations, might no longer exist. Dialogue Comprehension Task Figure FIGREF5 (b) illustrates the proposed dialogue comprehension task using a question answering (QA) model. The inputs are a multi-turn symptom-checking dialogue INLINEFORM0 and a query INLINEFORM1 specifying a symptom with one of its attributes; the output is the extracted answer INLINEFORM2 from the given dialogue. A training or test sample is defined as INLINEFORM3. Five attributes, specifying certain details of clinical significance, are defined to characterize the answer types of INLINEFORM4: (1) the time the patient has been experiencing the symptom, (2) activities that trigger the symptom (to occur or worsen), (3) the extent of seriousness, (4) the frequency of occurrence of the symptom, and (5) the location of the symptom. Each symptom/attribute can take on different linguistic expressions, defined as entities. Note that if the queried symptom or attribute is not mentioned in the dialogue, the groundtruth output is “No Answer”, as in BIBREF6. Reading Comprehension Large-scale reading comprehension tasks like SQuAD BIBREF0 and MARCO BIBREF7 provide question-answer pairs from a vast range of written passages, covering different kinds of factual answers involving entities such as locations and numerical values. Furthermore, HotpotQA BIBREF8 requires multi-step inference and provides numerous answer types. CoQA BIBREF9 and QuAC BIBREF10 are designed to mimic multi-turn information-seeking discussions of the given material. In these tasks, contextual reasoning like coreference resolution is necessary to grasp rich linguistic patterns, encouraging semantic modeling beyond naive lexical matching. Neural networks contribute to impressive progress in semantic modeling: distributional semantic word embeddings BIBREF3, contextual sequence encoding BIBREF11, BIBREF12 and the attention mechanism BIBREF13, BIBREF14 are widely adopted in state-of-the-art comprehension models BIBREF1, BIBREF2, BIBREF4. While language understanding tasks in dialogue, such as domain identification BIBREF15, slot filling BIBREF16 and user intent detection BIBREF17, have attracted much research interest, work on dialogue comprehension is still limited, if existent at all. It is labor-intensive and time-consuming to obtain a critical mass of annotated conversation data for computational modeling. Some propose to collect text data from human-machine or machine-machine dialogues BIBREF18, BIBREF5.
In such cases, as human speakers are aware of current limitations of dialogue systems or due to pre-defined assumptions of user simulators, there are fewer cases of zero anaphora, thinking aloud, and topic drift, which occur more often in human-human spoken interactions. NLP for Healthcare There is emerging interest in research and development activities at the intersection of machine learning and healthcare, of which much of the NLP-related work is centered around social media or online forums (e.g., BIBREF19, BIBREF20), partially because the world wide web is a readily available source of information. Other work in this area uses public data sources such as MIMIC in electronic health records: text classification approaches have been applied to analyze unstructured clinical notes for ICD code assignment BIBREF21 and automatic intensive emergency prediction BIBREF22. Sequence-to-sequence textual generation has been used to produce readable notes based on medical and demographic recordings BIBREF23. For mental health, there has been more focus on analyzing dialogues. For example, sequential modeling of audio and text has helped detect depression from human-machine interviews BIBREF24. However, few studies have examined human-human spoken conversations in healthcare settings. Data Preparation We used recordings of nurse-initiated telephone conversations for congestive heart failure patients undergoing telemonitoring, post-discharge from the hospital. The clinical data was acquired by the Health Management Unit at Changi General Hospital. This research study was approved by the SingHealth Centralised Institutional Review Board (Protocol 1556561515). The patients were recruited during 2014-2016 as part of their routine care delivery, and enrolled into the telemonitoring health management program with consent for use of anonymized versions of their data for research. The dataset comprises a total of 353 conversations from 40 speakers (11 nurses, 16 patients, and 13 caregivers) with consent to the use of anonymized data for research. The speakers are 38 to 88 years old, equally distributed across gender, and comprise a range of ethnic groups (55% Chinese, 17% Malay, 14% Indian, 3% Eurasian, and 11% unspecified). The conversations cover 11 topics (e.g., medication compliance, symptom checking, education, greeting) and 9 symptoms (e.g., chest pain, cough) and amount to 41 hours. Data preprocessing and anonymization were performed by a data preparation team, separate from the data analysis team, to maintain data confidentiality. The data preparation team followed standard speech recognition transcription guidelines, where words are transcribed verbatim to include false starts, disfluencies, mispronunciations, and private self-talk. Confidential information was marked and clipped off from the audio and transcribed with predefined tags in the annotation. Conversation topics and clinical symptoms were also annotated and clinically validated by certified telehealth nurses. Linguistic Characterization of Seed Data To analyze the linguistic structure of the inquiry-response pairs in the entire 41-hour dataset, we randomly sampled a seed dataset consisting of 1,200 turns and manually categorized them into different types, which are summarized in Table TABREF14 along with the corresponding occurrence frequency statistics. Note that each given utterance could be categorized into more than one type. We elaborate on each utterance type below.
Open-ended Inquiry: Inquiries about general well-being or a particular symptom; e.g., “How are you feeling?” and “Do you cough?” Detailed Inquiry: Inquiries with specific details that prompt yes/no answers or clarifications; e.g., “Do you cough at night?” Multi-Intent Inquiry: Inquiring about more than one symptom in a question; e.g., “Any cough, chest pain, or headache?” Reconfirmation Inquiry: The nurse reconfirms particular details; e.g., “Really? At night?” and “Serious or mild?”. This case is usually related to explicit or implicit coreferencing. Inquiry with Transitional Clauses: During spoken conversations, one might repeat what the other party said, even though it is unrelated to the main clause of the question. This is usually due to private self-talk while thinking aloud, and such utterances form a transitional clause before the speaker starts a new topic; e.g., “Chest pain... no chest pain, I see... any cough?”. Yes/No Response: Yes/No responses seem straightforward, but sometimes lead to misunderstanding if one does not interpret the context appropriately. One case is tag questions: A: “You don't cough at night, do you?” B: “Yes, yes” A: “cough at night?” B: “No, no cough”. Usually, when the answer is unclear, clarifying inquiries will be asked for reconfirmation purposes. Detailed Response: Responses that contain specific information about one symptom, like “I felt tightness in my chest”. Response with Revision: Revision is infrequent but can affect comprehension significantly. One cause is thinking aloud, so a later response overrules the previous one; e.g., “No dizziness, oh wait... last week I felt a bit dizzy when biking”. Response with Topic Drift: When a symptom/topic like headache is inquired about, the response might be “Only some chest pain at night”, not referring to the original symptom (headache) at all. Response with Transitional Clauses: Repeating some of the previous content, but often unrelated to critical clinical information and usually followed by topic drift. For example, “Swelling... swelling... I don't cough at night”. Simulating a Symptom Monitoring Dataset for Training We divide the construction of the data simulation into two stages. In Section SECREF16, we build templates and expression pools using linguistic analysis followed by manual verification. In Section SECREF20, we present our proposed framework for generating simulated training data. The templates and framework are verified for logical correctness and clinical soundness. Template Construction Each utterance in the seed data is categorized according to Table TABREF14 and then abstracted into templates by replacing entity phrases like cough and often with the respective placeholders “#symptom#” and “#frequency#”. The templates are refined by linguistically trained researchers, who verify logical correctness and inject expression diversity. As these replacements do not alter the syntactic structure, we interchange such placeholders with various verbal expressions to enlarge the simulated training set in Section SECREF20. Clinical validation was also conducted by certified telehealth nurses. For the 9 symptoms (e.g., chest pain, cough) and 5 attributes (e.g., extent, frequency), we collect various expressions from the seed data and expand them through synonym replacement. Some attributes are unique to a particular symptom; e.g., “left leg” in #location# is only suitable to describe the symptom swelling, but not the symptom headache.
Therefore, we only reuse general expressions like “slight” in #extent# across different symptoms to diversify linguistic expressions. Two linguistically trained researchers constructed expression pools for each symptom and each attribute to account for different types of paraphrasing and descriptions. These expression pools are used in Section SECREF20 (c). Simulated Data Generation Framework Figure FIGREF15 shows the five steps we use to generate multi-turn symptom monitoring dialogue samples. (a) Topic Selection: While nurses might prefer to inquire about the symptoms in different orders depending on the patient's history, our preliminary analysis shows that modeling results do not differ noticeably if topics are of equal prior probabilities. Thus we adopt this assumption for simplicity. (b) Template Selection: For each selected topic, one inquiry template and one response template are randomly chosen to compose a turn. To minimize adverse effects of underfitting, we redistributed the frequencies in Table TABREF14: utterance types that are below 15% were boosted to 15%, while the overall relative distribution ranking remains balanced and consistent with Table TABREF14. (c) Enriching Linguistic Expressions: The placeholders in the selected templates are substituted with diverse expressions from the expression pools in Section UID19 to characterize the symptoms and their corresponding attributes. (d) Multi-Turn Dialogue State Tracking: A greedy algorithm is applied to complete conversations. A “completed symptoms” list and a “to-do symptoms” list are used for symptom topic tracking. We also track the “completed attributes” and “to-do attributes”. For each symptom, all related attributes are iterated. A dialogue ends only when all possible entities are exhausted, generating a multi-turn dialogue sample, which encourages the model to learn from the entire discussion flow rather than a single turn to comprehend contextual dependency. The average length of a simulated dialogue is 184 words, which happens to be twice as long as an average dialogue from the real-world evaluation set. Moreover, to model the roles of the respondents, we set the ratio between patients and caregivers to 2:1; this statistic is inspired by the real scenarios in the seed dataset. For both the caregivers and patients, we assume equal probability of both genders. The corresponding pronouns in the conversations are thus determined by the role and gender of these settings. (e) Multi-Turn Sample Annotation: For each multi-turn dialogue, a query is specified by a symptom and an attribute. The groundtruth output of the QA system is automatically labeled based on the template generation rules, but also manually verified to ensure annotation quality. Moreover, we adopt the unanswerable design of BIBREF6: when the patient does not mention a particular symptom, the answer is defined as “No Answer”. This process is repeated until all logical permutations of symptoms and attributes are exhausted. Experiments Model Design We implemented an established reading comprehension model, a bi-directional attention pointer network BIBREF1, and equipped it with an answerable classifier, as depicted in Figure FIGREF21. First, tokens in the given dialogue INLINEFORM0 and query INLINEFORM1 are converted into embedding vectors. Then the dialogue embeddings are fed to a bi-directional LSTM encoding layer, generating a sequence of contextual hidden states.
Next, the hidden states and query embeddings are processed by a bi-directional attention layer, fusing attention information in both context-to-query and query-to-context directions. The following two bi-directional LSTM modeling layers read the contextual sequence with attention. Finally, two respective linear layers with softmax functions are used to estimate the probabilities INLINEFORM3 and INLINEFORM4 that token INLINEFORM2 is the start or the end of the answer span INLINEFORM5. In addition, we add a special tag “[SEQ]” at the head of INLINEFORM0 to account for the case of “No answer” BIBREF4 and adopt an answerable classifier as in BIBREF25. More specifically, when the queried symptom or attribute is not mentioned in the dialogue, the answer span should point to the tag “[SEQ]” and the answerable probability should be predicted as 0. Implementation Details The model was trained via gradient backpropagation with cross-entropy loss functions for answer span prediction and answerable classification, optimized by the Adam algorithm BIBREF26 with an initial learning rate of INLINEFORM0. Pre-trained GloVe BIBREF3 embeddings (size INLINEFORM1) were used. We re-shuffled training samples at each epoch (batch size INLINEFORM2). Out-of-vocabulary words (INLINEFORM3) were replaced with a fixed random vector. L2 regularization and dropout (rate INLINEFORM4) were used to alleviate overfitting BIBREF27. Evaluation Setup To evaluate the effectiveness of our linguistically-inspired simulation approach, the model is trained on the simulated data (see Section SECREF20). We designed 3 evaluation sets: (1) Base Set (1,264 samples) held out from the simulated data. (2) Augmented Set (1,280 samples) built by adding two out-of-distribution symptoms, with corresponding dialogue contents and queries, to the Base Set (“bleeding” and “cold”, which never appeared in the training data). (3) Real-World Set (944 samples) manually delineated from the symptom checking portions (approximately 4 hours) of real-world dialogues, and annotated as evaluation samples. Results Evaluation results are shown in Table TABREF25, using the exact match (EM) and F1 metrics of BIBREF0. To distinguish the correct answer span from plausible ones which contain the same words, we measure the scores on the position indices of tokens. Our results show that both EM and F1 scores increase as the training sample size grows, with the optimal size in our setting being 100k. The best-trained model performs well on both the Base Set and the Augmented Set, indicating that out-of-distribution symptoms do not affect the comprehension of existing symptoms and that the model outputs reasonable answers for both in- and out-of-distribution symptoms. On the Real-World Set, we obtained an EM score of 78.23 and an F1 score of 80.18. Error analysis suggests the performance drop from the simulated test sets is due to the following: 1) sparsity issues resulting from the expression pools excluding various valid but sporadic expressions; 2) nurses and patients occasionally chit-chat in the Real-World Set, which is not simulated in the training set. At times, these chit-chats make the conversations overly lengthy, causing the information density to be lower. These issues could potentially distract and confuse the comprehension model. 3) an interesting type of infrequent error source, caused by patients elaborating on possible causal relations between two symptoms. For example, a patient might say “My giddiness may be due to all this cough”. We are currently investigating how to close this performance gap efficiently.
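To make the span-level metrics concrete, the following sketch computes EM and token-position F1 for a single example; it reflects our reading of the evaluation described above, with the answer span represented as (start, end) token indices (this representation is our assumption).

```python
def exact_match(pred_span, gold_span):
    """pred_span and gold_span are (start, end) token indices of the answer."""
    return float(pred_span == gold_span)

def span_f1(pred_span, gold_span):
    """Token-position F1 between the predicted and gold answer spans."""
    pred_tokens = set(range(pred_span[0], pred_span[1] + 1))
    gold_tokens = set(range(gold_span[0], gold_span[1] + 1))
    overlap = len(pred_tokens & gold_tokens)
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Example: the gold answer covers tokens 12-15, the prediction covers 12-14.
print(exact_match((12, 14), (12, 15)))  # 0.0
print(span_f1((12, 14), (12, 15)))      # ~0.857
```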
Ablation Analysis To assess the effectiveness of bi-directional attention, we bypassed the bi-attention layer by directly feeding the contextual hidden states and query embeddings to the modeling layer. To evaluate the pre-trained GloVe embeddings, we randomly initialized and trained the embeddings from scratch. These two procedures lead to 10% and 18% performance degradation on the Augmented Set and Real-World Set, respectively (see Table TABREF27 ). Conclusion We formulated a dialogue comprehension task motivated by the need in telehealth settings to extract key clinical information from spoken conversations between nurses and patients. We analyzed linguistic characteristics of real-world human-human symptom checking dialogues, constructed a simulated dataset based on linguistically inspired and clinically validated templates, and prototyped a QA system. The model works effectively on a simulated test set using symptoms excluded during training and on real-world conversations between nurses and patients. We are currently improving the model's dialogue comprehension capability in complex reasoning and context understanding and also applying the QA model to summarization and virtual nurse applications. Acknowledgements Research efforts were supported by funding for Digital Health and Deep Learning I2R (DL2 SSF Project No: A1718g0045) and the Science and Engineering Research Council (SERC Project No: A1818g0044), A*STAR, Singapore. In addition, this work was conducted using resources and infrastructure provided by the Human Language Technology unit at I2R. The telehealth data acquisition was funded by the Economic Development Board (EDB), Singapore Living Lab Fund and Philips Electronics – Hospital to Home Pilot Project (EDB grant reference number: S14-1035-RF-LLF H and W). We acknowledge valuable support and assistance from Yulong Lin, Yuanxin Xiang, and Ai Ti Aw at the Institute for Infocomm Research (I2R); Weiliang Huang at the Changi General Hospital (CGH) Department of Cardiology, Hong Choon Oh at CGH Health Services Research, and Edris Atikah Bte Ahmad, Chiu Yan Ang, and Mei Foon Yap of the CGH Integrated Care Department. We also thank Eduard Hovy and Bonnie Webber for insightful discussions and the anonymous reviewers for their precious feedback to help improve and extend this piece of work.
What labels do they create on their dataset?
(1) the time the patient has been experiencing the symptom, (2) activities that trigger the symptom (to occur or worsen), (3) the extent of seriousness, (4) the frequency occurrence of the symptom, and (5) the location of symptom, No Answer
3,424
qasper
4k
Introduction Word embeddings are representations of words in numerical form, as vectors of typically several hundred dimensions. The vectors are used as an input to machine learning models; for complex language processing tasks these are typically deep neural networks. The embedding vectors are obtained from specialized learning tasks, based on neural networks, e.g., word2vec BIBREF0, GloVe BIBREF1, FastText BIBREF2, ELMo BIBREF3, and BERT BIBREF4. For training, the embedding algorithms use large monolingual corpora and encode important information about word meaning as distances between vectors. In order to enable downstream machine learning on text understanding tasks, the embeddings should preserve semantic relations between words, and this should hold even across languages. Probably the best known word embeddings are produced by the word2vec method BIBREF5. The problem with word2vec embeddings is their failure to express polysemous words. During training of an embedding, all senses of a given word (e.g., paper as a material, as a newspaper, as a scientific work, and as an exam) contribute relevant information in proportion to their frequency in the training corpus. This causes the final vector to be placed somewhere in the weighted middle of all the word's meanings. Consequently, rare meanings of words are poorly expressed with word2vec and the resulting vectors do not offer good semantic representations. For example, none of the 50 closest vectors of the word paper is related to science. The idea of contextual embeddings is to generate a different vector for each context a word appears in, with the context typically defined sentence-wise. To a large extent, this solves the problems with word polysemy, i.e., the context of a sentence is typically enough to disambiguate different meanings of a word for humans, and so it is for the learning algorithms. In this work, we describe high-quality models for contextual embeddings, called ELMo BIBREF3, precomputed for seven morphologically rich, less-resourced languages: Slovenian, Croatian, Finnish, Estonian, Latvian, Lithuanian, and Swedish. ELMo is one of the most successful approaches to contextual word embeddings. At the time of its creation, ELMo was shown to outperform previous word embeddings BIBREF3 like word2vec and GloVe on many NLP tasks, e.g., question answering, named entity extraction, sentiment analysis, textual entailment, semantic role labeling, and coreference resolution. This report is split into five further sections. In section SECREF2, we describe the contextual embeddings ELMo. In Section SECREF3, we describe the datasets used, and in Section SECREF4 we describe the preprocessing and training of the embeddings. We describe the methodology for evaluation of the created vectors and the results in Section SECREF5. We present conclusions in Section SECREF6, where we also outline plans for further work. ELMo Typical word embedding models or representations, such as word2vec BIBREF0, GloVe BIBREF1, or FastText BIBREF2, are fast to train and have been pre-trained for a number of different languages. They do not capture the context, though, so each word is always given the same vector, regardless of its context or meaning. This is especially problematic for polysemous words. ELMo (Embeddings from Language Models) BIBREF3 is one of the state-of-the-art pretrained transfer learning models that remedy this problem by introducing a contextual component. The ELMo model's architecture consists of three neural network layers.
The output of the model after each layer gives one set of embeddings, altogether three sets. The first layer is a CNN layer, which operates on the character level. It is context independent, so each word always gets the same embedding, regardless of its context. It is followed by two biLM layers. A biLM layer consists of two concatenated LSTMs. In the first LSTM, we try to predict the following word, based on the given past words, where each word is represented by the embeddings from the CNN layer. In the second LSTM, we try to predict the preceding word, based on the given following words. It is equivalent to the first LSTM, just reading the text in reverse. In NLP tasks, any set of these embeddings may be used; however, a weighted average is usually used. The weights of the average are learned during the training of the model for the specific task. Additionally, an entire ELMo model can be fine-tuned on a specific end task. Although ELMo is trained at the character level and is able to handle out-of-vocabulary words, a vocabulary file containing the most common tokens is used for efficiency during training and embedding generation. The original ELMo model was trained on a one-billion-word English corpus, with a given vocabulary file of about 800,000 words. Later, ELMo models for other languages were trained as well, but limited to larger languages with many resources, like German and Japanese. ELMo ::: ELMoForManyLangs Recently, the ELMoForManyLangs project BIBREF6 released pre-trained ELMo models for a number of different languages BIBREF7. These models, however, were trained on significantly smaller datasets. They used 20 million words of data randomly sampled from the raw text released by the CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings BIBREF8, which is a combination of a Wikipedia dump and common crawl. The quality of these models is questionable. For example, we compared the Latvian model by ELMoForManyLangs with a model we trained on a complete (wikidump + common crawl) Latvian corpus, which has about 280 million tokens. The difference between the models on the word analogy task is shown in Figure FIGREF16 in Section SECREF5. As the results of the ELMoForManyLangs embeddings are significantly worse than those obtained using the full corpus, we conclude that these embeddings are not of sufficient quality. For that reason, we computed ELMo embeddings for seven languages on much larger corpora. As this effort requires access to large amounts of textual data and considerable computational resources, we made the precomputed models publicly available by depositing them in the CLARIN repository. Training Data We trained ELMo models for seven languages: Slovenian, Croatian, Finnish, Estonian, Latvian, Lithuanian and Swedish. To obtain high-quality embeddings, we used large monolingual corpora from various sources for each language. Some corpora are available online under permissive licences, others are available only for research purposes or have limited availability. The corpora used in the training datasets are a mix of news articles and general web crawl, which we preprocessed and deduplicated. Below we briefly describe the corpora used, in alphabetical order of the involved languages. Their names and sizes are summarized in Table TABREF3. The Croatian dataset includes the hrWaC 2.1 corpus BIBREF9, Riznica BIBREF10, and articles from the Croatian branch of the Styria media house, made available to us through partnership in a joint project.
hrWaC was built by crawling the .hr internet domain in 2011 and 2014. Riznica is composed of Croatian fiction and non-fiction prose, poetry, drama, textbooks, manuals, etc. The Styria dataset consists of 570,219 news articles published on the Croatian 24sata news portal and niche portals related to 24sata. The Estonian dataset contains texts from two sources, the CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings BIBREF8, and news articles made available to us by Ekspress Meedia due to partnership in the project. The Ekspress Meedia dataset is composed of Estonian news articles published between 2009 and 2019. The CoNLL 2017 corpus is composed of Estonian Wikipedia and webcrawl. The Finnish dataset contains articles by the Finnish news agency STT, the Finnish part of the CoNLL 2017 dataset, and the Ylilauta downloadable version BIBREF11. The STT news articles were published between 1992 and 2018. Ylilauta is a Finnish online discussion board; the corpus contains parts of the discussions from 2012 to 2014. The Latvian dataset consists only of the Latvian portion of the CoNLL 2017 corpus. The Lithuanian dataset is composed of Lithuanian Wikipedia articles from 2018, the DGT-UD corpus, and LtTenTen. DGT-UD is a parallel corpus of 23 official languages of the EU, composed of the JRC DGT translation memory of European law, automatically annotated with UDPipe 1.2. LtTenTen is a Lithuanian web corpus made up of texts collected from the internet in April 2014 BIBREF12. The Slovene dataset is formed from the Gigafida 2.0 corpus BIBREF13. It is a general language corpus composed of various sources, mostly newspapers, internet pages, and magazines, but also fiction and non-fiction prose, textbooks, etc. The Swedish dataset is composed of Swedish STT articles and the Swedish part of CoNLL 2017. The Finnish news agency STT publishes some of its articles in Swedish. They were made available to us through partnership in a joint project. The corpus contains those articles from 1992 to 2017. Preprocessing and Training Prior to training the ELMo models, we sentence- and word-tokenized all the datasets. The text was formatted in such a way that each sentence was on its own line, with tokens separated by white spaces. The CoNLL 2017, DGT-UD and LtTenTen14 corpora were already pre-tokenized. We tokenized the others using the NLTK library and its tokenizers for each of the languages. There is no tokenizer for Croatian in the NLTK library, so we used the Slovene tokenizer instead. After tokenization, we deduplicated the datasets for each language separately, using the Onion (ONe Instance ONly) tool for text deduplication. We applied the tool at the paragraph level for corpora that did not have sentences shuffled and at the sentence level for the rest. We considered 9-grams with a duplicate content threshold of 0.9. For each language we prepared a vocabulary file, containing roughly the one million most common tokens, i.e., tokens that appear at least $n$ times in the corpus, where $n$ is between 15 and 25, depending on the dataset size. We included the punctuation marks among the tokens. We trained each ELMo model using the default values used to train the original English ELMo (large) model. Evaluation We evaluated the produced ELMo models for all languages using two evaluation tasks: a word analogy task and a named entity recognition (NER) task. Below, we first briefly describe each task, followed by the evaluation results. Evaluation ::: Word Analogy Task The word analogy task was popularized by mikolov2013distributed.
The goal is to find a term $y$ for a given term $x$ so that the relationship between $x$ and $y$ best resembles the given relationship $a : b$. There are two main groups of categories: 5 semantic and 10 syntactic. To illustrate a semantic relationship, consider for example that the word pair $a : b$ is given as “Finland : Helsinki”. The task is to find the term $y$ corresponding to the relationship “Sweden : $y$”, with the expected answer being $y=$ Stockholm. In syntactic categories, the two words in a pair have a common stem (in some cases even the same lemma), with all the pairs in a given category having the same morphological relationship. For example, given the word pair “long : longer”, we have an adjective in its base form and the same adjective in a comparative form. The task is then to find the term $y$ corresponding to the relationship “dark : $y$”, with the expected answer being $y=$ darker, that is, the comparative form of the adjective dark. In the vector space, the analogy task is transformed into vector arithmetic and a search for nearest neighbours, i.e., we compute the distance between vectors, d(vec(Finland), vec(Helsinki)), and search for the word $y$ which gives the closest result for the distance d(vec(Sweden), vec($y$)). In the analogy dataset the analogies are already pre-specified, so we are measuring how close the given pairs are. In the evaluation below, we use analogy datasets for all tested languages based on the English dataset by BIBREF14. Due to the English-centered bias of this dataset, we used a modified dataset, which was first written in the Slovene language and then translated into the other languages BIBREF15. As each analogy instance contains only four words, without any context, the contextual models (such as ELMo) do not have enough context to generate sensible embeddings. We therefore used some additional text to form simple sentences using the four analogy words, while taking care that their noun case stays the same. For example, for the words "Rome", "Italy", "Paris" and "France" (forming the analogy Rome is to Italy as Paris is to $x$, where the correct answer is $x=$France), we formed the sentence "If the word Rome corresponds to the word Italy, then the word Paris corresponds to the word France". We generated embeddings for those four words in the constructed sentence, substituted the last word with each word in our vocabulary and generated the embeddings again. As is typical for the non-contextual analogy task, we measure the cosine distance ($d$) between the last word ($w_4$) and the combination of the first three words ($w_2-w_1+w_3$). We use the CSLS metric BIBREF16 to find the closest candidate word ($w_4$). If we find the correct word among the five closest words, we consider that entry as successfully identified. The proportion of correctly identified words forms a statistic called accuracy@5, which we report as the result. We first compare the existing Latvian ELMo embeddings from the ELMoForManyLangs project with our Latvian embeddings, followed by a detailed analysis of our ELMo embeddings. We trained the Latvian ELMo using only the CoNLL 2017 corpora. Since this is the only language where we trained the embedding model on exactly the same corpora as the ELMoForManyLangs models, we chose it for the comparison between our ELMo model and ELMoForManyLangs. In other languages, additional or other corpora were used, so a direct comparison would also reflect the quality of the corpora used for training. In Latvian, however, only the size of the training dataset is different.
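Before turning to this comparison, here is a rough sketch of the analogy scoring just described, simplified to plain cosine similarity (the actual evaluation uses the CSLS metric); `embed(word, sentence)` is a hypothetical function returning the contextual vector of a word in a sentence, and `vocab` is the vocabulary list.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def accuracy_at_5(embed, vocab, quadruples):
    """quadruples is a list of (w1, w2, w3, w4) analogy entries.

    Scoring below uses plain cosine similarity instead of CSLS for brevity."""
    hits = 0
    for w1, w2, w3, w4 in quadruples:
        sent = (f"If the word {w1} corresponds to the word {w2}, "
                f"then the word {w3} corresponds to the word {w4}")
        # Target combination w2 - w1 + w3, computed from contextual embeddings.
        target = embed(w2, sent) - embed(w1, sent) + embed(w3, sent)
        # Substitute the last word with every vocabulary word, re-embed, and rank.
        ranked = sorted(
            vocab,
            key=lambda cand: cosine(target, embed(cand, sent.replace(w4, cand))),
            reverse=True,
        )
        hits += int(w4 in ranked[:5])
    return hits / len(quadruples)
```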
We first compare the existing Latvian ELMo embeddings from the ELMoForManyLangs project with our Latvian embeddings, followed by a detailed analysis of our ELMo embeddings. We trained Latvian ELMo using only the CoNLL 2017 corpora. Since this is the only language where we trained the embedding model on exactly the same corpora as the ELMoForManyLangs models, we chose it for the comparison between our ELMo model and ELMoForManyLangs. In other languages, additional or other corpora were used, so a direct comparison would also reflect the quality of the corpora used for training. In Latvian, however, only the size of the training dataset is different: ELMoForManyLangs uses only 20 million tokens, while we use the whole corpus of 270 million tokens.

The Latvian ELMo model from the ELMoForManyLangs project performs significantly worse than the EMBEDDIA ELMo Latvian model on all categories of the word analogy task (Figure FIGREF16). We also include the comparison with our Estonian ELMo embeddings in the same figure. This comparison shows that while the differences between our Latvian and Estonian embeddings can be significant for certain categories, the accuracy score of ELMoForManyLangs is always worse than either of our models. The comparison of the Estonian and Latvian models leads us to believe that a few hundred million tokens is a sufficiently large corpus to train ELMo models (at least for the word analogy task), but the 20-million-token corpora used in ELMoForManyLangs are too small.

The results for all languages and all ELMo layers, averaged over semantic and syntactic categories, are shown in Table TABREF17. The embeddings after the first LSTM layer perform best in semantic categories. In syntactic categories, the non-contextual CNN layer performs the best. Syntactic categories are less context dependent and much more morphology and syntax based, so it is not surprising that the non-contextual layer performs well. The second LSTM layer embeddings perform the worst in syntactic categories, though they still outperform the CNN layer embeddings in semantic categories. Latvian ELMo performs worse than the other languages we trained, especially in semantic categories, presumably due to the smaller training data size. Surprisingly, the original English ELMo performs very poorly in syntactic categories and only outperforms Latvian in semantic categories. The low score can be partially explained by the English model scoring $0.00$ in one syntactic category, “opposite adjective”, which we have not been able to explain.

Evaluation ::: Named Entity Recognition

For the evaluation of ELMo models on a relevant downstream task, we used the named entity recognition (NER) task. NER is an information extraction task that seeks to locate and classify named entity mentions in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc. To allow comparison of results between languages, we used an adapted version of this task with a reduced set of labels, available in NER datasets for all processed languages. The labels in the used NER datasets are simplified to a common label set of three labels (person - PER, location - LOC, organization - ORG). Each word in the NER dataset is labeled with one of the three mentioned labels or with the label 'O' (other, i.e. not a named entity) if it does not fit any of the other three labels. The number of words having each label is shown in Table TABREF19.

To measure the performance of ELMo embeddings on the NER task we proceeded as follows. We embedded the text in the datasets sentence by sentence, producing three vectors (one from each ELMo layer) for each token in a sentence. We calculated the average of the three vectors and used it as the input to our recognition model. The input layer was followed by a single LSTM layer with 128 LSTM cells and a dropout layer, randomly dropping 10% of the neurons on both the output and the recurrent branch. The final layer of our model was a time-distributed softmax layer with 4 neurons. We used the ADAM optimiser BIBREF17 with a learning rate of 0.01 and a learning rate decay of $10^{-5}$. We used categorical cross-entropy as the loss function and trained the model for 3 epochs.
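A minimal sketch of this tagger in Keras is given below. It assumes pre-computed, averaged ELMo vectors as input; the 1024-dimensional embedding size and the padding length are illustrative assumptions, and the $10^{-5}$ learning-rate decay mentioned above would be added through an optimizer schedule, omitted here to keep the sketch version-agnostic.

from tensorflow import keras
from tensorflow.keras import layers

def build_ner_model(max_len, emb_dim=1024, n_labels=4):
    # Input: averaged ELMo vectors, one per token, padded to max_len.
    inputs = keras.Input(shape=(max_len, emb_dim))
    # Single LSTM layer with 128 cells; 10% dropout on the output and
    # recurrent branches, as described above.
    x = layers.LSTM(128, return_sequences=True,
                    dropout=0.1, recurrent_dropout=0.1)(inputs)
    # Time-distributed softmax over the 4 labels (PER, LOC, ORG, O).
    outputs = layers.TimeDistributed(
        layers.Dense(n_labels, activation="softmax"))(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.01),
                  loss="categorical_crossentropy")
    return model

# model = build_ner_model(max_len=128)
# model.fit(X_train, y_train, epochs=3)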
We present the results using the Macro $F_1$ score, that is, the average of the $F_1$-scores for each of the three NE classes (the class Other is excluded). Since the differences between the tested languages depend more on the properties of the NER datasets than on the quality of embeddings, we cannot directly compare ELMo models across languages. For this reason, we take the non-contextual fastText embeddings as a baseline and predict named entities using them. The architecture of the model using fastText embeddings is the same as the one using ELMo embeddings, except that the input uses 300-dimensional fastText embedding vectors, and the model was trained for 5 epochs (instead of 3 as for ELMo). In both cases (ELMo and fastText) we trained and evaluated the model five times, because there is some random component involved in the initialization of the neural network model. By training and evaluating multiple times, we minimise this random component.

The results are presented in Table TABREF21. We included the evaluation of the original English ELMo model in the same table. NER models have little difficulty distinguishing between types of named entities, but recognizing whether a word is a named entity or not is more difficult. For the languages with the smallest NER datasets, Croatian and Lithuanian, ELMo embeddings show the largest improvement over fastText embeddings. However, we can also observe significant improvements with ELMo on English and Finnish, which are among the largest datasets (English being by far the largest). Only on the Slovenian dataset did ELMo perform slightly worse than fastText; on all other EMBEDDIA languages, the ELMo embeddings improve the results.

Conclusion

We prepared precomputed ELMo contextual embeddings for seven languages: Croatian, Estonian, Finnish, Latvian, Lithuanian, Slovenian, and Swedish. We present the necessary background on embeddings and contextual embeddings, the details of training the embedding models, and their evaluation. We show that the size of the training sets importantly affects the quality of the produced embeddings, and that the existing publicly available ELMo embeddings for the processed languages are therefore inadequate. We trained new ELMo embeddings on larger training sets and analysed their properties on the analogy task and on the NER task. The results show that the newly produced contextual embeddings give substantially better results than the non-contextual fastText baseline. In future work, we plan to use the produced contextual embeddings on problems of the news media industry. The pretrained ELMo models will be deposited in the CLARIN repository by the time of the final version of this paper.

Acknowledgments

The work was partially supported by the Slovenian Research Agency (ARRS) core research programme P6-0411. This paper is supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No 825153, project EMBEDDIA (Cross-Lingual Embeddings for Less-Represented Languages in European News Media). The results of this publication reflect only the authors' view and the EU Commission is not responsible for any use that may be made of the information it contains.
Introduction

Long short-term memory (LSTM) units BIBREF1 are popular for many sequence modeling tasks and are used extensively in language modeling. A key to their success is their articulated gating structure, which allows for more control over the information passed along the recurrence. However, despite the sophistication of the gating mechanisms employed in LSTMs and similar recurrent units, the input and context vectors are treated with simple linear transformations prior to gating. Non-linear transformations such as convolutions BIBREF2 have been used, but these have not achieved the performance of well regularized LSTMs for language modeling BIBREF3. A natural way to improve the expressiveness of linear transformations is to increase the number of dimensions of the input and context vectors, but this comes with a significant increase in the number of parameters, which may limit generalizability. An example is shown in Figure FIGREF1, where LSTM performance decreases as the dimensions of the input and context vectors increase. Moreover, the semantics of the input and context vectors are different, suggesting that each may benefit from specialized treatment.

Guided by these insights, we introduce a new recurrent unit, the Pyramidal Recurrent Unit (PRU), which is based on the LSTM gating structure. Figure FIGREF2 provides an overview of the PRU. At the heart of the PRU is the pyramidal transformation (PT), which uses subsampling to effect multiple views of the input vector. The subsampled representations are combined in a pyramidal fusion structure, resulting in richer interactions between the individual dimensions of the input vector than is possible with a linear transformation. Context vectors, which have already undergone this transformation in the previous cell, are modified with a grouped linear transformation (GLT), which allows the network to learn latent representations in high dimensional space with fewer parameters and better generalizability (see Figure FIGREF1).

We show that PRUs can better model contextual information and demonstrate performance gains on the task of language modeling. The PRU improves the perplexity of the current state-of-the-art language model BIBREF0 by up to 1.3 points, reaching perplexities of 56.56 and 64.53 on the Penn Treebank and WikiText2 datasets while learning 15-20% fewer parameters. Replacing an LSTM with a PRU results in improvements in perplexity across a variety of experimental settings. We provide detailed ablations which motivate the design of the PRU architecture, as well as a detailed analysis of the effect of the PRU on other components of the language model.

Related work

Multiple methods, including a variety of gating structures and transformations, have been proposed to improve the performance of recurrent neural networks (RNNs). We first describe these approaches and then provide an overview of recent work in language modeling.

Pyramidal Recurrent Units

We introduce Pyramidal Recurrent Units (PRUs), a new RNN architecture which improves the modeling of context by allowing for higher dimensional vector representations while learning fewer parameters. Figure FIGREF2 provides an overview of the PRU. We first elaborate on the details of the pyramidal transformation and the grouped linear transformation. We then describe our recurrent unit, the PRU.
Pyramidal transformation for input

The basic transformation in many recurrent units is a linear transformation INLINEFORM0 defined as: DISPLAYFORM0 where INLINEFORM0 are learned weights that linearly map INLINEFORM1 to INLINEFORM2. To simplify notation, we omit the biases.

Motivated by successful applications of sub-sampling in computer vision (e.g., BIBREF22, BIBREF23, BIBREF9, BIBREF24), we subsample the input vector INLINEFORM0 into INLINEFORM1 pyramidal levels to achieve a representation of the input vector at multiple scales. This sub-sampling operation produces INLINEFORM2 vectors, represented as INLINEFORM3, where INLINEFORM4 is the sampling rate and INLINEFORM5. We learn scale-specific transformations INLINEFORM6 for each INLINEFORM7. The transformed subsamples are concatenated to produce the pyramidal analog to INLINEFORM8, here denoted as INLINEFORM9: DISPLAYFORM0 where INLINEFORM0 indicates concatenation. We note that the pyramidal transformation with INLINEFORM1 is the same as the linear transformation. To improve gradient flow inside the recurrent unit, we combine the input and output using an element-wise sum (when the dimensions match) to produce the residual analog of the pyramidal transformation, as shown in Figure FIGREF2 BIBREF25.

We sub-sample the input vector INLINEFORM0 into INLINEFORM1 pyramidal levels using the kernel-based approach BIBREF8, BIBREF9. Let us assume that we have a kernel INLINEFORM2 with INLINEFORM3 elements. Then, the input vector INLINEFORM4 can be sub-sampled as: DISPLAYFORM0 where INLINEFORM0 represents the stride and INLINEFORM1.

The number of parameters learned by the linear transformation and by the pyramidal transformation with INLINEFORM0 pyramidal levels to map INLINEFORM1 to INLINEFORM2 are INLINEFORM3 and INLINEFORM4, respectively. Thus, the pyramidal transformation reduces the parameters of a linear transformation by a factor of INLINEFORM5. For example, the pyramidal transformation (with INLINEFORM6 and INLINEFORM7) learns INLINEFORM8 fewer parameters than the linear transformation.
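A minimal PyTorch sketch of the pyramidal transformation is given below. It is an illustration rather than the authors' implementation: it fixes average pooling with a stride of 2 as the sub-sampling operation and divides the output dimension evenly across levels, both of which are assumptions, and the chosen dimensions are arbitrary.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidalTransform(nn.Module):
    """Sub-sample the input into K pyramidal levels, transform each level
    with its own linear map, concatenate, and add a residual connection
    when the dimensions match."""
    def __init__(self, in_dim, out_dim, levels=2):
        super().__init__()
        dims = [in_dim // (2 ** k) for k in range(levels)]
        self.linears = nn.ModuleList(
            nn.Linear(d, out_dim // levels) for d in dims)

    def forward(self, x):                      # x: (batch, in_dim)
        pieces = []
        for k, lin in enumerate(self.linears):
            if k == 0:
                xk = x                         # full-resolution view
            else:                              # average-pooled view
                xk = F.avg_pool1d(x.unsqueeze(1), kernel_size=2 ** k,
                                  stride=2 ** k).squeeze(1)
            pieces.append(lin(xk))
        y = torch.cat(pieces, dim=-1)
        return y + x if y.shape == x.shape else y   # residual when dims match

# pt = PyramidalTransform(in_dim=400, out_dim=400, levels=2)
# y = pt(torch.randn(8, 400))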
Grouped linear transformation for context

Many RNN architectures apply linear transformations to both the input and the context vector. However, this may not be ideal due to the differing semantics of each vector. In many NLP applications, including language modeling, the input vector is a dense word embedding which is shared across all contexts for a given word in a dataset. In contrast, the context vector is highly contextualized by the current sequence. The differences between the input and context vector motivate their separate treatment in the PRU architecture.

The weights learned using the linear transformation (Eq. EQREF9) are reused over multiple time steps, which makes them prone to over-fitting BIBREF26. To combat over-fitting, various methods, such as variational dropout BIBREF26 and weight dropout BIBREF0, have been proposed to regularize these recurrent connections. To further improve generalization abilities while simultaneously enabling the recurrent unit to learn representations in a very high dimensional space, we propose to use a grouped linear transformation (GLT) instead of the standard linear transformation for recurrent connections BIBREF27. While pyramidal and linear transformations can be applied to transform context vectors, our experimental results in Section SECREF39 suggest that GLTs are more effective.

The linear transformation INLINEFORM0 maps INLINEFORM1 linearly to INLINEFORM2. Grouped linear transformations break these linear interactions by factoring the linear transformation into two steps. First, a GLT splits the input vector INLINEFORM3 into INLINEFORM4 smaller groups such that INLINEFORM5. Second, a linear transformation INLINEFORM6 is applied to map INLINEFORM7 linearly to INLINEFORM8, for each INLINEFORM9. The INLINEFORM10 resultant output vectors INLINEFORM11 are concatenated to produce the final output vector INLINEFORM12. DISPLAYFORM0

GLTs learn representations at low dimensionality. Therefore, a GLT requires INLINEFORM0 fewer parameters than the linear transformation. We note that GLTs are a subset of linear transformations: in a linear transformation, each neuron receives an input from every element in the input vector, while in a GLT, each neuron receives an input from a subset of the input vector. Therefore, a GLT is the same as a linear transformation when INLINEFORM1.
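A short PyTorch sketch of a grouped linear transformation follows; the dimensions are illustrative only. With groups=1 it reduces to an ordinary linear layer, matching the note above.

import torch
import torch.nn as nn

class GroupedLinear(nn.Module):
    """Split the input into g groups, transform each group with its own
    smaller linear layer, and concatenate the outputs."""
    def __init__(self, in_dim, out_dim, groups=4):
        super().__init__()
        assert in_dim % groups == 0 and out_dim % groups == 0
        self.groups = groups
        self.linears = nn.ModuleList(
            nn.Linear(in_dim // groups, out_dim // groups)
            for _ in range(groups))

    def forward(self, x):                       # x: (batch, in_dim)
        chunks = x.chunk(self.groups, dim=-1)
        return torch.cat([lin(c) for lin, c in zip(self.linears, chunks)],
                         dim=-1)

# glt = GroupedLinear(in_dim=1200, out_dim=1200, groups=4)
# h = glt(torch.randn(8, 1200))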
Pyramidal Recurrent Unit

We extend the basic gating architecture of the LSTM with the pyramidal and grouped linear transformations outlined above to produce the Pyramidal Recurrent Unit (PRU), whose improved sequence modeling capacity is evidenced in Section SECREF4. At time INLINEFORM0, the PRU combines the input vector INLINEFORM1 and the previous context vector (or previous hidden state vector) INLINEFORM2 using the following transformation function: DISPLAYFORM0 where INLINEFORM0 indexes the various gates in the LSTM model, and INLINEFORM1 and INLINEFORM2 represent the pyramidal and grouped linear transformations defined in Eqns. EQREF10 and EQREF15, respectively.

We now incorporate INLINEFORM0 into the LSTM gating architecture to produce the PRU. At time INLINEFORM1, a PRU cell takes INLINEFORM2, INLINEFORM3, and INLINEFORM4 as inputs to produce forget INLINEFORM5, input INLINEFORM6, output INLINEFORM7, and content INLINEFORM8 gate signals. The inputs are combined with these gate signals to produce the context vector INLINEFORM9 and cell state INLINEFORM10. Mathematically, the PRU with the LSTM gating architecture can be defined as: DISPLAYFORM0 where INLINEFORM0 represents the element-wise multiplication operation, and INLINEFORM1 and INLINEFORM2 are the sigmoid and hyperbolic tangent activation functions. We note that the LSTM is a special case of the PRU when INLINEFORM3 = INLINEFORM4 = 1.

Experiments

To showcase the effectiveness of the PRU, we evaluate its performance on two standard datasets for word-level language modeling and compare with state-of-the-art methods. Additionally, we provide a detailed examination of the PRU and its behavior on the language modeling tasks.

Set-up

Following recent works, we compare on two widely used datasets, the Penn Treebank (PTB) BIBREF28 as prepared by BIBREF29 and WikiText2 (WT-2) BIBREF20. For both datasets, we follow the same training, validation, and test splits as in BIBREF0. We extend the language model AWD-LSTM BIBREF0 by replacing its LSTM layers with PRUs. Our model uses 3 layers of PRU with an embedding size of 400. The number of parameters learned by state-of-the-art methods varies from 18M to 66M, with the majority of the methods learning about 22M to 24M parameters on the PTB dataset. For a fair comparison with state-of-the-art methods, we fix the model size to 19M and vary the value of INLINEFORM0 and the hidden layer sizes so that the total number of learned parameters is similar across different configurations. We use 1000, 1200, and 1400 as hidden layer sizes for values of INLINEFORM1 = 1, 2, and 4, respectively. We use the same settings for the WT-2 dataset. We set the number of pyramidal levels INLINEFORM2 to two in our experiments and use average pooling for sub-sampling. These values are selected based on our ablation experiments on the validation set (Section SECREF39). We measure the performance of our models in terms of word-level perplexity. We follow the same training strategy as in BIBREF0.

To understand the effect of regularization methods on the performance of PRUs, we perform experiments under two different settings: (1) Standard dropout: we use standard dropout BIBREF12 with a probability of 0.5 after the embedding layer, on the output between LSTM layers, and on the output of the final LSTM layer. (2) Advanced dropout: we use the same dropout techniques with the same dropout values as in BIBREF0. We call this model AWD-PRU.

Results

Table TABREF23 compares the performance of the PRU with state-of-the-art methods. We can see that the PRU achieves the best performance with fewer parameters. PRUs achieve either the same or better performance than LSTMs. In particular, the performance of PRUs improves with increasing values of INLINEFORM0. At INLINEFORM1, PRUs outperform LSTMs by about 4 points on the PTB dataset and by about 3 points on the WT-2 dataset. This is explained in part by the regularization effect of the grouped linear transformation (Figure FIGREF1). With grouped linear and pyramidal transformations, PRUs learn rich representations in a very high dimensional space while learning fewer parameters. On the other hand, LSTMs overfit to the training data at such high dimensions and learn INLINEFORM2 to INLINEFORM3 more parameters than PRUs. With the advanced dropouts, the performance of PRUs improves by about 4 points on the PTB dataset and 7 points on the WT-2 dataset. This further improves with finetuning on the PTB (about 2 points) and WT-2 (about 1 point) datasets.

For a similar number of parameters, the PRU with standard dropout outperforms most of the state-of-the-art methods by a large margin on the PTB dataset (e.g. RAN BIBREF7 by 16 points with 4M fewer parameters, QRNN BIBREF33 by 16 points with 1M more parameters, and NAS BIBREF31 by 1.58 points with 6M fewer parameters). With advanced dropouts, the PRU delivers the best performance. On both datasets, the PRU improves the perplexity by about 1 point while learning 15-20% fewer parameters.

The PRU is a drop-in replacement for the LSTM; therefore, it can improve language models that use modern inference techniques such as dynamic evaluation BIBREF21. When we evaluate PRU-based language models (only with standard dropout) with dynamic evaluation on the PTB test set, the perplexity of the PRU (INLINEFORM0) improves from 62.42 to 55.23 while the perplexity of an LSTM (INLINEFORM1) with similar settings improves from 66.29 to 58.79, suggesting that modern inference techniques are equally applicable to PRU-based language models.

Analysis

It is shown above that the PRU can learn representations at higher dimensionality with more generalization power, resulting in performance gains for language modeling. A closer analysis of the impact of the PRU in a language modeling system reveals several factors that help explain how the PRU achieves these gains. As exemplified in Table TABREF34, the PRU tends toward more confident decisions, placing more of the probability mass on the top next-word prediction than the LSTM.
To quantify this effect, we calculate the entropy of the next-token distribution for both the PRU and the LSTM using 3687 contexts from the PTB validation set. Figure FIGREF32 shows a histogram of the entropies of the distributions, where bins of size 0.23 are used to form categories. We see that the PRU more often produces lower-entropy distributions corresponding to higher confidence in next-token choices. This is evidenced by the mass of the red PRU curve lying in the lower entropy ranges compared to the blue LSTM curve. The PRU can produce confident decisions in part because more information is encoded in the higher dimensional context vectors.

The PRU has the ability to model individual words at different resolutions through the pyramidal transform, which provides multiple paths for the gradient to the embedding layer (similar to multi-task learning) and improves the flow of information. When considering the embeddings by part of speech, we find that the pyramid level 1 embeddings exhibit higher variance than the LSTM across all POS categories (Figure FIGREF33), and that pyramid level 2 embeddings show extremely low variance. We hypothesize that the LSTM must encode both coarse group similarities and individual word differences into the same vector space, reducing the space between individual words of the same category. The PRU can rely on the subsampled embeddings to account for coarse-grained group similarities, allowing for finer individual word distinctions in the embedding layer. This hypothesis is strengthened by the entropy results described above: a model which can make finer distinctions between individual words can more confidently assign probability mass. A model that cannot make these distinctions, such as the LSTM, must spread its probability mass across a larger class of similar words.

Saliency analysis using gradients helps identify the relevant words in a test sequence that contribute to the prediction BIBREF34, BIBREF35, BIBREF36. These approaches compute the relevance as the squared norm of the gradients obtained through back-propagation. Table TABREF34 visualizes the heatmaps for different sequences. PRUs, in general, give more relevance to contextual words than LSTMs, such as southeast (sample 1), cost (sample 2), face (sample 4), and introduced (sample 5), which helps in making more confident decisions. Furthermore, when the gradients during back-propagation are visualized BIBREF37 (Table TABREF34), we find that PRUs have better gradient coverage than LSTMs, suggesting that PRUs use more of the features that contribute to the decision. This also suggests that PRUs update more parameters at each iteration, which results in faster training. The language model in BIBREF0 takes 500 and 750 epochs to converge with the PRU and the LSTM as the recurrent unit, respectively.
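The gradient-based saliency analysis can be sketched as follows. This is a schematic illustration rather than the authors' code: the model interface (a callable mapping input embeddings to per-position log-probabilities) is an assumption.

import torch

def token_saliency(model, embeddings, target_index):
    # embeddings: (seq_len, emb_dim) input embeddings; relevance of each
    # token is the squared norm of the gradient of the predicted word's
    # log-probability with respect to that token's embedding.
    embeddings = embeddings.clone().detach().requires_grad_(True)
    log_probs = model(embeddings)              # (seq_len, vocab_size) assumed
    log_probs[-1, target_index].backward()     # log-prob of the predicted word
    return (embeddings.grad ** 2).sum(dim=-1)  # one relevance score per token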
Ablation studies

In this section, we provide a systematic analysis of our design choices. Our training methodology is the same as described in Section SECREF19 with the standard dropouts. For a thorough understanding of our design choices, we use a language model with a single layer of PRU and fix the size of the embedding and hidden layers to 600. The word-level perplexities are reported on the validation sets of the PTB and the WT-2 datasets.

The two hyper-parameters that control the trade-off between performance and the number of parameters in PRUs are the number of pyramidal levels INLINEFORM0 and the number of groups INLINEFORM1. Figure FIGREF35 provides a trade-off between perplexity and recurrent unit (RU) parameters.

Variable INLINEFORM0 and fixed INLINEFORM1: When we increase the number of pyramidal levels INLINEFORM2 at a fixed value of INLINEFORM3, the performance of the PRU drops by about 1 to 4 points while the total number of recurrent unit parameters is reduced by up to 15%. We note that the PRU with INLINEFORM4 at INLINEFORM5 delivers similar performance to the LSTM while learning about 15% fewer recurrent unit parameters.

Fixed INLINEFORM0 and variable INLINEFORM1: When we vary the value of INLINEFORM2 at a fixed number of pyramidal levels INLINEFORM3, the total number of recurrent unit parameters decreases significantly with a minimal impact on the perplexity. For example, PRUs with INLINEFORM4 and INLINEFORM5 learn 77% fewer recurrent unit parameters while their perplexity (lower is better) increases by about 12% in comparison to LSTMs. Moreover, the decrease in the number of parameters at higher values of INLINEFORM6 enables PRUs to learn representations in a high dimensional space with better generalizability (Table TABREF23).

Table TABREF43 shows the impact of different transformations of the input vector INLINEFORM0 and the context vector INLINEFORM1. We make the following observations: (1) Using the pyramidal transformation for the input vectors improves the perplexity by about 1 point on both the PTB and WT-2 datasets while reducing the number of recurrent unit parameters by about 14% (see R1 and R4). We note that the performance of the PRU drops by up to 1 point when residual connections are not used (R4 and R6). (2) Using the grouped linear transformation for context vectors reduces the total number of recurrent unit parameters by about 75% while the performance drops by about 11% (see R3 and R4). When we use the pyramidal transformation instead of the linear transformation, the performance drops by up to 2% while there is no significant drop in the number of parameters (R4 and R5).

We set the sub-sampling kernel INLINEFORM0 (Eq. EQREF12) with stride INLINEFORM1 and size 3 (INLINEFORM2) in four different ways: (1) Skip: we skip every other element in the input vector. (2) Convolution: we initialize the elements of INLINEFORM3 randomly from a normal distribution and learn them during training; we limit the output values to between -1 and 1 using the INLINEFORM4 activation function to make training stable. (3) Avg. pool: we initialize the elements of INLINEFORM5 to INLINEFORM6. (4) Max pool: we select the maximum value in the kernel window INLINEFORM7. Table TABREF45 compares the performance of the PRU with the different sampling methods. Average pooling performs the best, while skipping gives comparable performance. Both of these methods enable the network to learn richer word representations while representing the input vector in different forms, thus delivering higher performance. Surprisingly, a convolution-based sub-sampling method does not perform as well as the averaging method. The INLINEFORM0 function used after the convolution limits the range of output values, which are further limited by the LSTM gating structure, thereby impeding the flow of information inside the cell. Max pooling forces the network to learn representations from high-magnitude elements, so the distinguishing features between elements vanish, resulting in poor performance.
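The four sub-sampling variants compared above can be sketched as follows in PyTorch. Kernel size 3 and stride 2 follow the text, while the padding and shape handling are illustrative assumptions; for the learned convolution, kernel is assumed to be a tensor of shape (1, 1, 3).

import torch
import torch.nn.functional as F

def subsample(x, method="avg", kernel=None):
    # x: (batch, dim); the pooling/conv primitives expect (batch, channels, dim)
    x3 = x.unsqueeze(1)
    if method == "skip":
        return x[:, ::2]                           # drop every other element
    if method == "avg":
        return F.avg_pool1d(x3, kernel_size=3, stride=2, padding=1).squeeze(1)
    if method == "max":
        return F.max_pool1d(x3, kernel_size=3, stride=2, padding=1).squeeze(1)
    if method == "conv":                           # learned kernel, tanh-limited
        return torch.tanh(F.conv1d(x3, kernel, stride=2, padding=1)).squeeze(1)
    raise ValueError(method)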
Conclusion

We introduce the Pyramidal Recurrent Unit, which better models contextual information by admitting higher dimensional representations with good generalizability. When applied to the task of language modeling, PRUs improve perplexity across several settings, including recent state-of-the-art systems. Our analysis shows that the PRU improves the flow of gradients and expands the word embedding subspace, resulting in more confident decisions. Here we have shown improvements for language modeling. In future work, we plan to study the performance of PRUs on different tasks, including machine translation and question answering. In addition, we will study the performance of the PRU on language modeling with more recent inference techniques, such as dynamic evaluation and mixture of softmaxes.

Acknowledgments

This research was supported by NSF (IIS 1616112, III 1703166), an Allen Distinguished Investigator Award, and gifts from the Allen Institute for AI, Google, Amazon, and Bloomberg. We are grateful to Aaron Jaech, Hannah Rashkin, Mandar Joshi, Aniruddha Kembhavi, and the anonymous reviewers for their helpful comments.
Introduction

Accurate grapheme-to-phoneme conversion (g2p) is important for any application that depends on the sometimes inconsistent relationship between spoken and written language. Most prominently, this includes text-to-speech and automatic speech recognition. Most work on g2p has focused on a few languages for which extensive pronunciation data is available BIBREF0, BIBREF1, BIBREF2, BIBREF3. Most languages lack these resources. However, a low resource language's writing system is likely to be similar to the writing systems of languages that do have sufficient pronunciation data. Therefore g2p may be possible for low resource languages if this high resource data can be properly utilized.

We attempt to leverage high resource data by treating g2p as a multisource neural machine translation (NMT) problem. The source sequences for our system are words in the standard orthography in any language. The target sequences are the corresponding representation in the International Phonetic Alphabet (IPA). Our results show that the parameters learned by the shared encoder–decoder are able to exploit the orthographic and phonemic similarities between the various languages in our data.

Low Resource g2p

Our approach is similar in goal to deri2016grapheme's model for adapting high resource g2p models for low resource languages. They trained weighted finite state transducer (wFST) models on a variety of high resource languages, then transferred those models to low resource languages, using a language distance metric to choose which high resource models to use and a phoneme distance metric to map the high resource language's phonemes to the low resource language's phoneme inventory. These distance metrics are computed based on data from Phoible BIBREF4 and URIEL BIBREF5.

Other low resource g2p systems have used a strategy of combining multiple models. schlippe2014combining trained several data-driven g2p systems on varying quantities of monolingual data and combined their outputs with a phoneme-level voting scheme. This led to improvements over the best-performing single system for small quantities of data in some languages. jyothilow trained recurrent neural networks for small data sets and found that a version of their system that combined the neural network output with the output of the wFST-based Phonetisaurus system BIBREF1 did better than either system alone.

A different approach came from kim2012universal, who used supervised learning with an undirected graphical model to induce the grapheme–phoneme mappings for languages written in the Latin alphabet. Given a short text in a language, the model predicts the language's orthographic rules. To create phonemic context features from the short text, the model naïvely maps graphemes to IPA symbols written with the same character, and uses the features of these symbols to learn an approximation of the phonotactic constraints of the language. In their experiments, these phonotactic features proved to be more valuable than geographical and genetic features drawn from WALS BIBREF6.

Multilingual Neural NLP

In recent years, neural networks have emerged as a common way to use data from several languages in a single system. Google's zero-shot neural machine translation system BIBREF7 shares an encoder and decoder across all language pairs. In order to facilitate this multi-way translation, they prepend an artificial token to the beginning of each source sentence at both training and translation time.
The token identifies what language the sentence should be translated to. This approach has three benefits: it is far more efficient than building a separate model for each language pair; it allows for translation between languages that share no parallel data; and it improves results on low-resource languages by allowing them to implicitly share parameters with high-resource languages. Our g2p system is inspired by this approach, although it differs in that there is only one target “language”, IPA, and the artificial tokens identify the language of the source instead of the language of the target.

Other work has also made use of multilingually-trained neural networks. Phoneme-level polyglot language models BIBREF8 train a single model on multiple languages and additionally condition on externally constructed typological data about the language. ostling2017continuous used a similar approach, in which a character-level neural language model is trained on a massively multilingual corpus. A language embedding vector is concatenated to the input at each time step. The language embeddings their system learned correlate closely with the genetic relationships between languages. However, neither of these models was applied to g2p.

Grapheme-to-Phoneme

g2p is the problem of converting the orthographic representation of a word into a phonemic representation. A phoneme is an abstract unit of sound which may have different realizations in different contexts. For example, the English phoneme has two phonetic realizations (or allophones): English speakers without linguistic training often struggle to perceive any difference between these sounds. Writing systems usually do not distinguish between allophones: and are both written as INLINEFORM0 p INLINEFORM1 in English. The sounds are written differently in languages where they contrast, such as Hindi and Eastern Armenian.

Most writing systems in use today are glottographic, meaning that their symbols encode solely phonological information. But despite being glottographic, in few writing systems do graphemes correspond one-to-one with phonemes. There are cases in which multiple graphemes represent a single phoneme, as in the word the in English. There are cases in which a single grapheme represents multiple phonemes, such as syllabaries, in which each symbol represents a syllable. In many languages, there are silent letters, as in the word hora in Spanish. There are more complicated correspondences, such as the silent e in English that affects the pronunciation of the previous vowel, as seen in the pair of words cape and cap.

It is possible for an orthographic system to have any or all of the above phenomena while remaining unambiguous. However, some orthographic systems contain ambiguities. English is well-known for its spelling ambiguities. Abjads, used for Arabic and Hebrew, do not give full representation to vowels. Consequently, g2p is harder than simply replacing each grapheme symbol with a corresponding phoneme symbol. It is the problem of replacing a grapheme sequence INLINEFORM0 with a phoneme sequence INLINEFORM0 where the sequences are not necessarily of the same length. Data-driven g2p is therefore the problem of finding the most likely phoneme sequence given the grapheme sequence: INLINEFORM0 Data-driven approaches are especially useful for problems in which the rules that govern them are complex and difficult to engineer by hand. g2p for languages with ambiguous orthographies is such a problem.
Multilingual g2p, in which the various languages have similar but different and possibly contradictory spelling rules, can be seen as an extreme case of that. Therefore, a data-driven sequence-to-sequence model is a natural choice.

Encoder–Decoder Models

In order to find the best phoneme sequence, we use a neural encoder–decoder model with attention BIBREF9. The model consists of two main parts: the encoder compresses each source grapheme sequence INLINEFORM0 into a fixed-length vector, and the decoder, conditioned on this fixed-length vector, generates the output phoneme sequence INLINEFORM1. The encoder and decoder are both implemented as recurrent neural networks, which have the advantage of being able to process sequences of arbitrary length and use long histories efficiently. They are trained jointly to minimize cross-entropy on the training data. We had our best results when using a bidirectional encoder, which consists of two separate encoders that process the input in forward and reverse directions. We used long short-term memory units BIBREF10 for both the encoder and decoder. For the attention mechanism, we used the general global attention architecture described by luong2015effective. We implemented all models with OpenNMT BIBREF11. Our hyperparameters, which we determined by experimentation, are listed in Table TABREF8.

Training Multilingual Models

Presenting pronunciation data in several languages to the network might create problems because different languages have different pronunciation patterns. For example, the string `real' is pronounced differently in English, German, Spanish, and Portuguese. We solve this problem by prepending each grapheme sequence with an artificial token consisting of the language's ISO 639-3 code enclosed in angle brackets. The English word `real', for example, would be presented to the system as INLINEFORM0 eng INLINEFORM1 r e a l. The artificial token is treated simply as an element of the grapheme sequence. This is similar to the approach taken by johnson2016google in their zero-shot NMT system. However, their source-side artificial tokens identify the target language, whereas ours identify the source language. An alternative approach, used by ostling2017continuous, would be to concatenate a language embedding to the input at each time step. They do not evaluate their approach on grapheme-to-phoneme conversion.
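A minimal sketch of this input formatting follows: the source side is the space-separated grapheme sequence prefixed with the artificial language token, and the target side is the phoneme sequence. The helper name and the IPA transcription in the usage comment are illustrative, not taken from the paper.

def make_example(word, ipa, lang):
    src = f"<{lang}> " + " ".join(word)        # e.g. "<eng> r e a l"
    tgt = " ".join(ipa)                        # phoneme symbols as tokens
    return src, tgt

# make_example("real", ["ɹ", "i", "ə", "l"], "eng")
# -> ("<eng> r e a l", "ɹ i ə l")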
Data

In order to train a neural g2p system, one needs a large quantity of pronunciation data. A standard dataset for g2p is the Carnegie Mellon Pronouncing Dictionary BIBREF12. However, that is a monolingual English resource, so it is unsuitable for our multilingual task. Instead, we use the multilingual pronunciation corpus collected by deri2016grapheme for all experiments. This corpus consists of spelling–pronunciation pairs extracted from Wiktionary. It is already partitioned into training and test sets. Corpus statistics are presented in Table TABREF10.

In addition to the raw IPA transcriptions extracted from Wiktionary, the corpus provides an automatically cleaned version of transcriptions. Cleaning is a necessary step because web-scraped data is often noisy and may be transcribed at an inconsistent level of detail. The data cleaning used here attempts to make the transcriptions consistent with the phonemic inventories used in Phoible BIBREF4. When a transcription contains a phoneme that is not in its language's inventory in Phoible, that phoneme is replaced by the phoneme with the most similar articulatory features that is in the language's inventory. Sometimes this cleaning algorithm works well: in the German examples in Table TABREF11, the raw German symbols and are both converted to . This is useful because the in Ansbach and the in Kaninchen are instances of the same phoneme, so their phonemic representations should use the same symbol. However, the cleaning algorithm can also have negative effects on the data quality. For example, the phoneme is not present in the Phoible inventory for German, but it is used in several German transcriptions in the corpus. The cleaning algorithm converts to in all German transcriptions, whereas would be a more reasonable guess. The cleaning algorithm also removes most suprasegmentals, even though these are often an important part of a language's phonology. Developing a more sophisticated procedure for cleaning pronunciation data is a direction for future work, but in this paper we use the corpus's provided cleaned transcriptions in order to ease comparison to previous results.

Experiments

We present experiments with two versions of our sequence-to-sequence model. LangID prepends each training, validation, and test sample with an artificial token identifying the language of the sample. NoLangID omits this token. LangID and NoLangID have identical structure otherwise. To translate the test corpus, we used a beam width of 100. Although this is an unusually wide beam and had negligible performance effects, it was necessary to compute our error metrics.

Evaluation

We use the following three evaluation metrics:

Phoneme Error Rate (PER) is the Levenshtein distance between the predicted phoneme sequences and the gold standard phoneme sequences, divided by the length of the gold standard phoneme sequences.

Word Error Rate (WER) is the percentage of words in which the predicted phoneme sequence does not exactly match the gold standard phoneme sequence.

Word Error Rate 100 (WER 100) is the percentage of words in the test set for which the correct guess is not in the first 100 guesses of the system.

In system evaluations, WER, WER 100, and PER numbers presented for multiple languages are averaged, weighting each language equally BIBREF13. It would be interesting to compute error metrics that incorporate phoneme similarity, such as those proposed by hixon2011phonemic. PER weights all phoneme errors the same, even though some errors are more harmful than others: and are usually contrastive, whereas and almost never are. Such statistics would be especially interesting for evaluating a multilingual system, because different languages often map the same grapheme to phonemes that are only subtly different from each other. However, these statistics have not been widely reported for other g2p systems, so we omit them here.
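A short sketch of the PER and WER computations is given below; the corpus-level averaging shown here (per-word PER averaged over words) is a simplifying assumption.

def levenshtein(a, b):
    # Standard dynamic-programming edit distance between two sequences.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (x != y)))   # substitution
        prev = cur
    return prev[-1]

def per_and_wer(predictions, references):
    # PER: edit distance divided by gold length; WER: % of exact mismatches.
    per = sum(levenshtein(p, r) / len(r) for p, r in zip(predictions, references))
    wer = sum(p != r for p, r in zip(predictions, references))
    n = len(references)
    return per / n, 100.0 * wer / n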
Baseline

Results on LangID and NoLangID are compared to the system presented by deri2016grapheme, which is identified in our results as wFST. Their results can be divided into two parts:

High resource results, computed with wFSTs trained on a combination of Wiktionary pronunciation data and g2p rules extracted from Wikipedia IPA Help pages. They report high resource results for 85 languages.

Adapted results, where they apply various mapping strategies in order to adapt high resource models to other languages. The final adapted results they reported include most of the 85 languages with high resource results, as well as the various languages they were able to adapt them for, for a total of 229 languages. This test set omits 23 of the high resource languages that are written in unique scripts or for which language distance metrics could not be computed.

Training

We train the LangID and NoLangID versions of our model each on three subsets of the Wiktionary data:

LangID-High and NoLangID-High: Trained on data from the 85 languages for which BIBREF13 used non-adapted wFST models.

LangID-Adapted and NoLangID-Adapted: Trained on data from any of the 229 languages for which they built adapted models. Because many of these languages had no training data at all, the model is actually only trained on data in 157 languages. As is noted above, the Adapted set omits 23 languages which are in the High test set.

LangID-All and NoLangID-All: Trained on data in all 311 languages in the Wiktionary training corpus.

In order to ease comparison to Deri and Knight's system, we limited our use of the training corpus to 10,000 words per language. We set aside 10 percent of the data in each language for validation, so the maximum number of training words for any language is 9000 for our systems.

Adapted Results

On the 229 languages for which deri2016grapheme presented their final results, the LangID version of our system outperforms the baseline by a wide margin. The best performance came with the version of our model that was trained on data in all available languages, not just the languages it was tested on. Using a language ID token improves results considerably, but even NoLangID beats the baseline in WER and WER 100. Full results are presented in Table TABREF24.

High Resource Results

Having shown that our model exceeds the performance of the wFST-adaptation approach, we next compare it to the baseline models for just high resource languages. The wFST models here are purely monolingual – they do not use data adaptation because there is sufficient training data for each of them. Full results are presented in Table TABREF26. We omit models trained on the Adapted languages because they were not trained on high resource languages with unique writing systems, such as Georgian and Greek, and consequently performed very poorly on them. In contrast to the larger-scale Adapted results, in the High Resource experiments none of the sequence-to-sequence approaches equal the performance of the wFST model in WER and PER, although LangID-High does come close. The LangID models do beat wFST in WER 100. A possible explanation is that a monolingual wFST model will never generate phonemes that are not part of the language's inventory. A multilingual model, on the other hand, could potentially generate phonemes from the inventories of any language it has been trained on. Even if LangID-High does not present a more accurate result, it does present a more compact one: LangID-High is 15.4 MB, while the combined wFST high resource models are 197.5 MB.

Results on Unseen Languages

Finally, we report our models' results on unseen languages in Table TABREF28. The unseen languages are any that are present in the test corpus but absent from the training data. Deri and Knight did not report results specifically on these languages. Although the NoLangID models sometimes do better on WER 100, even here the LangID models have a slight advantage in WER and PER.
This is somewhat surprising because the LangID models have not learned embeddings for the language ID tokens of unseen languages. Perhaps negative associations are also being learned, driving the model towards predicting more common pronunciations for unseen languages.

Language ID Tokens

Adding a language ID token always improves results in cases where an embedding has been learned for that token. The power of these embeddings is demonstrated by what happens when one feeds the same input word to the model with different language tokens, as is seen in Table TABREF30. Impressively, this even works when the source sequence is in the wrong script for the language, as is seen in the entry for Arabic.

Language Embeddings

Because these language ID tokens are so useful, it would be good if they could be effectively estimated for unseen languages. ostling2017continuous found that the language vectors their models learned correlated well with genetic relationships, so it would be interesting to see if the embeddings our source encoder learned for the language ID tokens showed anything similar. In a few cases they do (the languages closest to German in the vector space are Luxembourgish, Bavarian, and Yiddish, all close relatives). However, for the most part the structure of these vectors is not interpretable. Therefore, it would be difficult to estimate the embedding for an unseen language, or to “borrow” the language ID token of a similar language. A more promising way forward is to find a model that uses an externally constructed typological representation of the language.

Phoneme Embeddings

In contrast to the language embeddings, the phoneme embeddings appear to show many regularities (see Table TABREF33). This is a sign that our multilingual model learns similar embeddings for phonemes that are written with the same grapheme in different languages. These phonemes tend to be phonetically similar to each other. Perhaps the structure of the phoneme embedding space is what leads to our models' very good performance on WER 100. Even when the model's first predicted pronunciation is not correct, it tends to assign more probability mass to guesses that are more similar to the correct one. Applying some sort of filtering or reranking of the system output might therefore lead to better performance.

Future Work

Because the language ID token is so beneficial to performance, it would be very interesting to find ways to extend a similar benefit to unseen languages. One possible way to do so is with tokens that identify something other than the language, such as typological features about the language's phonemic inventory. This could enable better sharing of resources among languages. Such typological knowledge is readily available in databases like Phoible and WALS for a wide variety of languages. It would be interesting to explore if any of these features is a good predictor of a language's orthographic rules.

It would also be interesting to apply the artificial token approach to other problems besides multilingual g2p. One closely related application is monolingual English g2p. Some of the ambiguity of English spelling is due to the wide variety of loanwords in the language, many of which have unassimilated spellings. Knowing the origins of these loanwords could provide a useful hint for figuring out their pronunciations. The etymology of a word could be tagged in an analogous way to how language ID is tagged in multilingual g2p.
Introduction

While most NLP resources are English-specific, there have been several recent efforts to build multilingual benchmarks. One possibility is to collect and annotate data in multiple languages separately BIBREF0, but most existing datasets have been created through translation BIBREF1, BIBREF2. This approach has two desirable properties: it relies on existing professional translation services rather than requiring expertise in multiple languages, and it results in parallel evaluation sets that offer a meaningful measure of the cross-lingual transfer gap of different models. The resulting multilingual datasets are generally used for evaluation only, relying on existing English datasets for training.

Closely related to that, cross-lingual transfer learning aims to leverage large datasets available in one language—typically English—to build multilingual models that can generalize to other languages. Previous work has explored three main approaches to that end: machine translating the test set into English and using a monolingual English model (Translate-Test), machine translating the training set into each target language and training the models on their respective languages (Translate-Train), or using English data to fine-tune a multilingual model that is then transferred to the rest of the languages (Zero-Shot).

The dataset creation and transfer procedures described above result in a mixture of original, human translated and machine translated data when dealing with cross-lingual models. In fact, the type of text a system is trained on does not typically match the type of text it is exposed to at test time: Translate-Test systems are trained on original data and evaluated on machine translated test sets, Zero-Shot systems are trained on original data and evaluated on human translated test sets, and Translate-Train systems are trained on machine translated data and evaluated on human translated test sets. Although overlooked to date, we show that such a mismatch has a notable impact on the performance of existing cross-lingual models. By using back-translation BIBREF3 to paraphrase each training instance, we obtain another English version of the training set that better resembles the test set, obtaining substantial improvements for the Translate-Test and Zero-Shot approaches in cross-lingual Natural Language Inference (NLI).

While improvements brought by machine translation have previously been attributed to data augmentation BIBREF4, we reject this hypothesis and show that the phenomenon is only present in translated test sets, but not in original ones. Instead, our analysis reveals that this behavior is caused by subtle artifacts arising from the translation process itself. In particular, we show that translating different parts of each instance separately (e.g. the premise and the hypothesis in NLI) can alter superficial patterns in the data (e.g. the degree of lexical overlap between them), which severely affects the generalization ability of current models. Based on the gained insights, we improve the state-of-the-art in XNLI, and show that some previous findings need to be reconsidered in the light of this phenomenon.

Related work ::: Cross-lingual transfer learning.

Current cross-lingual models work by pre-training multilingual representations using some form of language modeling, which are then fine-tuned on the relevant task and transferred to different languages.
Some authors leverage parallel data to that end BIBREF5, BIBREF6, but training a model akin to BERT BIBREF7 on the combination of monolingual corpora in multiple languages is also effective BIBREF8. Closely related to our work, BIBREF4 showed that replacing segments of the training data with their translation during fine-tuning is helpful. However, they attribute this behavior to a data augmentation effect, which we believe should be reconsidered given the new evidence we provide.

Related work ::: Multilingual benchmarks.

Most benchmarks covering a wide set of languages have been created through translation, as is the case of XNLI BIBREF1 for NLI, PAWS-X BIBREF9 for adversarial paraphrase identification, and XQuAD BIBREF2 and MLQA BIBREF10 for Question Answering (QA). A notable exception is TyDi QA BIBREF0, a contemporaneous QA dataset that was separately annotated in 11 languages. Other cross-lingual datasets leverage existing multilingual resources, as is the case of MLDoc BIBREF11 for document classification and Wikiann BIBREF12 for named entity recognition. Concurrent to our work, BIBREF13 combine some of these datasets into a single multilingual benchmark, and evaluate some well-known methods on it.

Related work ::: Annotation artifacts.

Several studies have shown that NLI datasets like SNLI BIBREF14 and MultiNLI BIBREF15 contain spurious patterns that can be exploited to obtain strong results without making real inferential decisions. For instance, BIBREF16 and BIBREF17 showed that a hypothesis-only baseline performs better than chance due to cues in their lexical choice and sentence length. Similarly, BIBREF18 showed that NLI models tend to predict entailment for sentence pairs with a high lexical overlap. Several authors have worked on adversarial datasets to diagnose these issues and provide a more challenging benchmark BIBREF19, BIBREF20, BIBREF21. Besides NLI, other tasks like QA have also been found to be susceptible to annotation artifacts BIBREF22, BIBREF23. While previous work has focused on the monolingual scenario, we show that translation can interfere with these artifacts in multilingual settings.

Related work ::: Translationese.

Translated texts are known to have unique features like simplification, explicitation, normalization and interference, which are referred to as translationese BIBREF24. This phenomenon has been reported to have a notable impact on machine translation evaluation BIBREF25, BIBREF26. For instance, back-translation brings large BLEU gains for reversed test sets (i.e. when translationese is on the source side and original text is used as reference), but its effect diminishes in the natural direction BIBREF27. While connected, the phenomenon we analyze is different in that it arises from translation inconsistencies due to the lack of context, and affects cross-lingual transfer learning rather than machine translation.

Experimental design

Our goal is to analyze the effect of both human and machine translation on cross-lingual models. For that purpose, the core idea of our work is to (i) use machine translation to either translate the training set into other languages, or generate English paraphrases of it through back-translation, and (ii) evaluate the resulting systems on original, human translated and machine translated test sets in comparison with systems trained on original data. We next describe the models used in our experiments (§SECREF6), the specific training variants explored (§SECREF8), and the evaluation procedure followed (§SECREF10).
Experimental design ::: Models and transfer methods

We experiment with two models that are representative of the state-of-the-art in monolingual and cross-lingual pre-training: (i) Roberta BIBREF28, which is an improved version of BERT that uses masked language modeling to pre-train an English Transformer model, and (ii) XLM-R BIBREF8, which is a multilingual extension of the former pre-trained on 100 languages. In both cases, we use the large models released by the authors in the fairseq repository. As discussed next, we explore different variants of the training set to fine-tune each model on different tasks. At test time, we try both machine translating the test set into English (Translate-Test) and, in the case of XLM-R, using the actual test set in the target language (Zero-Shot).

Experimental design ::: Training variants

We try three variants of each training set to fine-tune our models: (i) the original one in English (Orig), (ii) an English paraphrase of it generated through back-translation using Spanish or Finnish as pivot (BT-ES and BT-FI), and (iii) a machine translated version in Spanish or Finnish (MT-ES and MT-FI). For sentences occurring multiple times in the training set (e.g. premises repeated for multiple hypotheses), we use the exact same translation for all occurrences, as our goal is to understand the inherent effect of translation rather than its potential application as a data augmentation method.

In order to train the machine translation systems for MT-XX and BT-XX, we use the big Transformer model BIBREF29 with the same settings as BIBREF30 and SentencePiece tokenization BIBREF31 with a joint vocabulary of 32k subwords. For English-Spanish, we train for 10 epochs on all parallel data from WMT 2013 BIBREF32 and ParaCrawl v5.0 BIBREF33. For English-Finnish, we train for 40 epochs on Europarl and Wiki Titles from WMT 2019 BIBREF34, ParaCrawl v5.0, and DGT, EUbookshop and TildeMODEL from OPUS BIBREF35. In both cases, we remove sentences longer than 250 tokens, with a source/target ratio exceeding 1.5, or for which langid.py BIBREF36 predicts a different language, resulting in a final corpus size of 48M and 7M sentence pairs, respectively. We use sampling decoding with a temperature of 0.5 for inference, which produces more diverse translations than beam search BIBREF37 and performed better in our preliminary experiments.
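The corpus filtering described above can be sketched as follows. This is an illustration of the stated criteria rather than the authors' preprocessing code; tokenization by whitespace and the exact ratio definition are assumptions.

import langid

def keep_pair(src, tgt, src_lang="en", tgt_lang="es",
              max_len=250, max_ratio=1.5):
    # Drop pairs that are too long, too unbalanced in length, or whose
    # detected language does not match the expected one (langid.py).
    src_tok, tgt_tok = src.split(), tgt.split()
    if len(src_tok) > max_len or len(tgt_tok) > max_len:
        return False
    ratio = max(len(src_tok), len(tgt_tok)) / max(1, min(len(src_tok), len(tgt_tok)))
    if ratio > max_ratio:
        return False
    if langid.classify(src)[0] != src_lang or langid.classify(tgt)[0] != tgt_lang:
        return False
    return True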
We fine-tune our models on SQuAD v1.1 BIBREF38 for 2 epochs using the same settings as BIBREF28, and report test results for the last epoch. We use two datasets for evaluation: XQuAD BIBREF2, a subset of the SQuAD development set translated into 10 other languages, and MLQA BIBREF10 a dataset consisting of parallel context paragraphs plus the corresponding questions annotated in English and translated into 6 other languages. In both cases, the translation was done by professional translators at the document level (i.e. when translating a question, the text answering it was also shown). For our BT-XX and MT-XX variants, we translate the context paragraph and the questions independently, and map the answer spans using the same procedure as BIBREF39. For the Translate-Test approach, we use the official machine translated versions of MLQA, run inference over them, and map the predicted answer spans back to the target language. Both for NLI and QA, we run each system 5 times with different random seeds and report the average results. Space permitting, we also report the standard deviation across the 5 runs. NLI experiments We next discuss our main results in the XNLI development set (§SECREF15, §SECREF16), run additional experiments to better understand the behavior of our different variants (§SECREF17, §SECREF22, §SECREF25), and compare our results to previous work in the XNLI test set (§SECREF30). NLI experiments ::: Translate-Test results We start by analyzing XNLI development results for Translate-Test. Recall that, in this approach, the test set is machine translated into English, but training is typically done on original English data. Our BT-ES and BT-FI variants close this gap by training on a machine translated English version of the training set generated through back-translation. As shown in Table TABREF9, this brings substantial gains for both Roberta and XLM-R, with an average improvement of 4.6 points in the best case. Quite remarkably, MT-ES and MT-FI also outperform Orig by a substantial margin, and are only 0.8 points below their BT-ES and BT-FI counterparts. Recall that, for these two systems, training is done in machine translated Spanish or Finnish, while inference is done in machine translated English. This shows that the loss of performance when generalizing from original data to machine translated data is substantially larger than the loss of performance when generalizing from one language to another. NLI experiments ::: Zero-Shot results We next analyze the results for the Zero-Shot approach. In this case, inference is done in the test set in each target language which, in the case of XNLI, was human translated from English. As such, different from the Translate-Test approach, neither training on original data (Orig) nor training on machine translated data (BT-XX and MT-XX) makes use of the exact same type of text that the system is exposed to at test time. However, as shown in Table TABREF9, both BT-XX and MT-XX outperform Orig by approximately 2 points, which suggests that our (back-)translated versions of the training set are more similar to the human translated test sets than the original one. This also provides a new perspective on the Translate-Train approach, which was reported to outperform Orig in previous work BIBREF5: while the original motivation was to train the model on the same language that it is tested on, our results show that machine translating the training set is beneficial even when the target language is different. NLI experiments ::: Original vs. 
translated test sets So as to understand whether the improvements observed so far are limited to translated test sets or apply more generally, we conduct additional experiments comparing translated test sets to original ones. However, to the best of our knowledge, all existing non-English NLI benchmarks were created through translation. For that reason, we build a new test set that mimics XNLI, but is annotated in Spanish rather than English. We first collect the premises from a filtered version of CommonCrawl BIBREF42, taking a subset of 5 websites that represent a diverse set of genres: a newspaper, an economy forum, a celebrity magazine, a literature blog, and a consumer magazine. We then ask native Spanish annotators to generate an entailment, a neutral and a contradiction hypothesis for each premise. We collect a total of 2490 examples using this procedure, which is the same size as the XNLI development set. Finally, we create a human translated and a machine translated English version of the dataset using professional translators from Gengo and our machine translation system described in §SECREF8, respectively. We report results for the best epoch checkpoint on each set. As shown in Table TABREF18, both BT-XX and MT-XX clearly outperform Orig in all test sets created through translation, which is consistent with our previous results. In contrast, the best results on the original English set are obtained by Orig, and neither BT-XX nor MT-XX obtain any clear improvement on the one in Spanish either. This confirms that the underlying phenomenon is limited to translated test sets. In addition, it is worth mentioning that the results for the machine translated test set in English are slightly better than those for the human translated one, which suggests that the difficulty of the task does not only depend on the translation quality. Finally, it is also interesting that MT-ES is only marginally better than MT-FI in both Spanish test sets, even if it corresponds to the Translate-Train approach, whereas MT-FI needs to Zero-Shot transfer from Finnish into Spanish. This reinforces the idea that it is training on translated data rather than training on the target language that is key in Translate-Train. NLI experiments ::: Stress tests In order to better understand how systems trained on original and translated data differ, we run additional experiments on the NLI Stress Tests BIBREF19, which were designed to test the robustness of NLI models to specific linguistic phenomena in English. The benchmark consists of a competence test, which evaluates the ability to understand antonymy relation and perform numerical reasoning, a distraction test, which evaluates the robustness to shallow patterns like lexical overlap and the presence of negation words, and a noise test, which evaluates robustness to spelling errors. Just as with previous experiments, we report results for the best epoch checkpoint in each test set. As shown in Table TABREF23, Orig outperforms BT-FI and MT-FI on the competence test by a large margin, but the opposite is true on the distraction test. In particular, our results show that BT-FI and MT-FI are less reliant on lexical overlap and the presence of negative words. This feels intuitive, as translating the premise and hypothesis independently—as BT-FI and MT-FI do—is likely to reduce the lexical overlap between them. More generally, the translation process can alter similar superficial patterns in the data, which NLI models are sensitive to (§SECREF2). 
This would explain why the resulting models have a different behavior on different stress tests. NLI experiments ::: Output class distribution With the aim to understand the effect of the previous phenomenon in cross-lingual settings, we look at the output class distribution of our different models in the XNLI development set. As shown in Table TABREF28, the predictions of all systems are close to the true class distribution in the case of English. Nevertheless, Orig is strongly biased for the rest of languages, and tends to underpredict entailment and overpredict neutral. This can again be attributed to the fact that the English test set is original, whereas the rest are human translated. In particular, it is well-known that NLI models tend to predict entailment when there is a high lexical overlap between the premise and the hypothesis (§SECREF2). However, the degree of overlap will be smaller in the human translated test sets given that the premise and the hypothesis were translated independently, which explains why entailment is underpredicted. In contrast, BT-FI and MT-FI are exposed to the exact same phenomenon during training, which explains why they are not that heavily affected. So as to measure the impact of this phenomenon, we explore a simple approach to correct this bias: having fine-tuned each model, we adjust the bias term added to the logit of each class so the model predictions match the true class distribution for each language. As shown in Table TABREF29, this brings large improvements for Orig, but is less effective for BT-FI and MT-FI. This shows that the performance of Orig was considerably hindered by this bias, which BT-FI and MT-FI effectively mitigate. NLI experiments ::: Comparison with the state-of-the-art So as to put our results into perspective, we compare our best variant to previous work on the XNLI test set. As shown in Table TABREF31, our method improves the state-of-the-art for both the Translate-Test and the Zero-Shot approaches by 4.3 and 2.8 points, respectively. It also obtains the best overall results published to date, with the additional advantage that the previous state-of-the-art required a machine translation system between English and each of the 14 target languages, whereas our method uses a single machine translation system between English and Finnish (which is not one of the target languages). While the main goal of our work is not to design better cross-lingual models, but to analyze their behavior in connection to translation, this shows that the phenomenon under study is highly relevant, to the extent that it can be exploited to improve the state-of-the-art. QA experiments So as to understand whether our previous findings apply to other tasks besides NLI, we run additional experiments on QA. As shown in Table TABREF32, BT-FI and BT-ES do indeed outperform Orig for the Translate-Test approach on MLQA. The improvement is modest, but very consistent across different languages, models and runs. The results for MT-ES and MT-FI are less conclusive, presumably because mapping the answer spans across languages might introduce some noise. In contrast, we do not observe any clear improvement for the Zero-Shot approach on this dataset. Our XQuAD results in Table TABREF33 are more positive, but still inconclusive. 
These results can partly be explained by the translation procedure used to create the different benchmarks: the premises and hypotheses of XNLI were translated independently, whereas the questions and context paragraphs of XQuAD were translated together. Similarly, MLQA made use of parallel contexts, and translators were shown the sentence containing each answer when translating the corresponding question. As a result, one can expect both QA benchmarks to have more consistent translations than XNLI, which would in turn diminish this phenomenon. In contrast, the questions and context paragraphs are independently translated when using machine translation, which explains why BT-ES and BT-FI outperform Orig for the Translate-Test approach. We conclude that the translation artifacts revealed by our analysis are not exclusive to NLI, as they also show up on QA for the Translate-Test approach, but their actual impact can be highly dependent on the translation procedure used and the nature of the task. Discussion Our analysis prompts us to reconsider previous findings in cross-lingual transfer learning as follows: Discussion ::: The cross-lingual transfer gap on XNLI was overestimated. Given the parallel nature of XNLI, accuracy differences across languages are commonly interpreted as the loss of performance when generalizing from English to the rest of the languages. However, our work shows that there is another factor that can have a much larger impact: the loss of performance when generalizing from original to translated data. Our results suggest that the real cross-lingual generalization ability of XLM-R is considerably better than what the accuracy numbers in XNLI reflect. Discussion ::: Overcoming the cross-lingual gap is not what makes Translate-Train work. The original motivation for Translate-Train was to train the model on the same language it is tested on. However, we show that it is training on translated data, rather than training on the target language, that is key for this approach to outperform Zero-Shot, as reported by previous authors. Discussion ::: Improvements previously attributed to data augmentation should be reconsidered. The method by BIBREF4 combines machine translated premises and hypotheses in different languages (§SECREF2), resulting in an effect similar to BT-XX and MT-XX. As such, we believe that this method should be analyzed from the point of view of dataset artifacts rather than data augmentation, as its authors do. From this perspective, having the premise and the hypothesis in different languages can reduce the superficial patterns between them, which would explain why this approach is better than using examples in a single language. Discussion ::: The potential of Translate-Test was underestimated. The previous best results for Translate-Test on XNLI lagged behind the state-of-the-art by 4.6 points. Our work reduces this gap to only 0.8 points by addressing the underlying translation artifacts. The reason why Translate-Test is more severely affected by this phenomenon is twofold: (i) the effect is doubled by first using human translation to create the test set and then machine translation to translate it back to English, and (ii) Translate-Train was inadvertently mitigating this issue (see above), but equivalent techniques were never applied to Translate-Test. Discussion ::: Future evaluation should better account for translation artifacts. The evaluation issues raised by our analysis do not have a simple solution.
In fact, while we use the term translation artifacts to highlight that they are an unintended effect of translation that impacts final evaluation, one could also argue that it is the original datasets that contain the artifacts, which translation simply alters or even mitigates. In any case, this is a more general issue that falls beyond the scope of cross-lingual transfer learning, so we argue that it should be carefully controlled when evaluating cross-lingual models. In the absence of more robust datasets, we recommend that future multilingual benchmarks should at least provide consistent test sets for English and the rest of languages. This can be achieved by (i) using original annotations in all languages, (ii) using original annotations in a non-English language and translating them into English and other languages, or (iii) if translating from English, doing so at the document level to minimize translation inconsistencies. Conclusions In this paper, we have shown that both human and machine translation can alter superficial patterns in data, which requires reconsidering previous findings in cross-lingual transfer learning. Based on the gained insights, we have improved the state-of-the-art in XNLI for the Translate-Test and Zero-Shot approaches by a substantial margin. Finally, we have shown that the phenomenon is not specific to NLI but also affects QA, although it is less pronounced there thanks to the translation procedure used in the corresponding benchmarks. So as to facilitate similar studies in the future, we release our NLI dataset, which, unlike previous benchmarks, was annotated in a non-English language and human translated into English. Acknowledgments We thank Nora Aranberri and Uxoa Iñurrieta for helpful discussion during the development of this work, as well as the rest of our colleagues from the IXA group that worked as annotators for our NLI dataset. This research was partially funded by a Facebook Fellowship, the Basque Government excellence research group (IT1343-19), the Spanish MINECO (UnsupMT TIN2017‐91692‐EXP MCIU/AEI/FEDER, UE), Project BigKnowledge (Ayudas Fundación BBVA a equipos de investigación científica 2018), and the NVIDIA GPU grant program.
Introduction The task of generating natural language descriptions of structured data (such as tables) BIBREF2 , BIBREF3 , BIBREF4 has seen a growth in interest with the rise of sequence to sequence models that provide an easy way of encoding tables and generating text from them BIBREF0 , BIBREF1 , BIBREF5 , BIBREF6 . For text generation tasks, the only gold standard metric is to show the output to humans for judging its quality, but this is too expensive to apply repeatedly anytime small modifications are made to a system. Hence, automatic metrics that compare the generated text to one or more reference texts are routinely used to compare models BIBREF7 . For table-to-text generation, automatic evaluation has largely relied on BLEU BIBREF8 and ROUGE BIBREF9 . The underlying assumption behind these metrics is that the reference text is gold-standard, i.e., it is the ideal target text that a system should generate. In practice, however, when datasets are collected automatically and heuristically, the reference texts are often not ideal. Figure FIGREF2 shows an example from the WikiBio dataset BIBREF0 . Here the reference contains extra information which no system can be expected to produce given only the associated table. We call such reference texts divergent from the table. We show that existing automatic metrics, including BLEU, correlate poorly with human judgments when the evaluation sets contain divergent references (§ SECREF36 ). For many table-to-text generation tasks, the tables themselves are in a pseudo-natural language format (e.g., WikiBio, WebNLG BIBREF6 , and E2E-NLG BIBREF10 ). In such cases we propose to compare the generated text to the underlying table as well to improve evaluation. We develop a new metric, PARENT (Precision And Recall of Entailed N-grams from the Table) (§ SECREF3 ). When computing precision, PARENT effectively uses a union of the reference and the table, to reward correct information missing from the reference. When computing recall, it uses an intersection of the reference and the table, to ignore extra incorrect information in the reference. The union and intersection are computed with the help of an entailment model to decide if a text n-gram is entailed by the table. We show that this method is more effective than using the table as an additional reference. Our main contributions are: Table-to-Text Generation We briefly review the task of generating natural language descriptions of semi-structured data, which we refer to as tables henceforth BIBREF11 , BIBREF12 . Tables can be expressed as set of records INLINEFORM0 , where each record is a tuple (entity, attribute, value). When all the records are about the same entity, we can truncate the records to (attribute, value) pairs. For example, for the table in Figure FIGREF2 , the records are {(Birth Name, Michael Dahlquist), (Born, December 22 1965), ...}. The task is to generate a text INLINEFORM1 which summarizes the records in a fluent and grammatical manner. For training and evaluation we further assume that we have a reference description INLINEFORM2 available for each table. We let INLINEFORM3 denote an evaluation set of tables, references and texts generated from a model INLINEFORM4 , and INLINEFORM5 , INLINEFORM6 denote the collection of n-grams of order INLINEFORM7 in INLINEFORM8 and INLINEFORM9 , respectively. We use INLINEFORM10 to denote the count of n-gram INLINEFORM11 in INLINEFORM12 , and INLINEFORM13 to denote the minimum of its counts in INLINEFORM14 and INLINEFORM15 . 
Our goal is to assign a score to the model, which correlates highly with human judgments of the quality of that model. PARENT PARENT evaluates each instance INLINEFORM0 separately, by computing the precision and recall of INLINEFORM1 against both INLINEFORM2 and INLINEFORM3 . Evaluation via Information Extraction BIBREF1 proposed to use an auxiliary model, trained to extract structured records from text, for evaluation. However, the extraction model presented in that work is limited to the closed-domain setting of basketball game tables and summaries. In particular, they assume that each table has exactly the same set of attributes for each entity, and that the entities can be identified in the text via string matching. These assumptions are not valid for the open-domain WikiBio dataset, and hence we train our own extraction model to replicate their evaluation scheme. Our extraction system is a pointer-generator network BIBREF19 , which learns to produce a linearized version of the table from the text. The network learns which attributes need to be populated in the output table, along with their values. It is trained on the training set of WikiBio. At test time we parsed the output strings into a set of (attribute, value) tuples and compare it to the ground truth table. The F-score of this text-to-table system was INLINEFORM0 , which is comparable to other challenging open-domain settings BIBREF20 . More details are included in the Appendix SECREF52 . Given this information extraction system, we consider the following metrics for evaluation, along the lines of BIBREF1 . Content Selection (CS): F-score for the (attribute, value) pairs extracted from the generated text compared to those extracted from the reference. Relation Generation (RG): Precision for the (attribute, value) pairs extracted from the generated text compared to those in the ground truth table. RG-F: Since our task emphasizes the recall of information from the table as well, we consider another variant which computes the F-score of the extracted pairs to those in the table. We omit the content ordering metric, since our extraction system does not align records to the input text. Experiments & Results In this section we compare several automatic evaluation metrics by checking their correlation with the scores assigned by humans to table-to-text models. Specifically, given INLINEFORM0 models INLINEFORM1 , and their outputs on an evaluation set, we show these generated texts to humans to judge their quality, and obtain aggregated human evaluation scores for all the models, INLINEFORM2 (§ SECREF33 ). Next, to evaluate an automatic metric, we compute the scores it assigns to each model, INLINEFORM3 , and check the Pearson correlation between INLINEFORM4 and INLINEFORM5 BIBREF21 . Data & Models Our main experiments are on the WikiBio dataset BIBREF0 , which is automatically constructed and contains many divergent references. In § SECREF47 we also present results on the data released as part of the WebNLG challenge. We developed several models of varying quality for generating text from the tables in WikiBio. This gives us a diverse set of outputs to evaluate the automatic metrics on. Table TABREF32 lists the models along with their hyperparameter settings and their scores from the human evaluation (§ SECREF33 ). Our focus is primarily on neural sequence-to-sequence methods since these are most widely used, but we also include a template-based baseline. All neural models were trained on the WikiBio training set. 
Training details and sample outputs are included in Appendices SECREF56 & SECREF57 . We divide these models into two categories and measure correlation separately for both the categories. The first category, WikiBio-Systems, includes one model each from the four families listed in Table TABREF32 . This category tests whether a metric can be used to compare different model families with a large variation in the quality of their outputs. The second category, WikiBio-Hyperparams, includes 13 different hyperparameter settings of PG-Net BIBREF19 , which was the best performing system overall. 9 of these were obtained by varying the beam size and length normalization penalty of the decoder network BIBREF23 , and the remaining 4 were obtained by re-scoring beams of size 8 with the information extraction model described in § SECREF4 . All the models in this category produce high quality fluent texts, and differ primarily on the quantity and accuracy of the information they express. Here we are testing whether a metric can be used to compare similar systems with a small variation in performance. This is an important use-case as metrics are often used to tune hyperparameters of a model. Human Evaluation We collected human judgments on the quality of the 16 models trained for WikiBio, plus the reference texts. Workers on a crowd-sourcing platform, proficient in English, were shown a table with pairs of generated texts, or a generated text and the reference, and asked to select the one they prefer. Figure FIGREF34 shows the instructions they were given. Paired comparisons have been shown to be superior to rating scales for comparing generated texts BIBREF24 . However, for measuring correlation the comparisons need to be aggregated into real-valued scores, INLINEFORM0 , for each of the INLINEFORM1 models. For this, we use Thurstone's method BIBREF22 , which assigns a score to each model based on how many times it was preferred over an alternative. The data collection was performed separately for models in the WikiBio-Systems and WikiBio-Hyperparams categories. 1100 tables were sampled from the development set, and for each table we got 8 different sentence pairs annotated across the two categories, resulting in a total of 8800 pairwise comparisons. Each pair was judged by one worker only which means there may be noise at the instance-level, but the aggregated system-level scores had low variance (cf. Table TABREF32 ). In total around 500 different workers were involved in the annotation. References were also included in the evaluation, and they received a lower score than PG-Net, highlighting the divergence in WikiBio. Compared Metrics Text only: We compare BLEU BIBREF8 , ROUGE BIBREF9 , METEOR BIBREF18 , CIDEr and CIDEr-D BIBREF25 using their publicly available implementations. Information Extraction based: We compare the CS, RG and RG-F metrics discussed in § SECREF4 . Text & Table: We compare a variant of BLEU, denoted as BLEU-T, where the values from the table are used as additional references. BLEU-T draws inspiration from iBLEU BIBREF26 but instead rewards n-grams which match the table rather than penalizing them. For PARENT, we compare both the word-overlap model (PARENT-W) and the co-occurrence model (PARENT-C) for determining entailment. We also compare versions where a single INLINEFORM0 is tuned on the entire dataset to maximize correlation with human judgments, denoted as PARENT*-W/C. 
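To give a flavour of the word-overlap entailment model behind PARENT-W, the following is a deliberately simplified, unofficial sketch of an entailed n-gram precision: a generated n-gram receives credit if it matches the reference or if its tokens are covered by the table values. The actual metric additionally computes an entailed recall against both the reference and the table and combines several n-gram orders, so the released implementation should be used for real evaluation; the toy table and texts below are illustrative only.

```python
# Simplified illustration of entailed n-gram precision with word-overlap entailment.
# This is a sketch of the intuition, not the official PARENT implementation.
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def table_tokens(table):
    # table: list of (attribute, value) pairs with string values
    toks = set()
    for _, value in table:
        toks.update(value.lower().split())
    return toks

def entail_prob(ngram, table_toks):
    # word-overlap model: fraction of the n-gram's tokens that appear in the table
    return sum(tok in table_toks for tok in ngram) / len(ngram)

def entailed_precision(generated, reference, table, n=2):
    gen, ref = generated.lower().split(), reference.lower().split()
    toks = table_tokens(table)
    gen_counts, ref_counts = Counter(ngrams(gen, n)), Counter(ngrams(ref, n))
    num = den = 0.0
    for g, c in gen_counts.items():
        # credit an n-gram if it matches the reference or is entailed by the table
        credit = max(min(c, ref_counts.get(g, 0)) / c, entail_prob(g, toks))
        num += c * credit
        den += c
    return num / den if den else 0.0

table = [("born", "december 22 1965"), ("occupation", "drummer")]
reference = "michael dahlquist was a drummer born on december 22 1965 in seattle"
generated = "michael dahlquist born december 22 1965 was a drummer"
print(entailed_precision(generated, reference, table))
```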
Correlation Comparison We use bootstrap sampling (500 iterations) over the 1100 tables for which we collected human annotations to get an idea of how the correlation of each metric varies with the underlying data. In each iteration, we sample with replacement, tables along with their references and all the generated texts for that table. Then we compute aggregated human evaluation and metric scores for each of the models and compute the correlation between the two. We report the average correlation across all bootstrap samples for each metric in Table TABREF37 . The distribution of correlations for the best performing metrics are shown in Figure FIGREF38 . Table TABREF37 also indicates whether PARENT is significantly better than a baseline metric. BIBREF21 suggest using the William's test for this purpose, but since we are computing correlations between only 4/13 systems at a time, this test has very weak power in our case. Hence, we use the bootstrap samples to obtain a INLINEFORM0 confidence interval of the difference in correlation between PARENT and any other metric and check whether this is above 0 BIBREF27 . Correlations are higher for the systems category than the hyperparams category. The latter is a more difficult setting since very similar models are compared, and hence the variance of the correlations is also high. Commonly used metrics which only rely on the reference (BLEU, ROUGE, METEOR, CIDEr) have only weak correlations with human judgments. In the hyperparams category, these are often negative, implying that tuning models based on these may lead to selecting worse models. BLEU performs the best among these, and adding n-grams from the table as references improves this further (BLEU-T). Among the extractive evaluation metrics, CS, which also only relies on the reference, has poor correlation in the hyperparams category. RG-F, and both variants of the PARENT metric achieve the highest correlation for both settings. There is no significant difference among these for the hyperparams category, but for systems, PARENT-W is significantly better than the other two. While RG-F needs a full information extraction pipeline in its implementation, PARENT-C only relies on co-occurrence counts, and PARENT-W can be used out-of-the-box for any dataset. To our knowledge, this is the first rigorous evaluation of using information extraction for generation evaluation. On this dataset, the word-overlap model showed higher correlation than the co-occurrence model for entailment. In § SECREF47 we will show that for the WebNLG dataset, where more paraphrasing is involved between the table and text, the opposite is true. Lastly, we note that the heuristic for selecting INLINEFORM0 is sufficient to produce high correlations for PARENT, however, if human annotations are available, this can be tuned to produce significantly higher correlations (PARENT*-W/C). Analysis In this section we further analyze the performance of PARENT-W under different conditions, and compare to the other best metrics from Table TABREF37 . To study the correlation as we vary the number of divergent references, we also collected binary labels from workers for whether a reference is entailed by the corresponding table. We define a reference as entailed when it mentions only information which can be inferred from the table. Each table and reference pair was judged by 3 independent workers, and we used the majority vote as the label for that pair. Overall, only INLINEFORM0 of the references were labeled as entailed by the table. 
Fleiss' INLINEFORM1 was INLINEFORM2 , which indicates a fair agreement. We found the workers sometimes disagreed on what information can be reasonably entailed by the table. Figure FIGREF40 shows the correlations as we vary the percent of entailed examples in the evaluation set of WikiBio. Each point is obtained by fixing the desired proportion of entailed examples, and sampling subsets from the full set which satisfy this proportion. PARENT and RG-F remain stable and show a high correlation across the entire range, whereas BLEU and BLEU-T vary a lot. In the hyperparams category, the latter two have the worst correlation when the evaluation set contains only entailed examples, which may seem surprising. However, on closer examination we found that this subset tends to omit a lot of information from the tables. Systems which produce more information than these references are penalized by BLEU, but not in the human evaluation. PARENT overcomes this issue by measuring recall against the table in addition to the reference. We check how different components in the computation of PARENT contribute to its correlation to human judgments. Specifically, we remove the probability INLINEFORM0 of an n-gram INLINEFORM1 being entailed by the table from Eqs. EQREF19 and EQREF23 . The average correlation for PARENT-W drops to INLINEFORM5 in this case. We also try a variant of PARENT with INLINEFORM6 , which removes the contribution of Table Recall (Eq. EQREF22 ). The average correlation is INLINEFORM7 in this case. With these components, the correlation is INLINEFORM8 , showing that they are crucial to the performance of PARENT. BIBREF28 point out that hill-climbing on an automatic metric is meaningless if that metric has a low instance-level correlation to human judgments. In Table TABREF46 we show the average accuracy of the metrics in making the same judgments as humans between pairs of generated texts. Both variants of PARENT are significantly better than the other metrics, however the best accuracy is only INLINEFORM0 for the binary task. This is a challenging task, since there are typically only subtle differences between the texts. Achieving higher instance-level accuracies will require more sophisticated language understanding models for evaluation. WebNLG Dataset To check how PARENT correlates with human judgments when the references are elicited from humans (and less likely to be divergent), we check its correlation with the human ratings provided for the systems competing in the WebNLG challenge BIBREF6 . The task is to generate text describing 1-5 RDF triples (e.g. John E Blaha, birthPlace, San Antonio), and human ratings were collected for the outputs of 9 participating systems on 223 instances. These systems include a mix of pipelined, statistical and neural methods. Each instance has upto 3 reference texts associated with the RDF triples, which we use for evaluation. The human ratings were collected on 3 distinct aspects – grammaticality, fluency and semantics, where semantics corresponds to the degree to which a generated text agrees with the meaning of the underlying RDF triples. We report the correlation of several metrics with these ratings in Table TABREF48 . Both variants of PARENT are either competitive or better than the other metrics in terms of the average correlation to all three aspects. This shows that PARENT is applicable for high quality references as well. While BLEU has the highest correlation for the grammar and fluency aspects, PARENT does best for semantics. 
This suggests that the inclusion of source tables into the evaluation orients the metric more towards measuring the fidelity of the content of the generation. A similar trend is seen comparing BLEU and BLEU-T. As modern neural text generation systems are typically very fluent, measuring their fidelity is of increasing importance. Between the two entailment models, PARENT-C is better due to its higher correlation with the grammaticality and fluency aspects. The INLINEFORM0 parameter in the calculation of PARENT decides whether to compute recall against the table or the reference (Eq. EQREF22 ). Figure FIGREF50 shows the distribution of the values taken by INLINEFORM1 using the heuristic described in § SECREF3 for instances in both WikiBio and WebNLG. For WikiBio, the recall of the references against the table is generally low, and hence the recall of the generated text relies more on the table. For WebNLG, where the references are elicited from humans, this recall is much higher (often INLINEFORM2 ), and hence the recall of the generated text relies more on the reference. Related Work Over the years several studies have evaluated automatic metrics for measuring text generation performance BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 . The only consensus from these studies seems to be that no single metric is suitable across all tasks. A recurring theme is that metrics like BLEU and NIST BIBREF36 are not suitable for judging content quality in NLG. Recently, BIBREF37 did a comprehensive study of several metrics on the outputs of state-of-the-art NLG systems, and found that while they showed acceptable correlation with human judgments at the system level, they failed to show any correlation at the sentence level. Ours is the first study which checks the quality of metrics when table-to-text references are divergent. We show that in this case even system level correlations can be unreliable. Hallucination BIBREF38 , BIBREF39 refers to when an NLG system generates text which mentions extra information than what is present in the source from which it is generated. Divergence can be viewed as hallucination in the reference text itself. PARENT deals with hallucination by discounting n-grams which do not overlap with either the reference or the table. PARENT draws inspiration from iBLEU BIBREF26 , a metric for evaluating paraphrase generation, which compares the generated text to both the source text and the reference. While iBLEU penalizes texts which match the source, here we reward such texts since our task values accuracy of generated text more than the need for paraphrasing the tabular content BIBREF40 . Similar to SARI for text simplification BIBREF41 and Q-BLEU for question generation BIBREF42 , PARENT falls under the category of task-specific metrics. Conclusions We study the automatic evaluation of table-to-text systems when the references diverge from the table. We propose a new metric, PARENT, which shows the highest correlation with humans across a range of settings with divergent references in WikiBio. We also perform the first empirical evaluation of information extraction based metrics BIBREF1 , and find RG-F to be effective. Lastly, we show that PARENT is comparable to the best existing metrics when references are elicited by humans on the WebNLG data. Acknowledgements Bhuwan Dhingra is supported by a fellowship from Siemens, and by grants from Google. 
We thank Maruan Al-Shedivat, Ian Tenney, Tom Kwiatkowski, Michael Collins, Slav Petrov, Jason Baldridge, David Reitter and other members of the Google AI Language team for helpful discussions and suggestions. We thank Sam Wiseman for sharing data for an earlier version of this paper. We also thank the anonymous reviewers for their feedback. Information Extraction System For evaluation via information extraction BIBREF1 we train a model for WikiBio which accepts text as input and generates a table as the output. Tables in WikiBio are open-domain, without any fixed schema for which attributes may be present or absent in an instance. Hence we employ the Pointer-Generator Network (PG-Net) BIBREF19 for this purpose. Specifically, we use a sequence-to-sequence model, whose encoder and decoder are both single-layer bi-directional LSTMs. The decoder is augmented with an attention mechanism over the states of the encoder. Further, it also uses a copy mechanism to optionally copy tokens directly from the source text. We do not use the coverage mechanism of BIBREF19 since that is specific to the task of summarization they study. The decoder is trained to produce a linearized version of the table where the rows and columns are flattened into a sequence, and separate by special tokens. Figure FIGREF53 shows an example. Clearly, since the references are divergent, the model cannot be expected to produce the entire table, and we see some false information being hallucinated after training. Nevertheless, as we show in § SECREF36 , this system can be used for evaluating generated texts. After training, we can parse the output sequence along the special tokens INLINEFORM0 R INLINEFORM1 and INLINEFORM2 C INLINEFORM3 to get a set of (attribute, value) pairs. Table TABREF54 shows the precision, recall and F-score of these extracted pairs against the ground truth tables, where the attributes and values are compared using an exact string match. Hyperparameters After tuning we found the same set of hyperparameters to work well for both the table-to-text PG-Net, and the inverse information extraction PG-Net. The hidden state size of the biLSTMs was set to 200. The input and output vocabularies were set to 50000 most common words in the corpus, with additional special symbols for table attribute names (such as “birth-date”). The embeddings of the tokens in the vocabulary were initialized with Glove BIBREF43 . Learning rate of INLINEFORM0 was used during training, with the Adam optimizer, and a dropout of INLINEFORM1 was also applied to the outputs of the biLSTM. Models were trained till the loss on the dev set stopped dropping. Maximum length of a decoded text was set to 40 tokens, and that of the tables was set to 120 tokens. Various beam sizes and length normalization penalties were applied for the table-to-text system, which are listed in the main paper. For the information extraction system, we found a beam size of 8 and no length penalty to produce the highest F-score on the dev set. Sample Outputs Table TABREF55 shows some sample references and the corresponding predictions from the best performing model, PG-Net for WikiBio.
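As a small illustration of the information extraction evaluation described above, the sketch below parses a decoder output in the linearized table format into (attribute, value) pairs and scores it against a ground-truth table with exact string match. The `<R>` and `<C>` marker strings and the toy example are assumptions standing in for the actual special tokens and data.

```python
# Sketch: parse a linearized table into (attribute, value) pairs and score it against
# the gold table with exact string match. "<R>"/"<C>" are assumed marker tokens.
def parse_linearized_table(sequence, row_tok="<R>", col_tok="<C>"):
    pairs = set()
    for row in sequence.split(row_tok):
        row = row.strip()
        if not row or col_tok not in row:
            continue
        attribute, value = row.split(col_tok, 1)
        pairs.add((attribute.strip(), value.strip()))
    return pairs

def precision_recall_f1(predicted, gold):
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

decoded = "<R> name <C> michael dahlquist <R> born <C> december 22 1965 <R> occupation <C> drummer"
gold = {("name", "michael dahlquist"), ("born", "december 22 1965"), ("instrument", "drums")}
print(precision_recall_f1(parse_linearized_table(decoded), gold))
```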
Introduction Recently, people have started looking at online forums as either a primary or a secondary source of counseling services BIBREF0. BIBREF1 reported that, over the first five years of operation (2011-2016) of ReachOut.com – Ireland's online youth mental health service – 62% of young people would visit a website for support when going through a tough time. With the expansion of the Internet, there has been a substantial growth in the number of users looking for psychological support online. The importance of the on-line life of patients has been recognized in research as well. BIBREF2 stated that the online life of patients constitutes a major influence on their self-definition. Furthermore, according to BIBREF3, the social networking activities of an individual offer an important reflection of their personality. While dealing with patients suffering from psychological problems, it is important that therapists do not ignore this pivotal source of information, which can provide deep insights into their patients' mental conditions. Acceptance of on-line support groups (OSG) by Mental Health Professionals is still not established BIBREF4. Since OSG can have double-edged effects on patients and the presence of professionals is often limited, we argue that their properties should be further studied. According to BIBREF5, OSG effectiveness is hard to assess: while some studies showed OSG's potential to change participants' attitudes, no such effect was observed in other studies (see Related Work Section for more details). Furthermore, the scope of previous work on the analysis of users' behaviour in OSG has been limited by the fact that it relied on expert annotation of posts and comments BIBREF6. We present a novel approach for automatically analysing online conversations for the presence of therapeutic factors of group therapy, defined by BIBREF7 as “the actual mechanisms of effecting change in the patient”. The authors have identified 11 therapeutic factors in group therapy: Universality, Altruism, Instillation of Hope, Guidance, Imparting information, Developing social skills, Interpersonal learning, Cohesion, Catharsis, Existential factors, Imitative behavior and Corrective recapitulation of family of origin issues. In this paper, we focus on 3 therapeutic factors: Universality, Altruism and Instillation of Hope (listed below), as we believe that these can be approximated by using established NLP techniques (e.g. Sentiment Analysis, Dialogue Act tagging etc.). Universality: the disconfirmation of a user's feelings of uniqueness of their mental health condition. Altruism: others offer support, reassurance, suggestions and insight. Instillation of Hope: inspiration provided to participants by their peers. The selected therapeutic factors are analysed in terms of illocutionary force and attitude. Due to the multi-party and asynchronous nature of on-line social media conversations, prior to the analysis, we extract conversation threads among users – an essential prerequisite for any kind of higher-level dialogue analysis BIBREF10. Afterwards, the illocutionary force is identified using Dialogue Act tagging, whereas the attitude is identified using Sentiment Analysis. The quantitative analysis is then performed on these processed conversations. Ideally, the analysis would require experts to annotate each post and comment on the presence of therapeutic factors. However, due to the time and cost demands of this task, it is feasible to analyse only a small fraction of the available data.
Compared to previous studies (e.g. BIBREF6) that analysed a few tens of conversations and several thousand lines of chat, using the proposed approach – the application of Dialogue Acts and Sentiment Analysis – we were able to automatically analyse approximately 300 thousand conversations (roughly 1.5 million comments). The rest of the paper is structured as follows. In Section 2 we introduce related work. Next, in Section 3 we describe the pre-processing pipeline and the methodology to perform thread extraction on asynchronous multi-party conversations. In Section 4 we describe the final dataset used for the analysis, and in Section 5 we present the results of our analysis. Finally, in Section 6 we provide concluding remarks and future research directions. Related Work On-line support groups have been analyzed for various factors before. For instance, BIBREF11 analysed stress reduction in on-line support group chat-rooms, and the effects of on-line social interactions. Such studies mostly relied on questionnaires and were based on a small number of users. Nevertheless, in BIBREF11, the author showed that social support facilitates coping with distress, improves mood and expedites recovery from it. These findings highlight that, overall, on-line discussion boards appear to be therapeutic and constructive for individuals suffering from alcohol abuse. The application of NLP to the analysis of mental health-related conversations has been studied as well (e.g. BIBREF12, BIBREF13). BIBREF6 applied sentiment analysis combined with extensive turn-level annotation to investigate stress reduction in on-line support group chat-rooms, showing that sentiment analysis is a good predictor of entrance stress level. Furthermore, similar to our setting, they applied automatic thread extraction to determine conversation threads. BIBREF14 have shown that on-line support group therapy increased the quality of life of patients with metastatic breast cancer. Since many original posters reported the benefits of group therapy on patients BIBREF15, BIBREF2, BIBREF16, BIBREF17, BIBREF18, BIBREF7, we evaluate the effect of user interaction using sentiment scores of comments in on-line support groups. According to BIBREF6, users with high incoming stress tend to request less information from others, as a percentage of their time, and share much more information, in absolute terms. In addition, high information sharing has been shown to be a good predictor of stress reduction at the end of the chat BIBREF6. Regarding information sharing, we rely on Dialogue Acts BIBREF19 to model the speaker's intention in producing an utterance. In particular, we are interested in the Dialogue Act label that is defined to represent descriptive, narrative, or personal information – the statement. Dialogue Acts have been applied to the analysis of spoken BIBREF20, BIBREF21 as well as on-line written synchronous conversations BIBREF22. We apply the Dialogue Act tag set defined in BIBREF22 to the analysis of our on-line asynchronous conversations. We argue that Dialogue Acts can be used to analyse user behaviour in social media and verify the presence of therapeutic factors. Methodology We select the three therapeutic factors – Universality, Altruism and Instillation of Hope – that can be best approximated using NLP techniques: Sentiment Analysis and Dialogue Act tagging. We discuss each one of the selected therapeutic factors and the identified necessary conditions.
The listed conditions, however, are not sufficient to attribute the presence of a therapeutic factor with high confidence, which only can be obtained using expert annotation. Our analysis focuses on the structure of conversations; though content plays an important role as well. Universality consists in the disconfirmation of patients' belief of uniqueness of their disease. This therapeutic factor is shown to be a powerful source of relief for the patient, according to BIBREF7. From this definition, we can draw the following conditions that are applicable to our environment: improvement of original poster's sentiment: we hypothesize that the discovery that other people passed through similar issues leads to a higher sentiment score; posts containing negative personal experiences: to disconfirm the belief of uniqueness users have to share their story; comments containing negative statements: to disconfirm the patient's feelings of uniqueness, the commenting user must tell a similar negative personal experience. This condition requires two sub-conditions: high presence of statements in comments and the presence of negative comments replying to negative posts. Instillation of Hope is based on inspiration provided to participants by their peers. Through the inspiration provided by their peers, patients can increase their expectation on the therapy outcome. BIBREF7 in several studies have demonstrated that a high expectation of help before the start of a therapy is significantly correlated with a positive therapy outcome. The author states that many patients pointed out the importance of having observed the improvement of others. Therefore, the three main conditions are the following: improvement of original poster's sentiment: we hypothesize that instillation of hope leads to a higher sentiment score; posts containing negative personal experiences: hope can be instilled in someone who shares a negative personal experience; comments containing positive personal experiences: in order to instill hope, commenting posters must show to original posters an overall positive personal experience. To detect positive personal experience, we require the presence of statements in comments and a positive sentiment of comments replying to negative posts. Altruism consists of peers offering support, reassurance, suggestions and insight, since they share similar problems with one another BIBREF7. The experience of finding that a patient can be of value to others is refreshing and boosts self-esteem BIBREF7. However, in the current study we focus on testing whether commenting posters are altruists or not. We do not test whether the altruistic behavior leads to an improvement on the altruist itself. For these reasons, we define three main conditions: improvement of original poster's sentiment: we hypothesize that supportive and reassuring statements improve the sentiment score of the original poster; posts contains negative personal experiences: users offer support, reassurance and suggestion when facing a negative personal experience of the original poster; comments containing positive statements: either supportive or reassuring statements show by definition a positive intended emotional communication. Thus comments to the post should consist of positive sentiment statements. Consequently, a conversation containing the aforementioned therapeutic factors should satisfy the following conditions in terms of NLP: Sentiment Analysis and Dialogue Acts. 
original posters have a higher sentiment score at the end of the thread than at the beginning; the original post consists mostly of polarised statements; the presence of a significant number of statements in comments, since both support and sharing similar negative experiences can be represented as statements; both negative and positive statements in comments lead to a higher final sentiment score of the original poster. Datasets We verify the presence of therapeutic factors in two social media datasets: OSG and Twitter. The first dataset is crawled from an on-line support groups website, and the second dataset consists of a small sample of Twitter conversation threads. Since the former consists of multi-threaded conversations, we apply a pre-processing step to extract conversation threads to provide a fair comparison with the Twitter dataset. An example conversation from each data source is presented in Figure FIGREF19. Datasets ::: Twitter We have downloaded 1,873 Twitter conversation threads, roughly 14k tweets, from a publicly available resource in which the data had already been pre-processed and the conversation threads extracted. A conversation in the dataset consists of at least 4 tweets. Even though, according to BIBREF23, Twitter is broadly applicable to public health research, our expectation is that it contains fewer therapeutic conversations than specialized on-line support forums. Datasets ::: OSG Our data has been developed by crawling and pre-processing an OSG web forum. The forum has a great variety of different groups such as depression, anxiety, stress, relationship, cancer, sexually transmitted diseases, etc. Each conversation starts with one post and can contain multiple comments. Each post or comment is represented by a poster, a timestamp, a list of users it references, a thread id, a comment id and a conversation id. The thread id is the same for comments replying to each other, otherwise it is different. The thread id increases with time; thus, it provides an ordering among threads, whereas the timestamp provides an ordering within a thread. Each conversation can belong to multiple groups. Consequently, the dataset needs to be processed to remove duplicates. The dataset resulting after de-duplication contains 295 thousand conversations, each containing on average 6 comments. In total, there are 1.5 million comments. Since the created dataset is multi-threaded, we need to extract conversation threads to eliminate paths not relevant to the original post. Datasets ::: OSG ::: Conversation Thread Extraction The thread extraction algorithm is heuristic-based and consists of two steps: (1) creation of a tree based on a post written by a user and the related comments, and (2) transformation of the tree into a list of threads. The tree creation is an extension of the approach of BIBREF24, where a conversation graph is first constructed. In that approach, direct replies to a post are attached to the first nesting level and subsequent comments to increasing nesting levels. In our approach, we also exploit comments' features. The tree creation is performed without processing the content of comments, which allows us to process posts and comments of any length efficiently. The heuristic used in the process is based on three simplifying assumptions, as sketched below. First, unless there is a specific reference to another comment or a user, comments are attached to the original post. Second, when replying, the commenting poster is always replying to the original post or some other comment; unless specified otherwise, it is assumed to be a response to the previous (in time) post or comment. Third, subsequent comments by the same poster are part of the same thread.
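A minimal sketch of this two-step procedure is given below. It is one possible reading of the three assumptions (for instance, a mention of a user is resolved to that user's most recent comment), and the `Comment` structure is hypothetical, mirroring the fields described above for the OSG data.

```python
# Sketch: (1) build a reply tree over a post and its time-ordered comments using the
# three assumptions above, then (2) flatten the tree into root-to-leaf threads.
from dataclasses import dataclass, field

@dataclass
class Comment:
    cid: int
    poster: str
    references: list                 # users explicitly referenced, possibly empty
    children: list = field(default_factory=list)

def build_tree(post_author, comments):
    root = Comment(cid=0, poster=post_author, references=[])
    last_by_user = {post_author: root}   # most recent node written by each poster
    previous = root                      # most recent node overall (in time)
    for c in comments:
        if c.poster == previous.poster and previous is not root:
            parent = previous            # same poster continues their own thread
        elif c.references:
            parent = last_by_user.get(c.references[0], root)  # reply to referenced user
        else:
            parent = root                # no reference: attach to the original post
        parent.children.append(c)
        last_by_user[c.poster] = c
        previous = c
    return root

def flatten(node, path=()):
    path = path + (node.cid,)
    if not node.children:
        return [path]
    threads = []
    for child in node.children:
        threads.extend(flatten(child, path))
    return threads

comments = [Comment(1, "anna", []), Comment(2, "ben", ["anna"]),
            Comment(3, "op", ["ben"]), Comment(4, "carl", [])]
print(flatten(build_tree("op", comments)))   # [(0, 1, 2, 3), (0, 4)]
```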
To evaluate the performance of the thread extraction algorithm, 2 annotators manually constructed the trees for 100 conversations. The performance of the algorithm on this set of 100 conversations is evaluated using accuracy and the standard Information Retrieval metrics of precision, recall, and F$_1$ measure. The results are reported in Table TABREF28 together with random and majority baselines. The turn-level percent agreement between the 2 annotators is 97.99%, and Cohen's Kappa coefficient is 83.80%. Datasets ::: Data Representation For both data sources, Twitter and OSG with extracted threads, posts and comments are tokenized and sentence split. Each sentence is passed through Sentiment Analysis and Dialogue Act tagging. Since a post or a comment can contain multiple sentences, and therefore multiple Dialogue Acts, it is represented as a one-hot encoding, where each position represents a Dialogue Act. For Sentiment Analysis we use a lexicon-based sentiment analyser introduced by BIBREF25. For Dialogue Act tagging, on the other hand, we make use of a model trained on the NPS Chat corpus BIBREF22, following the approach of BIBREF26.
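The sketch below illustrates the shape of this per-sentence pipeline using openly available stand-ins: NLTK's VADER analyser in place of the cited lexicon-based analyser, and the classic Naive Bayes recipe over the NPS Chat corpus in place of the cited tagging approach. The exact analyser, tagger and tag set used in this work may differ.

```python
# Sketch of the per-sentence processing: a Dialogue Act classifier trained on NPS Chat,
# a lexicon-based sentiment score, and a one-hot Dialogue Act encoding per post.
# VADER and the Naive Bayes recipe are stand-ins, not the exact systems used here.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

for pkg in ("nps_chat", "punkt", "vader_lexicon"):
    nltk.download(pkg, quiet=True)

def bow_features(text):
    return {f"contains({w.lower()})": True for w in nltk.word_tokenize(text)}

posts = nltk.corpus.nps_chat.xml_posts()
labelled = [(bow_features(p.text), p.get("class")) for p in posts]
held_out = int(0.1 * len(labelled))
da_tagger = nltk.NaiveBayesClassifier.train(labelled[held_out:])
da_labels = sorted({label for _, label in labelled})

sentiment = SentimentIntensityAnalyzer()

def encode_post(text):
    sentences = nltk.sent_tokenize(text)
    tags = {da_tagger.classify(bow_features(s)) for s in sentences}
    one_hot = [int(label in tags) for label in da_labels]   # unordered set of tags
    score = sum(sentiment.polarity_scores(s)["compound"] for s in sentences) / len(sentences)
    return one_hot, score

one_hot, score = encode_post("I felt the same way last year. You are not alone!")
print(score, [label for label, bit in zip(da_labels, one_hot) if bit])
```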
From the table we can observe that the distribution of tag sets is similar between posts and comments. In both cases the most common set is statement only. However, conversations whose posts and comments contain only statement, emphasis or question tags predominantly appear in Twitter, which is expected given the shorter length of Twitter posts and comments. We can also observe that the original posters tend to ask more questions than the commenting posters – 19.83% for posts vs. 11.21% for comments (summed). This suggests that the original posters frequently ask either for suggestions or for confirmation (or disconfirmation) of their points of view. At the same time, the high presence of personal experiences is supported by the high number of posts containing only statements. The high number of statement tags in comments suggests that users reply either with supporting or empathic statements or with personal experience. However, 6.39% of comments contain accept and reject tags, which mark the degree to which a speaker accepts some previous proposal, plan, opinion, or statement BIBREF20. These Dialogue Act tags are often used when commenting posters discuss the original poster's point of view. For instance, “It's true. I felt the same.” – {Accept, Statement} or “Well no. You're not alone” – {Reject, Statement}. The datasets differ with respect to the distribution of these Dialogue Act tags: they appear more frequently in OSG. Analysis ::: Sentiment of Posts and Comments Table TABREF39 presents the distribution of sentiment polarity in post and comment statements (i.e. sentences tagged as statement). For OSG, the predominant sentiment label of statements is positive, and it is the highest for both posts and comments. However, the difference between the amounts of positive and negative statements is higher for the replying comments (34.5% vs. 42.5%). For Twitter, on the other hand, the predominant sentiment label of statements is neutral, and the polarity distribution between posts and comments is very close. One particular observation is that the ratio of negative statements is higher in OSG for both posts and comments than in Twitter, which supports the idea of sharing negative experiences. Further, we analyze whether the sentiment of a comment (i.e. the replying user) is affected by the sentiment of the original post (i.e. the user being replied to), which would imply that users adapt their behaviour with respect to the post's sentiment. For the analysis, we split the datasets into three buckets according to the posts' sentiment score – negative, neutral, or positive – and represent each conversation in terms of percentages of comments (replies) with each sentiment label. The buckets are then compared using a t-test for statistically significant differences. Table TABREF40 presents the distribution of sentiment labels with respect to the post's sentiment score. The patterns of distribution are similar across the datasets. We can observe that, overall, replies tend to have a positive sentiment, which suggests that replying posters tend to have a positive attitude. However, the ratio of positive comments is higher for OSG than for Twitter. The results of Welch's t-test on OSG data reveal that there are statistically significant differences in the distribution of replying comments' sentiment between conversations with positive and negative starting posts. A positive post tends to get significantly more positive replies.
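The bucketed significance test described above can be sketched with SciPy; the arrays below are placeholder values standing in for the per-conversation percentages of positive replies, and passing equal_var=False to ttest_ind gives Welch's t-test, the test named in the text.

```python
# Sketch of the significance test described above; the data arrays are illustrative placeholders.
import numpy as np
from scipy import stats

# Percentage of positive replies per conversation, grouped by the sentiment of the starting post.
pos_start = np.array([60.0, 55.0, 72.0, 48.0, 66.0])   # conversations with a positive original post
neg_start = np.array([41.0, 38.0, 52.0, 30.0, 44.0])   # conversations with a negative original post

# Welch's t-test: does the starting post's sentiment shift the distribution of reply sentiment?
t_stat, p_value = stats.ttest_ind(pos_start, neg_start, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.01:
    print("Difference is significant at p < 0.01")
```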
Similarly, a negative post tends to get significantly more negative replies (both with $p < 0.01$). Table TABREF41 presents the distribution of the sentiment labels of the final text provided by the original poster with respect to the sentiment polarity of the comments. The results indicate that OSG participants are more supportive, as the majority of conversations end in a positive final sentiment regardless of the sentiment of comments. We can also observe that negative comments in OSG lead to positive sentiment, which supports the idea of sharing the negative experiences, thus presence of therapeutic factors. For Twitter, on the other hand, only positive comments lead to the positive final sentiments, whereas other comments lead predominantly to neutral final sentiments. Our analysis in terms of sentiment and Dialogue Acts supports the presence of the three selected therapeutic factors – Universality, Altruism and Instillation of Hope – in OSG more than in Twitter. The main contributors to this conclusion are the facts that there is more positive change in the sentiment of the original posters in OSG (people seeking support) and that in OSG even negative and neutral comments are likely to lead to positive changes. Conclusion In this work, we propose a methodology to automatically analyse online social platforms for the presence of therapeutic factors (i.e. Universality, Altruism and Instillation of Hope). We evaluate our approach on two on-line platforms, Twitter and an OSG web forum. We apply NLP techniques of Sentiment Analysis and Dialogue Act tagging to automatically verify the presence of therapeutic factors, which allows us to analyse larger amounts of conversational data (as compared to previous studies). Our analysis indicates that OSG conversations satisfy higher number of conditions approximating therapeutic factors than Twitter conversations. Given this outcome, we postulate that users who join support group websites spontaneously seem to benefit from it. Indeed, as shown in Section SECREF5, the original posters who interact with others by replying to comments, have benefited from an improvement of their emotional state. We would like to reemphasise that the conditions for the therapeutic factors are necessary but not sufficient; since our analysis focuses on the structure of conversations, being agnostic to the content. NLP, however, allows us to strengthen our approximations even further. Thus, the further extension of our work is also augmentation of our study with other language analysis metrics and their correlation with human annotation. It should be noted that the proposed approach is an approximation of the tedious tasks of annotation of conversations by experts versed in the therapeutic factors and their associated theories. Even though we can use Sentiment Analysis to detect the existence of therapeutic factors, we cannot differentiate between Altruism and Instillation of Hope, as this requires differentiation between emotional state of the user and the intended emotional communication. Thus, the natural extensions of this work are differentiation between different therapeutic factors and comparison of the proposed analysis to the human evaluation. Although we acknowledge that the proposed methodology does not serve as a replacement of manual analysis of OSG for the presence of therapeutic factors, we believe that it could facilitate and supplement this process. 
The method can serve as a tool for general practitioners and psychologists who can use it as an additional source of information regarding their patients’ condition and, in turn, offer a more personalised support that is better tailored to individual therapeutic needs.
How large is the Twitter dataset?
1,873 Twitter conversation threads, roughly 14k tweets
3,721
qasper
4k
Introduction Recently, deep learning algorithms have successfully addressed problems in various fields, such as image classification, machine translation, speech recognition, text-to-speech generation and other machine learning related areas BIBREF0, BIBREF1, BIBREF2. Similarly, substantial improvements in performance have been obtained when deep learning algorithms have been applied to statistical speech processing BIBREF3. These fundamental improvements have led researchers to investigate additional topics related to human nature, which have long been objects of study. One such topic involves understanding human emotions and reflecting them through machine intelligence, such as emotional dialogue models BIBREF4, BIBREF5. In developing emotionally aware intelligence, the very first step is building robust emotion classifiers that display good performance regardless of the application; this outcome is considered to be one of the fundamental research goals in affective computing BIBREF6. In particular, the speech emotion recognition task is one of the most important problems in the field of paralinguistics. This field has recently broadened its applications, as it is a crucial factor in optimal human-computer interactions, including dialog systems. The goal of speech emotion recognition is to predict the emotional content of speech and to classify speech according to one of several labels (i.e., happy, sad, neutral, and angry). Various types of deep learning methods have been applied to increase the performance of emotion classifiers; however, this task is still considered challenging for several reasons. First, insufficient data are available for training complex neural network-based models, due to the costs associated with human involvement. Second, the characteristics of emotions must be learned from low-level speech signals. Feature-based models show limited performance when applied to this problem. To overcome these limitations, we propose a model that uses high-level text transcription, as well as low-level audio signals, to utilize the information contained within low-resource datasets to a greater degree. Given recent improvements in automatic speech recognition (ASR) technology BIBREF7, BIBREF2, BIBREF8, BIBREF9, speech transcription can be carried out from audio signals with considerable accuracy. The emotional content of speech is clearly indicated by the emotion words contained in a sentence BIBREF10, such as “lovely” and “awesome,” which carry strong emotions compared to generic (non-emotion) words, such as “person” and “day.” Thus, we hypothesize that the speech emotion recognition model will benefit from the incorporation of high-level textual input. In this paper, we propose a novel deep dual recurrent encoder model that simultaneously utilizes audio and text data in recognizing emotions from speech. Extensive experiments are conducted to investigate the efficacy and properties of the proposed model. Our proposed model outperforms previous state-of-the-art methods, achieving accuracies from 68.8% to 71.8% when applied to the IEMOCAP dataset, which is one of the most well-studied datasets. Based on an error analysis of the models, we show that our proposed model accurately identifies emotion classes. Moreover, the neutral class misclassification bias frequently exhibited by previous models, which focus on audio features, is less pronounced in our model.
Related work Classical machine learning algorithms, such as hidden Markov models (HMMs), support vector machines (SVMs), and decision tree-based methods, have been employed in speech emotion recognition problems BIBREF11 , BIBREF12 , BIBREF13 . Recently, researchers have proposed various neural network-based architectures to improve the performance of speech emotion recognition. An initial study utilized deep neural networks (DNNs) to extract high-level features from raw audio data and demonstrated its effectiveness in speech emotion recognition BIBREF14 . With the advancement of deep learning methods, more complex neural-based architectures have been proposed. Convolutional neural network (CNN)-based models have been trained on information derived from raw audio signals using spectrograms or audio features such as Mel-frequency cepstral coefficients (MFCCs) and low-level descriptors (LLDs) BIBREF15 , BIBREF16 , BIBREF17 . These neural network-based models are combined to produce higher-complexity models BIBREF18 , BIBREF19 , and these models achieved the best-recorded performance when applied to the IEMOCAP dataset. Another line of research has focused on adopting variant machine learning techniques combined with neural network-based models. One researcher utilized the multiobject learning approach and used gender and naturalness as auxiliary tasks so that the neural network-based model learned more features from a given dataset BIBREF20 . Another researcher investigated transfer learning methods, leveraging external data from related domains BIBREF21 . As emotional dialogue is composed of sound and spoken content, researchers have also investigated the combination of acoustic features and language information, built belief network-based methods of identifying emotional key phrases, and assessed the emotional salience of verbal cues from both phoneme sequences and words BIBREF22 , BIBREF23 . However, none of these studies have utilized information from speech signals and text sequences simultaneously in an end-to-end learning neural network-based model to classify emotions. Model This section describes the methodologies that are applied to the speech emotion recognition task. We start by introducing the recurrent encoder model for the audio and text modalities individually. We then propose a multimodal approach that encodes both audio and textual information simultaneously via a dual recurrent encoder. Audio Recurrent Encoder (ARE) Motivated by the architecture used in BIBREF24 , BIBREF25 , we build an audio recurrent encoder (ARE) to predict the class of a given audio signal. Once MFCC features have been extracted from an audio signal, a subset of the sequential features is fed into the RNN (i.e., gated recurrent units (GRUs)), which leads to the formation of the network's internal hidden state INLINEFORM0 to model the time series patterns. This internal hidden state is updated at each time step with the input data INLINEFORM1 and the hidden state of the previous time step INLINEFORM2 as follows: DISPLAYFORM0 where INLINEFORM0 is the RNN function with weight parameter INLINEFORM1 , INLINEFORM2 represents the hidden state at t- INLINEFORM3 time step, and INLINEFORM4 represents the t- INLINEFORM5 MFCC features in INLINEFORM6 . After encoding the audio signal INLINEFORM7 with the RNN, the last hidden state of the RNN, INLINEFORM8 , is considered to be the representative vector that contains all of the sequential audio data. 
This vector is then concatenated with another prosodic feature vector, INLINEFORM9 , to generate a more informative vector representation of the signal, INLINEFORM10 . The MFCC and the prosodic features are extracted from the audio signal using the openSMILE toolkit BIBREF26 , INLINEFORM11 , respectively. Finally, the emotion class is predicted by applying the softmax function to the vector INLINEFORM12 . For a given audio sample INLINEFORM13 , we assume that INLINEFORM14 is the true label vector, which contains all zeros but contains a one at the correct class, and INLINEFORM15 is the predicted probability distribution from the softmax layer. The training objective then takes the following form: DISPLAYFORM0 where INLINEFORM0 is the calculated representative vector of the audio signal with dimensionality INLINEFORM1 . The INLINEFORM2 and the bias INLINEFORM3 are learned model parameters. C is the total number of classes, and N is the total number of samples used in training. The upper part of Figure shows the architecture of the ARE model. Text Recurrent Encoder (TRE) We assume that speech transcripts can be extracted from audio signals with high accuracy, given the advancement of ASR technologies BIBREF7 . We attempt to use the processed textual information as another modality in predicting the emotion class of a given signal. To use textual information, a speech transcript is tokenized and indexed into a sequence of tokens using the Natural Language Toolkit (NLTK) BIBREF27 . Each token is then passed through a word-embedding layer that converts a word index to a corresponding 300-dimensional vector that contains additional contextual meaning between words. The sequence of embedded tokens is fed into a text recurrent encoder (TRE) in such a way that the audio MFCC features are encoded using the ARE represented by equation EQREF2 . In this case, INLINEFORM0 is the t- INLINEFORM1 embedded token from the text input. Finally, the emotion class is predicted from the last hidden state of the text-RNN using the softmax function. We use the same training objective as the ARE model, and the predicted probability distribution for the target class is as follows: DISPLAYFORM0 where INLINEFORM0 is last hidden state of the text-RNN, INLINEFORM1 , and the INLINEFORM2 and bias INLINEFORM3 are learned model parameters. The lower part of Figure indicates the architecture of the TRE model. Multimodal Dual Recurrent Encoder (MDRE) We present a novel architecture called the multimodal dual recurrent encoder (MDRE) to overcome the limitations of existing approaches. In this study, we consider multiple modalities, such as MFCC features, prosodic features and transcripts, which contain sequential audio information, statistical audio information and textual information, respectively. These types of data are the same as those used in the ARE and TRE cases. The MDRE model employs two RNNs to encode data from the audio signal and textual inputs independently. The audio-RNN encodes MFCC features from the audio signal using equation EQREF2 . The last hidden state of the audio-RNN is concatenated with the prosodic features to form the final vector representation INLINEFORM0 , and this vector is then passed through a fully connected neural network layer to form the audio encoding vector A. On the other hand, the text-RNN encodes the word sequence of the transcript using equation EQREF2 . 
The final hidden states of the text-RNN are also passed through another fully connected neural network layer to form a textual encoding vector T. Finally, the emotion class is predicted by applying the softmax function to the concatenation of the vectors A and T. We use the same training objective as the ARE model, and the predicted probability distribution for the target class is as follows: DISPLAYFORM0 where INLINEFORM0 is the feed-forward neural network with weight parameter INLINEFORM1 , and INLINEFORM2 , INLINEFORM3 are final encoding vectors from the audio-RNN and text-RNN, respectively. INLINEFORM4 and the bias INLINEFORM5 are learned model parameters. Multimodal Dual Recurrent Encoder with Attention (MDREA) Inspired by the concept of the attention mechanism used in neural machine translation BIBREF28 , we propose a novel multimodal attention method to focus on the specific parts of a transcript that contain strong emotional information, conditioning on the audio information. Figure shows the architecture of the MDREA model. First, the audio data and text data are encoded with the audio-RNN and text-RNN using equation EQREF2 . We then consider the final audio encoding vector INLINEFORM0 as a context vector. As seen in equation EQREF9 , during each time step t, the dot product between the context vector e and the hidden state of the text-RNN at each t-th sequence INLINEFORM1 is evaluated to calculate a similarity score INLINEFORM2 . Using this score INLINEFORM3 as a weight parameter, the weighted sum of the sequences of the hidden state of the text-RNN, INLINEFORM4 , is calculated to generate an attention-application vector Z. This attention-application vector is concatenated with the final encoding vector of the audio-RNN INLINEFORM5 (equation EQREF7 ), which will be passed through the softmax function to predict the emotion class. We use the same training objective as the ARE model, and the predicted probability distribution for the target class is as follows: DISPLAYFORM0 where INLINEFORM0 and the bias INLINEFORM1 are learned model parameters. Dataset We evaluate our model using the Interactive Emotional Dyadic Motion Capture (IEMOCAP) BIBREF18 dataset. This dataset was collected following theatrical theory in order to simulate natural dyadic interactions between actors. We use categorical evaluations with majority agreement. We use only four emotional categories happy, sad, angry, and neutral to compare the performance of our model with other research using the same categories. The IEMOCAP dataset includes five sessions, and each session contains utterances from two speakers (one male and one female). This data collection process resulted in 10 unique speakers. For consistent comparison with previous work, we merge the excitement dataset with the happiness dataset. The final dataset contains a total of 5531 utterances (1636 happy, 1084 sad, 1103 angry, 1708 neutral). Feature extraction To extract speech information from audio signals, we use MFCC values, which are widely used in analyzing audio signals. The MFCC feature set contains a total of 39 features, which include 12 MFCC parameters (1-12) from the 26 Mel-frequency bands and log-energy parameters, 13 delta and 13 acceleration coefficients The frame size is set to 25 ms at a rate of 10 ms with the Hamming function. According to the length of each wave file, the sequential step of the MFCC features is varied. 
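A compact sketch of the dual encoder with attention just described may help: the final audio encoding acts as a query over the text-RNN hidden states, and the resulting weighted sum is concatenated with the audio vector before the softmax classifier. The use of PyTorch, the dimensions, and the omission of the prosodic-feature concatenation are assumptions made for brevity; the paper does not prescribe a particular framework.

```python
# Illustrative sketch of the dual-encoder-with-attention idea; hyperparameters are assumed,
# and the prosodic-feature concatenation is omitted for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoderWithAttention(nn.Module):
    def __init__(self, audio_dim=39, vocab_size=3747, embed_dim=300, hidden=128, n_classes=4):
        super().__init__()
        self.audio_rnn = nn.GRU(audio_dim, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.text_rnn = nn.GRU(embed_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, mfcc, tokens):
        # mfcc: (batch, audio_steps, audio_dim); tokens: (batch, text_steps)
        _, audio_h = self.audio_rnn(mfcc)                          # (1, batch, hidden)
        e = audio_h.squeeze(0)                                     # audio context vector
        text_out, _ = self.text_rnn(self.embed(tokens))            # (batch, text_steps, hidden)

        # Attention: dot product between the audio context and every text hidden state.
        scores = torch.bmm(text_out, e.unsqueeze(2)).squeeze(2)    # (batch, text_steps)
        weights = F.softmax(scores, dim=1)
        z = torch.bmm(weights.unsqueeze(1), text_out).squeeze(1)   # weighted sum of text states

        return self.classifier(torch.cat([e, z], dim=1))           # logits over emotion classes

model = DualEncoderWithAttention()
logits = model(torch.randn(2, 750, 39), torch.randint(0, 3747, (2, 128)))
print(logits.shape)  # torch.Size([2, 4])
```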
To extract additional information from the data, we also use prosodic features, which show effectiveness in affective computing. The prosodic features are composed of 35 features, which include the F0 frequency, the voicing probability, and the loudness contours. All of these MFCC and prosodic features are extracted from the data using the OpenSMILE toolkit BIBREF26 . Implementation details Among the variants of the RNN function, we use GRUs as they yield comparable performance to that of the LSTM and include a smaller number of weight parameters BIBREF29 . We use a max encoder step of 750 for the audio input, based on the implementation choices presented in BIBREF30 and 128 for the text input because it covers the maximum length of the transcripts. The vocabulary size of the dataset is 3,747, including the “_UNK_" token, which represents unknown words, and the “_PAD_" token, which is used to indicate padding information added while preparing mini-batch data. The number of hidden units and the number of layers in the RNN for each model (ARE, TRE, MDRE and MDREA) are selected based on extensive hyperparameter search experiments. The weights of the hidden units are initialized using orthogonal weights BIBREF31 ], and the text embedding layer is initialized from pretrained word-embedding vectors BIBREF32 . In preparing the textual dataset, we first use the released transcripts of the IEMOCAP dataset for simplicity. To investigate the practical performance, we then process all of the IEMOCAP audio data using an ASR system (the Google Cloud Speech API) and retrieve the transcripts. The performance of the Google ASR system is reflected by its word error rate (WER) of 5.53%. Performance evaluation As the dataset is not explicitly split beforehand into training, development, and testing sets, we perform 5-fold cross validation to determine the overall performance of the model. The data in each fold are split into training, development, and testing datasets (8:0.5:1.5, respectively). After training the model, we measure the weighted average precision (WAP) over the 5-fold dataset. We train and evaluate the model 10 times per fold, and the model performance is assessed in terms of the mean score and standard deviation. We examine the WAP values, which are shown in Table 1. First, our ARE model shows the baseline performance because we use minimal audio features, such as the MFCC and prosodic features with simple architectures. On the other hand, the TRE model shows higher performance gain compared to the ARE. From this result, we note that textual data are informative in emotion prediction tasks, and the recurrent encoder model is effective in understanding these types of sequential data. Second, the newly proposed model, MDRE, shows a substantial performance gain. It thus achieves the state-of-the-art performance with a WAP value of 0.718. This result shows that multimodal information is a key factor in affective computing. Lastly, the attention model, MDREA, also outperforms the best existing research results (WAP 0.690 to 0.688) BIBREF19 . However, the MDREA model does not match the performance of the MDRE model, even though it utilizes a more complex architecture. We believe that this result arises because insufficient data are available to properly determine the complex model parameters in the MDREA model. Moreover, we presume that this model will show better performance when the audio signals are aligned with the textual sequence while applying the attention mechanism. 
We leave the implementation of this point as a future research direction. To investigate the practical performance of the proposed models, we conduct further experiments with the ASR-processed transcript data (see “-ASR” models in Table ). The processed transcripts have a word error rate (WER) of 5.53%. The TRE-ASR, MDRE-ASR and MDREA-ASR models show degraded performance compared to the TRE, MDRE and MDREA models. However, the performance of these models is still competitive; in particular, the MDRE-ASR model outperforms the previous best-performing model, 3CNN-LSTM10H (WAP 0.691 to 0.688). Error analysis We analyze the predictions of the ARE, TRE, and MDRE models. Figure shows the confusion matrix of each model. The ARE model (Fig. ) incorrectly classifies most instances of happy as neutral (43.51%); thus, it shows reduced accuracy (35.15%) in predicting the happy class. Overall, most of the emotion classes are frequently confused with the neutral class. This observation is in line with the findings of BIBREF30, who noted that the neutral class is located in the center of the activation-valence space, complicating its discrimination from the other classes. Interestingly, the TRE model (Fig. ) shows a large gain in predicting the happy class when compared to the ARE model (35.15% to 75.73%). This result seems plausible because the model can benefit from the differences among the distributions of words in happy and neutral expressions, which provide more emotional information to the model than the audio signal data. On the other hand, it is striking that the TRE model incorrectly predicts instances of the sad class as the happy class 16.20% of the time, even though these emotional states are opposites of one another. The MDRE model (Fig. ) compensates for the weaknesses of the previous two models (ARE and TRE) and benefits from their strengths to a surprising degree. The values along the diagonal show that the accuracy of every correctly predicted class has increased. Furthermore, the occurrence of the incorrect “sad-to-happy” cases observed in the TRE model is reduced from 16.20% to 9.15%. Conclusions In this paper, we propose a novel multimodal dual recurrent encoder model that simultaneously utilizes text data, as well as audio signals, to permit a better understanding of speech data. Our model encodes the information from audio and text sequences using dual RNNs and then combines the information from these sources using a feed-forward neural model to predict the emotion class. Extensive experiments show that our proposed model outperforms other state-of-the-art methods in classifying the four emotion categories, and accuracies ranging from 68.8% to 71.8% are obtained when the model is applied to the IEMOCAP dataset. In particular, it resolves the issue in which predictions frequently incorrectly yield the neutral class, as occurs in previous models that focus on audio features. In future work, we aim to extend the modalities to audio, text and video inputs. Furthermore, we plan to investigate the application of the attention mechanism to data derived from multiple modalities. This approach seems likely to uncover enhanced learning schemes that will increase performance in both speech emotion recognition and other multimodal classification tasks. Acknowledgments K. Jung is with the Department of Electrical and Computer Engineering, ASRI, Seoul National University, Seoul, Korea.
This work was supported by the Ministry of Trade, Industry & Energy (MOTIE, Korea) under Industrial Technology Innovation Program (No.10073144).
How do they combine audio and text sequences in their RNN?
combines the information from these sources using a feed-forward neural model
3,201
qasper
4k
Introduction Machine translation has made remarkable progress, and studies claiming it to reach a human parity are starting to appear BIBREF0. However, when evaluating translations of the whole documents rather than isolated sentences, human raters show a stronger preference for human over machine translation BIBREF1. These findings emphasize the need to shift towards context-aware machine translation both from modeling and evaluation perspective. Most previous work on context-aware NMT assumed that either all the bilingual data is available at the document level BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10 or at least its fraction BIBREF11. But in practical scenarios, document-level parallel data is often scarce, which is one of the challenges when building a context-aware system. We introduce an approach to context-aware machine translation using only monolingual document-level data. In our setting, a separate monolingual sequence-to-sequence model (DocRepair) is used to correct sentence-level translations of adjacent sentences. The key idea is to use monolingual data to imitate typical inconsistencies between context-agnostic translations of isolated sentences. The DocRepair model is trained to map inconsistent groups of sentences into consistent ones. The consistent groups come from the original training data; the inconsistent groups are obtained by sampling round-trip translations for each isolated sentence. To validate the performance of our model, we use three kinds of evaluation: the BLEU score, contrastive evaluation of translation of several discourse phenomena BIBREF11, and human evaluation. We show strong improvements for all metrics. We analyze which discourse phenomena are hard to capture using monolingual data only. Using contrastive test sets for targeted evaluation of several contextual phenomena, we compare the performance of the models trained on round-trip translations and genuine document-level parallel data. Among the four phenomena in the test sets we use (deixis, lexical cohesion, VP ellipsis and ellipsis which affects NP inflection) we find VP ellipsis to be the hardest phenomenon to be captured using round-trip translations. Our key contributions are as follows: we introduce the first approach to context-aware machine translation using only monolingual document-level data; our approach shows substantial improvements in translation quality as measured by BLEU, targeted contrastive evaluation of several discourse phenomena and human evaluation; we show which discourse phenomena are hard to capture using monolingual data only. Our Approach: Document-level Repair We propose a monolingual DocRepair model to correct inconsistencies between sentence-level translations of a context-agnostic MT system. It does not use any states of a trained MT model whose outputs it corrects and therefore can in principle be trained to correct translations from any black-box MT system. The DocRepair model requires only monolingual document-level data in the target language. It is a monolingual sequence-to-sequence model that maps inconsistent groups of sentences into consistent ones. Consistent groups come from monolingual document-level data. To obtain inconsistent groups, each sentence in a group is replaced with its round-trip translation produced in isolation from context. 
More formally, forming a training minibatch for the DocRepair model involves the following steps (see also Figure FIGREF9): sample several groups of sentences from the monolingual data; for each sentence in a group, (i) translate it using a target-to-source MT model, (ii) sample a translation of this back-translated sentence in the source language using a source-to-target MT model; using these round-trip translations of isolated sentences, form an inconsistent version of the initial groups; use inconsistent groups as input for the DocRepair model, consistent ones as output. At test time, the process of getting document-level translations is two-step (Figure FIGREF10): produce translations of isolated sentences using a context-agnostic MT model; apply the DocRepair model to a sequence of context-agnostic translations to correct inconsistencies between translations. In the scope of the current work, the DocRepair model is the standard sequence-to-sequence Transformer. Sentences in a group are concatenated using a reserved token-separator between sentences. The Transformer is trained to correct these long inconsistent pseudo-sentences into consistent ones. The token-separator is then removed from corrected translations. Evaluation of Contextual Phenomena We use contrastive test sets for evaluation of discourse phenomena for English-Russian by BIBREF11. These test sets allow for testing different kinds of phenomena which, as we show, can be captured from monolingual data with varying success. In this section, we provide test sets statistics and briefly describe the tested phenomena. For more details, the reader is referred to BIBREF11. Evaluation of Contextual Phenomena ::: Test sets There are four test sets in the suite. Each test set contains contrastive examples. It is specifically designed to test the ability of a system to adapt to contextual information and handle the phenomenon under consideration. Each test instance consists of a true example (a sequence of sentences and their reference translation from the data) and several contrastive translations which differ from the true one only in one specific aspect. All contrastive translations are correct and plausible translations at the sentence level, and only context reveals the inconsistencies between them. The system is asked to score each candidate translation, and we compute the system accuracy as the proportion of times the true translation is preferred to the contrastive ones. Test set statistics are shown in Table TABREF15. The suites for deixis and lexical cohesion are split into development and test sets, with 500 examples from each used for validation purposes and the rest for testing. Convergence of both consistency scores on these development sets and BLEU score on a general development set are used as early stopping criteria in models training. For ellipsis, there is no dedicated development set, so we evaluate on all the ellipsis data and do not use it for development. Evaluation of Contextual Phenomena ::: Phenomena overview Deixis Deictic words or phrases, are referential expressions whose denotation depends on context. This includes personal deixis (“I”, “you”), place deixis (“here”, “there”), and discourse deixis, where parts of the discourse are referenced (“that's a good question”). The test set examples are all related to person deixis, specifically the T-V distinction between informal and formal you (Latin “tu” and “vos”) in the Russian translations, and test for consistency in this respect. 
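Returning to the minibatch construction listed at the beginning of this section, a minimal Python sketch of the procedure is given below. The translation functions are hypothetical stand-ins for the two sentence-level MT systems, and the separator token name is assumed; only the overall flow (round-trip translate each sentence in isolation, then pair the corrupted group with the original one) follows the description above.

```python
# Hedged sketch of DocRepair training-pair construction; translate_to_source and
# sample_translation are hypothetical stand-ins for the target->source and
# source->target sentence-level MT systems.
import random

SEP = "<sep>"   # reserved token-separator between sentences (name assumed)

def round_trip(sentence: str) -> str:
    src = translate_to_source(sentence)            # target -> source, e.g. beam search
    return sample_translation(src)                 # source -> target, sampled in isolation

def make_training_pair(group):
    """group: list of consecutive target-language sentences from monolingual data."""
    inconsistent = [round_trip(s) for s in group]  # context-agnostic, possibly inconsistent
    consistent = group                             # the original, consistent sentences
    return f" {SEP} ".join(inconsistent), f" {SEP} ".join(consistent)

def make_minibatch(monolingual_groups, batch_size):
    sampled = random.sample(monolingual_groups, batch_size)
    return [make_training_pair(g) for g in sampled]
```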
Ellipsis Ellipsis is the omission from a clause of one or more words that are nevertheless understood in the context of the remaining elements. In machine translation, elliptical constructions in the source language pose a problem in two situations. First, if the target language does not allow the same types of ellipsis, requiring the elided material to be predicted from context. Second, if the elided material affects the syntax of the sentence. For example, in Russian the grammatical function of a noun phrase, and thus its inflection, may depend on the elided verb, or, conversely, the verb inflection may depend on the elided subject. There are two different test sets for ellipsis. One contains examples where a morphological form of a noun group in the last sentence can not be understood without context beyond the sentence level (“ellipsis (infl.)” in Table TABREF15). Another includes cases of verb phrase ellipsis in English, which does not exist in Russian, thus requires predicting the verb when translating into Russian (“ellipsis (VP)” in Table TABREF15). Lexical cohesion The test set focuses on reiteration of named entities. Where several translations of a named entity are possible, a model has to prefer consistent translations over inconsistent ones. Experimental Setup ::: Data preprocessing We use the publicly available OpenSubtitles2018 corpus BIBREF12 for English and Russian. For a fair comparison with previous work, we train the baseline MT system on the data released by BIBREF11. Namely, our MT system is trained on 6m instances. These are sentence pairs with a relative time overlap of subtitle frames between source and target language subtitles of at least $0.9$. We gathered 30m groups of 4 consecutive sentences as our monolingual data. We used only documents not containing groups of sentences from general development and test sets as well as from contrastive test sets. The main results we report are for the model trained on all 30m fragments. We use the tokenization provided by the corpus and use multi-bleu.perl on lowercased data to compute BLEU score. We use beam search with a beam of 4. Sentences were encoded using byte-pair encoding BIBREF13, with source and target vocabularies of about 32000 tokens. Translation pairs were batched together by approximate sequence length. Each training batch contained a set of translation pairs containing approximately 15000 source tokens. It has been shown that Transformer's performance depends heavily on batch size BIBREF14, and we chose a large batch size to ensure the best performance. In training context-aware models, for early stopping we use both convergence in BLEU score on the general development set and scores on the consistency development sets. After training, we average the 5 latest checkpoints. Experimental Setup ::: Models The baseline model, the model used for back-translation, and the DocRepair model are all Transformer base models BIBREF15. More precisely, the number of layers is $N=6$ with $h = 8$ parallel attention layers, or heads. The dimensionality of input and output is $d_{model} = 512$, and the inner-layer of a feed-forward networks has dimensionality $d_{ff}=2048$. We use regularization as described in BIBREF15. As a second baseline, we use the two-pass CADec model BIBREF11. The first pass produces sentence-level translations. The second pass takes both the first-pass translation and representations of the context sentences as input and returns contextualized translations. 
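The batching scheme mentioned in the preprocessing description above (pairs grouped by approximate length into batches of roughly 15000 source tokens) can be sketched as follows; the helper is generic and not tied to any particular toolkit.

```python
# Sketch of length-based batching with an approximate source-token budget (~15000 tokens),
# as described above; the data structures are illustrative.
def make_batches(pairs, token_budget=15000):
    """pairs: list of (source_tokens, target_tokens) tuples (already BPE-segmented)."""
    # Group sentence pairs of similar length together to reduce padding.
    pairs = sorted(pairs, key=lambda p: len(p[0]))
    batches, current, current_tokens = [], [], 0
    for src, tgt in pairs:
        if current and current_tokens + len(src) > token_budget:
            batches.append(current)
            current, current_tokens = [], 0
        current.append((src, tgt))
        current_tokens += len(src)
    if current:
        batches.append(current)
    return batches
```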
CADec requires document-level parallel training data, while DocRepair only needs monolingual training data. Experimental Setup ::: Generating round-trip translations On the selected 6m instances we train sentence-level translation models in both directions. To create training data for DocRepair, we proceed as follows. The Russian monolingual data is first translated into English, using the Russian$\rightarrow $English model and beam search with beam size of 4. Then, we use the English$\rightarrow $Russian model to sample translations with temperature of $0{.}5$. For each sentence, we precompute 20 sampled translations and randomly choose one of them when forming a training minibatch for DocRepair. Also, in training, we replace each token in the input with a random one with the probability of $10\%$. Experimental Setup ::: Optimizer As in BIBREF15, we use the Adam optimizer BIBREF16, the parameters are $\beta _1 = 0{.}9$, $\beta _2 = 0{.}98$ and $\varepsilon = 10^{-9}$. We vary the learning rate over the course of training using the formula: where $warmup\_steps = 16000$ and $scale=4$. Results ::: General results The BLEU scores are provided in Table TABREF24 (we evaluate translations of 4-sentence fragments). To see which part of the improvement is due to fixing agreement between sentences rather than simply sentence-level post-editing, we train the same repair model at the sentence level. Each sentence in a group is now corrected separately, then they are put back together in a group. One can see that most of the improvement comes from accounting for extra-sentential dependencies. DocRepair outperforms the baseline and CADec by 0.7 BLEU, and its sentence-level repair version by 0.5 BLEU. Results ::: Consistency results Scores on the phenomena test sets are provided in Table TABREF26. For deixis, lexical cohesion and ellipsis (infl.) we see substantial improvements over both the baseline and CADec. The largest improvement over CADec (22.5 percentage points) is for lexical cohesion. However, there is a drop of almost 5 percentage points for VP ellipsis. We hypothesize that this is because it is hard to learn to correct inconsistencies in translations caused by VP ellipsis relying on monolingual data alone. Figure FIGREF27(a) shows an example of inconsistency caused by VP ellipsis in English. There is no VP ellipsis in Russian, and when translating auxiliary “did” the model has to guess the main verb. Figure FIGREF27(b) shows steps of generating round-trip translations for the target side of the previous example. When translating from Russian, main verbs are unlikely to be translated as the auxiliary “do” in English, and hence the VP ellipsis is rarely present on the English side. This implies the model trained using the round-trip translations will not be exposed to many VP ellipsis examples in training. We discuss this further in Section SECREF34. Table TABREF28 provides scores for deixis and lexical cohesion separately for different distances between sentences requiring consistency. It can be seen, that the performance of DocRepair degrades less than that of CADec when the distance between sentences requiring consistency gets larger. Results ::: Human evaluation We conduct a human evaluation on random 700 examples from our general test set. We picked only examples where a DocRepair translation is not a full copy of the baseline one. 
The annotators were provided an original group of sentences in English and two translations: baseline context-agnostic one and the one corrected by the DocRepair model. Translations were presented in random order with no indication which model they came from. The task is to pick one of the three options: (1) the first translation is better, (2) the second translation is better, (3) the translations are of equal quality. The annotators were asked to avoid the third answer if they are able to give preference to one of the translations. No other guidelines were given. The results are provided in Table TABREF30. In about $52\%$ of the cases annotators marked translations as having equal quality. Among the cases where one of the translations was marked better than the other, the DocRepair translation was marked better in $73\%$ of the cases. This shows a strong preference of the annotators for corrected translations over the baseline ones. Varying Training Data In this section, we discuss the influence of the training data chosen for document-level models. In all experiments, we used the DocRepair model. Varying Training Data ::: The amount of training data Table TABREF33 provides BLEU and consistency scores for the DocRepair model trained on different amount of data. We see that even when using a dataset of moderate size (e.g., 5m fragments) we can achieve performance comparable to the model trained on a large amount of data (30m fragments). Moreover, we notice that deixis scores are less sensitive to the amount of training data than lexical cohesion and ellipsis scores. The reason might be that, as we observed in our previous work BIBREF11, inconsistencies in translations due to the presence of deictic words and phrases are more frequent in this dataset than other types of inconsistencies. Also, as we show in Section SECREF7, this is the phenomenon the model learns faster in training. Varying Training Data ::: One-way vs round-trip translations In this section, we discuss the limitations of using only monolingual data to model inconsistencies between sentence-level translations. In Section SECREF25 we observed a drop in performance on VP ellipsis for DocRepair compared to CADec, which was trained on parallel data. We hypothesized that this is due to the differences between one-way and round-trip translations, and now we test this hypothesis. To do so, we fix the dataset and vary the way in which the input for DocRepair is generated: round-trip or one-way translations. The latter assumes that document-level data is parallel, and translations are sampled from the source side of the sentences in a group rather than from their back-translations. For parallel data, we take 1.5m parallel instances which were used for CADec training and add 1m instances from our monolingual data. For segments in the parallel part, we either sample translations from the source side or use round-trip translations. The results are provided in Table TABREF35. The model trained on one-way translations is slightly better than the one trained on round-trip translations. As expected, VP ellipsis is the hardest phenomena to be captured using round-trip translations, and the DocRepair model trained on one-way translated data gains 6% accuracy on this test set. This shows that the DocRepair model benefits from having access to non-synthetic English data. This results in exposing DocRepair at training time to Russian translations which suffer from the same inconsistencies as the ones it will have to correct at test time. 
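The consistency scores discussed here come from the contrastive test sets introduced earlier; a minimal sketch of how such accuracy can be computed is shown below. The score function is a hypothetical stand-in for the model's score (e.g. log-probability) of a candidate group of translations.

```python
# Sketch of contrastive evaluation: the model "wins" on an example if it scores the
# reference translation group above every contrastive variant. score() is hypothetical.
def contrastive_accuracy(examples, score):
    """examples: list of (context_sentences, true_translation, [contrastive_translations])."""
    correct = 0
    for context, true_translation, contrastive in examples:
        true_score = score(context, true_translation)
        if all(true_score > score(context, variant) for variant in contrastive):
            correct += 1
    return correct / len(examples)
```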
Varying Training Data ::: Filtering: monolingual (no filtering) or parallel Note that the scores of the DocRepair model trained on 2.5m instances randomly chosen from monolingual data (Table TABREF33) are different from the ones for the model trained on 2.5m instances combined from parallel and monolingual data (Table TABREF35). For convenience, we show these two in Table TABREF36. The domain, the dataset these two data samples were gathered from, and the way we generated training data for DocRepair (round-trip translations) are all the same. The only difference lies in how the data was filtered. For parallel data, as in the previous work BIBREF6, we picked only sentence pairs with large relative time overlap of subtitle frames between source-language and target-language subtitles. This is necessary to ensure the quality of translation data: one needs groups of consecutive sentences in the target language where every sentence has a reliable translation. Table TABREF36 shows that the quality of the model trained on data which came from the parallel part is worse than the one trained on monolingual data. This indicates that requiring each sentence in a group to have a reliable translation changes the distribution of the data, which might be not beneficial for translation quality and provides extra motivation for using monolingual data. Learning Dynamics Let us now look into how the process of DocRepair training progresses. Figure FIGREF38 shows how the BLEU scores with the reference translation and with the baseline context-agnostic translation (i.e. the input for the DocRepair model) are changing during training. First, the model quickly learns to copy baseline translations: the BLEU score with the baseline is very high. Then it gradually learns to change them, which leads to an improvement in BLEU with the reference translation and a drop in BLEU with the baseline. Importantly, the model is reluctant to make changes: the BLEU score between translations of the converged model and the baseline is 82.5. We count the number of changed sentences in every 4-sentence fragment in the test set and plot the histogram in Figure FIGREF38. In over than 20$\%$ of the cases the model has not changed base translations at all. In almost $40\%$, it modified only one sentence and left the remaining 3 sentences unchanged. The model changed more than half sentences in a group in only $14\%$ of the cases. Several examples of the DocRepair translations are shown in Figure FIGREF43. Figure FIGREF42 shows how consistency scores are changing in training. For deixis, the model achieves the final quality quite quickly; for the rest, it needs a large number of training steps to converge. Related Work Our work is most closely related to two lines of research: automatic post-editing (APE) and document-level machine translation. Related Work ::: Automatic post-editing Our model can be regarded as an automatic post-editing system – a system designed to fix systematic MT errors that is decoupled from the main MT system. Automatic post-editing has a long history, including rule-based BIBREF17, statistical BIBREF18 and neural approaches BIBREF19, BIBREF20, BIBREF21. In terms of architectures, modern approaches use neural sequence-to-sequence models, either multi-source architectures that consider both the original source and the baseline translation BIBREF19, BIBREF20, or monolingual repair systems, as in BIBREF21, which is concurrent work to ours. 
True post-editing datasets are typically small and expensive to create BIBREF22, hence synthetic training data has been created that uses original monolingual data as output for the sequence-to-sequence model, paired with an automatic back-translation BIBREF23 and/or round-trip translation as its input(s) BIBREF19, BIBREF21. While previous work on automatic post-editing operated on the sentence level, the main novelty of this work is that our DocRepair model operates on groups of sentences and is thus able to fix consistency errors caused by the context-agnostic baseline MT system. We consider this strategy of sentence-level baseline translation and context-aware monolingual repair attractive when parallel document-level data is scarce. For training, the DocRepair model only requires monolingual document-level data. While we create synthetic training data via round-trip translation similarly to earlier work BIBREF19, BIBREF21, note that we purposefully use sentence-level MT systems for this to create the types of consistency errors that we aim to fix with the context-aware DocRepair model. Not all types of consistency errors that we want to fix emerge from a round-trip translation, so access to parallel document-level data can be useful (Section SECREF34). Related Work ::: Document-level NMT Neural models of MT that go beyond the sentence-level are an active research area BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF10, BIBREF9, BIBREF11. Typically, the main MT system is modified to take additional context as its input. One limitation of these approaches is that they assume that parallel document-level training data is available. Closest to our work are two-pass models for document-level NMT BIBREF24, BIBREF11, where a second, context-aware model takes the translation and hidden representations of the sentence-level first-pass model as its input. The second-pass model can in principle be trained on a subset of the parallel training data BIBREF11, somewhat relaxing the assumption that all training data is at the document level. Our work is different from this previous work in two main respects. Firstly, we show that consistency can be improved with only monolingual document-level training data. Secondly, the DocRepair model is decoupled from the first-pass MT system, which improves its portability. Conclusions We introduce the first approach to context-aware machine translation using only monolingual document-level data. We propose a monolingual DocRepair model to correct inconsistencies between sentence-level translations. The model performs automatic post-editing on a sequence of sentence-level translations, refining translations of sentences in context of each other. Our approach results in substantial improvements in translation quality as measured by BLEU, targeted contrastive evaluation of several discourse phenomena and human evaluation. Moreover, we perform error analysis and detect which discourse phenomena are hard to capture using only monolingual document-level data. While in the current work we used text fragments of 4 sentences, in future work we would like to consider longer contexts. Acknowledgments We would like to thank the anonymous reviewers for their comments. The authors also thank David Talbot and Yandex Machine Translation team for helpful discussions and inspiration. Ivan Titov acknowledges support of the European Research Council (ERC StG BroadSem 678254) and the Dutch National Science Foundation (NWO VIDI 639.022.518). 
Rico Sennrich acknowledges support from the Swiss National Science Foundation (105212_169888), the European Union’s Horizon 2020 research and innovation programme (grant agreement no 825460), and the Royal Society (NAF\R1\180122).
how many humans evaluated the results?
Unanswerable
3,711
qasper
4k
10pt 1.10pt [ Characterizing Political Fake News in Twitter by its Meta-DataJulio Amador Díaz LópezAxel Oehmichen Miguel Molina-Solana( j.amador, axelfrancois.oehmichen11, [email protected] ) Imperial College London This article presents a preliminary approach towards characterizing political fake news on Twitter through the analysis of their meta-data. In particular, we focus on more than 1.5M tweets collected on the day of the election of Donald Trump as 45th president of the United States of America. We use the meta-data embedded within those tweets in order to look for differences between tweets containing fake news and tweets not containing them. Specifically, we perform our analysis only on tweets that went viral, by studying proxies for users' exposure to the tweets, by characterizing accounts spreading fake news, and by looking at their polarization. We found significant differences on the distribution of followers, the number of URLs on tweets, and the verification of the users. ] Introduction While fake news, understood as deliberately misleading pieces of information, have existed since long ago (e.g. it is not unusual to receive news falsely claiming the death of a celebrity), the term reached the mainstream, particularly so in politics, during the 2016 presidential election in the United States BIBREF0 . Since then, governments and corporations alike (e.g. Google BIBREF1 and Facebook BIBREF2 ) have begun efforts to tackle fake news as they can affect political decisions BIBREF3 . Yet, the ability to define, identify and stop fake news from spreading is limited. Since the Obama campaign in 2008, social media has been pervasive in the political arena in the United States. Studies report that up to 62% of American adults receive their news from social media BIBREF4 . The wide use of platforms such as Twitter and Facebook has facilitated the diffusion of fake news by simplifying the process of receiving content with no significant third party filtering, fact-checking or editorial judgement. Such characteristics make these platforms suitable means for sharing news that, disguised as legit ones, try to confuse readers. Such use and their prominent rise has been confirmed by Craig Silverman, a Canadian journalist who is a prominent figure on fake news BIBREF5 : “In the final three months of the US presidential campaign, the top-performing fake election news stories on Facebook generated more engagement than the top stories from major news outlet”. Our current research hence departs from the assumption that social media is a conduit for fake news and asks the question of whether fake news (as spam was some years ago) can be identified, modelled and eventually blocked. In order to do so, we use a sample of more that 1.5M tweets collected on November 8th 2016 —election day in the United States— with the goal of identifying features that tweets containing fake news are likely to have. As such, our paper aims to provide a preliminary characterization of fake news in Twitter by looking into meta-data embedded in tweets. Considering meta-data as a relevant factor of analysis is in line with findings reported by Morris et al. BIBREF6 . We argue that understanding differences between tweets containing fake news and regular tweets will allow researchers to design mechanisms to block fake news in Twitter. 
Specifically, our goals are: 1) compare the characteristics of tweets labelled as containing fake news to tweets labelled as not containing them, 2) characterize, through their meta-data, viral tweets containing fake news and the accounts from which they originated, and 3) determine the extent to which tweets containing fake news expressed polarized political views. For our study, we used the number of retweets to single-out those that went viral within our sample. Tweets within that subset (viral tweets hereafter) are varied and relate to different topics. We consider that a tweet contains fake news if its text falls within any of the following categories described by Rubin et al. BIBREF7 (see next section for the details of such categories): serious fabrication, large-scale hoaxes, jokes taken at face value, slanted reporting of real facts and stories where the truth is contentious. The dataset BIBREF8 , manually labelled by an expert, has been publicly released and is available to researchers and interested parties. From our results, the following main observations can be made: Our findings resonate with similar work done on fake news such as the one from Allcot and Gentzkow BIBREF9 . Therefore, even if our study is a preliminary attempt at characterizing fake news on Twitter using only their meta-data, our results provide external validity to previous research. Moreover, our work not only stresses the importance of using meta-data, but also underscores which parameters may be useful to identify fake news on Twitter. The rest of the paper is organized as follows. The next section briefly discusses where this work is located within the literature on fake news and contextualizes the type of fake news we are studying. Then, we present our hypotheses, the data, and the methodology we follow. Finally, we present our findings, conclusions of this study, and future lines of work. Defining Fake news Our research is connected to different strands of academic knowledge related to the phenomenon of fake news. In relation to Computer Science, a recent survey by Conroy and colleagues BIBREF10 identifies two popular approaches to single-out fake news. On the one hand, the authors pointed to linguistic approaches consisting in using text, its linguistic characteristics and machine learning techniques to automatically flag fake news. On the other, these researchers underscored the use of network approaches, which make use of network characteristics and meta-data, to identify fake news. With respect to social sciences, efforts from psychology, political science and sociology, have been dedicated to understand why people consume and/or believe misinformation BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . Most of these studies consistently reported that psychological biases such as priming effects and confirmation bias play an important role in people ability to discern misinformation. In relation to the production and distribution of fake news, a recent paper in the field of Economics BIBREF9 found that most fake news sites use names that resemble those of legitimate organizations, and that sites supplying fake news tend to be short-lived. These authors also noticed that fake news items are more likely shared than legitimate articles coming from trusted sources, and they tend to exhibit a larger level of polarization. The conceptual issue of how to define fake news is a serious and unresolved issue. 
As the focus of our work is not to shed light on this issue, we rely on work by other authors to describe what we consider fake news. In particular, we use the categorization provided by Rubin et al. BIBREF7. The five categories they described, together with illustrative examples from our dataset, are as follows: Research Hypotheses Previous works in the area (presented in the section above) suggest that there may be important determinants for the adoption and diffusion of fake news. Our hypotheses build on them and identify three important dimensions that may help distinguish fake news from legitimate information: Taking those three dimensions into account, we propose the following hypotheses about the features that we believe can help to distinguish tweets containing fake news from those not containing them. They will later be tested on our collected dataset. Exposure. Characterization. Polarization. Data and Methodology For this study, we collected publicly available tweets using Twitter's public API. Given the nature of the data, it is important to emphasize that such tweets are subject to Twitter's terms and conditions, which indicate that users consent to the collection, transfer, manipulation, storage, and disclosure of data. Therefore, we do not expect ethical, legal, or social implications from the usage of the tweets. Our data was collected using search terms related to the presidential election held in the United States on November 8th 2016. In particular, we queried the filter endpoint of Twitter's streaming API using the following hashtags and user handles: #MyVote2016, #ElectionDay, #electionnight, @realDonaldTrump and @HillaryClinton. The data collection ran for just one day (Nov 8th 2016). One straightforward way of sharing information on Twitter is the retweet functionality, which enables a user to share an exact copy of a tweet with his followers. Among the reasons for retweeting, Boyd et al. BIBREF15 reported the will to: 1) spread tweets to a new audience, 2) show one's role as a listener, and 3) agree with someone or validate the thoughts of others. As indicated, our initial interest is to characterize tweets containing fake news that went viral (as they reach a wider audience, they are the most harmful ones), and to understand how they differ from other viral tweets (those that do not contain fake news). For our study, we consider that a tweet went viral if it was retweeted more than 1000 times. Once we had the dataset of viral tweets, we eliminated duplicates (some of the tweets were collected several times because they matched several handles) and an expert manually inspected the text field within the tweets to label them as containing fake news or not (according to the characterization presented before). This annotated dataset BIBREF8 is publicly available and can be freely reused. Finally, we use the following fields within tweets (from the ones returned by Twitter's API) to compare their distributions and look for differences between viral tweets containing fake news and viral tweets not containing fake news: In the following section, we provide graphical descriptions of the distribution of each of the identified attributes for the two sets of tweets (those labelled as containing fake news and those labelled as not containing them). Where appropriate, we normalized and/or took logarithms of the data for better representation.
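To make this comparison step concrete, the following is a minimal sketch (not the authors' code) of how viral tweets can be selected and one meta-data field compared between the two groups. The file name, the column names and the 'fake_news' label are hypothetical stand-ins for the fields returned by Twitter's API and for the expert annotation.

```python
# Minimal sketch (not the authors' code): select viral tweets and compare one
# meta-data field between the fake-news and non-fake-news groups.
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp

tweets = pd.read_json("election_day_tweets.jsonl", lines=True)  # hypothetical export

# Keep viral tweets only (retweeted more than 1000 times) and drop duplicates.
viral = tweets[tweets["retweet_count"] > 1000].drop_duplicates(subset="id")

fake = viral[viral["fake_news"]]       # 'fake_news' is the expert's manual label
other = viral[~viral["fake_news"]]

# Log-transform heavy-tailed counts, as described above, and run the two-sample
# Kolmogorov-Smirnov test used in the following section.
stat, p_value = ks_2samp(np.log1p(fake["user_followers_count"]),
                         np.log1p(other["user_followers_count"]))
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.4f}")
```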
To gain a better understanding of the significance of those differences, we use the Kolmogorov-Smirnov test with the null hypothesis that both distributions are equal. Results The sample collected consisted of 1 785 855 tweets published by 848 196 different users. Within our sample, we identified 1327 tweets that went viral (retweeted more than 1000 times by the 8th of November 2016), produced by 643 users. This small subset of viral tweets was retweeted on 290 841 occasions in the observed time-window. The 1327 `viral' tweets were manually annotated as containing fake news or not. The annotation was carried out by a single person in order to obtain a consistent annotation throughout the dataset. Out of those 1327 tweets, we identified 136 as potentially containing fake news (according to the categories previously described), and the rest were classified as `not containing fake news'. Note that the categorization is far from perfect given the ambiguity of fake news themselves and the human judgement involved in the process of categorization. Because of this, we do not claim that this dataset can be considered a ground truth. The following results detail characteristics of these tweets along the previously mentioned dimensions. Table TABREF23 reports the actual differences (together with their associated p-values) between the distributions of viral tweets containing fake news and viral tweets not containing them for every variable considered. Exposure Figure FIGREF24 shows that, in contrast to other kinds of viral tweets, those containing fake news were created more recently. As such, Twitter users were exposed to fake news related to the election for a shorter period of time. However, in terms of retweets, Figure FIGREF25 shows no apparent difference between tweets containing fake news and those not containing them. That is confirmed by the Kolmogorov-Smirnov test, which does not reject the hypothesis that the associated distributions are equal. In relation to the number of favourites, users that generated at least one viral tweet containing fake news appear to have, on average, fewer favourites than users that did not. Figure FIGREF26 shows the distribution of favourites. Despite the apparent visual differences, the differences are not statistically significant. Finally, the number of hashtags used in viral fake news appears to be larger than in other viral tweets. Figure FIGREF27 shows the density distribution of the number of hashtags used. However, once again, we were not able to find any statistical difference between the average number of hashtags in a viral tweet and the average number of hashtags in viral fake news. Characterization We found that 82 users within our sample were spreading fake news (i.e. they produced at least one tweet which was labelled as fake news). Out of those, 34 had verified accounts, and the rest were unverified. Of the 48 unverified accounts, 6 have been suspended by Twitter at the date of writing, 3 tried to imitate legitimate accounts of others, and 4 accounts have already been deleted. Figure FIGREF28 shows the proportion of verified to unverified accounts for viral tweets (containing fake news vs. not containing fake news). From the chart, it is clear that there is a higher chance of fake news coming from unverified accounts. Turning to friends, accounts distributing fake news appear to have, on average, the same number of friends as those distributing tweets with no fake news.
However, the density distribution of friends from the accounts (Figure FIGREF29) shows that there is indeed a statistically significant difference between their distributions. If we take into consideration the number of followers, accounts generating viral tweets with fake news do have a very different distribution on this dimension, compared to those accounts generating viral tweets with no fake news (see Figure FIGREF30). In fact, such differences are statistically significant. A useful representation of friends and followers is the friends/followers ratio. Figures FIGREF31 and FIGREF32 show this representation. Notice that accounts spreading viral tweets with fake news have, on average, a larger friends/followers ratio, whereas the ratio for accounts not generating fake news is more evenly distributed. With respect to the number of mentions, Figure FIGREF33 shows that viral tweets labelled as containing fake news appear to use mentions to other users less frequently than viral tweets not containing fake news. In other words, tweets containing fake news mostly contain one mention, whereas other tweets tend to have two. Such differences are statistically significant. The analysis (Figure FIGREF34) of the presence of media in the tweets in our dataset shows that tweets labelled as not containing fake news appear to present more media elements than those labelled as fake news. However, the difference is not statistically significant. On the other hand, Figure FIGREF35 shows that viral tweets containing fake news appear to include more URLs to other sites than viral tweets that do not contain fake news. In fact, the difference between the two distributions is statistically significant (assuming INLINEFORM0 ). Polarization Finally, manual inspection of the text field of those viral tweets labelled as containing fake news shows that 117 of those tweets expressed support for Donald Trump, while only 8 supported Hillary Clinton. The remaining tweets contained fake news related to other topics, not expressing support for any of the candidates. Discussion As a summary, and constrained by our existing dataset, we made the following observations regarding differences between viral tweets labelled as containing fake news and viral tweets labelled as not containing them: These findings (related to our initial hypotheses in Table TABREF44) clearly suggest that there are specific pieces of meta-data about tweets that may allow the identification of fake news. One such parameter is the time of exposure. Viral tweets containing fake news are shorter-lived than those containing other types of content. This notion seems to resonate with our finding that a number of accounts spreading fake news had already been deleted or suspended by Twitter by the time of writing. If one considers that researchers using different data have found similar results BIBREF9, it appears that the lifetime of accounts, together with the age of the questioned viral content, could be useful to identify fake news. In the light of this finding, newly created accounts should probably be put under higher scrutiny than older ones. This, in fact, would provide a useful a-priori bias for a Bayesian classifier. Accounts spreading fake news appear to have a larger friends/followers ratio (i.e. they have, on average, the same number of friends but a smaller number of followers) than those spreading viral content only.
Together with the fact that, on average, tweets containing fake news have more URLs than other viral tweets, it is possible to hypothesize that both the friends/followers ratio of the account producing a viral tweet and the number of URLs contained in such a tweet could be useful to single out fake news on Twitter. Not only that, but our finding related to the number of URLs is in line with intuitions behind the incentives to create fake news commonly found in the literature BIBREF9 (in particular that of obtaining revenue through click-through advertising). Finally, it is interesting to notice that the content of viral fake news was highly polarized. This finding is also in line with those of Allcott and Gentzkow BIBREF9. This feature suggests that textual sentiment analysis of the content of tweets (as most researchers do), together with the above-mentioned parameters from meta-data, may prove useful for identifying fake news. Conclusions With the election of Donald Trump as President of the United States, fake news has become a broadly known phenomenon that is getting tremendous attention from governments and media companies. We have presented a preliminary study on the meta-data of a publicly available dataset of tweets that became viral during the day of the 2016 US presidential election. Our aim is to advance the understanding of which features might be characteristic of viral tweets containing fake news in comparison with viral tweets without fake news. We believe that the only way to automatically identify those deceitful tweets (i.e. those containing fake news) is by actually understanding and modelling them. Only then can the processes of tagging and blocking these tweets be successfully automated. In the same way that spam was fought, we anticipate that fake news will undergo a similar evolution, with social platforms implementing tools to deal with them. With most works so far focusing on the actual content of the tweets, ours is a novel attempt from a different, but also complementary, angle. Within the dataset used, we found there are differences around exposure, characteristics of accounts spreading fake news and the tone of the content. Those findings suggest that it is indeed possible to model and automatically detect fake news. We plan to replicate and validate our experiments on an extended sample of tweets (up to 4 months after the US election), and to test the predictive power of the features we found relevant within our sample. Author Disclosure Statement No competing financial interests exist.
What is their definition of tweets going viral?
Viral tweets are the ones that are retweeted more than 1000 times
3,144
qasper
4k
Introduction Nowadays, deep learning techniques outperform conventional methods in most speech-related tasks. Training robust deep neural networks for each task depends on the availability of powerful processing GPUs, as well as standard, large-scale datasets. In text-independent speaker verification, large-scale datasets are available, thanks to the NIST SRE evaluations and other data collection projects such as VoxCeleb BIBREF0. In text-dependent speaker recognition, experiments with end-to-end architectures conducted on large proprietary databases have demonstrated their superiority over traditional approaches BIBREF1. Yet, contrary to text-independent speaker recognition, text-dependent speaker recognition lacks large-scale publicly available databases. The two most well-known datasets are probably RSR2015 BIBREF2 and RedDots BIBREF3. The former contains speech data collected from 300 individuals in a controlled manner, while the latter is used primarily for evaluation rather than training, due to its small number of speakers (only 64). Motivated by this lack of large-scale datasets for text-dependent speaker verification, we chose to proceed with the collection of the DeepMine dataset, which we expect to become a standard benchmark for the task. Apart from speaker recognition, large amounts of training data are also required for training automatic speech recognition (ASR) systems. Such datasets should not only be large in size; they should also be characterized by high variability with respect to speakers, age and dialects. While several datasets with these properties are available for languages like English, Mandarin and French, this is not the case for several other languages, such as Persian. To this end, we proceeded with collecting a large-scale dataset suitable for building robust ASR models in Persian. The main goal of the DeepMine project was to collect speech from at least a few thousand speakers, enabling research and development of deep learning methods. The project started at the beginning of 2017, and after designing the database and developing the Android and server applications, the data collection began in the middle of 2017. The project finished at the end of 2018 and the cleaned-up and final version of the database was released at the beginning of 2019. In BIBREF4, the running project and its data collection scenarios were described, along with some preliminary results and statistics. In this paper, we announce the final and cleaned-up version of the database, describe its different parts and provide various evaluation setups for each part. Finally, since the database was designed mainly for text-dependent speaker verification purposes, some baseline results are reported for this task on the official evaluation setups. Additional baseline results are also reported for Persian speech recognition. However, due to space limitations in this paper, the baseline results are not reported for all the database parts and conditions. They will be defined and reported in the database technical documentation and in a future journal paper. Data Collection DeepMine is publicly available to everybody, with a variety of licenses for different users. It was collected using crowdsourcing BIBREF4. The data collection was done using an Android application. Each respondent installed the application on his/her personal device and recorded several phrases in different sessions.
The Android application performed various checks on each utterance and, if it passed all of them, the respondent was directed to the next phrase. For more information about the data collection scenario, please refer to BIBREF4. Data Collection ::: Post-Processing In order to clean up the database, the main post-processing step was to filter out problematic utterances. Possible problems include speaker word insertions (e.g. repeating some part of a phrase), deletions, substitutions, and involuntary disfluencies. To detect these, we implemented an alignment stage, similar to the second alignment stage in the LibriSpeech project BIBREF5. In this method, a custom decoding graph was generated for each phrase. The decoding graph allows for word skipping and word insertion in the phrase. For the text-dependent and text-prompted parts of the database, such errors are not allowed. Hence, any utterances with errors were removed from the enrollment and test lists. For the speech recognition part, the sub-part of the utterance which is correctly aligned to the corresponding transcription is kept. After the cleaning step, around 190 thousand utterances with full transcription and 10 thousand with sub-part alignment remained in the database. Data Collection ::: Statistics After processing the database and removing problematic respondents and utterances, 1969 respondents remained in the database, with 1149 of them being male and 820 female. 297 of the respondents could not read English and have therefore read only the Persian prompts. About 13200 sessions were recorded by females and about 9500 sessions by males, i.e. women are over-represented in terms of sessions, even though their number is 17% smaller than that of males. Other useful statistics related to the database are shown in Table TABREF4. The latest status of the database, as well as other related and useful information about its availability, can be found on its website, together with a limited number of samples. DeepMine Database Parts The DeepMine database consists of three parts. The first one contains fixed common phrases to perform text-dependent speaker verification. The second part consists of random sequences of words useful for text-prompted speaker verification, and the last part includes phrases with word- and phoneme-level transcription, useful for text-independent speaker verification using a random phrase (similar to Part4 of RedDots). This part can also serve for Persian ASR training. Each part is described in more detail below. Table TABREF11 shows the number of unique phrases in each part of the database. For the English text-dependent part, the following phrases were selected from part1 of the RedDots database, hence RedDots can be used as an additional training set for this part: “My voice is my password.” “OK Google.” “Artificial intelligence is for real.” “Actions speak louder than words.” “There is no such thing as a free lunch.” DeepMine Database Parts ::: Part1 - Text-dependent (TD) This part contains a set of fixed phrases which are used to verify speakers in text-dependent mode. Each speaker utters 5 Persian phrases, and if the speaker can read English, 5 phrases selected from Part1 of the RedDots database are also recorded. We have created three experimental setups with different numbers of speakers in the evaluation set.
For each setup, speakers with more recording sessions are included in the evaluation set and the rest of the speakers are used for training in the background set (in the database, all background sets are basically training data). The rows in Table TABREF13 correspond to the different experimental setups and show the numbers of speakers in each set. Note that, for English, we have filtered the (Persian native) speakers by the ability to read English. Therefore, there are fewer speakers in each set for English than for Persian. There is a small “dev” set in each setup which can be used for parameter tuning to prevent over-tuning on the evaluation set. For each experimental setup, we have defined several official trial lists with different numbers of enrollment utterances per trial in order to investigate the effects of having different amounts of enrollment data. All trials in one trial list have the same number of enrollment utterances (3 to 6) and only one test utterance. All enrollment utterances in a trial are taken from different consecutive sessions and the test utterance is taken from yet another session. From all the setups and conditions, the 100-spk with 3-session enrollment (3-sess) is considered the main evaluation condition. In Table TABREF14, the numbers of trials for Persian 3-sess are shown for the different types of trials in text-dependent speaker verification (SV). Note that for Imposter-Wrong (IW) trials (i.e. an imposter speaker pronouncing a wrong phrase), we merely create one wrong trial for each Imposter-Correct (IC) trial to limit the huge number of possible trials for this case. So, the numbers of trials for the IC and IW cases are the same. DeepMine Database Parts ::: Part2 - Text-prompted (TP) For this part, in each session, 3 random sequences of Persian month names are shown to the respondent in two modes: in the first mode, the sequence consists of all 12 months, which will be used for speaker enrollment. The second mode contains a sequence of 3 month names that will be used as a test utterance. In each set of 8 sessions received by a respondent from the server, there are 3 enrollment phrases of all 12 months (all in just one session), and $7 \times 3$ other test phrases, containing fewer words. For a respondent who can read English, 3 random sequences of English digits are also recorded in each session. In one of the sessions, these sequences contain all digits and the remaining ones contain only 4 digits. Similar to the text-dependent case, three experimental setups with different numbers of speakers in the evaluation set are defined (corresponding to the rows in Table TABREF16). However, a different strategy is used for defining trials: depending on the enrollment condition (1- to 3-sess), each trial is enrolled on the all-words utterances from 1 to 3 different sessions (i.e. 3 to 9 utterances). Further, we consider two conditions for test utterances: seq test utterances with only 3 or 4 words and full test utterances with all words (i.e. the same words as in enrollment but in a different order). From all setups and all conditions, the 100-spk with 1-session enrollment (1-sess) is considered the main evaluation condition for the text-prompted case. In Table TABREF16, the numbers of trials (summed over the seq and full conditions) for Persian 1-sess are shown for the different types of trials in text-prompted SV. Again, we just create one IW trial for each IC trial.
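As an illustration of the trial-type logic described above, the sketch below enumerates the four trial types and creates one Imposter-Wrong trial per Imposter-Correct trial. This is not the official protocol script: the data structures are hypothetical, and the constraint that enrollment utterances come from consecutive sessions and the test utterance from yet another session is omitted for brevity.

```python
# Illustrative sketch of trial-list construction. 'models' maps a model id to its
# (speaker, phrase) pair; 'tests' lists (speaker, phrase, utterance id) tuples.
import random

def build_trials(models, tests, seed=0):
    rng = random.Random(seed)
    trials = {"TC": [], "TW": [], "IC": [], "IW": []}
    for model_id, (spk, phrase) in models.items():
        for test_spk, test_phrase, utt in tests:
            if test_spk == spk and test_phrase == phrase:
                trials["TC"].append((model_id, utt))          # Target-Correct
            elif test_spk == spk:
                trials["TW"].append((model_id, utt))          # Target-Wrong
            elif test_phrase == phrase:
                trials["IC"].append((model_id, utt))          # Imposter-Correct
                # One Imposter-Wrong trial per IC trial: a wrong-phrase utterance
                # from the same imposter speaker, if one exists.
                wrong = [u for s, p, u in tests if s == test_spk and p != phrase]
                if wrong:
                    trials["IW"].append((model_id, rng.choice(wrong)))
    return trials
```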
DeepMine Database Parts ::: Part3 - Text-independent (TI) In this part, 8 Persian phrases that have already been transcribed at the phone level are displayed to the respondent. These phrases are chosen mostly from news and Persian Wikipedia. If the respondent is unable to read English, 8 other Persian phrases are prompted instead of the 5 fixed English phrases and the 3 random digit strings, so that each recording session contains exactly 24 phrases. This part can be useful for at least three potential applications. First, it can be used for text-independent speaker verification. The second application of this part (same as Part4 of RedDots) is text-prompted speaker verification using random text (instead of a random sequence of words). Finally, the third application is large vocabulary speech recognition in Persian (explained in the next sub-section). Based on the recording sessions, we created two experimental setups for speaker verification. In the first one, respondents with at least 17 recording sessions are included in the evaluation set, respondents with 16 sessions in the development set and the rest of the respondents in the background set (which can be used as training data). In the second setup, respondents with at least 8 sessions are included in the evaluation set, respondents with 6 or 7 sessions in the development set and the rest of the respondents in the background set. Table TABREF18 shows the numbers of speakers in each set of the database for the text-independent SV case. For text-independent SV, we have considered 4 scenarios for enrollment and 4 scenarios for test. The speaker can be enrolled using utterances from 1, 2 or 3 consecutive sessions (1sess to 3sess) or using 8 utterances from 8 different sessions. The test speech can be one utterance (1utt) for the short-duration scenario or all utterances in one session (1sess) for the long-duration case. In addition, test speech can be selected from the 5 English phrases for cross-language testing (enrollment using Persian utterances and test using English utterances). From all setups, 1sess-1utt and 1sess-1sess for the 438-spk set are considered the main evaluation setups for the text-independent case. Table TABREF19 shows the numbers of trials for these setups. For text-prompted SV with random text, the same setup as in the text-independent case, together with the corresponding utterance transcriptions, can be used. DeepMine Database Parts ::: Part3 - Speech Recognition As explained before, Part3 of the DeepMine database can be used for Persian read speech recognition. There are only a few databases for speech recognition in Persian BIBREF6, BIBREF7. Hence, this part can at least partly address this problem and enable robust speech recognition applications in Persian. Additionally, it can be used for speaker recognition applications, such as training deep neural networks (DNNs) for extracting bottleneck features BIBREF8, or for collecting sufficient statistics using DNNs for i-vector training. We have randomly selected 50 speakers (25 for each gender) as test speakers from all speakers in the database who have between 25 and 50 minutes of net speech (excluding silence parts). For each test speaker, the utterances in the first 5 sessions are included in the (small) test-set and the other utterances of the test speakers are considered as the large-test-set. The remaining utterances of the other speakers are included in the training set. The test-set, large-test-set and train-set contain 5.9, 28.5 and 450 hours of speech respectively.
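A rough sketch of the speaker-disjoint split described above is given below. The metadata file and its column names are assumptions made for illustration; they are not part of the released database tooling.

```python
# Rough sketch of the ASR train/test split: 25 test speakers per gender with
# 25-50 minutes of net speech, first 5 sessions to the small test set.
import pandas as pd

utts = pd.read_csv("part3_utterances.csv")  # assumed columns: speaker, gender, session, net_duration_s

# Net (silence-free) speech per speaker, in minutes.
per_spk = (utts.groupby(["speaker", "gender"])["net_duration_s"]
               .sum().div(60).reset_index(name="net_min"))
candidates = per_spk[(per_spk["net_min"] >= 25) & (per_spk["net_min"] <= 50)]

# 25 randomly selected test speakers per gender.
test_spk = set(candidates.groupby("gender", group_keys=False)
                         .apply(lambda g: g.sample(n=25, random_state=0))["speaker"])

is_test = utts["speaker"].isin(test_spk)
first5 = utts["session"] <= 5                 # assumes sessions are numbered from 1
test_set = utts[is_test & first5]             # small test set (about 5.9 h)
large_test_set = utts[is_test & ~first5]      # remaining utterances of test speakers
train_set = utts[~is_test]                    # all utterances of the other speakers
```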
There are about 8300 utterances in Part3 which contain only Persian full names (i.e. first and family name pairs). Each phrase consists of several full names, and their phoneme transcriptions were extracted automatically using a trained Grapheme-to-Phoneme (G2P) model. These utterances can be used to evaluate the performance of systems for name recognition, which is usually more difficult than normal speech recognition because of the lack of a reliable language model. Experiments and Results Due to space limitations, we present results only for Persian text-dependent speaker verification and speech recognition. Experiments and Results ::: Speaker Verification Experiments We conducted an experiment on the text-dependent speaker verification part of the database, using the i-vector based method proposed in BIBREF9, BIBREF10, and applied it to the Persian portion of Part1. In this experiment, 20-dimensional MFCC features along with first and second derivatives are extracted from 16 kHz signals using HTK BIBREF11, with 25 ms Hamming-windowed frames with 15 ms overlap. The reported results are obtained with a 400-dimensional gender-independent i-vector based system. The i-vectors are first length-normalized and are further normalized using phrase- and gender-dependent Regularized Within-Class Covariance Normalization (RWCCN) BIBREF10. Cosine distance is used to obtain speaker verification scores and phrase- and gender-dependent s-norm is used for normalizing the scores. For aligning speech frames to Gaussian components, monophone HMMs with 3 states and 8 Gaussian components in each state are used BIBREF10. We only model the phonemes which appear in the 5 Persian text-dependent phrases. For the speaker verification experiments, the results were reported in terms of Equal Error Rate (EER) and the Normalized Detection Cost Function as defined for NIST SRE08 ($\mathrm {NDCF_{0.01}^{min}}$) and NIST SRE10 ($\mathrm {NDCF_{0.001}^{min}}$). As shown in Table TABREF22, in text-dependent SV there are 4 types of trials: Target-Correct and Imposter-Correct refer to trials when the pass-phrase is uttered correctly by target and imposter speakers respectively, and in the same manner, Target-Wrong and Imposter-Wrong refer to trials when speakers uttered a wrong pass-phrase. In this paper, only the correct trials (i.e. Target-Correct as target trials vs Imposter-Correct as non-target trials) are considered for evaluating systems, as these have been shown to be the most challenging trials in text-dependent SV BIBREF8, BIBREF12. Table TABREF23 shows the results of the text-dependent experiments using the Persian 100-spk, 3-sess setup. For filtering trials, the respondents' mobile brand and model were used in this experiment. In the table, the first two letters in the filter notation relate to the target trials and the second two letters (i.e. the right side of the colon) relate to the non-target trials. For target trials, the first Y means that the enrollment and test utterances were recorded by the target speaker using devices of the same brand. The second Y means that both recordings were done using exactly the same device model. Similarly, the first Y for non-target trials means that the devices of the target and imposter speakers are from the same brand (i.e. manufacturer). The second Y means that, in addition to the same brand, both devices have the same model. So, the most difficult target trials are “NN”, where the speaker has used a different device at test time. In the same manner, the most difficult non-target trials, which should be rejected by the system, are “YY”, where the imposter speaker has used the same device model as the target speaker (note that this does not mean physically the same device, because each speaker participated in the project using a personal mobile device). Hence, the similarity in the recording channel makes rejection more difficult.
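Before turning to the results in Table TABREF23, the following sketch shows one way such device-based trial filters and the reported EER could be computed from per-trial scores. The file and column names are hypothetical, and the EER estimate uses a standard ROC-based approximation rather than the authors' exact scoring tools.

```python
# Minimal sketch (not the authors' evaluation scripts): each row of 'trials' is
# assumed to hold a cosine score, a boolean target label and the device brand/model
# used for enrollment and test.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_curve

def filter_code(row):
    same_brand = row["enroll_brand"] == row["test_brand"]
    same_model = row["enroll_model"] == row["test_model"]
    return ("Y" if same_brand else "N") + ("Y" if same_model else "N")

def eer(scores, labels):
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))
    return 0.5 * (fpr[idx] + fnr[idx])

trials = pd.read_csv("td_persian_100spk_3sess_trials.csv")   # hypothetical file
trials["code"] = trials.apply(filter_code, axis=1)

# Example: the "NN:YY" condition, i.e. mismatched devices for target trials and
# the same device model for non-target (imposter) trials.
subset = trials[(trials["is_target"] & (trials["code"] == "NN")) |
                (~trials["is_target"] & (trials["code"] == "YY"))]
print(f"EER = {100 * eer(subset['score'], subset['is_target']):.2f}%")
```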
The first row in Table TABREF23 shows the results for all trials. By comparing the results with the best published results on RSR2015 and RedDots BIBREF10, BIBREF8, BIBREF12, it is clear that the DeepMine database is more challenging than both the RSR2015 and RedDots databases. For RSR2015, the same i-vector/HMM-based method with both RWCCN and s-norm achieved an EER of less than 0.3% for both genders (Table VI in BIBREF10). The conventional Relevance MAP adaptation with HMM alignment, without applying any channel-compensation techniques (i.e. without applying RWCCN and s-norm, due to the lack of suitable training data), achieved an EER of around 1.5% on RedDots Part1 for males (Table XI in BIBREF10). It is worth noting that the EERs for the DeepMine database without any channel-compensation techniques are 2.1 and 3.7% for males and females respectively. One interesting advantage of the DeepMine database compared to both RSR2015 and RedDots is having several target speakers with more than one mobile device. This allows us to analyse the effects of channel compensation methods. The second row in Table TABREF23 corresponds to the most difficult trials, where the target trials come from mobile devices with different models while imposter trials come from the same device models. It is clear that severe degradation is caused by this kind of channel effect (i.e. decreasing within-speaker similarities while increasing between-speaker similarities), especially for females. The results in the third row show the condition when target speakers at test time use exactly the same device that was used for enrollment. Comparing this row with the results in the first row shows how much improvement can be achieved when exactly the same device is used by the target speaker. The results in the fourth row show the condition when imposter speakers also use the same device model at test time to fool the system. So, in this case, there is no device mismatch in any of the trials. By comparing the results with the third row, we can see how much degradation is caused if we only consider the non-target trials with the same device. The fifth row shows similar results when the imposter speakers use a device of the same brand as the target speaker but with a different model. Surprisingly, in this case, the degradation is negligible, which suggests that different models from the same brand (manufacturer) have different recording channel properties. The degraded female results in the sixth row, as compared to the third row, show the effect of using a different device model from the same brand for target trials. For males, the filters bring almost the same subsets of trials, which explains the very similar results in this case. Looking at the first two and the last row of Table TABREF23, one can notice the significantly worse performance obtained for the female trials as compared to males. Note that these three rows include target trials where the devices used for enrollment do not necessarily match the devices used for recording test utterances.
On the other hand, in rows 3 to 6, which exclude such mismatched trials, the performance for males and females is comparable. This suggests that the degraded results for females are caused by some problematic trials with device mismatch. The exact reason for this degradation is so far unclear and needs further investigation. In the last row of the table, the condition of the second row is relaxed: the target device should have a different model, possibly from the same brand, and the imposter device only needs to be from the same brand. In this case, as was expected, the performance degradation is smaller than in the second row. Experiments and Results ::: Speech Recognition Experiments In addition to speaker verification, we present several speech recognition experiments on Part3. The experiments were performed with the Kaldi toolkit BIBREF13. For training the HMM-based monophone model, only the 20 thousand shortest utterances are used, while for the other models the whole training data is used. The DNN-based acoustic model is a time-delay DNN with low-rank factorized layers and skip connections, without i-vector adaptation (a modified network from one of the best performing LibriSpeech recipes). The network is shown in Table TABREF25: there are 16 F-TDNN layers, with dimension 1536 and linear bottleneck layers of dimension 256. The acoustic model is trained for 10 epochs using lattice-free maximum mutual information (LF-MMI) with cross-entropy regularization BIBREF14. Re-scoring is done using a pruned trigram language model and the size of the dictionary is around 90,000 words. Table TABREF26 shows the results in terms of word error rate (WER) for the different evaluated methods. As can be seen, the created database can be used to train well-performing and practically usable Persian ASR models. Conclusions In this paper, we have described the final version of a large speech corpus, the DeepMine database. It has been collected using crowdsourcing and, to the best of our knowledge, it is the largest public text-dependent and text-prompted speaker verification database in two languages: Persian and English. In addition, it is the largest text-independent speaker verification evaluation database, making it suitable to robustly evaluate state-of-the-art methods in different conditions. Alongside these appealing properties, it comes with phone-level transcriptions, making it suitable to train deep neural network models for Persian speech recognition. We provided several evaluation protocols for each part of the database. The protocols allow researchers to investigate the performance of different methods in various scenarios and study the effects of channels, duration and phrase text on the performance. We also provide two test sets for speech recognition: one normal test set with a few minutes of speech for each speaker and one large test set with more speech (30 minutes on average) that can be used for any speaker adaptation method. As baseline results, we reported the performance of an i-vector/HMM based method on the Persian text-dependent part. Moreover, we conducted speech recognition experiments using conventional HMM-based methods, as well as a state-of-the-art deep neural network based method using the Kaldi toolkit, with promising performance. The text-dependent results have shown that the DeepMine database is more challenging than the RSR2015 and RedDots databases. Acknowledgments The data collection project was mainly supported by Sharif DeepMine company.
The work on the paper was supported by Czech National Science Foundation (GACR) project "NEUREM3" No. 19-26934X and the National Programme of Sustainability (NPU II) project "IT4Innovations excellence in science - LQ1602".
what is the source of the data?
Android application
3,795
qasper
4k
Introduction With the increasing popularity of the Internet, online texts provided by social media platforms (e.g. Twitter) and news media sites (e.g. Google News) have become important sources of real-world events. Therefore, it is crucial to automatically extract events from online texts. Due to the high variety of events discussed online and the difficulty in obtaining annotated data for training, traditional template-based or supervised learning approaches for event extraction are no longer applicable in dealing with online texts. Nevertheless, newsworthy events are often discussed by many tweets or online news articles. Therefore, the same event could be mentioned by a high volume of redundant tweets or news articles. This property has inspired the research community to devise clustering-based models BIBREF0, BIBREF1, BIBREF2 to discover new or previously unidentified events without extracting structured representations. To extract structured representations of events, such as who did what, when, where and why, Bayesian approaches have made some progress. Assuming that each document is assigned to a single event, which is modeled as a joint distribution over the named entities, the date and the location of the event, and the event-related keywords, Zhou et al. zhou2014simple proposed an unsupervised Latent Event Model (LEM) for open-domain event extraction. To address the limitation that LEM requires the number of events to be pre-set, Zhou et al. zhou2017event further proposed the Dirichlet Process Event Mixture Model (DPEMM), in which the number of events can be learned automatically from data. However, both LEM and DPEMM have two limitations: (1) they assume that all words in a document are generated from a single event which can be represented by a quadruple <entity, location, keyword, date>. However, long texts such as news articles often describe multiple events, which clearly violates this assumption; (2) during the inference process of both approaches, the Gibbs sampler needs to compute the conditional posterior distribution and assign an event to each document. This is time-consuming and takes a long time to converge. To deal with these limitations, in this paper, we propose the Adversarial-neural Event Model (AEM) based on adversarial training for open-domain event extraction. The principal idea is to use a generator network to learn the projection function between the document-event distribution and four event-related word distributions (entity distribution, location distribution, keyword distribution and date distribution). Instead of providing an analytic approximation, AEM uses a discriminator network to discriminate between documents reconstructed from latent events and the original input documents. This essentially helps the generator to construct a more realistic document from random noise drawn from a Dirichlet distribution. Due to the flexibility of neural networks, the generator is capable of learning complicated nonlinear distributions. The supervision signal provided by the discriminator helps the generator to capture the event-related patterns. Furthermore, the discriminator also provides low-dimensional discriminative features which can be used to visualize documents and events. The main contributions of the paper are summarized below: Related Work Our work is related to two lines of research, event extraction and Generative Adversarial Nets.
Event Extraction Recently there has been much interest in event extraction from online texts, and approaches can be categorized as domain-specific and open-domain event extraction. Domain-specific event extraction often focuses on specific types of events (e.g. sports events or city events). Panem et al. panem2014structured devised a novel algorithm to extract attribute-value pairs and mapped them to manually generated schemes for extracting natural disaster events. Similarly, to extract city-traffic related events, Anantharam et al. anantharam2015extracting viewed the task as a sequential tagging problem and proposed an approach based on conditional random fields. Zhang zhang2018event proposed an event extraction approach based on imitation learning, in particular inverse reinforcement learning. Open-domain event extraction aims to extract events without limiting the specific types of events. To analyze individual messages and induce a canonical value for each event, Benson et al. benson2011event proposed an approach based on a structured graphical model. By representing an event with a binary tuple consisting of a named entity and a date, Ritter et al. ritter2012open employed statistics to measure the strength of association between a named entity and a date. The proposed system relies on a supervised labeler trained on annotated data. In BIBREF1, Abdelhaq et al. developed a real-time event extraction system called EvenTweet, in which each event is represented as a triple of time, location and keywords. To extract more information, Wang et al. wang2015seeft developed a system employing the links in tweets and combining tweets with linked articles to identify events. Xia et al. xia2015new combined texts with location information to detect events with low spatial and temporal deviations. Zhou et al. zhou2014simple,zhou2017event represented an event as a quadruple and proposed two Bayesian models to extract events from tweets. Generative Adversarial Nets As a neural-based generative model, Generative Adversarial Nets BIBREF3 have been extensively researched in the natural language processing (NLP) community. For text generation, the sequence generative adversarial network (SeqGAN) proposed in BIBREF4 incorporated a policy gradient strategy to optimize the generation process. Based on the policy gradient, Lin et al. lin2017adversarial proposed RankGAN to capture the rich structures of language by ranking and analyzing a collection of human-written and machine-written sentences. To overcome mode collapse when dealing with discrete data, Fedus et al. fedus2018maskgan proposed MaskGAN, which used an actor-critic conditional GAN to fill in missing text conditioned on the surrounding context. Along this line, Wang et al. wang2018sentigan proposed SentiGAN to generate texts of different sentiment labels. Besides, Li et al. li2018learning improved the performance of semi-supervised text classification using adversarial training, and BIBREF5, BIBREF6 designed GAN-based models for distant supervision relation extraction. Although various GAN-based approaches have been explored for many applications, none of these approaches tackles open-domain event extraction from online texts. We propose a novel GAN-based event extraction model called AEM.
Compared with the previous models, AEM has the following differences: (1) Unlike most GAN-based text generation approaches, a generator network is employed in AEM to learn the projection function between an event distribution and the event-related word distributions (entity, location, keyword, date). The learned generator captures event-related patterns rather than generating text sequences; (2) Different from LEM and DPEMM, AEM uses a generator network to capture the event-related patterns and is able to mine events from different text sources (short and long). Moreover, unlike traditional inference procedures, such as the Gibbs sampling used in LEM and DPEMM, AEM can extract events more efficiently thanks to CUDA acceleration; (3) The discriminative features learned by the discriminator of AEM provide a straightforward way to visualize the extracted events. Methodology We describe the Adversarial-neural Event Model (AEM) in this section. An event is represented as a quadruple < INLINEFORM0 >, where INLINEFORM1 stands for non-location named entities, INLINEFORM2 for a location, INLINEFORM3 for event-related keywords, INLINEFORM4 for a date, and each component in the tuple is represented by component-specific representative words. AEM consists of three components: (1) the document representation module, as shown at the top of Figure FIGREF4 , defines a document representation approach which converts an input document from the online text corpus into INLINEFORM0 which captures the key event elements; (2) the generator INLINEFORM1 , as shown in the lower-left part of Figure FIGREF4 , generates a fake document INLINEFORM2 , constituted by four multinomial distributions, using an event distribution INLINEFORM3 drawn from a Dirichlet distribution as input; (3) the discriminator INLINEFORM4 , as shown in the lower-right part of Figure FIGREF4 , distinguishes the real documents from the fake ones, and its output is subsequently employed as a learning signal to update INLINEFORM5 and INLINEFORM6 . The details of each component are presented below. Document Representation Each document INLINEFORM0 in a given corpus INLINEFORM1 is represented as a concatenation of 4 multinomial distributions, which are the entity distribution ( INLINEFORM2 ), location distribution ( INLINEFORM3 ), keyword distribution ( INLINEFORM4 ) and date distribution ( INLINEFORM5 ) of the document. As the four distributions are calculated in a similar way, we only describe the computation of the entity distribution below as an example. The entity distribution INLINEFORM0 is represented by a normalized INLINEFORM1 -dimensional vector weighted by TF-IDF, and it is calculated as: INLINEFORM2 where INLINEFORM0 is the pseudo corpus constructed by removing all non-entity words from INLINEFORM1 , INLINEFORM2 is the total number of distinct entities in the corpus, INLINEFORM3 denotes the number of times the INLINEFORM4 -th entity appears in document INLINEFORM5 , INLINEFORM6 represents the number of documents in the corpus, INLINEFORM7 is the number of documents that contain the INLINEFORM8 -th entity, and the obtained INLINEFORM9 denotes the relevance between the INLINEFORM10 -th entity and document INLINEFORM11 . Similarly, the location distribution INLINEFORM0 , keyword distribution INLINEFORM1 and date distribution INLINEFORM2 of INLINEFORM3 can be calculated in the same way, and the dimensions of these distributions are denoted as INLINEFORM4 , INLINEFORM5 and INLINEFORM6 , respectively.
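As a concrete illustration of the TF-IDF weighting described above, the sketch below computes an entity distribution for one document using a standard TF-IDF formulation (term frequency times log inverse document frequency, normalized to sum to one). The exact formulation used in AEM may differ in details, and the function and variable names are ours.

```python
# Illustrative sketch of a TF-IDF-weighted, normalized entity distribution.
# Entities are assumed to have been pre-extracted per document (the "pseudo corpus"
# with all non-entity words removed).
import math
from collections import Counter

def entity_distribution(doc_entities, corpus_entities):
    """doc_entities: entity tokens of one document; corpus_entities: one such list per document."""
    vocab = sorted({e for doc in corpus_entities for e in doc})
    n_docs = len(corpus_entities)
    df = Counter(e for doc in corpus_entities for e in set(doc))  # document frequency
    tf = Counter(doc_entities)                                    # term frequency
    weights = [tf[e] * math.log(n_docs / df[e]) if tf[e] else 0.0 for e in vocab]
    total = sum(weights) or 1.0
    return [w / total for w in weights]   # normalized so the vector sums to one
```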
Finally, each document INLINEFORM7 in the corpus is represented by a INLINEFORM8 -dimensional ( INLINEFORM9 = INLINEFORM10 + INLINEFORM11 + INLINEFORM12 + INLINEFORM13 ) vector INLINEFORM14 by concatenating the four computed distributions. Network Architecture The generator network INLINEFORM0 is designed to learn the projection function between the document-event distribution INLINEFORM1 and the four document-level word distributions (entity distribution, location distribution, keyword distribution and date distribution). More concretely, INLINEFORM0 consists of a INLINEFORM1 -dimensional document-event distribution layer, a INLINEFORM2 -dimensional hidden layer and a INLINEFORM3 -dimensional event-related word distribution layer. Here, INLINEFORM4 denotes the event number, INLINEFORM5 is the number of units in the hidden layer, and INLINEFORM6 is the vocabulary size, which equals INLINEFORM7 + INLINEFORM8 + INLINEFORM9 + INLINEFORM10 . As shown in Figure FIGREF4 , INLINEFORM11 first takes a random document-event distribution INLINEFORM12 as input. To model the multinomial property of the document-event distribution, INLINEFORM13 is drawn from a Dirichlet distribution parameterized with INLINEFORM14 , which is formulated as: DISPLAYFORM0 where INLINEFORM0 is the hyper-parameter of the Dirichlet distribution, INLINEFORM1 is the number of events, which has to be set in AEM, INLINEFORM2 , INLINEFORM3 represents the proportion of event INLINEFORM4 in the document and INLINEFORM5 . Subsequently, INLINEFORM0 transforms INLINEFORM1 into a INLINEFORM2 -dimensional hidden space using a linear layer followed by layer normalization, and the transformation is defined as: DISPLAYFORM0 where INLINEFORM0 represents the weight matrix of the hidden layer, INLINEFORM1 denotes the bias term, INLINEFORM2 is the parameter of the LeakyReLU activation and is set to 0.1, INLINEFORM3 and INLINEFORM4 denote the normalized hidden states and the outputs of the hidden layer, and INLINEFORM5 represents the layer normalization. Then, to project INLINEFORM0 into the four document-level event-related word distributions ( INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 shown in Figure FIGREF4 ), four subnets (each containing a linear layer, a batch normalization layer and a softmax layer) are employed in INLINEFORM5 . The exact transformation is based on the formulas below: DISPLAYFORM0 where INLINEFORM0 denotes the softmax layer, INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 denote the weight matrices of the linear layers in the subnets, INLINEFORM5 , INLINEFORM6 , INLINEFORM7 and INLINEFORM8 represent the corresponding bias terms, and INLINEFORM9 , INLINEFORM10 , INLINEFORM11 and INLINEFORM12 are state vectors. INLINEFORM13 , INLINEFORM14 , INLINEFORM15 and INLINEFORM16 denote the generated entity distribution, location distribution, keyword distribution and date distribution, respectively, that correspond to the given event distribution INLINEFORM17 . Each dimension represents the relevance between the corresponding entity/location/keyword/date term and the input event distribution. Finally, the four generated distributions are concatenated to represent the generated document INLINEFORM0 corresponding to the input INLINEFORM1 : DISPLAYFORM0
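A minimal PyTorch sketch of the generator just described is given below, under the reading that the hidden layer applies a linear transform, layer normalization and then the LeakyReLU activation with slope 0.1, followed by one linear + batch-norm + softmax subnet per word distribution. The class and variable names, the Dirichlet hyper-parameter and the vocabulary sizes in the usage example are placeholders rather than values from the paper (except the hidden size of 200 and the 25 events, which match the experimental settings reported later).

```python
# Illustrative PyTorch sketch of the AEM generator (our own naming, not the
# authors' released code).
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, n_events, n_hidden, dims):
        # dims = (n_entities, n_locations, n_keywords, n_dates)
        super().__init__()
        self.hidden = nn.Linear(n_events, n_hidden)
        self.norm = nn.LayerNorm(n_hidden)
        self.act = nn.LeakyReLU(0.1)
        # One subnet (linear + batch norm + softmax) per word distribution.
        self.subnets = nn.ModuleList(
            nn.Sequential(nn.Linear(n_hidden, d), nn.BatchNorm1d(d), nn.Softmax(dim=-1))
            for d in dims
        )

    def forward(self, theta):
        h = self.act(self.norm(self.hidden(theta)))
        # Concatenate the four generated distributions into one fake document.
        return torch.cat([net(h) for net in self.subnets], dim=-1)

# Usage: sample document-event distributions from a Dirichlet prior and generate
# fake documents; alpha and the vocabulary sizes below are placeholders.
n_events, batch = 25, 32
alpha = torch.full((n_events,), 1.0)
theta = torch.distributions.Dirichlet(alpha).sample((batch,))
fake_docs = Generator(n_events, 200, dims=(5000, 500, 8000, 300))(theta)
```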
The discriminator network INLINEFORM0 is designed as a fully-connected network which contains an input layer, a discriminative feature layer (the discriminative features are employed for event visualization) and an output layer. In AEM, INLINEFORM1 uses the fake document INLINEFORM2 and the real document INLINEFORM3 as input and outputs the signal INLINEFORM4 to indicate the source of the input data (a lower value denotes that INLINEFORM5 is prone to predict the input data as a fake document, and vice versa). As has previously been discussed in BIBREF7 , BIBREF8 , the Lipschitz continuity of the INLINEFORM0 network is crucial to the training of GAN-based approaches. To ensure the Lipschitz continuity of INLINEFORM1 , we employ the spectral normalization technique BIBREF9 . More concretely, for each linear layer INLINEFORM2 (the bias term is omitted for simplicity) in INLINEFORM3 , the weight matrix INLINEFORM4 is normalized by INLINEFORM5 . Here, INLINEFORM6 is the spectral norm of the weight matrix INLINEFORM7 with the definition below: DISPLAYFORM0 which is equivalent to the largest singular value of INLINEFORM0 . The weight matrix INLINEFORM1 is then normalized using: DISPLAYFORM0 Obviously, the normalized weight matrix INLINEFORM0 satisfies INLINEFORM1 and thus ensures the Lipschitz continuity of the INLINEFORM2 network BIBREF9 . To reduce the high cost of computing the spectral norm INLINEFORM3 using singular value decomposition at each iteration, we follow BIBREF10 and employ the power iteration method to estimate INLINEFORM4 instead. With this substitution, the spectral norm can be estimated with very little additional computational time. Objective and Training Procedure The real document INLINEFORM0 and the fake document INLINEFORM1 shown in Figure FIGREF4 can be viewed as random samples from two distributions INLINEFORM2 and INLINEFORM3 , each of which is a joint distribution composed of four Dirichlet distributions (corresponding to the entity distribution, location distribution, keyword distribution and date distribution). The training objective of AEM is to let the distribution INLINEFORM4 (produced by the INLINEFORM5 network) approximate the real data distribution INLINEFORM6 as closely as possible. Comparing the different GAN losses, Kurach kurach2018gan takes a sober view of the current state of GANs and suggests that the Jensen-Shannon divergence used in BIBREF3 performs more stably than alternative objectives. Besides, Kurach also advocates that the gradient penalty (GP) regularization devised in BIBREF8 will further improve the stability of the model. Thus, the objective function of the proposed AEM is defined as: DISPLAYFORM0 where INLINEFORM0 denotes the discriminator loss, INLINEFORM1 represents the gradient penalty regularization loss, INLINEFORM2 is the gradient penalty coefficient which trades off the two components of the objective, INLINEFORM3 is obtained by sampling uniformly along a straight line between INLINEFORM4 and INLINEFORM5 , and INLINEFORM6 denotes the corresponding distribution. The training procedure of AEM is presented in Algorithm SECREF15 , where INLINEFORM0 is the event number, INLINEFORM1 denotes the number of discriminator iterations per generator iteration, INLINEFORM2 is the batch size, INLINEFORM3 represents the learning rate, INLINEFORM4 and INLINEFORM5 are the hyper-parameters of Adam BIBREF11 , and INLINEFORM6 denotes INLINEFORM7 . In this paper, we set INLINEFORM8 , INLINEFORM9 , INLINEFORM10 . Moreover, INLINEFORM11 , INLINEFORM12 and INLINEFORM13 are set to 0.0002, 0.5 and 0.999, respectively. [!h] Training procedure for AEM [1] INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 the trained INLINEFORM7 and INLINEFORM8 .
Initial INLINEFORM9 parameters INLINEFORM10 and INLINEFORM11 parameter INLINEFORM12 INLINEFORM13 has not converged INLINEFORM14 INLINEFORM15 Sample INLINEFORM16 , Sample a random INLINEFORM17 Sample a random number INLINEFORM18 INLINEFORM19 INLINEFORM20 INLINEFORM21 INLINEFORM22 INLINEFORM23 INLINEFORM24 Sample INLINEFORM25 noise INLINEFORM26 INLINEFORM27 Event Generation After model training, the generator INLINEFORM0 has learned the mapping function between the document-event distribution and the document-level event-related word distributions (entity, location, keyword and date). In other words, with an event distribution INLINEFORM1 as input, INLINEFORM2 can generate the corresponding entity distribution, location distribution, keyword distribution and date distribution. In AEM, we employ an event seed INLINEFORM0 , an INLINEFORM1 -dimensional one-hot vector, to generate the event-related word distributions. For example, in a ten-event setting, INLINEFORM2 represents the event seed of the first event. With the event seed INLINEFORM3 as input, the corresponding distributions can be generated by INLINEFORM4 based on the equation below: DISPLAYFORM0 where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 denote the entity distribution, location distribution, keyword distribution and date distribution of the first event, respectively. Experiments In this section, we first describe the datasets and baseline approaches used in our experiments and then present the experimental results. Experimental Setup To validate the effectiveness of AEM for extracting events from social media (e.g. Twitter) and news media sites (e.g. Google News), three datasets (the FSD BIBREF12 , Twitter, and Google datasets) are employed. Details are summarized below: FSD dataset (social media) is the first story detection dataset, containing 2,499 tweets. We filter out events mentioned in fewer than 15 tweets, since events mentioned in very few tweets are less likely to be significant. The final dataset contains 2,453 tweets annotated with 20 events. Twitter dataset (social media) is collected from tweets published in December 2010 using the Twitter streaming API. It contains 1,000 tweets annotated with 20 events. Google dataset (news articles) is a subset of the GDELT Event Database INLINEFORM0 ; documents are retrieved by event-related words. For example, documents which contain `malaysia', `airline', `search' and `plane' are retrieved for the event MH370. By combining the documents related to 30 events, the dataset contains 11,909 news articles. We choose the following three models as the baselines: K-means is a well-known data clustering algorithm; we implement the algorithm using the sklearn toolbox and represent documents using bag-of-words weighted by TF-IDF. LEM BIBREF13 is a Bayesian modeling approach for open-domain event extraction. It treats an event as a latent variable and models the generation of an event as a joint distribution of its individual event elements. We implement the algorithm with the default configuration. DPEMM BIBREF14 is a non-parametric mixture model for event extraction. It addresses the limitation of LEM that the number of events has to be known beforehand. We implement the model with the default configuration. For the social media text corpora (FSD and Twitter), a named entity tagger specifically built for Twitter is used to extract named entities, including locations, from tweets.
A Twitter Part-of-Speech (POS) tagger BIBREF15 is used for POS tagging, and only words tagged as nouns, verbs and adjectives are retained as keywords. For the Google dataset, we use the Stanford Named Entity Recognizer to identify the named entities (organization, location and person). Since `date' information is not provided in the Google dataset, we further divide the non-location named entities into two categories (`person' and `organization') and employ a quadruple <organization, location, person, keyword> to denote an event in news articles. We also remove common stopwords and only keep the recognized named entities and the tokens which are verbs, nouns or adjectives.

Experimental Results

To evaluate the performance of the proposed approach, we use evaluation metrics such as precision, recall and F-measure. Precision is defined as the proportion of correctly identified events out of the model-generated events. Recall is defined as the proportion of correctly identified true events. For calculating the precision of the 4-tuple, we use the following criteria: (1) Do the entity/organization, location, date/person and keyword that we have extracted refer to the same event? (2) If the extracted representation contains keywords, are they informative enough to tell us what happened? Table TABREF35 shows the event extraction results on the three datasets. The statistics are obtained with the default parameter setting: INLINEFORM0 is set to 5, the number of hidden units INLINEFORM1 is set to 200, and INLINEFORM2 contains three fully-connected layers. The event numbers INLINEFORM3 for the three datasets are set to 25, 25 and 35, respectively. Examples of extracted events are shown in Table TABREF36. It can be observed that K-means performs the worst over all three datasets. On the social media datasets, AEM outperforms both LEM and DPEMM, by 6.5% and 1.7% respectively in F-measure on the FSD dataset, and by 4.4% and 3.7% in F-measure on the Twitter dataset. We can also observe that, apart from K-means, all the approaches perform worse on the Twitter dataset compared to FSD, possibly due to the limited size of the Twitter dataset. Moreover, on the Google dataset, the proposed AEM performs significantly better than LEM and DPEMM. It improves upon LEM by 15.5% and upon DPEMM by more than 30% in F-measure. This is because: (1) the assumption made by LEM and DPEMM that all words in a document are generated from a single event is not suitable for long text such as news articles; (2) DPEMM generates too many irrelevant events, which leads to a very low precision score. Overall, we see the superior performance of AEM across all datasets, with more significant improvement on the Google dataset (long text). We next visualize the detected events based on the discriminative features learned by the trained INLINEFORM0 network in AEM. The t-SNE BIBREF16 visualization results on the datasets are shown in Figure FIGREF19. For clarity, each subplot is plotted on a subset of the dataset containing ten randomly selected events. It can be observed that documents describing the same event have been grouped into the same cluster.
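A minimal sketch of how such a projection can be produced with scikit-learn is given below; the feature matrix here is a random stand-in for the representations taken from the trained discriminator.

```python
# Illustrative sketch: project document-level features to 2-D with t-SNE
# and colour points by their gold event label. "features" stands in for the
# discriminative representations learned by the trained discriminator.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_events(features, event_labels):
    # features: (n_documents, n_dims) array; event_labels: (n_documents,) ints
    projection = TSNE(n_components=2, random_state=0).fit_transform(features)
    plt.scatter(projection[:, 0], projection[:, 1], c=event_labels, s=10, cmap="tab10")
    plt.title("t-SNE projection of document representations")
    plt.show()

# Example with random stand-in features for 100 documents and 10 events.
rng = np.random.default_rng(0)
plot_events(rng.normal(size=(100, 64)), rng.integers(0, 10, size=100))
```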
To further evaluate whether varying the parameters INLINEFORM0 (the number of discriminator iterations per generator iteration), INLINEFORM1 (the number of units in the hidden layer) and the structure of the generator INLINEFORM2 impacts the extraction performance, additional experiments have been conducted on the Google dataset, with INLINEFORM3 set to 5, 7 and 10, INLINEFORM4 set to 100, 150 and 200, and three INLINEFORM5 structures (3, 4 and 5 layers). The comparison results on precision, recall and F-measure are shown in Figure FIGREF20. From the results, it can be observed that AEM with the 5-layer generator performs the best and achieves 96.7% in F-measure, while the worst F-measure obtained by AEM is 85.7%. Overall, AEM outperforms all compared approaches across various parameter settings, showing relatively stable performance. Finally, we compare in Figure FIGREF37 the training time required for each model, excluding the constant time required by each model to load the data. We observe that K-means runs fastest among the four approaches. Both LEM and DPEMM need to sample the event allocation for each document and update the relevant counts during Gibbs sampling, which is time-consuming. AEM only requires a fraction of the training time compared to LEM and DPEMM. Moreover, on a larger dataset such as the Google dataset, AEM is far more efficient than LEM and DPEMM.

Conclusions and Future Work

In this paper, we have proposed a novel approach based on adversarial training to extract structured representations of events from online text. The experimental comparison with state-of-the-art methods shows that AEM achieves improved extraction performance, especially on long text corpora, with an improvement of 15% observed in F-measure. AEM requires only a fraction of the training time of existing Bayesian graphical modeling approaches. In future work, we will explore incorporating external knowledge (e.g. word relatedness contained in word embeddings) into the learning framework for event extraction. Exploring nonparametric neural event extraction approaches and detecting the evolution of events over time from news articles are other promising future directions.

Acknowledgments

We would like to thank the anonymous reviewers for their valuable comments and helpful suggestions. This work was funded by the National Key Research and Development Program of China (2016YFC1306704), the National Natural Science Foundation of China (61772132), and the Natural Science Foundation of Jiangsu Province of China (BK20161430).
Introduction Privacy policies are the documents which disclose the ways in which a company gathers, uses, shares and manages a user's data. As legal documents, they function using the principle of notice and choice BIBREF0, where companies post their policies, and theoretically, users read the policies and decide to use a company's products or services only if they find the conditions outlined in its privacy policy acceptable. Many legal jurisdictions around the world accept this framework, including the United States and the European Union BIBREF1, BIBREF2. However, the legitimacy of this framework depends upon users actually reading and understanding privacy policies to determine whether company practices are acceptable to them BIBREF3. In practice this is seldom the case BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. This is further complicated by the highly individual and nuanced compromises that users are willing to make with their data BIBREF11, discouraging a `one-size-fits-all' approach to notice of data practices in privacy documents. With devices constantly monitoring our environment, including our personal space and our bodies, lack of awareness of how our data is being used easily leads to problematic situations where users are outraged by information misuse, but companies insist that users have consented. The discovery of increasingly egregious uses of data by companies, such as the scandals involving Facebook and Cambridge Analytica BIBREF12, have further brought public attention to the privacy concerns of the internet and ubiquitous computing. This makes privacy a well-motivated application domain for NLP researchers, where advances in enabling users to quickly identify the privacy issues most salient to them can potentially have large real-world impact. [1]https://play.google.com/store/apps/details?id=com.gotokeep.keep.intl [2]https://play.google.com/store/apps/details?id=com.viber.voip [3]A question might not have any supporting evidence for an answer within the privacy policy. Motivated by this need, we contribute PrivacyQA, a corpus consisting of 1750 questions about the contents of privacy policies, paired with over 3500 expert annotations. The goal of this effort is to kickstart the development of question-answering methods for this domain, to address the (unrealistic) expectation that a large population should be reading many policies per day. In doing so, we identify several understudied challenges to our ability to answer these questions, with broad implications for systems seeking to serve users' information-seeking intent. By releasing this resource, we hope to provide an impetus to develop systems capable of language understanding in this increasingly important domain. Related Work Prior work has aimed to make privacy policies easier to understand. Prescriptive approaches towards communicating privacy information BIBREF21, BIBREF22, BIBREF23 have not been widely adopted by industry. Recently, there have been significant research effort devoted to understanding privacy policies by leveraging NLP techniques BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28, especially by identifying specific data practices within a privacy policy. We adopt a personalized approach to understanding privacy policies, that allows users to query a document and selectively explore content salient to them. Most similar is the PolisisQA corpus BIBREF29, which examines questions users ask corporations on Twitter. 
Our approach differs in several ways: 1) The PrivacyQA dataset is larger, containing 10x as many questions and answers. 2) Answers are formulated by domain experts with legal training. 3) PrivacyQA includes diverse question types, including unanswerable and subjective questions. Our work is also related to reading comprehension in the open domain, which is frequently based upon Wikipedia passages BIBREF16, BIBREF17, BIBREF15, BIBREF30 and news articles BIBREF20, BIBREF31, BIBREF32. Table.TABREF4 presents the desirable attributes our dataset shares with past approaches. This work is also tied into research in applying NLP approaches to legal documents BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF37, BIBREF38, BIBREF39. While privacy policies have legal implications, their intended audience consists of the general public rather than individuals with legal expertise. This arrangement is problematic because the entities that write privacy policies often have different goals than the audience. feng2015applying, tan-EtAl:2016:P16-1 examine question answering in the insurance domain, another specialized domain similar to privacy, where the intended audience is the general public. Data Collection We describe the data collection methodology used to construct PrivacyQA. With the goal of achieving broad coverage across application types, we collect privacy policies from 35 mobile applications representing a number of different categories in the Google Play Store. One of our goals is to include both policies from well-known applications, which are likely to have carefully-constructed privacy policies, and lesser-known applications with smaller install bases, whose policies might be considerably less sophisticated. Thus, setting 5 million installs as a threshold, we ensure each category includes applications with installs on both sides of this threshold. All policies included in the corpus are in English, and were collected before April 1, 2018, predating many companies' GDPR-focused BIBREF41 updates. We leave it to future studies BIBREF42 to look at the impact of the GDPR (e.g., to what extent GDPR requirements contribute to making it possible to provide users with more informative answers, and to what extent their disclosures continue to omit issues that matter to users). Data Collection ::: Crowdsourced Question Elicitation The intended audience for privacy policies consists of the general public. This informs the decision to elicit questions from crowdworkers on the contents of privacy policies. We choose not to show the contents of privacy policies to crowdworkers, a procedure motivated by a desire to avoid inadvertent biases BIBREF43, BIBREF44, BIBREF45, BIBREF46, BIBREF47, and encourage crowdworkers to ask a variety of questions beyond only asking questions based on practices described in the document. Instead, crowdworkers are presented with public information about a mobile application available on the Google Play Store including its name, description and navigable screenshots. Figure FIGREF9 shows an example of our user interface. Crowdworkers are asked to imagine they have access to a trusted third-party privacy assistant, to whom they can ask any privacy question about a given mobile application. We use the Amazon Mechanical Turk platform and recruit crowdworkers who have been conferred “master” status and are located within the United States of America. 
Turkers are asked to provide five questions per mobile application, and are paid $2 per assignment, taking ~eight minutes to complete the task. Data Collection ::: Answer Selection To identify legally sound answers, we recruit seven experts with legal training to construct answers to Turker questions. Experts identify relevant evidence within the privacy policy, as well as provide meta-annotation on the question's relevance, subjectivity, OPP-115 category BIBREF49, and how likely any privacy policy is to contain the answer to the question asked. Data Collection ::: Analysis Table.TABREF17 presents aggregate statistics of the PrivacyQA dataset. 1750 questions are posed to our imaginary privacy assistant over 35 mobile applications and their associated privacy documents. As an initial step, we formulate the problem of answering user questions as an extractive sentence selection task, ignoring for now background knowledge, statistical data and legal expertise that could otherwise be brought to bear. The dataset is partitioned into a training set featuring 27 mobile applications and 1350 questions, and a test set consisting of 400 questions over 8 policy documents. This ensures that documents in training and test splits are mutually exclusive. Every question is answered by at least one expert. In addition, in order to estimate annotation reliability and provide for better evaluation, every question in the test set is answered by at least two additional experts. Table TABREF14 describes the distribution over first words of questions posed by crowdworkers. We also observe low redundancy in the questions posed by crowdworkers over each policy, with each policy receiving ~49.94 unique questions despite crowdworkers independently posing questions. Questions are on average 8.4 words long. As declining to answer a question can be a legally sound response but is seldom practically useful, answers to questions where a minority of experts abstain to answer are filtered from the dataset. Privacy policies are ~3000 words long on average. The answers to the question asked by the users typically have ~100 words of evidence in the privacy policy document. Data Collection ::: Analysis ::: Categories of Questions Questions are organized under nine categories from the OPP-115 Corpus annotation scheme BIBREF49: First Party Collection/Use: What, why and how information is collected by the service provider Third Party Sharing/Collection: What, why and how information shared with or collected by third parties Data Security: Protection measures for user information Data Retention: How long user information will be stored User Choice/Control: Control options available to users User Access, Edit and Deletion: If/how users can access, edit or delete information Policy Change: Informing users if policy information has been changed International and Specific Audiences: Practices pertaining to a specific group of users Other: General text, contact information or practices not covered by other categories. For each question, domain experts indicate one or more relevant OPP-115 categories. We mark a category as relevant to a question if it is identified as such by at least two annotators. If no such category exists, the category is marked as `Other' if atleast one annotator has identified the `Other' category to be relevant. If neither of these conditions is satisfied, we label the question as having no agreement. The distribution of questions in the corpus across OPP-115 categories is as shown in Table.TABREF16. 
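The category-aggregation rule just described can be sketched as follows (an illustrative implementation of the rule as stated; function and variable names are ours).

```python
# Illustrative sketch of the category-aggregation rule described above:
# a category is relevant if at least two annotators chose it; otherwise the
# question falls back to `Other' if at least one annotator chose `Other';
# otherwise it is marked as having no agreement.
from collections import Counter

def aggregate_categories(annotations):
    # annotations: list of category sets, one set per annotator
    counts = Counter(cat for cats in annotations for cat in set(cats))
    relevant = {cat for cat, n in counts.items() if n >= 2}
    if relevant:
        return relevant
    if counts.get("Other", 0) >= 1:
        return {"Other"}
    return {"No agreement"}

# Example: three annotators labelling one question.
print(aggregate_categories([
    {"First Party Collection/Use"},
    {"First Party Collection/Use", "Data Security"},
    {"Data Retention"},
]))  # -> {'First Party Collection/Use'}
```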
First party and third party related questions are the largest categories, forming nearly 66.4% of all questions asked to the privacy assistant. Data Collection ::: Analysis ::: Answer Validation When do experts disagree? We would like to analyze the reasons for potential disagreement on the annotation task, to ensure disagreements arise due to valid differences in opinion rather than lack of adequate specification in annotation guidelines. It is important to note that the annotators are experts rather than crowdworkers. Accordingly, their judgements can be considered valid, legally-informed opinions even when their perspectives differ. For the sake of this question we randomly sample 100 instances in the test data and analyze them for likely reasons for disagreements. We consider a disagreement to have occurred when more than one expert does not agree with the majority consensus. By disagreement we mean there is no overlap between the text identified as relevant by one expert and another. We find that the annotators agree on the answer for 74% of the questions, even if the supporting evidence they identify is not identical i.e full overlap. They disagree on the remaining 26%. Sources of apparent disagreement correspond to situations when different experts: have differing interpretations of question intent (11%) (for example, when a user asks 'who can contact me through the app', the questions admits multiple interpretations, including seeking information about the features of the app, asking about first party collection/use of data or asking about third party collection/use of data), identify different sources of evidence for questions that ask if a practice is performed or not (4%), have differing interpretations of policy content (3%), identify a partial answer to a question in the privacy policy (2%) (for example, when the user asks `who is allowed to use the app' a majority of our annotators decline to answer, but the remaining annotators highlight partial evidence in the privacy policy which states that children under the age of 13 are not allowed to use the app), and other legitimate sources of disagreement (6%) which include personal subjective views of the annotators (for example, when the user asks `is my DNA information used in any way other than what is specified', some experts consider the boilerplate text of the privacy policy which states that it abides to practices described in the policy document as sufficient evidence to answer this question, whereas others do not). Experimental Setup We evaluate the ability of machine learning methods to identify relevant evidence for questions in the privacy domain. We establish baselines for the subtask of deciding on the answerability (§SECREF33) of a question, as well as the overall task of identifying evidence for questions from policies (§SECREF37). We describe aspects of the question that can render it unanswerable within the privacy domain (§SECREF41). Experimental Setup ::: Answerability Identification Baselines We define answerability identification as a binary classification task, evaluating model ability to predict if a question can be answered, given a question in isolation. This can serve as a prior for downstream question-answering. We describe three baselines on the answerability task, and find they considerably improve performance over a majority-class baseline. SVM: We define 3 sets of features to characterize each question. 
The first is a simple bag-of-words set of features over the question (SVM-BOW), the second is bag-of-words features of the question as well as length of the question in words (SVM-BOW + LEN), and lastly we extract bag-of-words features, length of the question in words as well as part-of-speech tags for the question (SVM-BOW + LEN + POS). This results in vectors of 200, 201 and 228 dimensions respectively, which are provided to an SVM with a linear kernel. CNN: We utilize a CNN neural encoder for answerability prediction. We use GloVe word embeddings BIBREF50, and a filter size of 5 with 64 filters to encode questions. BERT: BERT BIBREF51 is a bidirectional transformer-based language-model BIBREF52. We fine-tune BERT-base on our binary answerability identification task with a learning rate of 2e-5 for 3 epochs, with a maximum sequence length of 128. Experimental Setup ::: Privacy Question Answering Our goal is to identify evidence within a privacy policy for questions asked by a user. This is framed as an answer sentence selection task, where models identify a set of evidence sentences from all candidate sentences in each policy. Experimental Setup ::: Privacy Question Answering ::: Evaluation Metric Our evaluation metric for answer-sentence selection is sentence-level F1, implemented similar to BIBREF30, BIBREF16. Precision and recall are implemented by measuring the overlap between predicted sentences and sets of gold-reference sentences. We report the average of the maximum F1 from each n$-$1 subset, in relation to the heldout reference. Experimental Setup ::: Privacy Question Answering ::: Baselines We describe baselines on this task, including a human performance baseline. No-Answer Baseline (NA) : Most of the questions we receive are difficult to answer in a legally-sound way on the basis of information present in the privacy policy. We establish a simple baseline to quantify the effect of identifying every question as unanswerable. Word Count Baseline : To quantify the effect of using simple lexical matching to answer the questions, we retrieve the top candidate policy sentences for each question using a word count baseline BIBREF53, which counts the number of question words that also appear in a sentence. We include the top 2, 3 and 5 candidates as baselines. BERT: We implement two BERT-based baselines BIBREF51 for evidence identification. First, we train BERT on each query-policy sentence pair as a binary classification task to identify if the sentence is evidence for the question or not (Bert). We also experiment with a two-stage classifier, where we separately train the model on questions only to predict answerability. At inference time, if the answerable classifier predicts the question is answerable, the evidence identification classifier produces a set of candidate sentences (Bert + Unanswerable). Human Performance: We pick each reference answer provided by an annotator, and compute the F1 with respect to the remaining references, as described in section 4.2.1. Each reference answer is treated as the prediction, and the remaining n-1 answers are treated as the gold reference. The average of the maximum F1 across all reference answers is computed as the human baseline. Results and Discussion The results of the answerability baselines are presented in Table TABREF31, and on answer sentence selection in Table TABREF32. We observe that bert exhibits the best performance on a binary answerability identification task. 
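As an aside, the sentence-level F1 described above can be sketched as follows; this is our own illustration of the metric as described, not the authors' implementation, and the handling of empty predictions is an assumption.

```python
# Illustrative sketch of sentence-level F1 with multiple gold references:
# precision/recall are computed from the overlap between predicted sentences
# and each gold reference, and the maximum over references is reported,
# in line with the leave-one-out evaluation described above.
def sentence_f1(predicted, gold_references):
    # predicted: set of sentence ids; gold_references: list of sets of sentence ids
    best = 0.0
    for gold in gold_references:
        if not predicted and not gold:
            best = max(best, 1.0)  # assumption: empty prediction matches empty reference
            continue
        overlap = len(predicted & gold)
        if overlap == 0:
            continue
        precision = overlap / len(predicted)
        recall = overlap / len(gold)
        best = max(best, 2 * precision * recall / (precision + recall))
    return best

# Example: the prediction overlaps one of two references.
print(sentence_f1({3, 5}, [{5, 6}, {10}]))  # 0.5
```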
Most baselines, however, considerably exceed the performance of a majority-class baseline. This suggests that the question itself carries considerable information about its likely answerability within this domain. Table TABREF32 describes the performance of our baselines on the answer sentence selection task. The No-answer (NA) baseline performs at 28 F1, providing a lower bound on performance at this task. We observe that our best-performing baseline, Bert + Unanswerable, achieves an F1 of 39.8. This suggests that BERT is capable of making some progress towards answering questions in this difficult domain, while still leaving considerable headroom for improvement to reach human performance. The performance of Bert + Unanswerable suggests that incorporating information about answerability can help in this difficult domain. We examine this challenging phenomenon of unanswerability further below.

Results and Discussion ::: Error Analysis

Disagreements are analyzed based on the OPP-115 categories of each question (Table TABREF34). We compare our best-performing BERT variant against the NA model and human performance. We observe significant room for improvement across all categories of questions, but especially for the first party, third party and data retention categories. We analyze the performance of our strongest BERT variant to identify classes of errors and directions for future improvement (Table 8). We observe that a majority of answerability mistakes made by the BERT model are questions which are in fact answerable but are identified as unanswerable by BERT; BERT makes 124 such mistakes on the test set. We collect expert judgments on relevance, subjectivity, silence, and how likely the question is to be answered from the privacy policy. We find that most of these mistakes concern relevant questions. However, many of them were identified as subjective by the annotators, and at least one annotator marked 19 of these questions as having no answer within the privacy policy. Only 6 of these questions were unexpected or do not usually have an answer in privacy policies. These findings suggest that a more nuanced understanding of answerability might help improve model performance in this challenging domain.

Results and Discussion ::: What makes Questions Unanswerable?

We further ask legal experts to identify potential causes of unanswerability of questions. This analysis has considerable implications. While past work BIBREF17 has treated unanswerable questions as homogeneous, a question answering system might wish to treat different categories of `unanswerable' questions differently. The following factors were identified as playing a role in unanswerability: Incomprehensibility: the question is incomprehensible to the extent that its meaning is not intelligible. Relevance: is this question within the scope of what could be answered by reading the privacy policy? Ill-formedness: is this question ambiguous or vague? An ambiguous statement will typically contain expressions that can refer to multiple potential explanations, whereas a vague statement carries a concept with an unclear or soft definition. Silence: other policies answer this type of question but this one does not. Atypicality: the question is of a nature such that it is unlikely for any privacy policy to have an answer to it. Our experts attempt to identify the different `unanswerable' factors for all 573 such questions in the corpus.
4.18% of the questions were identified as being incomprehensible (for example, `any difficulties to occupy the privacy assistant'). Amongst the comprehendable questions, 50% were identified as likely to have an answer within the privacy policy, 33.1% were identified as being privacy-related questions but not within the scope of a privacy policy (e.g., 'has Viber had any privacy breaches in the past?') and 16.9% of questions were identified as completely out-of-scope (e.g., `'will the app consume much space?'). In the questions identified as relevant, 32% were ill-formed questions that were phrased by the user in a manner considered vague or ambiguous. Of the questions that were both relevant as well as `well-formed', 95.7% of the questions were not answered by the policy in question but it was reasonable to expect that a privacy policy would contain an answer. The remaining 4.3% were described as reasonable questions, but of a nature generally not discussed in privacy policies. This suggests that the answerability of questions over privacy policies is a complex issue, and future systems should consider each of these factors when serving user's information seeking intent. We examine a large-scale dataset of “natural” unanswerable questions BIBREF54 based on real user search engine queries to identify if similar unanswerability factors exist. It is important to note that these questions have previously been filtered, according to a criteria for bad questions defined as “(questions that are) ambiguous, incomprehensible, dependent on clear false presuppositions, opinion-seeking, or not clearly a request for factual information.” Annotators made the decision based on the content of the question without viewing the equivalent Wikipedia page. We randomly sample 100 questions from the development set which were identified as unanswerable, and find that 20% of the questions are not questions (e.g., “all I want for christmas is you mariah carey tour”). 12% of questions are unlikely to ever contain an answer on Wikipedia, corresponding closely to our atypicality category. 3% of questions are unlikely to have an answer anywhere (e.g., `what guides Santa home after he has delivered presents?'). 7% of questions are incomplete or open-ended (e.g., `the south west wind blows across nigeria between'). 3% of questions have an unresolvable coreference (e.g., `how do i get to Warsaw Missouri from here'). 4% of questions are vague, and a further 7% have unknown sources of error. 2% still contain false presuppositions (e.g., `what is the only fruit that does not have seeds?') and the remaining 42% do not have an answer within the document. This reinforces our belief that though they have been understudied in past work, any question answering system interacting with real users should expect to receive such unanticipated and unanswerable questions. Conclusion We present PrivacyQA, the first significant corpus of privacy policy questions and more than 3500 expert annotations of relevant answers. The goal of this work is to promote question-answering research in the specialized privacy domain, where it can have large real-world impact. Strong neural baselines on PrivacyQA achieve a performance of only 39.8 F1 on this corpus, indicating considerable room for future research. Further, we shed light on several important considerations that affect the answerability of questions. 
We hope this contribution leads to multidisciplinary efforts to precisely understand user intent and reconcile it with information in policy documents, from both the privacy and NLP communities. Acknowledgements This research was supported in part by grants from the National Science Foundation Secure and Trustworthy Computing program (CNS-1330596, CNS-1330214, CNS-15-13957, CNS-1801316, CNS-1914486, CNS-1914444) and a DARPA Brandeis grant on Personalized Privacy Assistants (FA8750-15-2-0277). The US Government is authorized to reproduce and distribute reprints for Governmental purposes not withstanding any copyright notation. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the NSF, DARPA, or the US Government. The authors would like to extend their gratitude to Elias Wright, Gian Mascioli, Kiara Pillay, Harrison Kay, Eliel Talo, Alexander Fagella and N. Cameron Russell for providing their valuable expertise and insight to this effort. The authors are also grateful to Eduard Hovy, Lorrie Cranor, Florian Schaub, Joel Reidenberg, Aditya Potukuchi and Igor Shalyminov for helpful discussions related to this work, and to the three anonymous reviewers of this draft for their constructive feedback. Finally, the authors would like to thank all crowdworkers who consented to participate in this study.
Introduction

Cyberbullying has been defined by the National Crime Prevention Council as the use of the Internet, cell phones or other devices to send or post text or images intended to hurt or embarrass another person. Various studies have estimated that between 10% and 40% of internet users are victims of cyberbullying BIBREF0. Effects of cyberbullying can range from temporary anxiety to suicide BIBREF1. Many high-profile incidents have emphasized the prevalence of cyberbullying on social media. Most recently, in October 2017, the Swedish model Arvida Byström was cyberbullied to the extent of receiving rape threats after she appeared in an advertisement with hairy legs. Detection of cyberbullying in social media is a challenging task. The definition of what constitutes cyberbullying is quite subjective. For example, frequent use of swear words might be considered bullying by the general population. However, for teen-oriented social media platforms such as Formspring, this does not necessarily indicate bullying (Table TABREF9). Across multiple SMPs, cyberbullies attack victims on different topics such as race, religion, and gender. Depending on the topic of cyberbullying, the vocabulary and perceived meaning of words vary significantly across SMPs. For example, in our experiments we found that for the word `fat', the most similar words in the Twitter dataset are `female' and `woman' (Table TABREF23). However, the other two datasets do not show such a particular bias against women. This platform-specific semantic similarity between words is a key aspect of cyberbullying detection across SMPs. Style of communication also varies significantly across SMPs. For example, Twitter posts are short and lack anonymity, whereas posts on Q&A-oriented SMPs are long and offer the option of anonymity (Table TABREF7). Fast-evolving words and hashtags in social media make it difficult to detect cyberbullying using simple filtering approaches based on swear word lists. The option of anonymity in certain social networks also makes it harder to identify cyberbullying, as the profile and history of the bully might not be available. Past works on cyberbullying detection have at least one of the following three bottlenecks. First (Bottleneck B1), they target only one particular social media platform; how these methods perform across other SMPs is unknown. Second (Bottleneck B2), they address only one topic of cyberbullying, such as racism or sexism. Depending on the topic, the vocabulary and nature of cyberbullying change, and these models are not flexible in accommodating changes in the definition of cyberbullying. Third (Bottleneck B3), they rely on carefully handcrafted features such as swear word lists and POS tagging. However, these handcrafted features are not robust against variations in writing style. In contrast to existing bottlenecks, this work targets three different types of social networks (Formspring: a Q&A forum, Twitter: microblogging, and Wikipedia: a collaborative knowledge repository) for three topics of cyberbullying (personal attack, racism, and sexism) without any explicit feature engineering, by developing deep learning based models along with transfer learning. We experimented with diverse traditional machine learning models (logistic regression, support vector machine, random forest, naive Bayes) and deep neural network models (CNN, LSTM, BLSTM, BLSTM with Attention) using a variety of representation methods for words (bag of character n-grams, bag of word unigrams, GloVe embeddings, SSWE embeddings).
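To make the traditional baselines concrete, a character n-gram classifier of the kind listed above might be set up as follows; this is an illustrative sketch, and the hyper-parameters shown are assumptions rather than the exact configuration used in our experiments.

```python
# Illustrative sketch of a traditional baseline: character n-gram features
# fed to logistic regression. Hyper-parameters shown are assumptions.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def build_char_ngram_baseline():
    return make_pipeline(
        CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # bag of char n-grams
        LogisticRegression(max_iter=1000),
    )

# Example usage on a toy labelled corpus (1 = bullying, 0 = neutral).
model = build_char_ngram_baseline()
model.fit(["you are awful", "see you at the game tonight"], [1, 0])
print(model.predict(["you are the worst"]))
```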
Summary of our findings and research contributions is as follows. Datasets Please refer to Table TABREF7 for summary of datasets used. We performed experiments using large, diverse, manually annotated, and publicly available datasets for cyberbullying detection in social media. We cover three different types of social networks: teen oriented Q&A forum (Formspring), large microblogging platform (Twitter), and collaborative knowledge repository (Wikipedia talk pages). Each dataset addresses a different topic of cyberbullying. Twitter dataset contains examples of racism and sexism. Wikipedia dataset contains examples of personal attack. However, Formspring dataset is not specifically about any single topic. All three datasets have the problem of class imbalance where posts labeled as cyberbullying are in the minority as compared to neutral posts. Variation in the number of posts across datasets also affects vocabulary size that represents the number of distinct words encountered in the dataset. We measure the size of a post in terms of the number of words in the post. For each dataset, there are only a few posts with large size. We truncate such large posts to the size of post ranked at 95 percentile in that dataset. For example, in Wikipedia dataset, the largest post has 2846 words. However, size of post ranked at 95 percentile in that dataset is only 231. Any post larger than size 231 in Wikipedia dataset will be truncated by considering only first 231 words. This truncation affects only a small minority of posts in each dataset. However, it is required for efficiently training various models in our experiments. Details of each dataset are as follows. Formspring BIBREF2 : It was a question and answer based website where users could openly invite others to ask and answer questions. The dataset includes 12K annotated question and answer pairs. Each post is manually labeled by three workers. Among these pairs, 825 were labeled as containing cyberbullying content by at least two Amazon Mechanical turk workers. Twitter BIBREF3 : This dataset includes 16K annotated tweets. The authors bootstrapped the corpus collection, by performing an initial manual search of common slurs and terms used pertaining to religious, sexual, gender, and ethnic minorities. Of the 16K tweets, 3117 are labeled as sexist, 1937 as racist, and the remaining are marked as neither sexist nor racist. Wikipedia BIBREF4 : For each page in Wikipedia, a corresponding talk page maintains the history of discussion among users who participated in its editing. This data set includes over 100k labeled discussion comments from English Wikipedia's talk pages. Each comment was labeled by 10 annotators via Crowdflower on whether it contains a personal attack. There are total 13590 comments labeled as personal attack. Use of Swear Words and Anonymity Please refer to Table TABREF9 . We use the following short forms in this section: B=Bullying, S=Swearing, A=Anonymous. Some of the values for Twitter dataset are undefined as Twitter does not allow anonymous postings. Use of swear words has been repeatedly linked to cyberbullying. However, preliminary analysis of datasets reveals that depending on swear word usage can neither lead to high precision nor high recall for cyberbullying detection. Swear word list based methods will have low precision as P(B INLINEFORM0 S) is not close to 1. In fact, for teen oriented social network Formspring, 78% of the swearing posts are non-bullying. 
Filtering based on swear words will be irritating to users on SMPs where swear words are used casually. Swear word list based methods will also have low recall, as P(S INLINEFORM1 B) is not close to 1. For the Twitter dataset, 82% of bullying posts do not use any swear words. Such passive-aggressive cyberbullying will go undetected with swear word list based methods. Anonymity is another clue that is used for detecting cyberbullying, as a bully might prefer to hide their identity. Anonymity definitely leads to increased use of swear words (P(S INLINEFORM2 A) INLINEFORM3 P(S)) and cyberbullying (P(B INLINEFORM4 A) INLINEFORM5 P(B), and P(B INLINEFORM6 A&S) INLINEFORM7 P(B)). However, a significant fraction of anonymous posts are non-bullying (P(B INLINEFORM8 A) not close to 1) and many bullying posts are not anonymous (P(A INLINEFORM9 B) not close to 1). Further, anonymity might not be allowed by many SMPs such as Twitter.

Related Work

Cyberbullying has been recognized as a phenomenon since at least 2003 BIBREF5. Use of social media exploded with the launch of multiple platforms such as Wikipedia (2001), MySpace (2003), Orkut (2004), Facebook (2004), and Twitter (2005). By 2006, researchers had pointed out that cyberbullying was as serious a phenomenon as offline bullying BIBREF6. However, automatic detection of cyberbullying has been addressed only since 2009 BIBREF7. As a research topic, cyberbullying detection is a text classification problem. Most of the existing works fit the following template: get a training dataset from a single SMP, engineer a variety of features with a certain style of cyberbullying as the target, apply a few traditional machine learning methods, and evaluate success in terms of measures such as F1 score and accuracy. These works rely heavily on handcrafted features such as the use of swear words. These methods tend to have low precision for cyberbullying detection, as handcrafted features are not robust against variations in bullying style across SMPs and bullying topics. Only recently has deep learning been applied to cyberbullying detection BIBREF8. Table TABREF27 summarizes important related work.

Deep Neural Network (DNN) Based Models

We experimented with four DNN based models for cyberbullying detection: CNN, LSTM, BLSTM, and BLSTM with attention. These models are listed in increasing order of the complexity of their neural architecture and the amount of information they use. Please refer to Figure 1 for the general architecture that we have used across the four models. The models differ only in the Neural Architecture layer, while the rest of the layers are identical. CNNs provide state-of-the-art results on extracting contextual features for classification tasks in images, videos, audio, and text. Recently, CNNs were used for sentiment classification BIBREF9. Long Short Term Memory networks are a special kind of RNN, capable of learning long-term dependencies. Their ability to use their internal memory to process arbitrary sequences of inputs has been found to be effective for text classification BIBREF10. Bidirectional LSTMs BIBREF11 further increase the amount of input information available to the network by encoding information in both forward and backward directions. By using two directions, input information from both the past and the future of the current time frame can be used. Attention mechanisms allow for a more direct dependence between the states of the model at different points in time.
Importantly, attention mechanism lets the model learn what to attend to based on the input sentence and what it has produced so far. The embedding layer processes a fixed size sequence of words. Each word is represented as a real-valued vector, also known as word embeddings. We have experimented with three methods for initializing word embeddings: random, GloVe BIBREF12 , and SSWE BIBREF13 . During the training, model improves upon the initial word embeddings to learn task specific word embeddings. We have observed that these task specific word embeddings capture the SMP specific and topic specific style of cyberbullying. Using GloVe vectors over random vector initialization has been reported to improve performance for some NLP tasks. Most of the word embedding methods such as GloVe, consider only syntactic context of the word while ignoring the sentiment conveyed by the text. SSWE method overcomes this problem by incorporating the text sentiment as one of the parameters for word embedding generation. We experimented with various dimension size for word embeddings. Experimental results reported here are with dimension size as 50. There was no significant variation in results with dimension size ranging from 30 to 200. To avoid overfitting, we used two dropout layers, one before the neural architecture layer and one after, with dropout rates of 0.25 and 0.5 respectively. Fully connected layer is a dense output layer with the number of neurons equal to the number of classes, followed by softmax layer that provides softmax activation. All our models are trained using backpropagation. The optimizer used for training is Adam and the loss function is categorical cross-entropy. Besides learning the network weights, these methods also learn task-specific word embeddings tuned towards the bullying labels (See Section SECREF21 ). Our code is available at: https://github.com/sweta20/Detecting-Cyberbullying-Across-SMPs. Experiments Existing works have heavily relied on traditional machine learning models for cyberbullying detection. However, they do not study the performance of these models across multiple SMPs. We experimented with four models: logistic regression (LR), support vector machine (SVM), random forest (RF), and naive Bayes (NB), as these are used in previous works (Table TABREF27 ). We used two data representation methods: character n-gram and word unigram. Past work in the domain of detecting abusive language have showed that simple n-gram features are more powerful than linguistic and syntactic features, hand-engineered lexicons, and word and paragraph embeddings BIBREF14 . As compared to DNN models, performance of all four traditional machine learning models was significantly lower. Please refer to Table TABREF11 . All DNN models reported here were implemented using Keras. We pre-process the data, subjecting it to standard operations of removal of stop words, punctuation marks and lowercasing, before annotating it to assigning respective labels to each comment. For each trained model, we report its performance after doing five-fold cross-validation. We use following short forms. Effect of Oversampling Bullying Instances The training datasets had a major problem of class imbalance with posts marked as bullying in the minority. As a result, all models were biased towards labeling the posts as non-bullying. To remove this bias, we oversampled the data from bullying class thrice. That is, we replicated bullying posts thrice in the training data. 
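A minimal sketch of this replication-based oversampling (our own illustration) is given below.

```python
# Illustrative sketch: replicate minority-class (bullying) posts three times
# in the training data to counter class imbalance.
def oversample_bullying(posts, labels, bullying_label=1, rate=3):
    new_posts, new_labels = [], []
    for post, label in zip(posts, labels):
        copies = rate if label == bullying_label else 1
        new_posts.extend([post] * copies)
        new_labels.extend([label] * copies)
    return new_posts, new_labels

# Example: one bullying post and two neutral posts.
posts, labels = oversample_bullying(["p1", "p2", "p3"], [1, 0, 0])
print(labels)  # [1, 1, 1, 0, 0, 0]
```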
This significantly improved the performance of all DNN models with major leap in all three evaluation measures. Table TABREF17 shows the effect of oversampling for a variety of word embedding methods with BLSTM Attention as the detection model. Results for other models are similar BIBREF15 . We can notice that oversampled datasets (F+, T+, W+) have far better performance than their counterparts (F, T, W respectively). Oversampling particularly helps the smallest dataset Formspring where number of training instances for bullying class is quite small (825) as compared to other two datasets (about 5K and 13K). We also experimented with varying the replication rate for bullying posts BIBREF15 . However, we observed that for bullying posts, replication rate of three is good enough. Choice of Initial Word Embeddings and Model Initial word embeddings decide data representation for DNN models. However during the training, DNN models modify these initial word embeddings to learn task specific word embeddings. We have experimented with three methods to initialize word embeddings. Please refer to Table TABREF19 . This table shows the effect of varying initial word embeddings for multiple DNN models across datasets. We can notice that initial word embeddings do not have a significant effect on cyberbullying detection when oversampling of bullying posts is done (rows corresponding to F+, T+, W+). In the absence of oversampling (rows corresponding to F, T W), there is a gap in performance of simplest (CNN) and most complex (BLSTM with attention) models. However, this gap goes on reducing with the increase in the size of datasets. Table TABREF20 compares the performance of four DNN models for three evaluation measures while using SSWE as the initial word embeddings. We have noticed that most of the time LSTM performs weaker than other three models. However, performance gap in the other three models is not significant. Task Specific Word Embeddings DNN models learn word embeddings over the training data. These learned embeddings across multiple datasets show the difference in nature and style of bullying across cyberbullying topics and SMPs. Here we report results for BLSTM with attention model. Results for other models are similar. We first verify that important words for each topic of cyberbullying form clusters in the learned embeddings. To enable the visualization of grouping, we reduced dimensionality with t-SNE BIBREF16 , a well-known technique for dimensionality reduction particularly well suited for visualization of high dimensional datasets. Please refer to Table TABREF22 . This table shows important clusters observed in t-SNE projection of learned word embeddings. Each cluster shows that words most relevant to a particular topic of bullying form cluster. We also observed changes in the meanings of the words across topics of cyberbullying. Table TABREF23 shows most similar words for a given query word for two datasets. Twitter dataset which is heavy on sexism and racism, considers word slave as similar to targets of racism and sexism. However, Wikipedia dataset that is about personal attacks does not show such bias. Transfer Learning We used transfer learning to check if the knowledge gained by DNN models on one dataset can be used to improve cyberbullying detection performance on other datasets. We report results where BLSTM with attention is used as the DNN model. Results for other models are similar BIBREF15 . We experimented with following three flavors of transfer learning. 
Complete Transfer Learning (TL1): In this flavor, a model trained on one dataset was directly used to detect cyberbullying in other datasets without any extra training. TL1 resulted in significantly low recall indicating that three datasets have different nature of cyberbullying with low overlap (Table TABREF25 ). However precision was relatively higher for TL1, indicating that DNN models are cautious in labeling a post as bully (Table TABREF25 ). TL1 also helps to measure similarity in nature of cyberbullying across three datasets. We can observe that bullying nature in Formspring and Wikipedia datasets is more similar to each other than the Twitter dataset. This can be inferred from the fact that with TL1, cyberbullying detection performance for Formspring dataset is higher when base model is Wikipedia (precision =0.51 and recall=0.66)as compared to Twitter as the base model (precision=0.38 and recall=0.04). Similarly, for Wikipedia dataset, Formspring acts as a better base model than Twitter while using TL1 flavor of transfer learning. Nature of SMP might be a factor behind this similarity in nature of cyberbullying. Both Formspring and Wikipedia are task oriented social networks (Q&A and collaborative knowledge repository respectively) that allow anonymity and larger posts. Whereas communication on Twitter is short, free of anonymity and not oriented towards a particular task. Feature Level Transfer Learning (TL2): In this flavor, a model was trained on one dataset and only learned word embeddings were transferred to another dataset for training a new model. As compared to TL1, recall score improved dramatically with TL2 (Table TABREF25 ). Improvement in precision was also significant (Table TABREF25 ). These improvements indicate that learned word embeddings are an essential part of knowledge transfer across datasets for cyberbullying detection. Model Level Transfer Learning (TL3): In this flavor, a model was trained on one dataset and learned word embeddings, as well as network weights, were transferred to another dataset for training a new model. TL3 does not result in any significant improvement over TL2. This lack of improvement indicates that transfer of network weights is not essential for cyberbullying detection and learned word embeddings is the key knowledge gained by the DNN models. DNN based models coupled with transfer learning beat the best-known results for all three datasets. Previous best F1 scores for Wikipedia BIBREF4 and Twitter BIBREF8 datasets were 0.68 and 0.93 respectively. We achieve F1 scores of 0.94 for both these datasets using BLSTM with attention and feature level transfer learning (Table TABREF25 ). For Formspring dataset, authors have not reported F1 score. Their method has accuracy score of 78.5% BIBREF2 . We achieve F1 score of 0.95 with accuracy score of 98% for the same dataset. Conclusion and Future Work We have shown that DNN models can be used for cyberbullying detection on various topics across multiple SMPs using three datasets and four DNN models. These models coupled with transfer learning beat state of the art results for all three datasets. These models can be further improved with extra data such as information about the profile and social graph of users. Most of the current datasets do not provide any information about the severity of bullying. If such fine-grained information is made available, then cyberbullying detection models can be further improved to take a variety of actions depending on the perceived seriousness of the posts.
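To make the distinction between these transfer-learning flavours concrete, the sketch below illustrates feature-level transfer (TL2) in Keras: only the learned embedding matrix is carried over to the model trained on the target dataset, while all other network weights are re-initialized. The layer sizes and names are hypothetical, not the exact architecture used in our experiments.

```python
# Hypothetical Keras sketch of feature-level transfer (TL2): the embedding
# matrix learned on a source dataset initialises the embedding layer of a
# fresh model for the target dataset; all other weights start from scratch.
import numpy as np
from tensorflow.keras import layers, models, initializers

def build_target_model(embedding_matrix, n_classes=2):
    vocab_size, emb_dim = embedding_matrix.shape
    model = models.Sequential([
        layers.Embedding(vocab_size, emb_dim,
                         embeddings_initializer=initializers.Constant(embedding_matrix)),
        layers.Dropout(0.25),
        layers.Bidirectional(layers.LSTM(64)),  # network weights re-initialised
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy")
    return model

# Example: transfer a (5000-word vocabulary, 50-dimensional) embedding matrix
# learned on the source dataset; the random matrix here is only a stand-in.
pretrained = np.random.rand(5000, 50).astype("float32")
model = build_target_model(pretrained)
```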
Introduction In recent years, gender has become a hot topic within the political, societal and research spheres. Numerous studies have been conducted in order to evaluate the presence of women in media, often revealing their under-representation, such as the Global Media Monitoring Project BIBREF0. In the French context, the CSA BIBREF1 produces a report on gender representation in media on a yearly basis. The 2017 report shows that women represent 40% of French media speakers, with a significant drop during high-audience hours (6:00-8:00pm) reaching a value of only 29%. Another large scale study confirmed this trend with an automatic analysis of gender in French audiovisuals streams, highlighting a huge variation across type of shows BIBREF2. Besides the social impact of gender representation, broadcast recordings are also a valuable source of data for the speech processing community. Indeed, automatic speech recognition (ASR) systems require large amount of annotated speech data to be efficiently trained, which leaves us facing the emerging concern about the fact that "AI artifacts tend to reflect the goals, knowledge and experience of their creators" BIBREF3. Since we know that women are under-represented in media and that the AI discipline has retained a male-oriented focus BIBREF4, we can legitimately wonder about the impact of using such data as a training set for ASR technologies. This concern is strengthened by the recent works uncovering gender bias in several natural language processing (NLP) tools such as BIBREF5, BIBREF6, BIBREF7, BIBREF8. In this paper, we first highlight the importance of TV and radio broadcast as a source of data for ASR, and the potential impact it can have. We then perform a statistical analysis of gender representation in a data set composed of four state-of-the-art corpora of French broadcast, widely used within the speech community. Finally we question the impact of such a representation on the systems developed on this data, through the perspective of an ASR system. From gender representation in data to gender bias in AI ::: On the importance of data The ever growing use of machine learning in science has been enabled by several progresses among which the exponential growth of data available. The quality of a system now depends mostly on the quality and quantity of the data it has been trained on. If it does not discard the importance of an appropriate architecture, it reaffirms the fact that rich and large corpora are a valuable resource. Corpora are research contributions which do not only allow to save and observe certain phenomena or validate a hypothesis or model, but are also a mandatory part of the technology development. This trend is notably observable within the NLP field, where industrial technologies, such as Apple, Amazon or Google vocal assistants now reach high performance level partly due to the amount of data possessed by these companies BIBREF9. Surprisingly, as data is said to be “the new oil", few data sets are available for ASR systems. The best known are corpora like TIMIT BIBREF10, Switchboard BIBREF11 or Fisher BIBREF12 which date back to the early 1990s. The scarceness of available corpora is justified by the fact that gathering and annotating audio data is costly both in terms of money and time. Telephone conversations and broadcast recordings have been the primary source of spontaneous speech used. 
Out of all the 130 audio resources proposed by LDC to train automatic speech recognition systems in English, approximately 14% of them are based on broadcast news and conversation. For French speech technologies, four corpora containing radio and TV broadcast are the most widely used: ESTER1 BIBREF13, ESTER2 BIBREF14, ETAPE BIBREF15 and REPERE BIBREF16. These four corpora have been built alongside evaluation campaigns and are still, to our knowledge, the largest French ones of their type available to date. From gender representation in data to gender bias in AI ::: From data to bias The gender issue has returned to the forefront of the media scene in recent years and with the emergence of AI technologies in our daily lives, gender bias has become a scientific topic that researchers are just beginning to address. Several studies revealed the existence of gender bias in AI technologies such as face recognition (GenderShades BIBREF17), NLP (word embeddings BIBREF5 and semantics BIBREF6) and machine translation (BIBREF18, BIBREF7). The impact of the training data used within these deep-learning algorithms is therefore questioned. Bias can be found at different levels as pointed out by BIBREF19. BIBREF20 defines bias as a skew that produces a type of harm. She distinguishes two types of harms that are allocation harm and representation harm. The allocation harm occurs when a system is performing better or worse for a certain group while representational harm contributes to the perpetuation of stereotypes. Both types of harm are the results of bias in machine learning that often comes from the data systems are trained on. Disparities in representation in our social structures is captured and reflected by the training data, through statistical patterns. The GenderShades study is a striking example of what data disparity and lack of representation can produce: the authors tested several gender recognition modules used by facial recognition tools and found difference in error-rate as high as 34 percentage points between recognition of white male and black female faces. The scarce presence of women and colored people in training set resulted in bias in performance towards these two categories, with a strong intersectional bias. As written by BIBREF21 "A data set may have many millions of pieces of data, but this does not mean it is random or representative. To make statistical claims about a data set, we need to know where data is coming from; it is similarly important to know and account for the weaknesses in that data." (p.668). Regarding ASR technology, little work has explored the presence of gender bias within the systems and no consensus has been reached. BIBREF22 found that speech recognizers perform better on female voice on a broadcast news and telephone corpus. They proposed several explanations to this observation, such as the larger presence of non-professional male speech in the broadcast data, implying a less prepared speech for these speakers or a more normative language and standard pronunciation for women linked to the traditional role of women in language acquisition and education. The same trend was observed by BIBREF23. More recently, BIBREF24 discovered a gender bias within YouTube's automatic captioning system but this bias was not observed in a second study evaluating Bing Speech system and YouTube Automatic Captions on a larger data set BIBREF8. However race and dialect bias were found. General American speakers and white speakers had the lowest error rate for both systems. 
The better performance on General American speakers could be explained by the fact that they are all voice professionals producing clear and articulated speech, but no explanation is provided for the bias against non-white speakers. Gender bias in ASR technology is still an open research question, as no clear answer has been reached so far. It seems that many parameters must be taken into account before a general agreement can be reached. Having established the importance of TV and radio broadcast as a source of data for ASR, and the potential impact it can have, the rest of this paper is structured as follows: we first describe statistically the gender representation of a data set composed of four state-of-the-art corpora of French broadcast, widely used within the speech community, introducing the notion of speaker's role to refine our analysis in terms of voice professionalism. We then question the impact of such a representation on an ASR system BIBREF25 trained on these data. Methodology This section is organized as follows: we first present the data we are working on. We then explain how we describe the gender representation in our corpus and introduce the notion of speaker's role. The third subsection introduces the ASR system and the metrics used to evaluate gender bias in performance. Methodology ::: Data presentation Our data consists of two sets used to train and evaluate our automatic speech recognition system. Four major evaluation campaigns have enabled the creation of large corpora of French broadcast speech: ESTER1 BIBREF13, ESTER2 BIBREF14, ETAPE BIBREF15 and REPERE BIBREF16. These four collections contain radio and/or TV broadcasts aired between 1998 and 2013 and are used by most academic researchers in ASR. Show durations vary between 10 minutes and an hour. As speech processing research progressed over the years, the difficulty of the tasks increased and the content of these evaluation corpora changed. ESTER1 and ESTER2 mainly contain prepared speech such as broadcast news, whereas ETAPE and REPERE also include debates and entertainment shows, whose spontaneous speech is more difficult to recognize. Our training set contains 27,085 speech utterances produced by 2,506 speakers, accounting for approximately 100 hours of speech. Our evaluation set contains 74,064 speech utterances produced by 1,268 speakers for a total of 70 hours of speech. Training data by show, medium and speech type is summarized in Table and evaluation data in Table . Evaluation data has a higher variety of shows with both prepared (P) and spontaneous (S) speech types (accented speech from African radio broadcasts is also included in the evaluation set). Methodology ::: Methodology for descriptive analysis of gender representation in training data We first describe the gender representation in training data. Gender representation is measured in terms of number of speakers, number of utterances (or speech turns), and turn lengths (descriptive statistics are given in Section SECREF16). Each speech turn was mapped to its speaker in order to associate it with a gender. As pointed out by the CSA report BIBREF1, women's presence tends to be marginal during high-audience hours, showing that women are represented, but less than men and only under certain conditions. It is clear that a small number of speakers is responsible for a large share of the speech turns. Most of these speakers are journalists, politicians, presenters and the like, who are representative of a show.
Therefore, we introduce the notion of speaker's role to refine our exploration of gender disparity, following studies which quantified women's presence in terms of role. Within our work, we define the notion of speaker role by two criteria specifying the speaker's on-air presence, namely the number of speech turns and the cumulative duration of his or her speaking time in a show. Based on the available speech transcriptions and meta-data, we compute for each speaker the number of speech turns uttered as well as their total length. We then use the following criteria to define speaker's role: a speaker is considered as speaking often (respectively seldom) if he/she accumulates a total of turns higher (respectively lower) than 1% of the total number of speech turns in a given show. The same process is applied to identify speakers talking for a long period from those who do not. We end up with two salient roles called Anchors and Punctual speakers: the Anchor speakers (A) are above the threshold of 1% for both criteria, meaning they are intervening often and for a long time thus holding an important place in interaction; the Punctual speakers (PS) on the contrary are below the threshold of 1% for both the total number of turns and the total speech time. These roles are defined at the show level. They could be roughly assimilated to the categorization “host/guest” in radio and TV shows. Anchors could be described as professional speakers, producing mostly prepared speech, whereas Punctual speakers are more likely to be “everyday people". The concept of speaker's role makes sense at both sociological and technical levels. An Anchor speaker is more likely to be known from the audience (society), but he or she will also likely have a professional (clear) way of speaking (as mentioned by BIBREF22 and BIBREF8), as well as a high number of utterances, augmenting the amount of data available for a given gender category. Methodology ::: Gender bias evaluation procedure of an ASR system performance ::: ASR system The ASR system used in this work is described in BIBREF25. It uses the KALDI toolkit BIBREF26, following a standard Kaldi recipe. The acoustic model is based on a hybrid HMM-DNN architecture and trained on the data summarized in Table . Acoustic training data correspond to 100h of non-spontaneous speech type (mostly broadcast news) coming from both radio and TV shows. A 5-gram language model is trained from several French corpora (3,323M words in total) using SRILM toolkit BIBREF27. The pronunciation model is developed using the lexical resource BDLEX BIBREF28 as well as automatic grapheme-to-phoneme (G2P) transcription to find pronunciation variants of our vocabulary (limited to 80K). It is important to re-specify here, for further analysis, that our Kaldi pipeline follows speaker adaptive training (SAT) where we train and decode using speaker adapted features (fMLLR-adapted features) in per-speaker mode. It is well known that speaker adaptation acts as an effective procedure to reduce mismatch between training and evaluation conditions BIBREF29, BIBREF26. Methodology ::: Gender bias evaluation procedure of an ASR system performance ::: Evaluation Word Error Rate (WER) is a common metric to evaluate ASR performance. It is measured as the sum of errors (insertions, deletions and substitutions) divided by the total number of words in the reference transcription. 
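To make the role criteria above concrete, the following is a minimal sketch, not the authors' released code, of how Anchor and Punctual roles could be assigned from the transcribed turns of a show; the input format, function names and the handling of speakers who satisfy only one criterion are our assumptions.

from collections import defaultdict

def assign_roles(turns, threshold=0.01):
    """turns: list of (show_id, speaker_id, duration_seconds) tuples."""
    n_turns = defaultdict(int)        # (show, speaker) -> number of turns
    speech_time = defaultdict(float)  # (show, speaker) -> cumulated duration
    show_turns = defaultdict(int)     # show -> total number of turns
    show_time = defaultdict(float)    # show -> total speech time

    for show, speaker, duration in turns:
        n_turns[(show, speaker)] += 1
        speech_time[(show, speaker)] += duration
        show_turns[show] += 1
        show_time[show] += duration

    roles = {}
    for (show, speaker), count in n_turns.items():
        turn_ratio = count / show_turns[show]
        time_ratio = speech_time[(show, speaker)] / show_time[show]
        if turn_ratio > threshold and time_ratio > threshold:
            roles[(show, speaker)] = "Anchor"      # above 1% on both criteria
        elif turn_ratio < threshold and time_ratio < threshold:
            roles[(show, speaker)] = "Punctual"    # below 1% on both criteria
        else:
            roles[(show, speaker)] = "Other"       # above only one criterion (assumed label)
    return roles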
As we are investigating the impact on performance of speaker's gender and role, we computed the WER for each speaker at the episode (show occurrence) level. Analyzing at such granularity allows us to avoid large WER variation that could be observed at utterance level (especially for short speech turns) but also makes possible to get several WER values for a given speaker, one for each occurrence of a show in which he/she appears on. Speaker's gender was provided by the meta-data and role was obtained using the criteria from Section SECREF6 computed for each show. This enables us to analyze our results across gender and role categories which was done using Wilcoxon rank sum tests also called Mann-Whitney U test (with $\alpha $= 0.001) BIBREF30. The choice of a Wilcoxon rank sum test and not the commonly used t-test is motivated by the non-normality of our data. Results ::: Descriptive analysis of gender representation in training data ::: Gender representation As expected, we observe a disparity in terms of gender representation in our data (see Table ). Women represent 33.16% of the speakers, confirming the figures given by the GMMP report BIBREF0. However, it is worth noticing that women account for only 22.57% of the total speech time, which leads us to conclude that women also speak less than men. Results ::: Descriptive analysis of gender representation in training data ::: Speaker's role representation Table presents roles' representation in training data and shows that despite the small number of Anchor speakers in our data (3.79%), they nevertheless concentrate 35.71 % of the total speech time. Results ::: Descriptive analysis of gender representation in training data ::: Role and gender interaction When crossing both parameters, we can observe that the gender distribution is not constant throughout roles. Women represent 29.47% of the speakers within the Anchor category, even less than among the Punctual speakers. Their percentage of speech is also smaller. When calculating the average speech time uttered by a female Anchor, we obtain a value of 15.9 min against 25.2 min for a male Anchor, which suggests that even within the Anchor category men tend to speak more. This confirms the existence of gender disparities within French media. It corroborates with the analysis of the CSA BIBREF1, which shows that women were less present during high-audience hours. Our study shows that they are also less present in important roles. These results legitimate our initial questioning on the impact of gender balance on ASR performance trained on broadcast recordings. Results ::: Performance (WER) analysis on evaluation data ::: Impact of gender on WER As explained in Section SECREF13, WER is the sum of errors divided by the number of words in the transcription reference. The higher the WER, the poorer the system performance. Our 70h evaluation data contains a large amount of spontaneous speech and is very challenging for the ASR system trained on prepared speech: we observe an overall average WER of 42.9% for women and 34.3% for men. This difference of WER between men and women is statistically significant (med(M) = 25%; med(F) = 29%; U = 709040; p-value < 0.001). However, when observing gender differences across shows, no clear trend can be identified, as shown in Figure FIGREF21. For shows like Africa1 Infos or La Place du Village, we find an average WER lower for women than for men, while the trend is reversed for shows such as Un Temps de Pauchon or Le Masque et la Plume. 
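As an illustration of the evaluation protocol above, the sketch below groups per-speaker, per-episode WER values by gender and applies the Mann-Whitney U test with SciPy; the record format is an assumption made for illustration, and the real analysis also crosses role and speech type.

import numpy as np
from scipy.stats import mannwhitneyu

# wer_records: list of dicts such as
# {"speaker": "spk1", "show": "episode_2010_01_12", "gender": "F", "wer": 0.31}
def compare_wer_by_gender(wer_records, alpha=0.001):
    wer_f = np.array([r["wer"] for r in wer_records if r["gender"] == "F"])
    wer_m = np.array([r["wer"] for r in wer_records if r["gender"] == "M"])
    # Wilcoxon rank sum / Mann-Whitney U test, chosen because WER values are not normally distributed
    u_stat, p_value = mannwhitneyu(wer_f, wer_m, alternative="two-sided")
    return {
        "median_F": float(np.median(wer_f)),
        "median_M": float(np.median(wer_m)),
        "U": float(u_stat),
        "p_value": float(p_value),
        "significant": p_value < alpha,
    }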
The disparity of the results depending on the show leads us to believe that other factors may be entangled within the observed phenomenon. Results ::: Performance (WER) analysis on evaluation data ::: Impact of role on WER Speaker's role seems to have an impact on WER: we obtain an average WER of 30.8% for the Anchor speakers and 42.23% for the Punctual speakers. This difference is statistically significant with a p-value smaller than $10^{-14}$ (med(A) = 21%; med(P) = 31%; U = 540,430; p-value < 0.001) . Results ::: Performance (WER) analysis on evaluation data ::: Role and gender interaction Figure FIGREF25 presents the WER distribution (WER being obtained for each speaker in a show occurrence) according to the speaker's role and gender. It is worth noticing that the gender difference is only significant within the Punctual speakers group. The average WER is of 49.04% for the women and 38.56% for the men with a p-value smaller than $10^{-6}$ (med(F) = 39%; med(M) = 29%; U = 251,450; p-value < 0.001), whereas it is just a trend between male and female Anchors (med(F) = 21%; med(M) = 21%; U = 116,230; p-value = 0.173). This could be explained by the quantity of data available per speaker. Results ::: Performance (WER) analysis on evaluation data ::: Speech type as a third entangled factor? In order to try to explain the observed variation in our results depending on shows and gender (Figure FIGREF21), we add the notion of speech type to shed some light on our results. BIBREF22 and BIBREF24 suggested that the speaker professionalism, associated with clear and hyper-articulated speech could be an explaining factor for better performance. Based on our categorization in prepared speech (mostly news reports) and spontaneous speech (mostly debates and entertainment shows), we cross this parameter in our performance analysis. As shown on Figure FIGREF26, these results confirm the inherent challenge of spontaneous speech compared to prepared speech. WER scores are similar between men and women when considering prepared speech (med(F) = 18%; med(M) = 21%; U = 217,160; p-value = 0.005) whereas they are worse for women (61.29%) than for men (46.51%) with p-value smaller than $10^{-14}$ for the spontaneous speech type (med(F) = 61%; med(M) = 37%; U = 153,580; p-value < 0.001). Discussion We find a clear disparity in terms of women presence and speech quantity in French media. Our data being recorded between 1998 and 2013, we can expect this disparity to be smaller on more recent broadcast recordings, especially since the French government displays efforts toward parity in media representation. One can also argue that even if our analysis was conducted on a large amount of data it does not reach the exhaustiveness of large-scale studies such as the one of BIBREF2. Nonetheless it does not affect the relevance of our findings, because if real-world gender representation might be more balanced today, these corpora are still used as training data for AI systems. The performance difference across gender we observed corroborates (on a larger quantity and variety of language data produced by more than 2400 speakers) the results obtained by BIBREF24 on isolated words recognition. However the following study on read speech does not replicate these results. Yet a performance degradation is observed across dialect and race BIBREF8. BIBREF22 found lower WER for women than men on broadcast news and conversational telephone speech for both English and French. 
The authors suggest that gender stereotypes associated with women role in education and language acquisition induce a more normative elocution. We observed that the higher the degree of normativity of speech the smaller the gender difference. No significant gender bias is observed for prepared speech nor within the Anchor category. Even if we do not find similar results with lower WER for women than men, we obtained a median WER smaller for women on prepared speech and equal to the male median WER for the Anchor speakers. Another explanation could be the use of adaptation within the pipeline. Most broadcast programs transcription systems have a speaker adaptation step within their decoding pipeline, which is the case for our system. An Anchor speaker intervening more often would have a larger quantity of data to realize such adaptation of the acoustic model. On the contrary, Punctual speakers who appear scarcely in the data are not provided with the same amount of adaptation data. Hence we can hypothesize that gender performance difference observed for Punctual speakers is due to the fact that female speech is further from the (initial non-adapted) acoustic model as it was trained on unbalanced data (as shown in Table ). Considering that Punctual speakers represent 92.78% of the speakers, this explains why gender difference is significant over our entire data set. A way to confirm our hypothesis would be to reproduce our analysis on WER values obtained without using speaker adapted features at the decoding step. When decoding prepared speech (hence similar to the training data), no significant difference is found in WER between men and women, revealing that the speaker adaptation step could be sufficient to reach same performance for both genders. But when decoding more spontaneous speech, there is a mismatch with the initial acoustic model (trained on prepared speech). Consequently, the speaker adaptation step might not be enough to recover good ASR performance, especially for women for whom less adaptation data is available (see Section 4.2.3). Conclusion This paper has investigated gender bias in ASR performance through the following research questions: i) what is the proportion of men and women in French radio and TV media data ? ii) what is the impact of the observed disparity on ASR performance ? iii) is this as simple as a problem of gender proportion in the training data or are other factors entangled ? Our contributions are the following: Descriptive analysis of the broadcast data used to train our ASR system confirms the already known disparity, where 65% of the speakers are men, speaking more than 75% of the time. When investigating WER scores according to gender, speaker's role and speech type, huge variations are observed. We conclude that gender is clearly a factor of variation in ASR performance, with a WER increase of 24% for women compared to men, exhibiting a clear gender bias. Gender bias varies across speaker's role and speech spontaneity level. Performance for Punctual speakers respectively spontaneous speech seems to reinforce this gender bias with a WER increase of 27.2% respectively 31.8% between male and female speakers. We found that an ASR system trained on unbalanced data regarding gender produces gender bias performance. Therefore, in order to create fair systems it is necessary to take into account the representation problems in society that are going to be encapsulated in the data. 
Understanding how women's under-representation in broadcast data can lead to biased ASR performance is key to avoiding the re-implementation and reinforcement of discrimination that already exists in our societies. This is in line with the concept of “Fairness by Design" proposed by BIBREF31. Gender, race, religion and nationality are all characteristics that we deem unfair to classify on, and these ethical standpoints need to be taken into account in systems' design. Characteristics that are not considered relevant for a given task can nonetheless be encapsulated in the data and lead to biased performance. Being aware of the demographic skews a data set might contain is a first step towards tracking the life cycle of a training data set and a necessary step towards controlling the tools we develop.
How big is imbalance in analyzed corpora?
Women represent 33.16% of the speakers
4,055
qasper
4k
Introduction Chinese word segmentation (CWS) is a task for Chinese natural language process to delimit word boundary. CWS is a basic and essential task for Chinese which is written without explicit word delimiters and different from alphabetical languages like English. BIBREF0 treats Chinese word segmentation (CWS) as a sequence labeling task with character position tags, which is followed by BIBREF1, BIBREF2, BIBREF3. Traditional CWS models depend on the design of features heavily which effects the performance of model. To minimize the effort in feature engineering, some CWS models BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11 are developed following neural network architecture for sequence labeling tasks BIBREF12. Neural CWS models perform strong ability of feature representation, employing unigram and bigram character embedding as input and approach good performance. The CWS task is often modeled as one graph model based on a scoring model that means it is composed of two parts, one part is an encoder which is used to generate the representation of characters from the input sequence, the other part is a decoder which performs segmentation according to the encoder scoring. Table TABREF1 summarizes typical CWS models according to their decoding ways for both traditional and neural models. Markov models such as BIBREF13 and BIBREF4 depend on the maximum entropy model or maximum entropy Markov model both with a Viterbi decoder. Besides, conditional random field (CRF) or Semi-CRF for sequence labeling has been used for both traditional and neural models though with different representations BIBREF2, BIBREF15, BIBREF10, BIBREF17, BIBREF18. Generally speaking, the major difference between traditional and neural network models is about the way to represent input sentences. Recent works about neural CWS which focus on benchmark dataset, namely SIGHAN Bakeoff BIBREF21, may be put into the following three categories roughly. Encoder. Practice in various natural language processing tasks has been shown that effective representation is essential to the performance improvement. Thus for better CWS, it is crucial to encode the input character, word or sentence into effective representation. Table TABREF2 summarizes regular feature sets for typical CWS models including ours as well. The building blocks that encoders use include recurrent neural network (RNN) and convolutional neural network (CNN), and long-term memory network (LSTM). Graph model. As CWS is a kind of structure learning task, the graph model determines which type of decoder should be adopted for segmentation, also it may limit the capability of defining feature, as shown in Table 2, not all graph models can support the word features. Thus recent work focused on finding more general or flexible graph model to make model learn the representation of segmentation more effective as BIBREF9, BIBREF11. External data and pre-trained embedding. Whereas both encoder and graph model are about exploring a way to get better performance only by improving the model strength itself. Using external resource such as pre-trained embeddings or language representation is an alternative for the same purpose BIBREF22, BIBREF23. SIGHAN Bakeoff defines two types of evaluation settings, closed test limits all the data for learning should not be beyond the given training set, while open test does not take this limitation BIBREF21. 
In this work, we will focus on the closed test setting by finding a better model design for further CWS performance improvement. Shown in Table TABREF1, different decoders have particular decoding algorithms to match the respective CWS models. Markov models and CRF-based models often use Viterbi decoders with polynomial time complexity. In general graph model, search space may be too large for model to search. Thus it forces graph models to use an approximate beam search strategy. Beam search algorithm has a kind low-order polynomial time complexity. Especially, when beam width $b$=1, the beam search algorithm will reduce to greedy algorithm with a better time complexity $O(Mn)$ against the general beam search time complexity $O(Mnb^2)$, where $n$ is the number of units in one sentences, $M$ is a constant representing the model complexity. Greedy decoding algorithm can bring the fastest speed of decoding while it is not easy to guarantee the precision of decoding when the encoder is not strong enough. In this paper, we focus on more effective encoder design which is capable of offering fast and accurate Chinese word segmentation with only unigram feature and greedy decoding. Our proposed encoder will only consist of attention mechanisms as building blocks but nothing else. Motivated by the Transformer BIBREF24 and its strength of capturing long-range dependencies of input sentences, we use a self-attention network to generate the representation of input which makes the model encode sentences at once without feeding input iteratively. Considering the weakness of the Transformer to model relative and absolute position information directly BIBREF25 and the importance of localness information, position information and directional information for CWS, we further improve the architecture of standard multi-head self-attention of the Transformer with a directional Gaussian mask and get a variant called Gaussian-masked directional multi-head attention. Based on the newly improved attention mechanism, we expand the encoder of the Transformer to capture different directional information. With our powerful encoder, our model uses only simple unigram features to generate representation of sentences. For decoder which directly performs the segmentation, we use the bi-affinal attention scorer, which has been used in dependency parsing BIBREF26 and semantic role labeling BIBREF27, to implement greedy decoding on finding the boundaries of words. In our proposed model, greedy decoding ensures a fast segmentation while powerful encoder design ensures a good enough segmentation performance even working with greedy decoder together. Our model will be strictly evaluated on benchmark datasets from SIGHAN Bakeoff shared task on CWS in terms of closed test setting, and the experimental results show that our proposed model achieves new state-of-the-art. The technical contributions of this paper can be summarized as follows. We propose a CWS model with only attention structure. The encoder and decoder are both based on attention structure. With a powerful enough encoder, we for the first time show that unigram (character) featues can help yield strong performance instead of diverse $n$-gram (character and word) features in most of previous work. To capture the representation of localness information and directional information, we propose a variant of directional multi-head self-attention to further enhance the state-of-the-art Transformer encoder. 
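Before moving to the model details, the following minimal sketch (our illustration, not the authors' implementation) shows what greedy gap decoding amounts to once a boundary probability has been assigned to every gap between adjacent characters; the thresholding rule and the example probabilities are assumptions.

def greedy_segment(chars, gap_boundary_probs, threshold=0.5):
    """chars: list of characters; gap_boundary_probs: probability that the gap
    after chars[i] is a word boundary, for i in range(len(chars) - 1).
    Each gap is labelled independently in a single left-to-right pass."""
    words, current = [], [chars[0]]
    for ch, p in zip(chars[1:], gap_boundary_probs):
        if p >= threshold:          # gap predicted as a word boundary
            words.append("".join(current))
            current = [ch]
        else:
            current.append(ch)
    words.append("".join(current))
    return words

# Hypothetical probabilities, for illustration only:
# greedy_segment(list("他来自北京"), [0.9, 0.1, 0.8, 0.1]) -> ["他", "来自", "北京"]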
Models The CWS task is often modelled as one graph model based on an encoder-based scoring model. The model for CWS task is composed of an encoder to represent the input and a decoder based on the encoder to perform actual segmentation. Figure FIGREF6 is the architecture of our model. The model feeds sentence into encoder. Embedding captures the vector $e=(e_1,...,e_n)$ of the input character sequences of $c=(c_1,...,c_n)$. The encoder maps vector sequences of $ {e}=(e_1,..,e_n)$ to two sequences of vector which are $ {v^b}=(v_1^b,...,v_n^b)$ and ${v^f}=(v_1^f,...v_n^f)$ as the representation of sentences. With $v^b$ and $v^f$, the bi-affinal scorer calculates the probability of each segmentation gaps and predicts the word boundaries of input. Similar as the Transformer, the encoder is an attention network with stacked self-attention and point-wise, fully connected layers while our encoder includes three independent directional encoders. Models ::: Encoder Stacks In the Transformer, the encoder is composed of a stack of N identical layers and each layer has one multi-head self-attention layer and one position-wise fully connected feed-forward layer. One residual connection is around two sub-layers and followed by layer normalization BIBREF24. This architecture provides the Transformer a good ability to generate representation of sentence. With the variant of multi-head self-attention, we design a Gaussian-masked directional encoder to capture representation of different directions to improve the ability of capturing the localness information and position information for the importance of adjacent characters. One unidirectional encoder can capture information of one particular direction. For CWS tasks, one gap of characters, which is from a word boundary, can divide one sequence into two parts, one part in front of the gap and one part in the rear of it. The forward encoder and backward encoder are used to capture information of two directions which correspond to two parts divided by the gap. One central encoder is paralleled with forward and backward encoders to capture the information of entire sentences. The central encoder is a special directional encoder for forward and backward information of sentences. The central encoder can fuse the information and enable the encoder to capture the global information. The encoder outputs one forward information and one backward information of each positions. The representation of sentence generated by center encoder will be added to these information directly: where $v^{b}=(v^b_1,...,v^b_n)$ is the backward information, $v^{f}=(v^f_1,...,v^f_n)$ is the forward information, $r^{b}=(r^b_1,...,r^b_n)$ is the output of backward encoder, $r^{c}=(r^c_1,...,r^c_n)$ is the output of center encoder and $r^{f}=(r^f_1,...,r^f_n)$ is the output of forward encoder. Models ::: Gaussian-Masked Directional Multi-Head Attention Similar as scaled dot-product attention BIBREF24, Gaussian-masked directional attention can be described as a function to map queries and key-value pairs to the representation of input. Here queries, keys and values are all vectors. 
Standard scaled dot-product attention is calculated by dotting query $Q$ with all keys $K$, dividing each value by $\sqrt{d_k}$, where $d_k$ is the dimension of the keys, and applying a softmax function to generate the attention weights: Different from scaled dot-product attention, Gaussian-masked directional attention is designed to pay more attention to the characters adjacent to each position, casting the localness relationship between characters as a fixed Gaussian weight for the attention. We assume that the Gaussian weight only relies on the distance between characters. Firstly we introduce the Gaussian weight matrix $G$, which represents the localness relationship between each pair of characters: where $g_{ij}$ is the Gaussian weight between characters $i$ and $j$, $dis_{ij}$ is the distance between characters $i$ and $j$, $\Phi (x)$ is the cumulative distribution function of the Gaussian, and $\sigma $ is the standard deviation of the Gaussian function, a hyperparameter in our method. Equation (DISPLAY_FORM13) ensures that the Gaussian weight equals 1 when $dis_{ij}$ is 0. The larger the distance between characters, the smaller the weight, so that a character affects its adjacent characters more than distant ones. To combine the Gaussian weight with the self-attention, we take the Hadamard product of the Gaussian weight matrix $G$ and the score matrix produced by $Q{K^{T}}$, where $AG$ denotes the Gaussian-masked attention. This ensures that the relationship between two distant characters is weaker than that between adjacent characters. Scaled dot-product attention models the relationship between two characters without regard to their distance in the sequence. For the CWS task, the weight between adjacent characters should be more important, but it is hard for standard self-attention to achieve this effect explicitly because self-attention cannot access the order of the sentence directly. Gaussian-masked attention raises the weight between a character and its adjacent characters, reflecting the influence of neighboring characters. For the forward and backward encoders, the self-attention sublayer uses a triangular matrix mask so that each direction attends only to its own side of the sequence, where $pos_i$ is the position of character $c_i$. The triangular matrices for the forward and backward encoders are: $\left[ \begin{matrix} 1 & 0 & 0 & \cdots &0\\ 1 & 1 & 0 & \cdots &0\\ 1 & 1 & 1 & \cdots &0\\ \vdots &\vdots &\vdots &\ddots &\vdots \\ 1 & 1 & 1 & \cdots & 1\\ \end{matrix} \right]$ $\left[ \begin{matrix} 1 & 1 & 1 & \cdots &1 \\ 0 & 1 & 1 & \cdots &1 \\ 0 & 0& 1 & \cdots &1 \\ \vdots &\vdots &\vdots &\ddots &\vdots \\ 0 & 0 & 0 & \cdots & 1\\ \end{matrix}\right]$ Similar to BIBREF24, we use multi-head attention to capture information from different representation subspaces, as shown in Figure FIGREF16, yielding Gaussian-masked directional multi-head attention. With the multi-head attention architecture, the representation of the input is captured by where $MH$ is the Gaussian-masked multi-head attention, ${W_i^q, W_i^k,W_i^v} \in \mathbb {R}^{d_k \times d_h}$ are the parameter matrices used to generate the heads, $d_k$ is the model dimension and $d_h$ is the dimension of one head. Models ::: Bi-affinal Attention Scorer Regarding word boundaries as gaps between adjacent characters converts the character labeling task into a gap labeling task. Different from character labeling, gap labeling requires information about two adjacent characters.
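The following single-head sketch illustrates the Gaussian-masked directional attention described above. It is our reading rather than the released implementation: the exact form of the Gaussian weight (here $g_{ij} = 2\,(1-\Phi (dis_{ij}/\sigma ))$, which equals 1 at distance 0 and decays with distance) and the way the triangular mask is applied are assumptions, since the displayed equations are not fully reproduced in the text.

import torch

def gaussian_masked_attention(Q, K, V, sigma=2.0, direction=None):
    """Q, K, V: tensors of shape (seq_len, d_k); direction: None, 'forward'
    or 'backward' for the central, forward and backward encoders."""
    n, d_k = Q.shape
    pos = torch.arange(n, dtype=torch.float32)
    dist = (pos.unsqueeze(0) - pos.unsqueeze(1)).abs()
    normal = torch.distributions.Normal(0.0, 1.0)
    G = 2.0 * (1.0 - normal.cdf(dist / sigma))        # Gaussian localness weights, 1 on the diagonal

    scores = Q @ K.transpose(0, 1) / d_k ** 0.5       # scaled dot-product scores
    scores = scores * G                                # Hadamard product with the Gaussian mask

    if direction == "forward":                         # lower-triangular mask: attend to the past
        mask = torch.tril(torch.ones(n, n, dtype=torch.bool))
    elif direction == "backward":                      # upper-triangular mask: attend to the future
        mask = torch.triu(torch.ones(n, n, dtype=torch.bool))
    else:                                              # central encoder: no directional mask
        mask = torch.ones(n, n, dtype=torch.bool)
    scores = scores.masked_fill(~mask, float("-inf"))

    weights = torch.softmax(scores, dim=-1)
    return weights @ V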
The relationship between adjacent characters can be represented as the type of gap. The characteristic of word boundaries makes bi-affine attention an appropriate scorer for CWS task. Bi-affinal attention scorer is the component that we use to label the gap. Bi-affinal attention is developed from bilinear attention which has been used in dependency parsing BIBREF26 and SRL BIBREF27. The distribution of labels in a labeling task is often uneven which makes the output layer often include a fixed bias term for the prior probability of different labels BIBREF27. Bi-affine attention uses bias terms to alleviate the burden of the fixed bias term and get the prior probability which makes it different from bilinear attention. The distribution of the gap is uneven that is similar as other labeling task which fits bi-affine. Bi-affinal attention scorer labels the target depending on information of independent unit and the joint information of two units. In bi-affinal attention, the score $s_{ij}$ of characters $c_i$ and $c_j$ $(i < j)$ is calculated by: where $v_i^f$ is the forward information of $c_i$ and $v_i^b$ is the backward information of $c_j$. In Equation (DISPLAY_FORM21), $W$, $U$ and $b$ are all parameters that can be updated in training. $W$ is a matrix with shape $(d_i \times N\times d_j)$ and $U$ is a $(N\times (d_i + d_j))$ matrix where $d_i$ is the dimension of vector $v_i^f$ and $N$ is the number of labels. In our model, the biaffine scorer uses the forward information of character in front of the gap and the backward information of the character behind the gap to distinguish the position of characters. Figure FIGREF22 is an example of labeling gap. The method of using biaffine scorer ensures that the boundaries of words can be determined by adjacent characters with different directional information. The score vector of the gap is formed by the probability of being a boundary of word. Further, the model generates all boundaries using activation function in a greedy decoding way. Experiments ::: Experimental Settings ::: Data We train and evaluate our model on datasets from SIGHAN Bakeoff 2005 BIBREF21 which has four datasets, PKU, MSR, AS and CITYU. Table TABREF23 shows the statistics of train data. We use F-score to evaluate CWS models. To train model with pre-trained embeddings in AS and CITYU, we use OpenCC to transfer data from traditional Chinese to simplified Chinese. Experiments ::: Experimental Settings ::: Pre-trained Embedding We only use unigram feature so we only trained character embeddings. Our pre-trained embedding are pre-trained on Chinese Wikipedia corpus by word2vec BIBREF29 toolkit. The corpus used for pre-trained embedding is all transferred to simplified Chinese and not segmented. On closed test, we use embeddings initialized randomly. Experiments ::: Experimental Settings ::: Hyperparameters For different datasets, we use two kinds of hyperparameters which are presented in Table TABREF24. We use hyperparameters in Table TABREF24 for small corpora (PKU and CITYU) and normal corpora (MSR and AS). We set the standard deviation of Gaussian function in Equation (DISPLAY_FORM13) to 2. Each training batch contains sentences with at most 4096 tokens. Experiments ::: Experimental Settings ::: Optimizer To train our model, we use the Adam BIBREF30 optimizer with $\beta _1=0.9$, $\beta _2=0.98$ and $\epsilon =10^{-9}$. 
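Returning to the bi-affinal scorer described in the model section, the sketch below gives one concrete reading of it. The displayed equation is not reproduced above, so the standard biaffine form $s_{ij} = (v_i^f)^{\top } W v_j^b + U[v_i^f;v_j^b] + b$ is assumed, with $W$ of shape $(d_i \times N \times d_j)$ and $U$ of shape $(N \times (d_i+d_j))$ as stated; class and variable names are ours. Its per-gap label scores can then be turned into segmentations with a greedy pass such as the one sketched earlier.

import torch

class BiaffineGapScorer(torch.nn.Module):
    def __init__(self, d_f, d_b, n_labels):
        super().__init__()
        self.W = torch.nn.Parameter(torch.randn(d_f, n_labels, d_b) * 0.01)
        self.U = torch.nn.Parameter(torch.randn(n_labels, d_f + d_b) * 0.01)
        self.b = torch.nn.Parameter(torch.zeros(n_labels))

    def forward(self, v_f, v_b):
        """v_f: forward states (n, d_f); v_b: backward states (n, d_b).
        Scores the gap between characters i and i+1 for every i."""
        h_f = v_f[:-1]                      # character in front of each gap
        h_b = v_b[1:]                       # character behind each gap
        bilinear = torch.einsum("id,dlk,ik->il", h_f, self.W, h_b)
        linear = torch.cat([h_f, h_b], dim=-1) @ self.U.transpose(0, 1)
        return bilinear + linear + self.b   # (n - 1, n_labels) gap scores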
The learning rate schedule is the same as BIBREF24: $lrate = d^{-0.5} \cdot \min (step^{-0.5},\ step \cdot warmup\_step^{-1.5})$, where $d$ is the dimension of embeddings, $step$ is the training step number and $warmup\_step$ is the number of warmup steps. When the step number is smaller than the warmup step count, the learning rate increases linearly; afterwards it decreases. Experiments ::: Hardware and Implementation We trained our models on a single CPU (Intel i7-5960X) with an nVidia 1080 Ti GPU. We implement our model in Python with Pytorch 1.0. Experiments ::: Results Tables TABREF25 and TABREF26 report the performance of recent models and ours under the closed test setting. Without the assistance of the unsupervised segmentation features used in BIBREF20, our model outperforms all the other models in MSR and AS except BIBREF18 and achieves comparable performance in PKU and CITYU. Note that all the other models in this comparison adopt various $n$-gram features, while only our model takes unigram ones. With the unsupervised segmentation features introduced by BIBREF20, our model achieves higher results. Specifically, the results in MSR and AS set a new state-of-the-art, and those in CITYU and PKU approach the previous state-of-the-art. The unsupervised segmentation features are derived from the given training dataset, thus using them does not violate the closed test rules of SIGHAN Bakeoff. Table TABREF36 compares our model and recent neural models under the open test setting, in which any external resources, especially pre-trained embeddings or language models, can be used. In MSR and AS, our model achieves comparable results, while our results in CITYU and PKU are not remarkable. However, it is well known that comparing models under the open test setting is difficult, especially with pre-trained embeddings, since not all models use the same method and data for pre-training. Though pre-trained embeddings or language models can improve performance, the improvement may come from multiple sources; the success of a pre-trained embedding in boosting performance does not by itself prove that the model is better. Compared with other LSTM models, our model performs better in AS and MSR than in CITYU and PKU. Considering the scale of the different corpora, we believe that corpus size affects our model: the larger the corpus, the better the model performs. On small corpora, the model tends to overfit. Tables TABREF25 and TABREF26 also show the decoding time on the different datasets. Our model finishes segmentation with the least decoding time on all four datasets, thanks to the model architecture, which only uses attention mechanisms as basic blocks. Related Work ::: Chinese Word Segmentation CWS is a Chinese natural language processing task that delimits word boundaries. BIBREF0 first formalize CWS as a sequence labeling task. BIBREF3 show that different character tag sets can have an essential impact on CWS. BIBREF2 use CRFs as a model for CWS, achieving new state-of-the-art results. Work on statistical CWS has built the basis for neural CWS. Neural word segmentation has been widely adopted to minimize the effort spent on feature engineering, which was important in statistical CWS. BIBREF4 introduce a neural model with sliding-window based sequence labeling. BIBREF6 propose a gated recursive neural network (GRNN) for CWS to incorporate complicated combinations of contextual character and n-gram features. BIBREF7 use LSTM to learn long-distance information.
BIBREF9 propose a neural framework that eliminates context windows and utilize complete segmentation history. BIBREF33 explore a joint model that performs segmentation, POS-Tagging and chunking simultaneously. BIBREF34 propose a feature-enriched neural model for joint CWS and part-of-speech tagging. BIBREF35 present a joint model to enhance the segmentation of Chinese microtext by performing CWS and informal word detection simultaneously. BIBREF17 propose a character-based convolutional neural model to capture $n$-gram features automatically and an effective approach to incorporate word embeddings. BIBREF11 improve the model in BIBREF9 and propose a greedy neural word segmenter with balanced word and character embedding inputs. BIBREF23 propose a novel neural network model to incorporate unlabeled and partially-labeled data. BIBREF36 propose two methods that extend the Bi-LSTM to perform incorporating dictionaries into neural networks for CWS. BIBREF37 propose Switch-LSTMs to segment words and provided a more flexible solution for multi-criteria CWS which is easy to transfer the learned knowledge to new criteria. Related Work ::: Transformer Transformer BIBREF24 is an attention-based neural machine translation model. The Transformer is one kind of self-attention networks (SANs) which is proposed in BIBREF38. Encoder of the Transformer consists of one self-attention layer and a position-wise feed-forward layer. Decoder of the Transformer contains one self-attention layer, one encoder-decoder attention layer and one position-wise feed-forward layer. The Transformer uses residual connections around the sublayers and then followed by a layer normalization layer. Scaled dot-product attention is the key component in the Transformer. The input of attention contains queries, keys, and values of input sequences. The attention is generated using queries and keys like Equation (DISPLAY_FORM11). Structure of scaled dot-product attention allows the self-attention layer generate the representation of sentences at once and contain the information of the sentence which is different from RNN that process characters of sentences one by one. Standard self-attention is similar as Gaussian-masked direction attention while it does not have directional mask and gaussian mask. BIBREF24 also propose multi-head attention which is better to generate representation of sentence by dividing queries, keys and values to different heads and get information from different subspaces. Conclusion In this paper, we propose an attention mechanism only based Chinese word segmentation model. Our model uses self-attention from the Transformer encoder to take sequence input and bi-affine attention scorer to predict the label of gaps. To improve the ability of capturing the localness and directional information of self-attention based encoder, we propose a variant of self-attention called Gaussian-masked directional multi-head attention to replace the standard self-attention. We also extend the Transformer encoder to capture directional features. Our model uses only unigram features instead of multiple $n$-gram features in previous work. Our model is evaluated on standard benchmark dataset, SIGHAN Bakeoff 2005, which shows not only our model performs segmentation faster than any previous models but also gives new higher or comparable segmentation performance against previous state-of-the-art models.
What are strong baselines model is compared to?
Baseline models are: - Chen et al., 2015a - Chen et al., 2015b - Liu et al., 2016 - Cai and Zhao, 2016 - Cai et al., 2017 - Zhou et al., 2017 - Ma et al., 2018 - Wang et al., 2019
3,629
qasper
4k
Introduction Question Generation (QG) is the task of automatically creating questions from a range of inputs, such as natural language text BIBREF0, knowledge base BIBREF1 and image BIBREF2. QG is an increasingly important area in NLP with various application scenarios such as intelligence tutor systems, open-domain chatbots and question answering dataset construction. In this paper, we focus on question generation from reading comprehension materials like SQuAD BIBREF3. As shown in Figure FIGREF1, given a sentence in the reading comprehension paragraph and the text fragment (i.e., the answer) that we want to ask about, we aim to generate a question that is asked about the specified answer. Question generation for reading comprehension is firstly formalized as a declarative-to-interrogative sentence transformation problem with predefined rules or templates BIBREF4, BIBREF0. With the rise of neural models, Du2017LearningTA propose to model this task under the sequence-to-sequence (Seq2Seq) learning framework BIBREF5 with attention mechanism BIBREF6. However, question generation is a one-to-many sequence generation problem, i.e., several aspects can be asked given a sentence. Zhou2017NeuralQG propose the answer-aware question generation setting which assumes the answer, a contiguous span inside the input sentence, is already known before question generation. To capture answer-relevant words in the sentence, they adopt a BIO tagging scheme to incorporate the answer position embedding in Seq2Seq learning. Furthermore, Sun2018AnswerfocusedAP propose that tokens close to the answer fragments are more likely to be answer-relevant. Therefore, they explicitly encode the relative distance between sentence words and the answer via position embedding and position-aware attention. Although existing proximity-based answer-aware approaches achieve reasonable performance, we argue that such intuition may not apply to all cases especially for sentences with complex structure. For example, Figure FIGREF1 shows such an example where those approaches fail. This sentence contains a few facts and due to the parenthesis (i.e. “the area's coldest month”), some facts intertwine: “The daily mean temperature in January is 0.3$^\circ $C” and “January is the area's coldest month”. From the question generated by a proximity-based answer-aware baseline, we find that it wrongly uses the word “coldest” but misses the correct word “mean” because “coldest” has a shorter distance to the answer “0.3$^\circ $C”. In summary, their intuition that “the neighboring words of the answer are more likely to be answer-relevant and have a higher chance to be used in the question” is not reliable. To quantitatively show this drawback of these models, we implement the approach proposed by Sun2018AnswerfocusedAP and analyze its performance under different relative distances between the answer and other non-stop sentence words that also appear in the ground truth question. The results are shown in Table TABREF2. We find that the performance drops at most 36% when the relative distance increases from “$0\sim 10$” to “$>10$”. In other words, when the useful context is located far away from the answer, current proximity-based answer-aware approaches will become less effective, since they overly emphasize neighboring words of the answer. To address this issue, we extract the structured answer-relevant relations from sentences and propose a method to jointly model such structured relation and the unstructured sentence for question generation. 
The structured answer-relevant relation is likely to be to the point context and thus can help keep the generated question to the point. For example, Figure FIGREF1 shows our framework can extract the right answer-relevant relation (“The daily mean temperature in January”, “is”, “32.6$^\circ $F (0.3$^\circ $C)”) among multiple facts. With the help of such structured information, our model is less likely to be confused by sentences with a complex structure. Specifically, we firstly extract multiple relations with an off-the-shelf Open Information Extraction (OpenIE) toolbox BIBREF7, then we select the relation that is most relevant to the answer with carefully designed heuristic rules. Nevertheless, it is challenging to train a model to effectively utilize both the unstructured sentence and the structured answer-relevant relation because both of them could be noisy: the unstructured sentence may contain multiple facts which are irrelevant to the target question, while the limitation of the OpenIE tool may produce less accurate extracted relations. To explore their advantages simultaneously and avoid the drawbacks, we design a gated attention mechanism and a dual copy mechanism based on the encoder-decoder framework, where the former learns to control the information flow between the unstructured and structured inputs, while the latter learns to copy words from two sources to maintain the informativeness and faithfulness of generated questions. In the evaluations on the SQuAD dataset, our system achieves significant and consistent improvement as compared to all baseline methods. In particular, we demonstrate that the improvement is more significant with a larger relative distance between the answer and other non-stop sentence words that also appear in the ground truth question. Furthermore, our model is capable of generating diverse questions for a single sentence-answer pair where the sentence conveys multiple relations of its answer fragment. Framework Description In this section, we first introduce the task definition and our protocol to extract structured answer-relevant relations. Then we formalize the task under the encoder-decoder framework with gated attention and dual copy mechanism. Framework Description ::: Problem Definition We formalize our task as an answer-aware Question Generation (QG) problem BIBREF8, which assumes answer phrases are given before generating questions. Moreover, answer phrases are shown as text fragments in passages. Formally, given the sentence $S$, the answer $A$, and the answer-relevant relation $M$, the task of QG aims to find the best question $\overline{Q}$ such that, where $A$ is a contiguous span inside $S$. Framework Description ::: Answer-relevant Relation Extraction We utilize an off-the-shelf toolbox of OpenIE to the derive structured answer-relevant relations from sentences as to the point contexts. Relations extracted by OpenIE can be represented either in a triple format or in an n-ary format with several secondary arguments, and we employ the latter to keep the extractions as informative as possible and avoid extracting too many similar relations in different granularities from one sentence. We join all arguments in the extracted n-ary relation into a sequence as our to the point context. Figure FIGREF5 shows n-ary relations extracted from OpenIE. As we can see, OpenIE extracts multiple relations for complex sentences. 
Here we select the most informative relation according to three criteria in the order of descending importance: (1) having the maximal number of overlapped tokens between the answer and the relation; (2) being assigned the highest confidence score by OpenIE; (3) containing maximum non-stop words. As shown in Figure FIGREF5, our criteria can select answer-relevant relations (waved in Figure FIGREF5), which is especially useful for sentences with extraneous information. In rare cases, OpenIE cannot extract any relation, we treat the sentence itself as the to the point context. Table TABREF8 shows some statistics to verify the intuition that the extracted relations can serve as more to the point context. We find that the tokens in relations are 61% more likely to be used in the target question than the tokens in sentences, and thus they are more to the point. On the other hand, on average the sentences contain one more question token than the relations (1.86 v.s. 2.87). Therefore, it is still necessary to take the original sentence into account to generate a more accurate question. Framework Description ::: Our Proposed Model ::: Overview. As shown in Figure FIGREF10, our framework consists offour components (1) Sentence Encoder and Relation Encoder, (2) Decoder, (3) Gated Attention Mechanism and (4) Dual Copy Mechanism. The sentence encoder and relation encoder encode the unstructured sentence and the structured answer-relevant relation, respectively. To select and combine the source information from the two encoders, a gated attention mechanism is employed to jointly attend both contextualized information sources, and a dual copy mechanism copies words from either the sentence or the relation. Framework Description ::: Our Proposed Model ::: Answer-aware Encoder. We employ two encoders to integrate information from the unstructured sentence $S$ and the answer-relevant relation $M$ separately. Sentence encoder takes in feature-enriched embeddings including word embeddings $\mathbf {w}$, linguistic embeddings $\mathbf {l}$ and answer position embeddings $\mathbf {a}$. We follow BIBREF9 to transform POS and NER tags into continuous representation ($\mathbf {l}^p$ and $\mathbf {l}^n$) and adopt a BIO labelling scheme to derive the answer position embedding (B: the first token of the answer, I: tokens within the answer fragment except the first one, O: tokens outside of the answer fragment). For each word $w_i$ in the sentence $S$, we simply concatenate all features as input: $\mathbf {x}_i^s= [\mathbf {w}_i; \mathbf {l}^p_i; \mathbf {l}^n_i; \mathbf {a}_i]$. Here $[\mathbf {a};\mathbf {b}]$ denotes the concatenation of vectors $\mathbf {a}$ and $\mathbf {b}$. We use bidirectional LSTMs to encode the sentence $(\mathbf {x}_1^s, \mathbf {x}_2^s, ..., \mathbf {x}_n^s)$ to get a contextualized representation for each token: where $\overrightarrow{\mathbf {h}}^{s}_i$ and $\overleftarrow{\mathbf {h}}^{s}_i$ are the hidden states at the $i$-th time step of the forward and the backward LSTMs. The output state of the sentence encoder is the concatenation of forward and backward hidden states: $\mathbf {h}^{s}_i=[\overrightarrow{\mathbf {h}}^{s}_i;\overleftarrow{\mathbf {h}}^{s}_i]$. The contextualized representation of the sentence is $(\mathbf {h}^{s}_1, \mathbf {h}^{s}_2, ..., \mathbf {h}^{s}_n)$. For the relation encoder, we firstly join all items in the n-ary relation $M$ into a sequence. 
Then we only take answer position embedding as an extra feature for the sequence: $\mathbf {x}_i^m= [\mathbf {w}_i; \mathbf {a}_i]$. Similarly, we take another bidirectional LSTMs to encode the relation sequence and derive the corresponding contextualized representation $(\mathbf {h}^{m}_1, \mathbf {h}^{m}_2, ..., \mathbf {h}^{m}_n)$. Framework Description ::: Our Proposed Model ::: Decoder. We use an LSTM as the decoder to generate the question. The decoder predicts the word probability distribution at each decoding timestep to generate the question. At the t-th timestep, it reads the word embedding $\mathbf {w}_{t}$ and the hidden state $\mathbf {u}_{t-1}$ of the previous timestep to generate the current hidden state: Framework Description ::: Our Proposed Model ::: Gated Attention Mechanism. We design a gated attention mechanism to jointly attend the sentence representation and the relation representation. For sentence representation $(\mathbf {h}^{s}_1, \mathbf {h}^{s}_2, ..., \mathbf {h}^{s}_n)$, we employ the Luong2015EffectiveAT's attention mechanism to obtain the sentence context vector $\mathbf {c}^s_t$, where $\mathbf {W}_a$ is a trainable weight. Similarly, we obtain the vector $\mathbf {c}^m_t$ from the relation representation $(\mathbf {h}^{m}_1, \mathbf {h}^{m}_2, ..., \mathbf {h}^{m}_n)$. To jointly model the sentence and the relation, a gating mechanism is designed to control the information flow from two sources: where $\odot $ represents element-wise dot production and $\mathbf {W}_g, \mathbf {W}_h$ are trainable weights. Finally, the predicted probability distribution over the vocabulary $V$ is computed as: where $\mathbf {W}_V$ and $\mathbf {b}_V$ are parameters. Framework Description ::: Our Proposed Model ::: Dual Copy Mechanism. To deal with the rare and unknown words, the decoder applies the pointing method BIBREF10, BIBREF11, BIBREF12 to allow copying a token from the input sentence at the $t$-th decoding step. We reuse the attention score $\mathbf {\alpha }_{t}^s$ and $\mathbf {\alpha }_{t}^m$ to derive the copy probability over two source inputs: Different from the standard pointing method, we design a dual copy mechanism to copy from two sources with two gates. The first gate is designed for determining copy tokens from two sources of inputs or generate next word from $P_V$, which is computed as $g^v_t = \text{sigmoid}(\mathbf {w}^v_g \tilde{\mathbf {h}}_t + b^v_g)$. The second gate takes charge of selecting the source (sentence or relation) to copy from, which is computed as $g^c_t = \text{sigmoid}(\mathbf {w}^c_g [\mathbf {c}_t^s;\mathbf {c}_t^m] + b^c_g)$. Finally, we combine all probabilities $P_V$, $P_S$ and $P_M$ through two soft gates $g^v_t$ and $g^c_t$. The probability of predicting $w$ as the $t$-th token of the question is: Framework Description ::: Our Proposed Model ::: Training and Inference. Given the answer $A$, sentence $S$ and relation $M$, the training objective is to minimize the negative log-likelihood with regard to all parameters: where $\mathcal {\lbrace }Q\rbrace $ is the set of all training instances, $\theta $ denotes model parameters and $\text{log} P(Q|A,S,M;\theta )$ is the conditional log-likelihood of $Q$. In testing, our model targets to generate a question $Q$ by maximizing: Experimental Setting ::: Dataset & Metrics We conduct experiments on the SQuAD dataset BIBREF3. It contains 536 Wikipedia articles and 100k crowd-sourced question-answer pairs. 
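Stepping back to the decoder for a moment, the sketch below shows one plausible way the generation distribution $P_V$ and the two copy distributions $P_S$ and $P_M$ could be combined through the gates $g^v_t$ and $g^c_t$; the final mixing equation is not reproduced above, so this combination (and the orientation of each gate) is an assumption.

import torch

def mix_distributions(p_vocab, p_copy_sent, p_copy_rel, g_v, g_c):
    """All three distributions are assumed to be over the same (extended) vocabulary.
    g_v: generate-vs-copy gate; g_c: sentence-vs-relation copy gate (scalars in (0, 1))."""
    copy_mix = g_c * p_copy_sent + (1.0 - g_c) * p_copy_rel   # choose the copy source
    return g_v * p_vocab + (1.0 - g_v) * copy_mix             # choose generate vs. copy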
The questions are written by crowd-workers and the answers are spans of tokens in the articles. We employ two different data splits, following Zhou2017NeuralQG and Du2017LearningTA. In Zhou2017NeuralQG, the original SQuAD development set is evenly divided into dev and test sets, while Du2017LearningTA treats the SQuAD development set as its development set and splits the original SQuAD training set into a training set and a test set. We also filter out questions which do not have any overlapping non-stop words with the corresponding sentences and perform some preprocessing steps, such as tokenization and sentence splitting. The data statistics are given in Table TABREF27. We evaluate with all commonly-used metrics in question generation BIBREF13: BLEU-1 (B1), BLEU-2 (B2), BLEU-3 (B3), BLEU-4 (B4) BIBREF17, METEOR (MET) BIBREF18 and ROUGE-L (R-L) BIBREF19. We use the evaluation script released by Chen2015MicrosoftCC. Experimental Setting ::: Baseline Models We compare with the following models. s2s BIBREF13 proposes an attention-based sequence-to-sequence neural network for question generation. NQG++ BIBREF9 takes the answer position feature and linguistic features into consideration and equips the Seq2Seq model with a copy mechanism. M2S+cp BIBREF14 conducts multi-perspective matching between the answer and the sentence to derive an answer-aware sentence representation for question generation. s2s+MP+GSA BIBREF8 introduces a gated self-attention into the encoder and a maxout pointer mechanism into the decoder. We report their sentence-level results for a fair comparison. Hybrid BIBREF15 is a hybrid model which considers the answer embedding for the question word generation and the position of context words for modeling the relative distance between the context words and the answer. ASs2s BIBREF16 replaces the answer in the sentence with a special token to avoid its appearance in the generated questions. Experimental Setting ::: Implementation Details We take the most frequent 20k words as our vocabulary and use the GloVe word embeddings BIBREF20 for initialization. The embedding dimensions for POS, NER and answer position are set to 20. We use two-layer LSTMs in both the encoder and decoder, and the LSTM hidden unit size is set to 600. We use dropout BIBREF21 with probability $p=0.3$. All trainable parameters, except word embeddings, are randomly initialized with the Xavier uniform in $(-0.1, 0.1)$ BIBREF22. For optimization during training, we use SGD as the optimizer with a minibatch size of 64 and an initial learning rate of 1.0. We train the model for 15 epochs and start halving the learning rate after the 8th epoch. We set the gradient norm upper bound to 3 during training. We adopt teacher forcing during training. At test time, we select the model with the lowest perplexity and employ beam search with size 3 to generate questions. All hyper-parameters and models are selected on the validation dataset. Results and Analysis ::: Main Results Table TABREF30 shows automatic evaluation results for our model and the baselines (copied from their papers). Our proposed model, which combines structured answer-relevant relations and unstructured sentences, achieves significant improvements over proximity-based answer-aware models BIBREF9, BIBREF15 on both dataset splits.
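For the breakdown analysis that follows, the sketch below illustrates how the relative-distance statistic could be computed for one sentence-answer-question triple; whether the bucket is determined by the maximum or the average distance, the tokenisation and the stop-word list are all assumptions on our part.

STOP_WORDS = {"the", "a", "an", "of", "in", "is", "was", "to", "and"}  # toy list, for illustration

def answer_context_distance(sentence_tokens, answer_span, question_tokens):
    """answer_span: (start, end) token indices of the answer inside the sentence.
    Returns the largest distance between the answer and a non-stop sentence word
    that also appears in the ground-truth question (0 if there is none)."""
    start, end = answer_span
    q_set = {t.lower() for t in question_tokens}
    distances = []
    for i, tok in enumerate(sentence_tokens):
        if start <= i < end or tok.lower() in STOP_WORDS:
            continue
        if tok.lower() in q_set:
            # distance to the nearest answer boundary (assumed definition)
            distances.append(min(abs(i - start), abs(i - (end - 1))))
    return max(distances) if distances else 0   # bucket examples into "0~10" vs ">10"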
Presumably, our structured answer-relevant relation is a generalization of the context explored by the proximity-based methods: those methods can only capture short dependencies around answer fragments, while our extractions can capture both short and long dependencies given the answer fragments. Moreover, our proposed framework is a general one for jointly leveraging structured relations and unstructured sentences. All compared baseline models, which only consider unstructured sentences, can be further enhanced under our framework. Recall that existing proximity-based answer-aware models perform poorly when the distance between the answer fragment and other non-stop sentence words that also appear in the ground truth question is large (Table TABREF2). Here we investigate whether our proposed model, which uses the structured answer-relevant relations, can alleviate this issue, by conducting experiments under the same setting as in Table TABREF2. The performances broken down by relative distance are shown in Table TABREF40. We find that our proposed model outperforms Hybrid (our re-implemented version for this experiment) on all ranges of relative distances, which shows that the structured answer-relevant relations can capture both short- and long-range dependencies between the answer and the rest of the sentence. Furthermore, comparing the performance difference between Hybrid and our model, we find the improvements become more significant when the distance increases from “$0\sim 10$” to “$>10$”. One reason is that our model can extract relations with distant dependencies to the answer, which greatly helps our model ignore the extraneous information. Proximity-based answer-aware models may overly emphasize the neighboring words of answers and become less effective as the useful context moves further away from the answer in complex sentences. In fact, the breakdown intervals in Table TABREF40 naturally bound the sentence length: for “$>10$”, for example, the sentences in this group must be longer than 10 tokens. Thus, the length variances in these two intervals could be significant. To further validate whether our model can capture long-range dependency words, we rerun the analysis of Table TABREF40 only for long sentences (length $>$ 20) in each interval. The improvement percentages over Hybrid are shown in Table TABREF40, and they become more significant when the distance increases from “$0\sim 10$” to “$>10$”. Results and Analysis ::: Case Study Figure FIGREF42 provides example questions generated by crowd-workers (ground truth questions), the baseline Hybrid BIBREF15, and our model. In the first case, there are two subsequences in the input and the answer has no relation with the second subsequence. However, we see that the baseline model prediction copies irrelevant words “The New York Times”, while our model avoids using the extraneous subsequence “The New York Times noted ...” with the help of the structured answer-relevant relation. Compared with the ground truth question, our model cannot capture cross-sentence information like “her fifth album”, where the techniques in paragraph-level QG models BIBREF8 may help. In the second case, as discussed in Section SECREF1, this sentence contains a few facts and some of these facts intertwine. We find that our model can capture distant answer-relevant dependencies such as “mean temperature”, while the proximity-based baseline model wrongly takes neighboring words of the answer, like “coldest”, into the generated question. 
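For readers who want to reproduce the breakdown by relative distance discussed above, the following is a minimal sketch of how such a distance can be computed. The tokenisation, the stopword list, and the use of the maximum distance for bucketing are assumptions on our part, not details taken from the paper.

```python
STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "was", "what", "which"}

def relative_distance(sent_tokens, answer_span, question_tokens):
    """Largest token distance between the answer span and any non-stop sentence
    word that also appears in the ground-truth question."""
    ans_start, ans_end = answer_span  # inclusive token indices into sent_tokens
    question_words = {t.lower() for t in question_tokens} - STOPWORDS
    distances = []
    for i, tok in enumerate(sent_tokens):
        if ans_start <= i <= ans_end:
            continue  # skip the answer fragment itself
        if tok.lower() in question_words:
            distances.append(ans_start - i if i < ans_start else i - ans_end)
    return max(distances) if distances else 0

def distance_bucket(distance):
    # Buckets used in the breakdown: "0~10" vs. ">10".
    return ">10" if distance > 10 else "0~10"
```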
Results and Analysis ::: Diverse Question Generation Another interesting observation is that for the same answer-sentence pair, our model can generate diverse questions by taking different answer-relevant relations as input. Such capability improves the interpretability of our model because the model is given not only what to be asked (i.e., the answer) but also the related fact (i.e., the answer-relevant relation) to be covered in the question. In contrast, proximity-based answer-aware models can only generate one question given the sentence-answer pair regardless of how many answer-relevant relations in the sentence. We think such capability can also validate our motivation: questions should be generated according to the answer-aware relations instead of neighboring words of answer fragments. Figure FIGREF45 show two examples of diverse question generation. In the first case, the answer fragment `Hugh L. Dryden' is the appositive to `NASA Deputy Administrator' but the subject to the following tokens `announced the Apollo program ...'. Our framework can extract these two answer-relevant relations, and by feeding them to our model separately, we can receive two questions asking different relations with regard to the answer. Related Work The topic of question generation, initially motivated for educational purposes, is tackled by designing many complex rules for specific question types BIBREF4, BIBREF23. Heilman2010GoodQS improve rule-based question generation by introducing a statistical ranking model. First, they remove extraneous information in the sentence to transform it into a simpler one, which can be transformed easily into a succinct question with predefined sets of general rules. Then they adopt an overgenerate-and-rank approach to select the best candidate considering several features. With the rise of dominant neural sequence-to-sequence learning models BIBREF5, Du2017LearningTA frame question generation as a sequence-to-sequence learning problem. Compared with rule-based approaches, neural models BIBREF24 can generate more fluent and grammatical questions. However, question generation is a one-to-many sequence generation problem, i.e., several aspects can be asked given a sentence, which confuses the model during train and prevents concrete automatic evaluation. To tackle this issue, Zhou2017NeuralQG propose the answer-aware question generation setting which assumes the answer is already known and acts as a contiguous span inside the input sentence. They adopt a BIO tagging scheme to incorporate the answer position information as learned embedding features in Seq2Seq learning. Song2018LeveragingCI explicitly model the information between answer and sentence with a multi-perspective matching model. Kim2019ImprovingNQ also focus on the answer information and proposed an answer-separated Seq2Seq model by masking the answer with special tokens. All answer-aware neural models treat question generation as a one-to-one mapping problem, but existing models perform poorly for sentences with a complex structure (as shown in Table TABREF2). Our work is inspired by the process of extraneous information removing in BIBREF0, BIBREF25. Different from Heilman2010GoodQS which directly use the simplified sentence for generation and cao2018faithful which only consider aggregate two sources of information via gated attention in summarization, we propose to combine the structured answer-relevant relation and the original sentence. 
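As a side note on the BIO answer-position scheme mentioned above, here is a minimal sketch of how such tags are typically produced before being mapped to learned embedding features; the example sentence and the helper name are purely illustrative.

```python
def bio_answer_tags(sent_tokens, answer_start, answer_end):
    """Tag each sentence token as B/I/O with respect to the (inclusive) answer span."""
    tags = []
    for i, _ in enumerate(sent_tokens):
        if i == answer_start:
            tags.append("B")
        elif answer_start < i <= answer_end:
            tags.append("I")
        else:
            tags.append("O")
    return tags

# bio_answer_tags(["She", "released", "her", "fifth", "album", "in", "2013"], 3, 4)
# -> ['O', 'O', 'O', 'B', 'I', 'O', 'O']
```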
Factoid question generation from structured text is initially investigated by Serban2016GeneratingFQ, but our focus here is leveraging structured inputs to help question generation over unstructured sentences. Our proposed model can take advantage of unstructured sentences and structured answer-relevant relations to maintain informativeness and faithfulness of generated questions. The proposed model can also be generalized in other conditional sequence generation tasks which require multiple sources of inputs, e.g., distractor generation for multiple choice questions BIBREF26. Conclusions and Future Work In this paper, we propose a question generation system which combines unstructured sentences and structured answer-relevant relations for generation. The unstructured sentences maintain the informativeness of generated questions while structured answer-relevant relations keep the faithfulness of questions. Extensive experiments demonstrate that our proposed model achieves state-of-the-art performance across several metrics. Furthermore, our model can generate diverse questions with different structured answer-relevant relations. For future work, there are some interesting dimensions to explore, such as difficulty levels BIBREF27, paragraph-level information BIBREF8 and conversational question generation BIBREF28. Acknowledgments This work is supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (No. CUHK 14208815 and No. CUHK 14210717 of the General Research Fund). We would like to thank the anonymous reviewers for their comments. We would also like to thank Department of Computer Science and Engineering, The Chinese University of Hong Kong for the conference grant support.
On what datasets are experiments performed?
SQuAD
Introduction Recurrent neural networks (RNNs), including gated variants such as the long short-term memory (LSTM) BIBREF0 have become the standard model architecture for deep learning approaches to sequence modeling tasks. RNNs repeatedly apply a function with trainable parameters to a hidden state. Recurrent layers can also be stacked, increasing network depth, representational power and often accuracy. RNN applications in the natural language domain range from sentence classification BIBREF1 to word- and character-level language modeling BIBREF2 . RNNs are also commonly the basic building block for more complex models for tasks such as machine translation BIBREF3 , BIBREF4 , BIBREF5 or question answering BIBREF6 , BIBREF7 . Unfortunately standard RNNs, including LSTMs, are limited in their capability to handle tasks involving very long sequences, such as document classification or character-level machine translation, as the computation of features or states for different parts of the document cannot occur in parallel. Convolutional neural networks (CNNs) BIBREF8 , though more popular on tasks involving image data, have also been applied to sequence encoding tasks BIBREF9 . Such models apply time-invariant filter functions in parallel to windows along the input sequence. CNNs possess several advantages over recurrent models, including increased parallelism and better scaling to long sequences such as those often seen with character-level language data. Convolutional models for sequence processing have been more successful when combined with RNN layers in a hybrid architecture BIBREF10 , because traditional max- and average-pooling approaches to combining convolutional features across timesteps assume time invariance and hence cannot make full use of large-scale sequence order information. We present quasi-recurrent neural networks for neural sequence modeling. QRNNs address both drawbacks of standard models: like CNNs, QRNNs allow for parallel computation across both timestep and minibatch dimensions, enabling high throughput and good scaling to long sequences. Like RNNs, QRNNs allow the output to depend on the overall order of elements in the sequence. We describe QRNN variants tailored to several natural language tasks, including document-level sentiment classification, language modeling, and character-level machine translation. These models outperform strong LSTM baselines on all three tasks while dramatically reducing computation time. Model Each layer of a quasi-recurrent neural network consists of two kinds of subcomponents, analogous to convolution and pooling layers in CNNs. The convolutional component, like convolutional layers in CNNs, allows fully parallel computation across both minibatches and spatial dimensions, in this case the sequence dimension. The pooling component, like pooling layers in CNNs, lacks trainable parameters and allows fully parallel computation across minibatch and feature dimensions. Given an input sequence INLINEFORM0 of INLINEFORM1 INLINEFORM2 -dimensional vectors INLINEFORM3 , the convolutional subcomponent of a QRNN performs convolutions in the timestep dimension with a bank of INLINEFORM4 filters, producing a sequence INLINEFORM5 of INLINEFORM6 -dimensional candidate vectors INLINEFORM7 . In order to be useful for tasks that include prediction of the next token, the filters must not allow the computation for any given timestep to access information from future timesteps. 
That is, with filters of width INLINEFORM8 , each INLINEFORM9 depends only on INLINEFORM10 through INLINEFORM11 . This concept, known as a masked convolution BIBREF11 , is implemented by padding the input to the left by the convolution's filter size minus one. We apply additional convolutions with separate filter banks to obtain sequences of vectors for the elementwise gates that are needed for the pooling function. While the candidate vectors are passed through a INLINEFORM0 nonlinearity, the gates use an elementwise sigmoid. If the pooling function requires a forget gate INLINEFORM1 and an output gate INLINEFORM2 at each timestep, the full set of computations in the convolutional component is then: DISPLAYFORM0 where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 , each in INLINEFORM3 , are the convolutional filter banks and INLINEFORM4 denotes a masked convolution along the timestep dimension. Note that if the filter width is 2, these equations reduce to the LSTM-like DISPLAYFORM0 Convolution filters of larger width effectively compute higher INLINEFORM0 -gram features at each timestep; thus larger widths are especially important for character-level tasks. Suitable functions for the pooling subcomponent can be constructed from the familiar elementwise gates of the traditional LSTM cell. We seek a function controlled by gates that can mix states across timesteps, but which acts independently on each channel of the state vector. The simplest option, which BIBREF12 term “dynamic average pooling”, uses only a forget gate: DISPLAYFORM0 We term these three options f-pooling, fo-pooling, and ifo-pooling respectively; in each case we initialize INLINEFORM0 or INLINEFORM1 to zero. Although the recurrent parts of these functions must be calculated for each timestep in sequence, their simplicity and parallelism along feature dimensions means that, in practice, evaluating them over even long sequences requires a negligible amount of computation time. A single QRNN layer thus performs an input-dependent pooling, followed by a gated linear combination of convolutional features. As with convolutional neural networks, two or more QRNN layers should be stacked to create a model with the capacity to approximate more complex functions. Variants Motivated by several common natural language tasks, and the long history of work on related architectures, we introduce several extensions to the stacked QRNN described above. Notably, many extensions to both recurrent and convolutional models can be applied directly to the QRNN as it combines elements of both model types. Regularization An important extension to the stacked QRNN is a robust regularization scheme inspired by recent work in regularizing LSTMs. The need for an effective regularization method for LSTMs, and dropout's relative lack of efficacy when applied to recurrent connections, led to the development of recurrent dropout schemes, including variational inference–based dropout BIBREF13 and zoneout BIBREF14 . These schemes extend dropout to the recurrent setting by taking advantage of the repeating structure of recurrent networks, providing more powerful and less destructive regularization. Variational inference–based dropout locks the dropout mask used for the recurrent connections across timesteps, so a single RNN pass uses a single stochastic subset of the recurrent weights. 
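A minimal PyTorch sketch of a single QRNN layer with f-pooling, following the equations above (masked convolution via left padding, tanh candidates, sigmoid forget gates, then the elementwise recurrence initialised at zero). It is an illustration rather than the authors' implementation, which was written in Chainer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QRNNfPooling(nn.Module):
    def __init__(self, in_dim, hidden_dim, filter_width=2):
        super().__init__()
        self.filter_width = filter_width
        self.hidden_dim = hidden_dim
        # One convolution jointly producing candidates Z and forget gates F.
        self.conv = nn.Conv1d(in_dim, 2 * hidden_dim, kernel_size=filter_width)

    def forward(self, x):
        # x: (batch, time, in_dim)
        x = x.transpose(1, 2)                       # (batch, in_dim, time)
        # Masked convolution: pad on the left by filter_width - 1 so that the
        # output at timestep t never sees timesteps t+1, t+2, ...
        x = F.pad(x, (self.filter_width - 1, 0))
        z, f = self.conv(x).chunk(2, dim=1)         # each (batch, hidden, time)
        z, f = torch.tanh(z), torch.sigmoid(f)
        # f-pooling: h_t = f_t * h_{t-1} + (1 - f_t) * z_t, with h_0 = 0.
        h = z.new_zeros(z.size(0), self.hidden_dim)
        outputs = []
        for t in range(z.size(2)):
            h = f[:, :, t] * h + (1.0 - f[:, :, t]) * z[:, :, t]
            outputs.append(h)
        return torch.stack(outputs, dim=1)          # (batch, time, hidden)
```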
Zoneout stochastically chooses a new subset of channels to “zone out” at each timestep; for these channels the network copies states from one timestep to the next without modification. As QRNNs lack recurrent weights, the variational inference approach does not apply. Thus we extended zoneout to the QRNN architecture by modifying the pooling function to keep the previous pooling state for a stochastic subset of channels. Conveniently, this is equivalent to stochastically setting a subset of the QRNN's INLINEFORM0 gate channels to 1, or applying dropout on INLINEFORM1 : DISPLAYFORM0 Thus the pooling function itself need not be modified at all. We note that when using an off-the-shelf dropout layer in this context, it is important to remove automatic rescaling functionality from the implementation if it is present. In many experiments, we also apply ordinary dropout between layers, including between word embeddings and the first QRNN layer. Densely-Connected Layers We can also extend the QRNN architecture using techniques introduced for convolutional networks. For sequence classification tasks, we found it helpful to use skip-connections between every QRNN layer, a technique termed “dense convolution” by BIBREF15 . Where traditional feed-forward or convolutional networks have connections only between subsequent layers, a “DenseNet” with INLINEFORM0 layers has feed-forward or convolutional connections between every pair of layers, for a total of INLINEFORM1 . This can improve gradient flow and convergence properties, especially in deeper networks, although it requires a parameter count that is quadratic in the number of layers. When applying this technique to the QRNN, we include connections between the input embeddings and every QRNN layer and between every pair of QRNN layers. This is equivalent to concatenating each QRNN layer's input to its output along the channel dimension before feeding the state into the next layer. The output of the last layer alone is then used as the overall encoding result. Encoder–Decoder Models To demonstrate the generality of QRNNs, we extend the model architecture to sequence-to-sequence tasks, such as machine translation, by using a QRNN as encoder and a modified QRNN, enhanced with attention, as decoder. The motivation for modifying the decoder is that simply feeding the last encoder hidden state (the output of the encoder's pooling layer) into the decoder's recurrent pooling layer, analogously to conventional recurrent encoder–decoder architectures, would not allow the encoder state to affect the gate or update values that are provided to the decoder's pooling layer. This would substantially limit the representational power of the decoder. Instead, the output of each decoder QRNN layer's convolution functions is supplemented at every timestep with the final encoder hidden state. This is accomplished by adding the result of the convolution for layer INLINEFORM0 (e.g., INLINEFORM1 , in INLINEFORM2 ) with broadcasting to a linearly projected copy of layer INLINEFORM3 's last encoder state (e.g., INLINEFORM4 , in INLINEFORM5 ): DISPLAYFORM0 where the tilde denotes that INLINEFORM0 is an encoder variable. Encoder–decoder models which operate on long sequences are made significantly more powerful with the addition of soft attention BIBREF3 , which removes the need for the entire input representation to fit into a fixed-length encoding vector. In our experiments, we computed an attentional sum of the encoder's last layer's hidden states. 
We used the dot products of these encoder hidden states with the decoder's last layer's un-gated hidden states, applying a INLINEFORM1 along the encoder timesteps, to weight the encoder states into an attentional sum INLINEFORM2 for each decoder timestep. This context, and the decoder state, are then fed into a linear layer followed by the output gate: DISPLAYFORM0 where INLINEFORM0 is the last layer. While the first step of this attention procedure is quadratic in the sequence length, in practice it takes significantly less computation time than the model's linear and convolutional layers due to the simple and highly parallel dot-product scoring function. Experiments We evaluate the performance of the QRNN on three different natural language tasks: document-level sentiment classification, language modeling, and character-based neural machine translation. Our QRNN models outperform LSTM-based models of equal hidden size on all three tasks while dramatically improving computation speed. Experiments were implemented in Chainer BIBREF16 . Sentiment Classification We evaluate the QRNN architecture on a popular document-level sentiment classification benchmark, the IMDb movie review dataset BIBREF17 . The dataset consists of a balanced sample of 25,000 positive and 25,000 negative reviews, divided into equal-size train and test sets, with an average document length of 231 words BIBREF18 . We compare only to other results that do not make use of additional unlabeled data (thus excluding e.g., BIBREF19 ). Our best performance on a held-out development set was achieved using a four-layer densely-connected QRNN with 256 units per layer and word vectors initialized using 300-dimensional cased GloVe embeddings BIBREF20 . Dropout of 0.3 was applied between layers, and we used INLINEFORM0 regularization of INLINEFORM1 . Optimization was performed on minibatches of 24 examples using RMSprop BIBREF21 with learning rate of INLINEFORM2 , INLINEFORM3 , and INLINEFORM4 . Small batch sizes and long sequence lengths provide an ideal situation for demonstrating the QRNN's performance advantages over traditional recurrent architectures. We observed a speedup of 3.2x on IMDb train time per epoch compared to the optimized LSTM implementation provided in NVIDIA's cuDNN library. For specific batch sizes and sequence lengths, a 16x speed gain is possible. Figure FIGREF15 provides extensive speed comparisons. In Figure FIGREF12 , we visualize the hidden state vectors INLINEFORM0 of the final QRNN layer on part of an example from the IMDb dataset. Even without any post-processing, changes in the hidden state are visible and interpretable in regards to the input. This is a consequence of the elementwise nature of the recurrent pooling function, which delays direct interaction between different channels of the hidden state until the computation of the next QRNN layer. Language Modeling We replicate the language modeling experiment of BIBREF2 and BIBREF13 to benchmark the QRNN architecture for natural language sequence prediction. The experiment uses a standard preprocessed version of the Penn Treebank (PTB) by BIBREF25 . We implemented a gated QRNN model with medium hidden size: 2 layers with 640 units in each layer. Both QRNN layers use a convolutional filter width INLINEFORM0 of two timesteps. While the “medium” models used in other work BIBREF2 , BIBREF13 consist of 650 units in each layer, it was more computationally convenient to use a multiple of 32. 
As the Penn Treebank is a relatively small dataset, preventing overfitting is of considerable importance and a major focus of recent research. It is not obvious in advance which of the many RNN regularization schemes would perform well when applied to the QRNN. Our tests showed encouraging results from zoneout applied to the QRNN's recurrent pooling layer, implemented as described in Section SECREF5 . The experimental settings largely followed the “medium” setup of BIBREF2 . Optimization was performed by stochastic gradient descent (SGD) without momentum. The learning rate was set at 1 for six epochs, then decayed by 0.95 for each subsequent epoch, for a total of 72 epochs. We additionally used INLINEFORM0 regularization of INLINEFORM1 and rescaled gradients with norm above 10. Zoneout was applied by performing dropout with ratio 0.1 on the forget gates of the QRNN, without rescaling the output of the dropout function. Batches consist of 20 examples, each 105 timesteps. Comparing our results on the gated QRNN with zoneout to the results of LSTMs with both ordinary and variational dropout in Table TABREF14 , we see that the QRNN is highly competitive. The QRNN without zoneout strongly outperforms both our medium LSTM and the medium LSTM of BIBREF2 which do not use recurrent dropout and is even competitive with variational LSTMs. This may be due to the limited computational capacity that the QRNN's pooling layer has relative to the LSTM's recurrent weights, providing structural regularization over the recurrence. Without zoneout, early stopping based upon validation loss was required as the QRNN would begin overfitting. By applying a small amount of zoneout ( INLINEFORM0 ), no early stopping is required and the QRNN achieves competitive levels of perplexity to the variational LSTM of BIBREF13 , which had variational inference based dropout of 0.2 applied recurrently. Their best performing variation also used Monte Carlo (MC) dropout averaging at test time of 1000 different masks, making it computationally more expensive to run. When training on the PTB dataset with an NVIDIA K40 GPU, we found that the QRNN is substantially faster than a standard LSTM, even when comparing against the optimized cuDNN LSTM. In Figure FIGREF15 we provide a breakdown of the time taken for Chainer's default LSTM, the cuDNN LSTM, and QRNN to perform a full forward and backward pass on a single batch during training of the RNN LM on PTB. For both LSTM implementations, running time was dominated by the RNN computations, even with the highly optimized cuDNN implementation. For the QRNN implementation, however, the “RNN” layers are no longer the bottleneck. Indeed, there are diminishing returns from further optimization of the QRNN itself as the softmax and optimization overhead take equal or greater time. Note that the softmax, over a vocabulary size of only 10,000 words, is relatively small; for tasks with larger vocabularies, the softmax would likely dominate computation time. It is also important to note that the cuDNN library's RNN primitives do not natively support any form of recurrent dropout. That is, running an LSTM that uses a state-of-the-art regularization scheme at cuDNN-like speeds would likely require an entirely custom kernel. Character-level Neural Machine Translation We evaluate the sequence-to-sequence QRNN architecture described in SECREF5 on a challenging neural machine translation task, IWSLT German–English spoken-domain translation, applying fully character-level segmentation. 
This dataset consists of 209,772 sentence pairs of parallel training data from transcribed TED and TEDx presentations, with a mean sentence length of 103 characters for German and 93 for English. We remove training sentences with more than 300 characters in English or German, and use a unified vocabulary of 187 Unicode code points. Our best performance on a development set (TED.tst2013) was achieved using a four-layer encoder–decoder QRNN with 320 units per layer, no dropout or INLINEFORM0 regularization, and gradient rescaling to a maximum magnitude of 5. Inputs were supplied to the encoder reversed, while the encoder convolutions were not masked. The first encoder layer used convolutional filter width INLINEFORM1 , while the other encoder layers used INLINEFORM2 . Optimization was performed for 10 epochs on minibatches of 16 examples using Adam BIBREF28 with INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , and INLINEFORM6 . Decoding was performed using beam search with beam width 8 and length normalization INLINEFORM7 . The modified log-probability ranking criterion is provided in the appendix. Results using this architecture were compared to an equal-sized four-layer encoder–decoder LSTM with attention, applying dropout of 0.2. We again optimized using Adam; other hyperparameters were equal to their values for the QRNN and the same beam search procedure was applied. Table TABREF17 shows that the QRNN outperformed the character-level LSTM, almost matching the performance of a word-level attentional baseline. Related Work Exploring alternatives to traditional RNNs for sequence tasks is a major area of current research. Quasi-recurrent neural networks are related to several such recently described models, especially the strongly-typed recurrent neural networks (T-RNN) introduced by BIBREF12 . While the motivation and constraints described in that work are different, BIBREF12 's concepts of “learnware” and “firmware” parallel our discussion of convolution-like and pooling-like subcomponents. As the use of a fully connected layer for recurrent connections violates the constraint of “strong typing”, all strongly-typed RNN architectures (including the T-RNN, T-GRU, and T-LSTM) are also quasi-recurrent. However, some QRNN models (including those with attention or skip-connections) are not “strongly typed”. In particular, a T-RNN differs from a QRNN as described in this paper with filter size 1 and f-pooling only in the absence of an activation function on INLINEFORM0 . Similarly, T-GRUs and T-LSTMs differ from QRNNs with filter size 2 and fo- or ifo-pooling respectively in that they lack INLINEFORM1 on INLINEFORM2 and use INLINEFORM3 rather than sigmoid on INLINEFORM4 . The QRNN is also related to work in hybrid convolutional–recurrent models. BIBREF31 apply CNNs at the word level to generate INLINEFORM0 -gram features used by an LSTM for text classification. BIBREF32 also tackle text classification by applying convolutions at the character level, with a stride to reduce sequence length, then feeding these features into a bidirectional LSTM. A similar approach was taken by BIBREF10 for character-level machine translation. Their model's encoder uses a convolutional layer followed by max-pooling to reduce sequence length, a four-layer highway network, and a bidirectional GRU. The parallelism of the convolutional, pooling, and highway layers allows training speed comparable to subword-level models without hard-coded text segmentation. 
The QRNN encoder–decoder model shares the favorable parallelism and path-length properties exhibited by the ByteNet BIBREF33 , an architecture for character-level machine translation based on residual convolutions over binary trees. Their model was constructed to achieve three desired properties: parallelism, linear-time computational complexity, and short paths between any pair of words in order to better propagate gradient signals. Conclusion Intuitively, many aspects of the semantics of long sequences are context-invariant and can be computed in parallel (e.g., convolutionally), but some aspects require long-distance context and must be computed recurrently. Many existing neural network architectures either fail to take advantage of the contextual information or fail to take advantage of the parallelism. QRNNs exploit both parallelism and context, exhibiting advantages from both convolutional and recurrent neural networks. QRNNs have better predictive accuracy than LSTM-based models of equal hidden size, even though they use fewer parameters and run substantially faster. Our experiments show that the speed and accuracy advantages remain consistent across tasks and at both word and character levels. Extensions to both CNNs and RNNs are often directly applicable to the QRNN, while the model's hidden states are more interpretable than those of other recurrent architectures as its channels maintain their independence across timesteps. We believe that QRNNs can serve as a building block for long-sequence tasks that were previously impractical with traditional RNNs. Beam search ranking criterion The modified log-probability ranking criterion we used in beam search for translation experiments is: DISPLAYFORM0 where INLINEFORM0 is a length normalization parameter BIBREF34 , INLINEFORM1 is the INLINEFORM2 th output character, and INLINEFORM3 is a “target length” equal to the source sentence length plus five characters. This reduces at INLINEFORM4 to ordinary beam search with probabilities: DISPLAYFORM0 and at INLINEFORM0 to beam search with probabilities normalized by length (up to the target length): DISPLAYFORM0 Conveniently, this ranking criterion can be computed at intermediate beam-search timesteps, obviating the need to apply a separate reranking on complete hypotheses.
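Looping back to the regularisation scheme used throughout these experiments: the zoneout variant described in the Regularization and Language Modeling sections amounts to applying non-rescaled dropout to $1-f$, i.e. stochastically forcing a subset of forget-gate channels to 1 so that those channels simply copy the previous pooling state. Below is a minimal sketch; the function name and interface are ours.

```python
import torch

def zoneout_forget_gates(f, zoneout_p=0.1, training=True):
    """f: forget gates in (0, 1), any shape. Returns the modified gates."""
    if not training or zoneout_p == 0.0:
        return f
    # keep == 1 leaves the gate unchanged; keep == 0 sets it to 1 ("zone out"),
    # so the corresponding channel copies its previous state unmodified.
    keep = torch.bernoulli(torch.full_like(f, 1.0 - zoneout_p))
    # Deliberately no 1/(1 - p) rescaling, as stressed in the text.
    return 1.0 - (1.0 - f) * keep
```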
What sentiment classification dataset is used?
the IMDb movie review dataset BIBREF17
Introduction This work is licenced under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/ In the spirit of the brevity of social media's messages and reactions, people have got used to express feelings minimally and symbolically, as with hashtags on Twitter and Instagram. On Facebook, people tend to be more wordy, but posts normally receive more simple “likes” than longer comments. Since February 2016, Facebook users can express specific emotions in response to a post thanks to the newly introduced reaction feature (see Section SECREF2 ), so that now a post can be wordlessly marked with an expression of say “joy" or “surprise" rather than a generic “like”. It has been observed that this new feature helps Facebook to know much more about their users and exploit this information for targeted advertising BIBREF0 , but interest in people's opinions and how they feel isn't limited to commercial reasons, as it invests social monitoring, too, including health care and education BIBREF1 . However, emotions and opinions are not always expressed this explicitly, so that there is high interest in developing systems towards their automatic detection. Creating manually annotated datasets large enough to train supervised models is not only costly, but also—especially in the case of opinions and emotions—difficult, due to the intrinsic subjectivity of the task BIBREF2 , BIBREF3 . Therefore, research has focused on unsupervised methods enriched with information derived from lexica, which are manually created BIBREF3 , BIBREF4 . Since go2009twitter have shown that happy and sad emoticons can be successfully used as signals for sentiment labels, distant supervision, i.e. using some reasonably safe signals as proxies for automatically labelling training data BIBREF5 , has been used also for emotion recognition, for example exploiting both emoticons and Twitter hashtags BIBREF6 , but mainly towards creating emotion lexica. mohammad2015using use hashtags, experimenting also with highly fine-grained emotion sets (up to almost 600 emotion labels), to create the large Hashtag Emotion Lexicon. Emoticons are used as proxies also by hallsmarmulti, who use distributed vector representations to find which words are interchangeable with emoticons but also which emoticons are used in a similar context. We take advantage of distant supervision by using Facebook reactions as proxies for emotion labels, which to the best of our knowledge hasn't been done yet, and we train a set of Support Vector Machine models for emotion recognition. Our models, differently from existing ones, exploit information which is acquired entirely automatically, and achieve competitive or even state-of-the-art results for some of the emotion labels on existing, standard evaluation datasets. For explanatory purposes, related work is discussed further and more in detail when we describe the benchmarks for evaluation (Section SECREF3 ) and when we compare our models to existing ones (Section SECREF5 ). We also explore and discuss how choosing different sets of Facebook pages as training data provides an intrinsic domain-adaptation method. Facebook reactions as labels For years, on Facebook people could leave comments to posts, and also “like” them, by using a thumbs-up feature to explicitly express a generic, rather underspecified, approval. 
A “like” could thus mean “I like what you said", but also “I like that you bring up such topic (though I find the content of the article you linked annoying)". In February 2016, after a short trial, Facebook made a more explicit reaction feature available world-wide. Rather than allowing for the underspecified “like” as the only wordless response to a post, a set of six more specific reactions was introduced, as shown in Figure FIGREF1 : Like, Love, Haha, Wow, Sad and Angry. We use such reactions as proxies for emotion labels associated to posts. We collected Facebook posts and their corresponding reactions from public pages using the Facebook API, which we accessed via the Facebook-sdk python library. We chose different pages (and therefore domains and stances), aiming at a balanced and varied dataset, but we did so mainly based on intuition (see Section SECREF4 ) and with an eye to the nature of the datasets available for evaluation (see Section SECREF5 ). The choice of which pages to select posts from is far from trivial, and we believe this is actually an interesting aspect of our approach, as by using different Facebook pages one can intrinsically tackle the domain-adaptation problem (See Section SECREF6 for further discussion on this). The final collection of Facebook pages for the experiments described in this paper is as follows: FoxNews, CNN, ESPN, New York Times, Time magazine, Huffington Post Weird News, The Guardian, Cartoon Network, Cooking Light, Home Cooking Adventure, Justin Bieber, Nickelodeon, Spongebob, Disney. Note that thankful was only available during specific time spans related to certain events, as Mother's Day in May 2016. For each page, we downloaded the latest 1000 posts, or the maximum available if there are fewer, from February 2016, retrieving the counts of reactions for each post. The output is a JSON file containing a list of dictionaries with a timestamp, the post and a reaction vector with frequency values, which indicate how many users used that reaction in response to the post (Figure FIGREF3 ). The resulting emotion vectors must then be turned into an emotion label. In the context of this experiment, we made the simple decision of associating to each post the emotion with the highest count, ignoring like as it is the default and most generic reaction people tend to use. Therefore, for example, to the first post in Figure FIGREF3 , we would associate the label sad, as it has the highest score (284) among the meaningful emotions we consider, though it also has non-zero scores for other emotions. At this stage, we didn't perform any other entropy-based selection of posts, to be investigated in future work. Emotion datasets Three datasets annotated with emotions are commonly used for the development and evaluation of emotion detection systems, namely the Affective Text dataset, the Fairy Tales dataset, and the ISEAR dataset. In order to compare our performance to state-of-the-art results, we have used them as well. In this Section, in addition to a description of each dataset, we provide an overview of the emotions used, their distribution, and how we mapped them to those we obtained from Facebook posts in Section SECREF7 . A summary is provided in Table TABREF8 , which also shows, in the bottom row, what role each dataset has in our experiments: apart from the development portion of the Affective Text, which we used to develop our models (Section SECREF4 ), all three have been used as benchmarks for our evaluation. 
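A minimal sketch of the labelling rule just described: each post is tagged with its most frequent reaction, ignoring the generic like. The dictionary keys below mirror the reaction names; the exact field names of the stored JSON are an assumption.

```python
def reaction_label(reaction_counts):
    """reaction_counts: e.g. {'like': 5021, 'love': 120, 'haha': 3, 'wow': 10,
    'sad': 284, 'angry': 31}. Returns the majority meaningful reaction, or None."""
    meaningful = {k: v for k, v in reaction_counts.items() if k != "like" and v > 0}
    if not meaningful:
        return None
    return max(meaningful, key=meaningful.get)

# A post whose highest meaningful count is sad (cf. the example above) gets the label 'sad'.
```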
Affective Text dataset Task 14 at SemEval 2007 BIBREF7 was concerned with the classification of emotions and valence in news headlines. The headlines where collected from several news websites including Google news, The New York Times, BBC News and CNN. The used emotion labels were Anger, Disgust, Fear, Joy, Sadness, Surprise, in line with the six basic emotions of Ekman's standard model BIBREF8 . Valence was to be determined as positive or negative. Classification of emotion and valence were treated as separate tasks. Emotion labels were not considered as mututally exclusive, and each emotion was assigned a score from 0 to 100. Training/developing data amounted to 250 annotated headlines (Affective development), while systems were evaluated on another 1000 (Affective test). Evaluation was done using two different methods: a fine-grained evaluation using Pearson's r to measure the correlation between the system scores and the gold standard; and a coarse-grained method where each emotion score was converted to a binary label, and precision, recall, and f-score were computed to assess performance. As it is done in most works that use this dataset BIBREF3 , BIBREF4 , BIBREF9 , we also treat this as a classification problem (coarse-grained). This dataset has been extensively used for the evaluation of various unsupervised methods BIBREF2 , but also for testing different supervised learning techniques and feature portability BIBREF10 . Fairy Tales dataset This is a dataset collected by alm2008affect, where about 1,000 sentences from fairy tales (by B. Potter, H.C. Andersen and Grimm) were annotated with the same six emotions of the Affective Text dataset, though with different names: Angry, Disgusted, Fearful, Happy, Sad, and Surprised. In most works that use this dataset BIBREF3 , BIBREF4 , BIBREF9 , only sentences where all annotators agreed are used, and the labels angry and disgusted are merged. We adopt the same choices. ISEAR The ISEAR (International Survey on Emotion Antecedents and Reactions BIBREF11 , BIBREF12 ) is a dataset created in the context of a psychology project of the 1990s, by collecting questionnaires answered by people with different cultural backgrounds. The main aim of this project was to gather insights in cross-cultural aspects of emotional reactions. Student respondents, both psychologists and non-psychologists, were asked to report situations in which they had experienced all of seven major emotions (joy, fear, anger, sadness, disgust, shame and guilt). In each case, the questions covered the way they had appraised a given situation and how they reacted. The final dataset contains reports by approximately 3000 respondents from all over the world, for a total of 7665 sentences labelled with an emotion, making this the largest dataset out of the three we use. Overview of datasets and emotions We summarise datasets and emotion distribution from two viewpoints. First, because there are different sets of emotions labels in the datasets and Facebook data, we need to provide a mapping and derive a subset of emotions that we are going to use for the experiments. This is shown in Table TABREF8 , where in the “Mapped” column we report the final emotions we use in this paper: anger, joy, sadness, surprise. All labels in each dataset are mapped to these final emotions, which are therefore the labels we use for training and testing our models. Second, the distribution of the emotions for each dataset is different, as can be seen in Figure FIGREF9 . 
In Figure FIGREF9 we also provide the distribution of the emotions anger, joy, sadness, surprise per Facebook page, in terms of number of posts (recall that we assign to a post the label corresponding to the majority emotion associated to it, see Section SECREF2 ). We can observe that for example pages about news tend to have more sadness and anger posts, while pages about cooking and tv-shows have a high percentage of joy posts. We will use this information to find the best set of pages for a given target domain (see Section SECREF5 ). Model There are two main decisions to be taken in developing our model: (i) which Facebook pages to select as training data, and (ii) which features to use to train the model, which we discuss below. Specifically, we first set on a subset of pages and then experiment with features. Further exploration of the interaction between choice of pages and choice of features is left to future work, and partly discussed in Section SECREF6 . For development, we use a small portion of the Affective data set described in Section SECREF4 , that is the portion that had been released as development set for SemEval's 2007 Task 14 BIBREF7 , which contains 250 annotated sentences (Affective development, Section SECREF4 ). All results reported in this section are on this dataset. The test set of Task 14 as well as the other two datasets described in Section SECREF3 will be used to evaluate the final models (Section SECREF4 ). Selecting Facebook pages Although page selection is a crucial ingredient of this approach, which we believe calls for further and deeper, dedicated investigation, for the experiments described here we took a rather simple approach. First, we selected the pages that would provide training data based on intuition and availability, then chose different combinations according to results of a basic model run on development data, and eventually tested feature combinations, still on the development set. For the sake of simplicity and transparency, we first trained an SVM with a simple bag-of-words model and default parameters as per the Scikit-learn implementation BIBREF13 on different combinations of pages. Based on results of the attempted combinations as well as on the distribution of emotions in the development dataset (Figure FIGREF9 ), we selected a best model (B-M), namely the combined set of Time, The Guardian and Disney, which yields the highest results on development data. Time and The Guardian perform well on most emotions but Disney helps to boost the performance for the Joy class. Features In selecting appropriate features, we mainly relied on previous work and intuition. We experimented with different combinations, and all tests were still done on Affective development, using the pages for the best model (B-M) described above as training data. Results are in Table TABREF20 . Future work will further explore the simultaneous selection of features and page combinations. We use a set of basic text-based features to capture the emotion class. These include a tf-idf bag-of-words feature, word (2-3) and character (2-5) ngrams, and features related to the presence of negation words, and to the usage of punctuation. This feature is used in all unsupervised models as a source of information, and we mainly include it to assess its contribution, but eventually do not use it in our final model. 
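A rough scikit-learn sketch of the basic set-up described above: an SVM with default parameters over a tf-idf bag of words plus word (2-3) and character (2-5) n-grams. The specific SVM class, the tf-idf weighting of the n-gram features, and the data-loading helper are assumptions, not details from the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.svm import LinearSVC

features = FeatureUnion([
    ("bow_tfidf", TfidfVectorizer()),                                      # unigram tf-idf
    ("word_ngrams", TfidfVectorizer(analyzer="word", ngram_range=(2, 3))),
    ("char_ngrams", TfidfVectorizer(analyzer="char", ngram_range=(2, 5))),
])

model = Pipeline([("features", features), ("svm", LinearSVC())])

# posts, labels = load_labelled_facebook_posts(...)   # hypothetical loader
# model.fit(posts, labels)
# predictions = model.predict(affective_dev_sentences)
```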
We used the NRC10 Lexicon because it performed best in the experiments by BIBREF10 , which is built around the emotions anger, anticipation, disgust, fear, joy, sadness, and surprise, and the valence values positive and negative. For each word in the lexicon, a boolean value indicating presence or absence is associated to each emotion. For a whole sentence, a global score per emotion can be obtained by summing the vectors for all content words of that sentence included in the lexicon, and used as feature. As additional feature, we also included Word Embeddings, namely distributed representations of words in a vector space, which have been exceptionally successful in boosting performance in a plethora of NLP tasks. We use three different embeddings: Google embeddings: pre-trained embeddings trained on Google News and obtained with the skip-gram architecture described in BIBREF14 . This model contains 300-dimensional vectors for 3 million words and phrases. Facebook embeddings: embeddings that we trained on our scraped Facebook pages for a total of 20,000 sentences. Using the gensim library BIBREF15 , we trained the embeddings with the following parameters: window size of 5, learning rate of 0.01 and dimensionality of 100. We filtered out words with frequency lower than 2 occurrences. Retrofitted embeddings: Retrofitting BIBREF16 has been shown as a simple but efficient way of informing trained embeddings with additional information derived from some lexical resource, rather than including it directly at the training stage, as it's done for example to create sense-aware BIBREF17 or sentiment-aware BIBREF18 embeddings. In this work, we retrofit general embeddings to include information about emotions, so that emotion-similar words can get closer in space. Both the Google as well as our Facebook embeddings were retrofitted with lexical information obtained from the NRC10 Lexicon mentioned above, which provides emotion-similarity for each token. Note that differently from the previous two types of embeddings, the retrofitted ones do rely on handcrafted information in the form of a lexical resource. Results on development set We report precision, recall, and f-score on the development set. The average f-score is reported as micro-average, to better account for the skewed distribution of the classes as well as in accordance to what is usually reported for this task BIBREF19 . From Table TABREF20 we draw three main observations. First, a simple tf-idf bag-of-word mode works already very well, to the point that the other textual and lexicon-based features don't seem to contribute to the overall f-score (0.368), although there is a rather substantial variation of scores per class. Second, Google embeddings perform a lot better than Facebook embeddings, and this is likely due to the size of the corpus used for training. Retrofitting doesn't seem to help at all for the Google embeddings, but it does boost the Facebook embeddings, leading to think that with little data, more accurate task-related information is helping, but corpus size matters most. Third, in combination with embeddings, all features work better than just using tf-idf, but removing the Lexicon feature, which is the only one based on hand-crafted resources, yields even better results. Then our best model (B-M) on development data relies entirely on automatically obtained information, both in terms of training data as well as features. 
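The lexicon feature described above reduces to a simple summation, sketched below; the toy lexicon entries in the comments are illustrative and not actual NRC10 values.

```python
LEXICON_DIMS = ["anger", "anticipation", "disgust", "fear", "joy",
                "sadness", "surprise", "positive", "negative"]

def lexicon_scores(tokens, lexicon):
    """lexicon: dict word -> dict dimension -> 0/1 (presence/absence).
    Returns the per-dimension sums over all tokens found in the lexicon."""
    scores = {dim: 0 for dim in LEXICON_DIMS}
    for tok in tokens:
        entry = lexicon.get(tok.lower())
        if entry is None:
            continue
        for dim in LEXICON_DIMS:
            scores[dim] += entry.get(dim, 0)
    return scores

# toy_lexicon = {"horrible": {"anger": 1, "disgust": 1, "negative": 1},
#                "delighted": {"joy": 1, "positive": 1}}
# lexicon_scores("I was delighted by the horrible plot".split(), toy_lexicon)
```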
Results In Table TABREF26 we report the results of our model on the three datasets standardly used for the evaluation of emotion classification, which we have described in Section SECREF3 . Our B-M model relies on subsets of Facebook pages for training, which were chosen according to their performance on the development set as well as on the observation of emotions distribution on different pages and in the different datasets, as described in Section SECREF4 . The feature set we use is our best on the development set, namely all the features plus Google-based embeddings, but excluding the lexicon. This makes our approach completely independent of any manual annotation or handcrafted resource. Our model's performance is compared to the following systems, for which results are reported in the referred literature. Please note that no other existing model was re-implemented, and results are those reported in the respective papers. Discussion, conclusions and future work We have explored the potential of using Facebook reactions in a distant supervised setting to perform emotion classification. The evaluation on standard benchmarks shows that models trained as such, especially when enhanced with continuous vector representations, can achieve competitive results without relying on any handcrafted resource. An interesting aspect of our approach is the view to domain adaptation via the selection of Facebook pages to be used as training data. We believe that this approach has a lot of potential, and we see the following directions for improvement. Feature-wise, we want to train emotion-aware embeddings, in the vein of work by tang:14, and iacobacci2015sensembed. Retrofitting FB-embeddings trained on a larger corpus might also be successful, but would rely on an external lexicon. The largest room for yielding not only better results but also interesting insights on extensions of this approach lies in the choice of training instances, both in terms of Facebook pages to get posts from, as well as in which posts to select from the given pages. For the latter, one could for example only select posts that have a certain length, ignore posts that are only quotes or captions to images, or expand posts by including content from linked html pages, which might provide larger and better contexts BIBREF23 . Additionally, and most importantly, one could use an entropy-based measure to select only posts that have a strong emotion rather than just considering the majority emotion as training label. For the former, namely the choice of Facebook pages, which we believe deserves the most investigation, one could explore several avenues, especially in relation to stance-based issues BIBREF24 . In our dataset, for example, a post about Chile beating Colombia in a football match during the Copa America had very contradictory reactions, depending on which side readers would cheer for. Similarly, the very same political event, for example, would get very different reactions from readers if it was posted on Fox News or The Late Night Show, as the target audience is likely to feel very differently about the same issue. This also brings up theoretical issues related more generally to the definition of the emotion detection task, as it's strongly dependent on personal traits of the audience. Also, in this work, pages initially selected on availability and intuition were further grouped into sets to make training data according to performance on development data, and label distribution. 
Another criterion to be exploited would be vocabulary overlap between the pages and the datasets. Lastly, we could develop single models for each emotion, treating the problem as a multi-label task. This would even better reflect the ambiguity and subjectivity intrinsic to assigning emotions to text, where content could be at same time joyful or sad, depending on the reader. Acknowledgements In addition to the anonymous reviewers, we want to thank Lucia Passaro and Barbara Plank for insightful discussions, and for providing comments on draft versions of this paper.
Which existing benchmarks did they compare to?
Affective Text, Fairy Tales, ISEAR
As the focus of our work is not to settle this definitional debate, we rely on work by other authors to describe what we consider fake news. In particular, we use the categorization provided by Rubin et al. BIBREF7. The five categories they described, together with illustrative examples from our dataset, are as follows: Research Hypotheses Previous works in the area (presented in the section above) suggest that there may be important determinants for the adoption and diffusion of fake news. Our hypotheses build on them and identify three important dimensions that may help distinguish fake news from legitimate information: Taking those three dimensions into account, we propose the following hypotheses about the features that we believe can help to distinguish tweets containing fake news from those not containing them. They will later be tested on our collected dataset. Exposure. Characterization. Polarization. Data and Methodology For this study, we collected publicly available tweets using Twitter's public API. Given the nature of the data, it is important to emphasize that such tweets are subject to Twitter's terms and conditions, which indicate that users consent to the collection, transfer, manipulation, storage, and disclosure of data. Therefore, we do not expect ethical, legal, or social implications from the usage of the tweets. Our data was collected using search terms related to the presidential election held in the United States on November 8th 2016. In particular, we queried the filter endpoint of Twitter's streaming API using the following hashtags and user handles: #MyVote2016, #ElectionDay, #electionnight, @realDonaldTrump and @HillaryClinton. The data collection ran for just one day (Nov 8th 2016). One straightforward way of sharing information on Twitter is the retweet functionality, which enables a user to share an exact copy of a tweet with their followers. Among the reasons for retweeting, boyd et al. BIBREF15 reported the desire to: 1) spread tweets to a new audience, 2) show one's role as a listener, and 3) agree with someone or validate the thoughts of others. As indicated, our initial interest is to characterize tweets containing fake news that went viral (as they are the most harmful ones, since they reach a wider audience) and to understand how they differ from other viral tweets (those that do not contain fake news). For our study, we consider that a tweet went viral if it was retweeted more than 1000 times. Once we had the dataset of viral tweets, we eliminated duplicates (some of the tweets were collected several times because they matched several of the queried handles), and an expert manually inspected the text field within the tweets to label them as containing fake news or not containing them (according to the characterization presented before). This annotated dataset BIBREF8 is publicly available and can be freely reused. Finally, we use the following fields within tweets (from the ones returned by Twitter's API) to compare their distributions and look for differences between viral tweets containing fake news and viral tweets not containing fake news: In the following section, we provide graphical descriptions of the distribution of each of the identified attributes for the two sets of tweets (those labelled as containing fake news and those labelled as not containing them). Where appropriate, we normalized and/or took logarithms of the data for better representation.
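As an illustration of this pipeline (a minimal sketch, not the authors' code; field names follow the classic Twitter API v1.1 payload and the file names are placeholders), the meta-data extraction and viral-tweet filtering could look as follows:

```python
import json
import pandas as pd

VIRAL_THRESHOLD = 1000  # retweet count used to define "viral" in this study

def load_tweets(path):
    """Read one JSON-encoded tweet per line, as returned by the streaming API."""
    with open(path) as fh:
        for line in fh:
            yield json.loads(line)

def to_record(tweet):
    """Keep only the meta-data fields compared in the study (names follow the
    v1.1 payload and may need adjusting for other API versions)."""
    user = tweet["user"]
    return {
        "id": tweet["id"],
        "retweets": tweet["retweet_count"],
        "favourites": tweet["favorite_count"],
        "followers": user["followers_count"],
        "friends": user["friends_count"],
        "verified": user["verified"],
        "hashtags": len(tweet["entities"]["hashtags"]),
        "mentions": len(tweet["entities"]["user_mentions"]),
        "urls": len(tweet["entities"]["urls"]),
        "media": len(tweet["entities"].get("media", [])),
        "created_at": tweet["created_at"],
    }

df = pd.DataFrame(to_record(t) for t in load_tweets("election_day_tweets.jsonl"))
viral = df[df["retweets"] > VIRAL_THRESHOLD].drop_duplicates(subset="id")
viral.to_csv("viral_tweets_for_labelling.csv", index=False)  # fake-news labels are then added manually
```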
To gain a better understanding of the significance of those differences, we use the Kolmogorov-Smirnov test with the null hypothesis that both distributions are equal. Results The sample collected consisted of 1 785 855 tweets published by 848 196 different users. Within our sample, we identified 1327 tweets that went viral (retweeted more than 1000 times by the 8th of November 2016), produced by 643 users. This small subset of viral tweets was retweeted on 290 841 occasions in the observed time-window. The 1327 `viral' tweets were manually annotated as containing fake news or not. The annotation was carried out by a single person in order to obtain a consistent annotation throughout the dataset. Out of those 1327 tweets, we identified 136 as potentially containing fake news (according to the categories previously described), and the rest were classified as `not containing fake news'. Note that the categorization is far from perfect given the ambiguity of fake news themselves and the human judgement involved in the process of categorization. Because of this, we do not claim that this dataset can be considered a ground truth. The following results detail characteristics of these tweets along the previously mentioned dimensions. Table TABREF23 reports the actual differences (together with their associated p-values) between the distributions of viral tweets containing fake news and viral tweets not containing them for every variable considered. Exposure Figure FIGREF24 shows that, in contrast to other kinds of viral tweets, those containing fake news were created more recently. As such, Twitter users were exposed to fake news related to the election for a shorter period of time. However, in terms of retweets, Figure FIGREF25 shows no apparent difference between tweets containing fake news and those not containing them. This is confirmed by the Kolmogorov-Smirnov test, which does not reject the hypothesis that the associated distributions are equal. In relation to the number of favourites, users that generated at least one viral tweet containing fake news appear to have, on average, fewer favourites than users that did not generate them. Figure FIGREF26 shows the distribution of favourites. Despite the apparent visual differences, the differences are not statistically significant. Finally, the number of hashtags used in viral fake news appears to be larger than in other viral tweets. Figure FIGREF27 shows the density distribution of the number of hashtags used. However, once again, we were not able to find any statistical difference between the average number of hashtags in a viral tweet and the average number of hashtags in viral fake news. Characterization We found that 82 users within our sample were spreading fake news (i.e. they produced at least one tweet which was labelled as fake news). Out of those, 34 had verified accounts, and the rest were unverified. From the 48 unverified accounts, 6 had been suspended by Twitter at the time of writing, 3 tried to imitate legitimate accounts of others, and 4 accounts had already been deleted. Figure FIGREF28 shows the proportion of verified accounts to unverified accounts for viral tweets (containing fake news vs. not containing fake news). From the chart, it is clear that there is a higher chance of fake news coming from unverified accounts. Turning to friends, accounts distributing fake news appear to have, on average, the same number of friends as those distributing tweets with no fake news.
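To make the significance testing concrete, the following sketch compares, feature by feature, the two groups of viral tweets with the two-sample Kolmogorov-Smirnov test (here via SciPy; the column names and the 0.01 significance level are assumptions for illustration, not the authors' code):

```python
from scipy.stats import ks_2samp

META_FEATURES = ["followers", "friends", "favourites", "retweets",
                 "hashtags", "mentions", "urls", "media"]

fake = viral[viral["fake_news"] == 1]    # manually assigned labels
other = viral[viral["fake_news"] == 0]

for feature in META_FEATURES:
    statistic, p_value = ks_2samp(fake[feature], other[feature])
    verdict = "different" if p_value < 0.01 else "not distinguishable"
    print(f"{feature:<12} D={statistic:.3f}  p={p_value:.4f}  -> {verdict}")
```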
The density distributions of friends for the two groups of accounts (Figure FIGREF29), however, show that there is indeed a statistically significant difference between them. If we take into consideration the number of followers, accounts generating viral tweets with fake news have a very different distribution on this dimension compared to those accounts generating viral tweets with no fake news (see Figure FIGREF30). In fact, such differences are statistically significant. A useful representation for friends and followers is the friends/followers ratio. Figures FIGREF31 and FIGREF32 show this representation. Notice that accounts spreading viral tweets with fake news have, on average, a larger friends/followers ratio, whereas the ratio of those accounts not generating fake news is more evenly distributed. With respect to the number of mentions, Figure FIGREF33 shows that viral tweets labelled as containing fake news appear to use mentions to other users less frequently than viral tweets not containing fake news (in other words, tweets containing fake news mostly contain one mention, whereas other tweets tend to have two). Such differences are statistically significant. The analysis (Figure FIGREF34) of the presence of media in the tweets in our dataset shows that tweets labelled as not containing fake news appear to present more media elements than those labelled as fake news. However, the difference is not statistically significant. On the other hand, Figure FIGREF35 shows that viral tweets containing fake news appear to include more URLs to other sites than viral tweets that do not contain fake news. In fact, the difference between the two distributions is statistically significant (assuming INLINEFORM0). Polarization Finally, manual inspection of the text field of those viral tweets labelled as containing fake news shows that 117 of such tweets expressed support for Donald Trump, while only 8 supported Hillary Clinton. The remaining tweets contained fake news related to other topics, not expressing support for either of the candidates. Discussion As a summary, and constrained by our existing dataset, we made the following observations regarding differences between viral tweets labelled as containing fake news and viral tweets labelled as not containing them: These findings (related to our initial hypotheses in Table TABREF44) clearly suggest that there are specific pieces of meta-data about tweets that may allow the identification of fake news. One such parameter is the time of exposure. Viral tweets containing fake news are shorter-lived than those containing other types of content. This notion resonates with our finding that a number of accounts spreading fake news had already been deleted or suspended by Twitter by the time of writing. If one considers that researchers using different data have found similar results BIBREF9, it appears that the lifetime of accounts, together with the age of the questioned viral content, could be useful to identify fake news. In the light of this finding, newly created accounts should probably be put under higher scrutiny than older ones; this would, in fact, be a natural a priori bias for a Bayesian classifier. Accounts spreading fake news also appear to have a larger friends/followers ratio (i.e. they have, on average, the same number of friends but a smaller number of followers) than those spreading viral content only.
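Although the study itself stops at the distributional comparison, the meta-data attributes highlighted above could, as the discussion suggests, feed a simple supervised model. The following sketch is purely illustrative (assumed column names, and a class-weighted logistic regression standing in for the Bayesian classifier mentioned above), showing one way such predictive power could be probed:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

viral["friends_followers_ratio"] = viral["friends"] / viral["followers"].clip(lower=1)

FEATURES = ["followers", "urls", "verified", "friends_followers_ratio"]
X = viral[FEATURES].astype(float).values
y = viral["fake_news"].values

# class_weight="balanced" because fake-news tweets are a small minority (136 of 1327)
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"AUC over 5 folds: {scores.mean():.3f} +/- {scores.std():.3f}")
```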
Considering, in addition, that tweets containing fake news have, on average, more URLs than other viral tweets, it is possible to hypothesize that both the friends/followers ratio of the account producing a viral tweet and the number of URLs contained in such a tweet could be useful to single out fake news on Twitter. Moreover, our finding related to the number of URLs is in line with the intuitions behind the incentives to create fake news commonly found in the literature BIBREF9 (in particular, that of obtaining revenue through click-through advertising). Finally, it is interesting to notice that the content of viral fake news was highly polarized. This finding is also in line with those of Allcott and Gentzkow BIBREF9. This feature suggests that textual sentiment analysis of the content of tweets (as most researchers do), together with the above-mentioned meta-data parameters, may prove useful for identifying fake news. Conclusions With the election of Donald Trump as President of the United States, the concept of fake news has become a broadly known phenomenon that is getting tremendous attention from governments and media companies. We have presented a preliminary study on the meta-data of a publicly available dataset of tweets that became viral during the day of the 2016 US presidential election. Our aim is to advance the understanding of which features might be characteristic of viral tweets containing fake news in comparison with viral tweets without fake news. We believe that the only way to automatically identify those deceitful tweets (i.e. those containing fake news) is by actually understanding and modelling them. Only then can the processes of tagging and blocking these tweets be successfully automated. In the same way that spam was fought, we anticipate fake news will undergo a similar evolution, with social platforms implementing tools to deal with them. With most works so far focusing on the actual content of the tweets, ours is a novel attempt from a different, but also complementary, angle. Within the used dataset, we found that there are differences around exposure, the characteristics of accounts spreading fake news, and the tone of the content. Those findings suggest that it is indeed possible to model and automatically detect fake news. We plan to replicate and validate our experiments on an extended sample of tweets (up to 4 months after the US election), and to test the predictive power of the features we found relevant within our sample. Author Disclosure Statement No competing financial interests exist.
What were their distribution results?
Distributions of Followers, Friends and URLs are significantly different between the set of tweets containing fake news and those not containing them, but for Favourites, Mentions, Media, Retweets and Hashtags they are not significantly different
3,164
qasper
4k
Introduction A hashtag is a keyphrase represented as a sequence of alphanumeric characters plus underscore, preceded by the # symbol. Hashtags play a central role in online communication by providing a tool to categorize the millions of posts generated daily on Twitter, Instagram, etc. They are useful in search, in tracking content about a certain topic BIBREF0 , BIBREF1 , or in discovering emerging trends BIBREF2 . Hashtags often carry very important information, such as emotion BIBREF3 , sentiment BIBREF4 , sarcasm BIBREF5 , and named entities BIBREF6 , BIBREF7 . However, inferring the semantics of hashtags is non-trivial since many hashtags contain multiple tokens joined together, which frequently leads to multiple potential interpretations (e.g., lion head vs. lionhead). Table TABREF3 shows several examples of single- and multi-token hashtags. While most hashtags represent a mix of standard tokens, named entities and event names are prevalent and pose challenges to both human and automatic comprehension, as these are more likely to be rare tokens. Hashtags also tend to be shorter to allow fast typing, to attract attention or to satisfy length limitations imposed by some social media platforms. Thus, they tend to contain a large number of abbreviations or non-standard spelling variations (e.g., #iloveu4eva) BIBREF8 , BIBREF9 , which hinders their understanding. The goal of our study is to build efficient methods for automatically splitting a hashtag into a meaningful word sequence. Our contributions are: Our new dataset includes segmentation for 12,594 unique hashtags and their associated tweets annotated in a multi-step process for higher quality than the previous dataset of 1,108 hashtags BIBREF10 . We frame the segmentation task as a pairwise ranking problem, given a set of candidate segmentations. We build several neural architectures using this problem formulation which use corpus-based, linguistic and thesaurus-based features. We further propose a multi-task learning approach which jointly learns segment ranking and single- vs. multi-token hashtag classification. The latter leads to an error reduction of 24.6% over the current state-of-the-art. Finally, we demonstrate the utility of our method by using hashtag segmentation in the downstream task of sentiment analysis. Feeding the automatically segmented hashtags to a state-of-the-art sentiment analysis method on the SemEval 2017 benchmark dataset results in a 2.6% increase in the official metric for the task. Background and Preliminaries Current approaches for hashtag segmentation can be broadly divided into three categories: (a) gazetteer- and rule-based BIBREF11 , BIBREF12 , BIBREF13 , (b) word boundary detection BIBREF14 , BIBREF15 , and (c) ranking with language model and other features BIBREF16 , BIBREF10 , BIBREF0 , BIBREF17 , BIBREF18 . Hashtag segmentation approaches draw upon work on compound splitting for languages such as German or Finnish BIBREF19 and word segmentation BIBREF20 for languages with no spaces between words such as Chinese BIBREF21 , BIBREF22 . Similar to our work, Bansal et al. BIBREF10 extract an initial set of candidate segmentations using a sliding window, then rerank them using a linear regression model trained on lexical, bigram and other corpus-based features. The current state-of-the-art approach BIBREF14 , BIBREF15 uses maximum entropy and CRF models with a combination of language model and hand-crafted features to predict if each character in the hashtag is the beginning of a new word.
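To make the search space of the task concrete, the sketch below (an illustration, not code from the paper) enumerates every possible split of a short hashtag and measures how close a candidate is to a reference segmentation with Levenshtein distance, the similarity measure later used for the gold scoring function. Exhaustive enumeration is only feasible for short hashtags, which is why candidate segmentations are in practice taken from the top outputs of a language-model-based segmenter, as described next.

```python
from itertools import combinations

def all_segmentations(hashtag):
    """Enumerate all 2^(n-1) ways of splitting a hashtag into words."""
    chars = hashtag.lower()
    n = len(chars)
    for k in range(n):                            # number of internal boundaries
        for cut in combinations(range(1, n), k):
            bounds = (0,) + cut + (n,)
            yield [chars[i:j] for i, j in zip(bounds, bounds[1:])]

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def gold_score(candidate, reference):
    """Similarity of a candidate to the ground truth, here mapped from the
    Levenshtein distance over space-joined strings (one plausible instantiation
    of a Levenshtein-based gold scoring function, chosen for illustration)."""
    cand, ref = " ".join(candidate), " ".join(reference)
    return 1.0 - levenshtein(cand, ref) / max(len(cand), len(ref))

print(gold_score(["lion", "head"], ["lion", "head"]))   # 1.0 - exact match
print(gold_score(["lionh", "ead"], ["lion", "head"]))   # close but penalised
```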
Generating Candidate Segmentations. Microsoft Word Breaker BIBREF16 is, among the existing methods, a strong baseline for hashtag segmentation, as reported in BIBREF14 and BIBREF10 . It employs a beam search algorithm to extract INLINEFORM0 best segmentations as ranked by the n-gram language model probability: INLINEFORM1 where INLINEFORM0 is the word sequence of segmentation INLINEFORM1 and INLINEFORM2 is the window size. More sophisticated ranking strategies, such as Binomial and word length distribution based ranking, did not lead to a further improvement in performance BIBREF16 . The original Word Breaker was designed for segmenting URLs using language models trained on web data. In this paper, we reimplemented and tailored this approach to segmenting hashtags by using a language model specifically trained on Twitter data (implementation details in § SECREF26 ). The performance of this method itself is competitive with state-of-the-art methods (evaluation results in § SECREF46 ). Our proposed pairwise ranking method will effectively take the top INLINEFORM3 segmentations generated by this baseline as candidates for reranking. However, in prior work, the ranking scores of each segmentation were calculated independently, ignoring the relative order among the top INLINEFORM0 candidate segmentations. To address this limitation, we utilize a pairwise ranking strategy for the first time for this task and propose neural architectures to model this. Multi-task Pairwise Neural Ranking We propose a multi-task pairwise neural ranking approach to better incorporate and distinguish the relative order between the candidate segmentations of a given hashtag. Our model adapts to address single- and multi-token hashtags differently via a multi-task learning strategy without requiring additional annotations. In this section, we describe the task setup and three variants of pairwise neural ranking models (Figure FIGREF11 ). Segmentation as Pairwise Ranking The goal of hashtag segmentation is to divide a given hashtag INLINEFORM0 into a sequence of meaningful words INLINEFORM1 . For a hashtag of INLINEFORM2 characters, there are a total of INLINEFORM3 possible segmentations but only one, or occasionally two, of them ( INLINEFORM4 ) are considered correct (Table TABREF9 ). We transform this task into a pairwise ranking problem: given INLINEFORM0 candidate segmentations { INLINEFORM1 }, we rank them by comparing each with the rest in a pairwise manner. More specifically, we train a model to predict a real number INLINEFORM2 for any two candidate segmentations INLINEFORM3 and INLINEFORM4 of hashtag INLINEFORM5 , which indicates INLINEFORM6 is a better segmentation than INLINEFORM7 if positive, and vice versa. To quantify the quality of a segmentation in training, we define a gold scoring function INLINEFORM8 based on the similarities with the ground-truth segmentation INLINEFORM9 : INLINEFORM10 We use the Levenshtein distance (minimum number of single-character edits) in this paper, although it is possible to use other similarity measurements as alternatives. We use the top INLINEFORM0 segmentations generated by Microsoft Word Breaker (§ SECREF2 ) as initial candidates. Pairwise Neural Ranking Model For an input candidate segmentation pair INLINEFORM0 , we concatenate their feature vectors INLINEFORM1 and INLINEFORM2 , and feed them into a feedforward network which emits a comparison score INLINEFORM3 . 
The feature vector INLINEFORM4 or INLINEFORM5 consists of language model probabilities using Good-Turing BIBREF23 and modified Kneser-Ney smoothing BIBREF24 , BIBREF25 , lexical and linguistic features (more details in § SECREF23 ). For training, we use all the possible pairs INLINEFORM6 of the INLINEFORM7 candidates as the input and their gold scores INLINEFORM8 as the target. The training objective is to minimize the Mean Squared Error (MSE): DISPLAYFORM0 where INLINEFORM0 is the number of training examples. To aggregate the pairwise comparisons, we follow a greedy algorithm proposed by Cohen et al. BIBREF26 and used for preference ranking BIBREF27 . For each segmentation INLINEFORM0 in the candidate set INLINEFORM1 , we calculate a single score INLINEFORM2 , and find the segmentation INLINEFORM3 corresponding to the highest score. We repeat the same procedure after removing INLINEFORM4 from INLINEFORM5 , and continue until INLINEFORM6 reduces to an empty set. Figure FIGREF11 (a) shows the architecture of this model. Margin Ranking (MR) Loss As an alternative to the pairwise ranker (§ SECREF15 ), we propose a pairwise model which learns from candidate pairs INLINEFORM0 but ranks each individual candidate directly rather than relatively. We define a new scoring function INLINEFORM1 which assigns a higher score to the better candidate, i.e., INLINEFORM2 , if INLINEFORM3 is a better candidate than INLINEFORM4 and vice versa. Instead of concatenating the feature vectors INLINEFORM5 and INLINEFORM6 , we feed them separately into two identical feedforward networks with shared parameters. During testing, we use only one of the networks to rank the candidates based on the INLINEFORM7 scores. For training, we add a ranking layer on top of the networks to measure the violations in the ranking order and minimize the Margin Ranking Loss (MR): DISPLAYFORM0 where INLINEFORM0 is the number of training samples. The architecture of this model is presented in Figure FIGREF11 (b). Adaptive Multi-task Learning Both models in § SECREF15 and § SECREF17 treat all the hashtags uniformly. However, different features address different types of hashtags. By design, the linguistic features capture named entities and multi-word hashtags that exhibit word-shape patterns, such as camel case. The ngram probabilities with Good-Turing smoothing gravitate towards multi-word segmentations with known words, as their estimate for unseen ngrams depends on the fraction of ngrams seen once, which can be very low BIBREF28 . The modified Kneser-Ney smoothing is more likely to favor segmentations that contain rare words, and single-word segmentations in particular. Please refer to § SECREF46 for a more detailed quantitative and qualitative analysis. To leverage this intuition, we introduce a binary classification task to help the model differentiate single-word from multi-word hashtags. The binary classifier takes hashtag features INLINEFORM0 as the input and outputs INLINEFORM1 , which represents the probability of INLINEFORM2 being a multi-word hashtag. INLINEFORM3 is used as an adaptive gating value in our multi-task learning setup. The gold labels for this task are obtained at no extra cost by simply verifying whether the ground-truth segmentation has multiple words. We train the pairwise segmentation ranker and the binary single- vs.
multi-token hashtag classifier jointly, by minimizing INLINEFORM4 for the pairwise ranker and the Binary Cross Entropy Error ( INLINEFORM5 ) for the classifier: DISPLAYFORM0 where INLINEFORM0 is the adaptive gating value, INLINEFORM1 indicates if INLINEFORM2 is actually a multi-word hashtag and INLINEFORM3 is the number of training examples. INLINEFORM4 and INLINEFORM5 are the weights for each loss. For our experiments, we apply equal weights. More specifically, we divide the segmentation feature vector INLINEFORM0 into two subsets: (a) INLINEFORM1 with modified Kneser-Ney smoothing features, and (b) INLINEFORM2 with Good-Turing smoothing and linguistic features. For an input candidate segmentation pair INLINEFORM3 , we construct two pairwise vectors INLINEFORM4 and INLINEFORM5 by concatenation, then combine them based on the adaptive gating value INLINEFORM6 before feeding them into the feedforward network INLINEFORM7 for pairwise ranking: DISPLAYFORM0 We use summation with padding, as we find this simple ensemble method achieves similar performance in our experiments as the more complex multi-column networks BIBREF29 . Figure FIGREF11 (c) shows the architecture of this model. An analogue multi-task formulation can also be used for the Margin Ranking loss as: DISPLAYFORM0 Features We use a combination of corpus-based and linguistic features to rank the segmentations. For a candidate segmentation INLINEFORM0 , its feature vector INLINEFORM1 includes the number of words in the candidate, the length of each word, the proportion of words in an English dictionary or Urban Dictionary BIBREF30 , ngram counts from Google Web 1TB corpus BIBREF31 , and ngram probabilities from trigram language models trained on the Gigaword corpus BIBREF32 and 1.1 billion English tweets from 2010, respectively. We train two language models on each corpus: one with Good-Turing smoothing using SRILM BIBREF33 and the other with modified Kneser-Ney smoothing using KenLM BIBREF34 . We also add boolean features, such as if the candidate is a named-entity present in the list of Wikipedia titles, and if the candidate segmentation INLINEFORM2 and its corresponding hashtag INLINEFORM3 satisfy certain word-shapes (more details in appendix SECREF61 ). Similarly, for hashtag INLINEFORM0 , we extract the feature vector INLINEFORM1 consisting of hashtag length, ngram count of the hashtag in Google 1TB corpus BIBREF31 , and boolean features indicating if the hashtag is in an English dictionary or Urban Dictionary, is a named-entity, is in camel case, ends with a number, and has all the letters as consonants. We also include features of the best-ranked candidate by the Word Breaker model. Implementation Details We use the PyTorch framework to implement our multi-task pairwise ranking model. The pairwise ranker consists of an input layer, three hidden layers with eight nodes in each layer and hyperbolic tangent ( INLINEFORM0 ) activation, and a single linear output node. The auxiliary classifier consists of an input layer, one hidden layer with eight nodes and one output node with sigmoid activation. We use the Adam algorithm BIBREF35 for optimization and apply a dropout of 0.5 to prevent overfitting. We set the learning rate to 0.01 and 0.05 for the pairwise ranker and auxiliary classifier respectively. For each experiment, we report results obtained after 100 epochs. 
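The following PyTorch sketch puts the pieces above together. The layer sizes mirror the stated implementation details (three 8-unit tanh layers for the ranker, an 8-unit auxiliary classifier with a sigmoid output) and the two losses are combined with equal weights; however, the exact gating formula, the hidden activation of the classifier and the feature dimensions are not fully spelled out above, so the way the classifier output g mixes the two feature subsets here is an assumption made for illustration.

```python
import torch
import torch.nn as nn

class MultiTaskPairwiseRanker(nn.Module):
    """Sketch of the adaptive multi-task ranker; feature-vector sizes are placeholders."""
    def __init__(self, dim_kn_pair, dim_gt_pair, dim_hashtag):
        super().__init__()
        assert dim_kn_pair == dim_gt_pair, "subsets are padded to a common size before combination"
        self.ranker = nn.Sequential(
            nn.Linear(dim_kn_pair, 8), nn.Tanh(),
            nn.Linear(8, 8), nn.Tanh(),
            nn.Linear(8, 8), nn.Tanh(),
            nn.Linear(8, 1),
        )
        self.classifier = nn.Sequential(           # auxiliary single- vs. multi-token classifier
            nn.Linear(dim_hashtag, 8), nn.Tanh(),  # hidden activation assumed
            nn.Linear(8, 1), nn.Sigmoid(),
        )

    def forward(self, kn_pair, gt_pair, hashtag_feats):
        g = self.classifier(hashtag_feats)          # P(multi-word hashtag)
        gated = (1 - g) * kn_pair + g * gt_pair     # assumed form of the adaptive combination
        return self.ranker(gated).squeeze(-1), g.squeeze(-1)

mse, bce = nn.MSELoss(), nn.BCELoss()

def joint_loss(model, kn_pair, gt_pair, hashtag_feats, gold_diff, is_multiword):
    score, g = model(kn_pair, gt_pair, hashtag_feats)
    return mse(score, gold_diff) + bce(g, is_multiword)   # equal weights, as stated above
```

At inference time, the pairwise scores emitted by such a model would be aggregated with the greedy procedure described earlier: repeatedly pick the candidate with the highest total score against the remaining ones, remove it, and continue until the candidate set is empty.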
For the baseline model used to extract the INLINEFORM0 initial candidates, we reimplemented the Word Breaker BIBREF16 as described in § SECREF2 and adapted it to use a language model trained on 1.1 billion tweets with Good-Turing smoothing using SRILM BIBREF33 to give a better performance in segmenting hashtags (§ SECREF46 ). For all our experiments, we set INLINEFORM1 . Hashtag Segmentation Data We use two datasets for experiments (Table TABREF29 ): (a) STAN INLINEFORM0 , created by Bansal et al. BIBREF10 , which consists of 1,108 unique English hashtags from 1,268 randomly selected tweets in the Stanford Sentiment Analysis Dataset BIBREF36 along with their crowdsourced segmentations and our additional corrections; and (b) STAN INLINEFORM1 , our new expert-curated dataset, which includes all 12,594 unique English hashtags and their associated tweets from the same Stanford dataset. Experiments In this section, we present experimental results that compare our proposed method with other state-of-the-art approaches on hashtag segmentation datasets. The next section will show experiments applying hashtag segmentation to the popular task of sentiment analysis. Existing Methods We compare our pairwise neural ranker with the following baseline and state-of-the-art approaches: The original hashtag as a single token; A rule-based segmenter, which employs a set of word-shape rules with an English dictionary BIBREF13 ; A Viterbi model which uses word frequencies from a book corpus BIBREF0 ; The specially developed GATE Hashtag Tokenizer from the open-source toolkit, which combines dictionaries and gazetteers in a Viterbi-like algorithm BIBREF11 ; A maximum entropy classifier (MaxEnt) trained on the STAN INLINEFORM0 training dataset. It predicts whether a space should be inserted at each position in the hashtag and is the current state-of-the-art BIBREF14 ; Our reimplementation of the Word Breaker algorithm which uses beam search and a Twitter ngram language model BIBREF16 ; A pairwise linear ranker which we implemented for comparison purposes with the same features as our neural model, but using a perceptron as the underlying classifier BIBREF38 and minimizing the hinge loss between INLINEFORM0 and a scoring function similar to INLINEFORM1 . It is trained on the STAN INLINEFORM2 dataset. Evaluation Metrics We evaluate the performance by the top INLINEFORM0 ( INLINEFORM1 ) accuracy (A@1, A@2), average token-level F INLINEFORM2 score (F INLINEFORM3 @1), and mean reciprocal rank (MRR). In particular, the accuracy and MRR are calculated at the segmentation level, which means that an output segmentation is considered correct if and only if it fully matches the human segmentation. The average token-level F INLINEFORM4 score accounts for partially correct segmentations in the multi-token hashtag cases. Results Tables TABREF32 and TABREF33 show the results on the STAN INLINEFORM0 and STAN INLINEFORM1 datasets, respectively. All of our pairwise neural rankers are trained on the 2,518 manually segmented hashtags in the training set of STAN INLINEFORM2 and perform favorably against other state-of-the-art approaches. Our best model (MSE+multitask), which utilizes different features adaptively via a multi-task learning procedure, is shown to perform better than simply combining all the features together (MR and MSE).
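For reference, the intrinsic metrics just defined can be computed as in the sketch below (a simplified stand-in, not the paper's evaluation script; in particular, the token-level F1 is implemented here over token multisets, which is one common instantiation rather than the paper's exact definition):

```python
from collections import Counter

def token_f1(predicted, gold):
    overlap = sum((Counter(predicted) & Counter(gold)).values())
    if overlap == 0:
        return 0.0
    p, r = overlap / len(predicted), overlap / len(gold)
    return 2 * p * r / (p + r)

def evaluate(ranked_outputs, references):
    """ranked_outputs: ranked candidate segmentations per hashtag;
    references: the gold segmentation (a list of tokens) for each hashtag."""
    a1 = a2 = mrr = f1 = 0.0
    for ranked, gold in zip(ranked_outputs, references):
        a1 += ranked[0] == gold
        a2 += gold in ranked[:2]
        rank = ranked.index(gold) + 1 if gold in ranked else None
        mrr += 1.0 / rank if rank else 0.0
        f1 += token_f1(ranked[0], gold)
    n = len(references)
    return {"A@1": a1 / n, "A@2": a2 / n, "MRR": mrr / n, "F1@1": f1 / n}
```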
We highlight the 24.6% error reduction on STAN INLINEFORM3 and 16.5% on STAN INLINEFORM4 of our approach over the previous SOTA BIBREF14 on the Multi-token hashtags, and the importance of having a separate evaluation of multi-word cases, as it is trivial to obtain 100% accuracy for Single-token hashtags. While our hashtag segmentation model achieves a very high accuracy@2, to be practically useful it remains a challenge to get the top prediction exactly correct. Some hashtags are very difficult to interpret, e.g., #BTVSMB refers to the Social Media Breakfast (SMB) in Burlington, Vermont (BTV). The improved Word Breaker with our addition of a Twitter-specific language model is a very strong baseline, which echoes the findings of the original Word Breaker paper BIBREF16 that having a large in-domain language model is extremely helpful for word segmentation tasks. It is worth noting that the other state-of-the-art system BIBREF14 also utilized a 4-gram language model trained on 476 million tweets from 2009. Analysis and Discussion To empirically illustrate the effectiveness of different features on different types of hashtags, we show the results for models using individual feature sets in pairwise ranking models (MSE) in Table TABREF45 . Language models with modified Kneser-Ney smoothing perform best on single-token hashtags, while Good-Turing and Linguistic features work best on multi-token hashtags, confirming our intuition about their usefulness in a multi-task learning approach. Table TABREF47 shows a qualitative analysis with the first column ( INLINEFORM0 INLINEFORM1 INLINEFORM2 ) indicating which features lead to correct or wrong segmentations, their count in our data and illustrative examples with human segmentation. As expected, longer hashtags with more than three tokens pose greater challenges, and the segmentation-level accuracy of our best model (MSE+multitask) drops to 82.1%. For many error cases, our model predicts a close-to-correct segmentation, e.g., #youknowyouupttooearly, #iseelondoniseefrance, which is also reflected by the higher token-level F INLINEFORM0 scores across hashtags with different lengths (Figure FIGREF51 ). Since our approach heavily relies on building a Twitter language model, we experimented with its size and show the results in Figure FIGREF52 . Our approach can perform well even with access to a smaller number of tweets. The drop in F INLINEFORM0 score for our pairwise neural ranker is only 1.4% and 3.9% when using the language models trained on 10% and 1% of the total 1.1 billion tweets, respectively. Language use in Twitter changes with time BIBREF9 . Our pairwise ranker uses language models trained on the tweets from the year 2010. We tested our approach on a set of 500 random English hashtags posted in tweets from the year 2019 and show the results in Table TABREF55 . With a segmentation-level accuracy of 94.6% and average token-level F INLINEFORM0 score of 95.6%, our approach performs favorably on 2019 hashtags. Extrinsic Evaluation: Twitter Sentiment Analysis We attempt to demonstrate the effectiveness of our hashtag segmentation system by studying its impact on the task of sentiment analysis in Twitter BIBREF39 , BIBREF40 , BIBREF41 . We use our best model (MSE+multitask), under the name HashtagMaster, in the following experiments.
Experimental Setup We compare the performance of the BiLSTM+Lex BIBREF42 sentiment analysis model under three configurations: (a) tweets with hashtags removed, (b) tweets with hashtags as single tokens excluding the # symbol, and (c) tweets with hashtags as segmented by our system, HashtagMaster. BiLSTM+Lex is a state-of-the-art open-source system for predicting tweet-level sentiment BIBREF43 . It learns a context-sensitive sentiment intensity score by leveraging a Twitter-based sentiment lexicon BIBREF44 . We use the same settings as described by Teng et al. BIBREF42 to train the model. We use the dataset from the Sentiment Analysis in Twitter shared task (subtask A) at SemEval 2017 BIBREF41 . Given a tweet, the goal is to predict whether it expresses POSITIVE, NEGATIVE or NEUTRAL sentiment. The training and development sets consist of 49,669 tweets and we use 40,000 for training and the rest for development. There are a total of 12,284 tweets containing 12,128 hashtags in the SemEval 2017 test set, and our hashtag segmenter ended up splitting 6,975 of those hashtags, present in 3,384 tweets. Results and Analysis In Table TABREF59 , we report the results based on the 3,384 tweets where HashtagMaster predicted a split, as for the remaining tweets in the test set the hashtag segmenter would neither improve nor worsen the sentiment prediction. Our hashtag segmenter successfully improved the sentiment analysis performance by 2% on average recall and F INLINEFORM0 compared to having hashtags unsegmented. This improvement is seemingly small but decidedly important for tweets where sentiment-related information is embedded in multi-word hashtags and sentiment prediction would be incorrect based only on the text (see Table TABREF60 for examples). In fact, 2,605 out of the 3,384 tweets have multi-word hashtags that contain words in the Twitter-based sentiment lexicon BIBREF44 and 125 tweets contain sentiment words only in the hashtags but not in the rest of the tweet. On the entire test set of 12,284 tweets, the increase in the average recall is 0.5%. Other Related Work Automatic hashtag segmentation can improve the performance of many applications besides sentiment analysis, such as text classification BIBREF13 , named entity linking BIBREF10 and modeling user interests for recommendations BIBREF45 . It can also help in collecting data of higher volume and quality by providing a more nuanced interpretation of its content, as shown for emotion analysis BIBREF46 , sarcasm and irony detection BIBREF11 , BIBREF47 . Better semantic analysis of hashtags can also potentially be applied to hashtag annotation BIBREF48 , to improve distant supervision labels in training classifiers for tasks such as sarcasm BIBREF5 , sentiment BIBREF4 , emotions BIBREF3 ; and, more generally, as labels for pre-training representations of words BIBREF49 , sentences BIBREF50 , and images BIBREF51 . Conclusion We proposed a new pairwise neural ranking model for hashtag segmentation and showed significant performance improvements over the state-of-the-art. We also constructed a larger and more curated dataset for analyzing and benchmarking hashtag segmentation methods. We demonstrated that hashtag segmentation helps with downstream tasks such as sentiment analysis. Although we focused on English hashtags, our pairwise ranking approach is language-independent and we intend to extend our toolkit to languages other than English as future work.
Acknowledgments We thank Ohio Supercomputer Center BIBREF52 for computing resources and the NVIDIA for providing GPU hardware. We thank Alan Ritter, Quanze Chen, Wang Ling, Pravar Mahajan, and Dushyanta Dhyani for valuable discussions. We also thank the annotators: Sarah Flanagan, Kaushik Mani, and Aswathnarayan Radhakrishnan. This material is based in part on research sponsored by the NSF under grants IIS-1822754 and IIS-1755898, DARPA through the ARO under agreement number W911NF-17-C-0095, through a Figure-Eight (CrowdFlower) AI for Everyone Award and a Criteo Faculty Research Award to Wei Xu. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of the U.S. Government. Word-shape rules Our model uses the following word shape rules as boolean features. If the candidate segmentation INLINEFORM0 and its corresponding hashtag INLINEFORM1 satisfies a word shape rule, then the boolean feature is set to True.
How is the dataset of hashtags sourced?
1,268 randomly selected tweets in the Stanford Sentiment Analysis Dataset BIBREF36, all 12,594 unique English hashtags and their associated tweets from the same Stanford dataset
3,756
qasper
4k
Introduction Nowadays, deep learning techniques outperform conventional methods in most speech-related tasks. Training robust deep neural networks for each task depends on the availability of powerful processing GPUs, as well as standard and large-scale datasets. In text-independent speaker verification, large-scale datasets are available, thanks to the NIST SRE evaluations and other data collection projects such as VoxCeleb BIBREF0. In text-dependent speaker recognition, experiments with end-to-end architectures conducted on large proprietary databases have demonstrated their superiority over traditional approaches BIBREF1. Yet, contrary to text-independent speaker recognition, text-dependent speaker recognition lacks large-scale publicly available databases. The two most well-known datasets are probably RSR2015 BIBREF2 and RedDots BIBREF3. The former contains speech data collected from 300 individuals in a controlled manner, while the latter is used primarily for evaluation rather than training, due to its small number of speakers (only 64). Motivated by this lack of a large-scale dataset for text-dependent speaker verification, we chose to proceed with the collection of the DeepMine dataset, which we expect to become a standard benchmark for the task. Apart from speaker recognition, large amounts of training data are also required for training automatic speech recognition (ASR) systems. Such datasets should not only be large in size but should also be characterized by high variability with respect to speakers, age and dialects. While several datasets with these properties are available for languages such as English, Mandarin or French, this is not the case for several other languages, such as Persian. To this end, we proceeded with collecting a large-scale dataset suitable for building robust ASR models in Persian. The main goal of the DeepMine project was to collect speech from at least a few thousand speakers, enabling research and development of deep learning methods. The project started at the beginning of 2017, and after designing the database and developing the Android and server applications, the data collection began in the middle of 2017. The project finished at the end of 2018 and the cleaned-up and final version of the database was released at the beginning of 2019. In BIBREF4, the running project and its data collection scenarios were described, along with some preliminary results and statistics. In this paper, we announce the final and cleaned-up version of the database, describe its different parts and provide various evaluation setups for each part. Finally, since the database was designed mainly for text-dependent speaker verification purposes, some baseline results are reported for this task on the official evaluation setups. Additional baseline results are also reported for Persian speech recognition. However, due to space limitations in this paper, the baseline results are not reported for all the database parts and conditions. They will be defined and reported in the database technical documentation and in a future journal paper. Data Collection DeepMine is publicly available for everybody with a variety of licenses for different users. It was collected using crowdsourcing BIBREF4. The data collection was done using an Android application. Each respondent installed the application on his/her personal device and recorded several phrases in different sessions.
The Android application did various checks on each utterance and if it passed all of them, the respondent was directed to the next phrase. For more information about data collection scenario, please refer to BIBREF4. Data Collection ::: Post-Processing In order to clean-up the database, the main post-processing step was to filter out problematic utterances. Possible problems include speaker word insertions (e.g. repeating some part of a phrase), deletions, substitutions, and involuntary disfluencies. To detect these, we implemented an alignment stage, similar to the second alignment stage in the LibriSpeech project BIBREF5. In this method, a custom decoding graph was generated for each phrase. The decoding graph allows for word skipping and word insertion in the phrase. For text-dependent and text-prompted parts of the database, such errors are not allowed. Hence, any utterances with errors were removed from the enrollment and test lists. For the speech recognition part, a sub-part of the utterance which is correctly aligned to the corresponding transcription is kept. After the cleaning step, around 190 thousand utterances with full transcription and 10 thousand with sub-part alignment have remained in the database. Data Collection ::: Statistics After processing the database and removing problematic respondents and utterances, 1969 respondents remained in the database, with 1149 of them being male and 820 female. 297 of the respondents could not read English and have therefore read only the Persian prompts. About 13200 sessions were recorded by females and similarly, about 9500 sessions by males, i.e. women are over-represented in terms of sessions, even though their number is 17% smaller than that of males. Other useful statistics related to the database are shown in Table TABREF4. The last status of the database, as well as other related and useful information about its availability can be found on its website, together with a limited number of samples. DeepMine Database Parts The DeepMine database consists of three parts. The first one contains fixed common phrases to perform text-dependent speaker verification. The second part consists of random sequences of words useful for text-prompted speaker verification, and the last part includes phrases with word- and phoneme-level transcription, useful for text-independent speaker verification using a random phrase (similar to Part4 of RedDots). This part can also serve for Persian ASR training. Each part is described in more details below. Table TABREF11 shows the number of unique phrases in each part of the database. For the English text-dependent part, the following phrases were selected from part1 of the RedDots database, hence the RedDots can be used as an additional training set for this part: “My voice is my password.” “OK Google.” “Artificial intelligence is for real.” “Actions speak louder than words.” “There is no such thing as a free lunch.” DeepMine Database Parts ::: Part1 - Text-dependent (TD) This part contains a set of fixed phrases which are used to verify speakers in text-dependent mode. Each speaker utters 5 Persian phrases, and if the speaker can read English, 5 phrases selected from Part1 of the RedDots database are also recorded. We have created three experimental setups with different numbers of speakers in the evaluation set. 
For each setup, speakers with more recording sessions are included in the evaluation set and the rest of the speakers are used for training in the background set (in the database, all background sets are basically training data). The rows in Table TABREF13 correspond to the different experimental setups and show the numbers of speakers in each set. Note that, for English, we have filtered the (Persian native) speakers by the ability to read English. Therefore, there are fewer speakers in each set for English than for Persian. There is a small “dev” set in each setup which can be used for parameter tuning to prevent over-tuning on the evaluation set. For each experimental setup, we have defined several official trial lists with different numbers of enrollment utterances per trial in order to investigate the effects of having different amounts of enrollment data. All trials in one trial list have the same number of enrollment utterances (3 to 6) and only one test utterance. All enrollment utterances in a trial are taken from different consecutive sessions and the test utterance is taken from yet another session. From all the setups and conditions, the 100-spk with 3-session enrollment (3-sess) is considered the main evaluation condition. In Table TABREF14, the numbers of trials for Persian 3-sess are shown for the different types of trials in text-dependent speaker verification (SV). Note that for Imposter-Wrong (IW) trials (i.e. an imposter speaker pronouncing a wrong phrase), we merely create one wrong trial for each Imposter-Correct (IC) trial to limit the huge number of possible trials for this case. So, the number of trials for the IC and IW cases is the same. DeepMine Database Parts ::: Part2 - Text-prompted (TP) For this part, in each session, 3 random sequences of Persian month names are shown to the respondent in two modes: In the first mode, the sequence consists of all 12 months, which will be used for speaker enrollment. The second mode contains a sequence of 3 month names that will be used as a test utterance. In each batch of 8 sessions received by a respondent from the server, there are 3 enrollment phrases of all 12 months (all in just one session), and $7 \times 3$ other test phrases, containing fewer words. For a respondent who can read English, 3 random sequences of English digits are also recorded in each session. In one of the sessions, these sequences contain all digits and the remaining ones contain only 4 digits. Similar to the text-dependent case, three experimental setups with different numbers of speakers in the evaluation set are defined (corresponding to the rows in Table TABREF16). However, a different strategy is used for defining trials: Depending on the enrollment condition (1- to 3-sess), trials are enrolled on utterances of all words from 1 to 3 different sessions (i.e. 3 to 9 utterances). Further, we consider two conditions for test utterances: seq test utterances with only 3 or 4 words and full test utterances with all words (i.e. the same words as in enrollment but in a different order). From all setups and conditions, the 100-spk with 1-session enrollment (1-sess) is considered the main evaluation condition for the text-prompted case. In Table TABREF16, the numbers of trials (sum for both seq and full conditions) for Persian 1-sess are shown for the different types of trials in text-prompted SV. Again, we just create one IW trial for each IC trial.
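To make the enrollment/test pairing concrete, the toy sketch below mimics the text-dependent trial construction described above: a model is enrolled on a phrase from a few consecutive sessions and tested against utterances from later sessions. It is only an approximation of the logic for illustration; the official trial lists (including imposter-correct and the matched number of imposter-wrong trials) are released with the database.

```python
def target_trials(sessions_per_speaker, n_enroll=3):
    """Toy construction of Target-Correct trials: enroll on the phrase from
    `n_enroll` consecutive sessions, test on the same phrase from a later session.
    `sessions_per_speaker` maps speaker -> ordered list of per-session utterances."""
    trials = []
    for spk, sessions in sessions_per_speaker.items():
        for start in range(len(sessions) - n_enroll):
            enroll = sessions[start:start + n_enroll]
            for test in sessions[start + n_enroll:]:
                trials.append({"speaker": spk, "enroll": enroll, "test": test})
    return trials

toy = {"spk001": ["s1_utt", "s2_utt", "s3_utt", "s4_utt", "s5_utt"]}
print(len(target_trials(toy)))   # 3 trials: 3 consecutive enrollment sessions, test from a later one
```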
DeepMine Database Parts ::: Part3 - Text-independent (TI) In this part, 8 Persian phrases that have already been transcribed at the phone level are displayed to the respondent. These phrases are chosen mostly from news and Persian Wikipedia. If the respondent is unable to read English, instead of the 5 fixed phrases and 3 random digit strings, 8 other Persian phrases are prompted to the respondent, so that there are exactly 24 phrases in each recording session. This part can be useful for at least three potential applications. First, it can be used for text-independent speaker verification. The second application of this part (same as Part4 of RedDots) is text-prompted speaker verification using random text (instead of a random sequence of words). Finally, the third application is large vocabulary speech recognition in Persian (explained in the next sub-section). Based on the recording sessions, we created two experimental setups for speaker verification. In the first one, respondents with at least 17 recording sessions are included in the evaluation set, respondents with 16 sessions in the development set, and the rest of the respondents in the background set (which can be used as training data). In the second setup, respondents with at least 8 sessions are included in the evaluation set, respondents with 6 or 7 sessions in the development set, and the rest of the respondents in the background set. Table TABREF18 shows the numbers of speakers in each set of the database for the text-independent SV case. For text-independent SV, we have considered 4 scenarios for enrollment and 4 scenarios for test. The speaker can be enrolled using utterances from 1, 2 or 3 consecutive sessions (1sess to 3sess) or using 8 utterances from 8 different sessions. The test speech can be one utterance (1utt) for the short-duration scenario or all utterances in one session (1sess) for the long-duration case. In addition, test speech can be selected from 5 English phrases for cross-language testing (enrollment using Persian utterances and test using English utterances). From all setups, 1sess-1utt and 1sess-1sess for the 438-spk set are considered the main evaluation setups for the text-independent case. Table TABREF19 shows the numbers of trials for these setups. For text-prompted SV with random text, the same setup as in the text-independent case, together with the corresponding utterance transcriptions, can be used. DeepMine Database Parts ::: Part3 - Speech Recognition As explained before, Part3 of the DeepMine database can be used for Persian read speech recognition. There are only a few databases for speech recognition in Persian BIBREF6, BIBREF7. Hence, this part can at least partly address this problem and enable robust speech recognition applications in Persian. Additionally, it can be used for speaker recognition applications, such as training deep neural networks (DNNs) for extracting bottleneck features BIBREF8, or for collecting sufficient statistics using DNNs for i-vector training. We have randomly selected 50 speakers (25 for each gender) as test speakers from all speakers in the database who have between 25 and 50 minutes of net speech (excluding silence parts). For each speaker, the utterances in the first 5 sessions are included in the (small) test-set, and the other utterances of the test speakers form the large-test-set. The remaining utterances of the other speakers are included in the training set. The test-set, large-test-set and train-set contain 5.9, 28.5 and 450 hours of speech respectively.
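A simplified sketch of the speech-recognition split described above is given below (it assumes a per-utterance table with speaker, gender, session index and net-speech duration columns; this is an illustration of the selection criteria, not the released partition itself):

```python
import pandas as pd

# columns assumed: speaker, gender, session, duration (net speech per utterance, seconds)
utts = pd.read_csv("deepmine_part3_utterances.csv")

net = utts.groupby("speaker")["duration"].sum() / 60.0             # net speech in minutes
eligible = net[(net >= 25) & (net <= 50)].index

test_speakers = (
    utts[utts["speaker"].isin(eligible)]
    .drop_duplicates("speaker")
    .groupby("gender")["speaker"]
    .apply(lambda s: s.sample(25, random_state=0))                  # 25 test speakers per gender
    .tolist()
)

is_test_spk = utts["speaker"].isin(test_speakers)
test_set = utts[is_test_spk & (utts["session"] <= 5)]               # first 5 sessions -> small test-set
large_test_set = utts[is_test_spk & (utts["session"] > 5)]          # remaining utterances of test speakers
train_set = utts[~is_test_spk]                                      # everything else -> train-set
```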
There are about 8300 utterances in Part3 which contain only Persian full names (i.e. first and family name pairs). Each phrase consists of several full names, and their phoneme transcriptions were extracted automatically using a trained Grapheme-to-Phoneme (G2P) model. These utterances can be used to evaluate the performance of systems for name recognition, which is usually more difficult than normal speech recognition because of the lack of a reliable language model. Experiments and Results Due to space limitations, we present results only for Persian text-dependent speaker verification and speech recognition. Experiments and Results ::: Speaker Verification Experiments We conducted an experiment on the text-dependent speaker verification part of the database, using the i-vector based method proposed in BIBREF9, BIBREF10, and applied it to the Persian portion of Part1. In this experiment, 20-dimensional MFCC features along with first and second derivatives are extracted from 16 kHz signals using HTK BIBREF11 with 25 ms Hamming-windowed frames with 15 ms overlap. The reported results are obtained with a 400-dimensional gender-independent i-vector based system. The i-vectors are first length-normalized and are further normalized using phrase- and gender-dependent Regularized Within-Class Covariance Normalization (RWCCN) BIBREF10. Cosine distance is used to obtain speaker verification scores and phrase- and gender-dependent s-norm is used for normalizing the scores. For aligning speech frames to Gaussian components, monophone HMMs with 3 states and 8 Gaussian components in each state are used BIBREF10. We only model the phonemes which appear in the 5 Persian text-dependent phrases. For the speaker verification experiments, the results are reported in terms of Equal Error Rate (EER) and the Normalized Detection Cost Function as defined for NIST SRE08 ($\mathrm {NDCF_{0.01}^{min}}$) and NIST SRE10 ($\mathrm {NDCF_{0.001}^{min}}$). As shown in Table TABREF22, in text-dependent SV there are 4 types of trials: Target-Correct and Imposter-Correct refer to trials where the pass-phrase is uttered correctly by target and imposter speakers respectively, and in the same manner, Target-Wrong and Imposter-Wrong refer to trials where speakers uttered a wrong pass-phrase. In this paper, only the correct trials (i.e. Target-Correct as target trials vs Imposter-Correct as non-target trials) are considered for evaluating systems, as these have been shown to be the most challenging trials in text-dependent SV BIBREF8, BIBREF12. Table TABREF23 shows the results of the text-dependent experiments using the Persian 100-spk, 3-sess setup. For filtering trials, the respondents' mobile brand and model were used in this experiment. In the table, the first two letters in the filter notation relate to the target trials and the second two letters (i.e. the right side of the colon) relate to the non-target trials. For target trials, the first Y means the enrollment and test utterances were recorded by the target speaker using devices of the same brand. The second Y means both recordings were done using exactly the same device model. Similarly, the first Y for non-target trials means that the devices of the target and imposter speakers are from the same brand (i.e. manufacturer). The second Y means that, in addition to the same brand, both devices have the same model. So, the most difficult target trials are “NN”, where the speaker has used a different device at test time.
In the same manner, the most difficult non-target trials, which should be rejected by the system, are “YY”, where the imposter speaker has used the same device model as the target speaker (note that this does not mean physically the same device, because each speaker participated in the project using a personal mobile device). Hence, the similarity in the recording channel makes rejection more difficult. The first row in Table TABREF23 shows the results for all trials. By comparing the results with the best published results on RSR2015 and RedDots BIBREF10, BIBREF8, BIBREF12, it is clear that the DeepMine database is more challenging than both the RSR2015 and RedDots databases. For RSR2015, the same i-vector/HMM-based method with both RWCCN and s-norm achieved an EER of less than 0.3% for both genders (Table VI in BIBREF10). Conventional Relevance MAP adaptation with HMM alignment, without applying any channel-compensation techniques (i.e. without RWCCN and s-norm, due to the lack of suitable training data), achieved an EER of around 1.5% for males on RedDots Part1 (Table XI in BIBREF10). It is worth noting that the EERs for the DeepMine database without any channel-compensation techniques are 2.1% and 3.7% for males and females respectively. One interesting advantage of the DeepMine database compared to both RSR2015 and RedDots is that several target speakers have more than one mobile device. This allows us to analyse the effects of channel compensation methods. The second row in Table TABREF23 corresponds to the most difficult trials, where the target trials come from mobile devices with different models while imposter trials come from the same device models. It is clear that severe degradation was caused by this kind of channel effect (i.e. decreasing within-speaker similarities while increasing between-speaker similarities), especially for females. The results in the third row show the condition where target speakers use exactly the same device at test time as was used for enrollment. Comparing this row with the results in the first row shows how much improvement can be achieved when exactly the same device is used by the target speaker. The results in the fourth row show the condition where imposter speakers also use the same device model at test time to fool the system, so in this case there is no device mismatch in any trial. By comparing these results with the third row, we can see how much degradation is caused if we only consider the non-target trials with the same device. The fifth row shows similar results for the case where the imposter speakers use a device of the same brand as the target speaker but with a different model. Surprisingly, in this case the degradation is negligible, which means that different mobile models from the same brand (manufacturer) have different recording channel properties. The degraded female results in the sixth row, as compared to the third row, show the effect of using a different device model from the same brand for target trials. For males, the filters yield almost the same subsets of trials, which explains the very similar results in this case. Looking at the first two and the last row of Table TABREF23, one can notice the significantly worse performance obtained for the female trials as compared to the male ones. Note that these three rows include target trials where the devices used for enrollment do not necessarily match the devices used for recording test utterances.
On the other hand, in rows 3 to 6, which exclude such mismatched trials, the performance for males and females is comparable. This suggests that the degraded results for females are caused by some problematic trials with device mismatch. The exact reason for this degradation is so far unclear and needs further investigation. In the last row of the table, the condition of the second row is relaxed: the target device has a different model, possibly from the same brand, and the imposter device only needs to be from the same brand. In this case, as expected, the performance degradation is smaller than in the second row. Experiments and Results ::: Speech Recognition Experiments In addition to speaker verification, we present several speech recognition experiments on Part3. The experiments were performed with the Kaldi toolkit BIBREF13. For training the HMM-based monophone model, only the 20 thousand shortest utterances are used, while for the other models the whole training set is used. The DNN-based acoustic model is a time-delay DNN with low-rank factorized layers and skip connections, without i-vector adaptation (a network modified from one of the best-performing LibriSpeech recipes). The network is shown in Table TABREF25: there are 16 F-TDNN layers, with dimension 1536 and linear bottleneck layers of dimension 256. The acoustic model is trained for 10 epochs using lattice-free maximum mutual information (LF-MMI) with cross-entropy regularization BIBREF14. Rescoring is done using a pruned trigram language model, and the dictionary contains around 90,000 words. Table TABREF26 shows the results in terms of word error rate (WER) for the different evaluated methods. As can be seen, the created database can be used to train well-performing and practically usable Persian ASR models. Conclusions In this paper, we have described the final version of a large speech corpus, the DeepMine database. It has been collected using crowdsourcing and, to the best of our knowledge, it is the largest public text-dependent and text-prompted speaker verification database in two languages: Persian and English. In addition, it is the largest text-independent speaker verification evaluation database, making it suitable for robustly evaluating state-of-the-art methods under different conditions. Alongside these appealing properties, it comes with phone-level transcriptions, making it suitable for training deep neural network models for Persian speech recognition. We provided several evaluation protocols for each part of the database. The protocols allow researchers to investigate the performance of different methods in various scenarios and to study the effects of channel, duration and phrase text on performance. We also provide two test sets for speech recognition: one normal test set with a few minutes of speech for each speaker, and one large test set with more speech (30 minutes on average) that can be used for speaker adaptation methods. As baseline results, we reported the performance of an i-vector/HMM based method on the Persian text-dependent part. Moreover, we conducted speech recognition experiments using conventional HMM-based methods, as well as a state-of-the-art deep neural network based method using the Kaldi toolkit, with promising performance. The text-dependent results have shown that the DeepMine database is more challenging than the RSR2015 and RedDots databases. Acknowledgments The data collection project was mainly supported by the Sharif DeepMine company.
The work on the paper was supported by Czech National Science Foundation (GACR) project "NEUREM3" No. 19-26934X and the National Programme of Sustainability (NPU II) project "IT4Innovations excellence in science - LQ1602".
what accents are present in the corpus?
Unanswerable
3,794
qasper
4k
Introduction Automatically generating text to describe the content of images, also known as image captioning, is a multimodal task of considerable interest in both the computer vision and the NLP communities. Image captioning can be framed as a translation task from an image to a descriptive natural language statement. Many existing captioning models BIBREF0, BIBREF1, BIBREF2, BIBREF3 follow the typical encoder-decoder framework where a convolutional network is used to condense images into visual feature representations, combined with a recurrent network for language generation. While these models demonstrate promising results, quantifying image captioning performance remains a challenging problem, in a similar way to other generative tasks BIBREF4, BIBREF5. Evaluating candidate captions for human preference is slow and laborious. To alleviate this problem, many automatic evaluation metrics have been proposed, such as BLEU BIBREF6, METEOR BIBREF7, ROUGE BIBREF8 and CIDEr BIBREF9. These n-gram-based metrics evaluate captioning performance based on surface similarity between a candidate caption and reference statements. A more recent evaluation metric for image captioning is SPICE BIBREF10, which takes into account semantic propositional content of generated captions by scoring a caption based upon a graph-based semantic representation transformed from reference captions. The rationale behind these evaluation metrics is that human reference captions serve as an approximate target and comparing model outputs to this target is a proxy for how well a system performs. Thus, a candidate caption is not directly evaluated with respect to image content, but compared to a set of human statements about that image. However, in image captioning, visual scenes with multiple objects and relations correspond to a diversity of valid descriptions. Consider the example image and captions from the ShapeWorld framework BIBREF11 shown in Figure FIGREF1. The first three captions are true statements about the image and express relevant ideas, but describe different objects, attributes and spatial relationships, while the fourth caption is wrong despite referring to the same objects as in the third caption. This casts doubt on the sufficiency of using a set of reference captions to approximate the content of an image. We argue that, while existing metrics have undeniably been useful for real-world captioning evaluation, their focus on approximate surface comparison limits deeper insights into the learning process and eventual behavior of captioning models. To address this problem, we propose a set of principled evaluation criteria which evaluate image captioning models for grammaticality, truthfulness and diversity (GTD). These criteria correspond to necessary requirements for image captioning systems: (a) that the output is grammatical, (b) that the output statement is true with respect to the image, and (c) that outputs are diverse and mirror the variability of training captions. Practical evaluation of GTD is currently only possible on synthetic data. We construct a range of datasets designed for image captioning evaluation. We call this diagnostic evaluation benchmark ShapeWorldICE (ShapeWorld for Image Captioning Evaluation). We illustrate the evaluation of specific image captioning models on ShapeWorldICE. 
We empirically demonstrate that the existing metrics BLEU and SPICE do not capture true caption-image agreement in all scenarios, while the GTD framework allows a fine-grained investigation of how well existing models cope with varied visual situations and linguistic constructions. We believe that as a supplementary evaluation method to real-world metrics, the GTD framework provides evaluation insights that are sufficiently interesting to motivate future work. Related work ::: Existing evaluation of image captioning As a natural language generation task, image captioning frequently uses evaluation metrics such as BLEU BIBREF6, METEOR BIBREF7, ROUGE BIBREF8 and CIDEr BIBREF9. These metrics use n-gram similarity between the candidate caption and reference captions to approximate the correlation between a candidate caption and the associated ground truth. SPICE BIBREF10 is a more recent metric specifically designed for image captioning. For SPICE, both the candidate caption and reference captions are parsed to scene graphs, and the agreement between tuples extracted from these scene graphs is examined. SPICE more closely relates to our truthfulness evaluation than the other metrics, but it still uses overlap comparison to reference captions as a proxy to ground truth. In contrast, our truthfulness metric directly evaluates a candidate caption against a model of the actual visual content. Many researchers have pointed out problems with existing reference-based metrics including low correlations with human judgment BIBREF12, BIBREF10, BIBREF13 and strong baselines using nearest-neighbor methods BIBREF14 or relying solely on object detection BIBREF15. Fundamental concerns have been raised with respect to BLEU, including variability in parameterization and precise score calculation leading to significantly different results BIBREF16. Its validity as a metric for tasks other than machine translation has been questioned BIBREF17, particularly for tasks for which the output content is not narrowly constrained, like dialogue BIBREF18. Some recent work focuses on increasing the diversity of generated captions, for which various measures are proposed. Devlin et al. BIBREF19 explored the concept of caption diversity by evaluating performance on compositionally novel images. van Miltenburg et al BIBREF20 framed image captioning as a word recall task and proposed several metrics, predominantly focusing on diversity at the word level. However, this direction is still relatively new and lacks standardized benchmarks and metrics. Related work ::: Synthetic datasets Recently, many synthetic datasets have been proposed as diagnostic tools for deep learning models, such as CLEVR BIBREF21 for visual question answering (VQA), the bAbI tasks BIBREF22 for text understanding and reasoning, and ShapeWorld BIBREF11 for visually grounded language understanding. The primary motivation is to reduce complexity which is considered irrelevant to the evaluation focus, to enable better control over the data, and to provide more detailed insights into strengths and limitations of existing models. In this work, we develop the evaluation datasets within the ShapeWorld framework. ShapeWorld is a controlled data generation framework consisting of abstract colored shapes (see Figure FIGREF1 for an example). We use ShapeWorld to generate training and evaluation data for two major reasons. 
ShapeWorld supports customized data generation according to user specification, which enables a variety of model inspections in terms of language construction, visual complexity and reasoning ability. Another benefit is that each training and test instance generated in ShapeWorld is returned as a triplet of $<$image, caption, world model$>$. The world model stores information about the underlying microworld used to generate an image and a descriptive caption, internally represented as a list of entities with their attributes, such as shape, color, position. During data generation, ShapeWorld randomly samples a world model from a set of available entities and attributes. The generated world model is then used to realize a corresponding instance consisting of image and caption. The world model gives the actual semantic information contained in an image, which allows evaluation of caption truthfulness. GTD Evaluation Framework In the following we introduce GTD in more detail, consider it as an evaluation protocol covering necessary aspects of the multifaceted captioning task, rather than a specific metric. GTD Evaluation Framework ::: Grammaticality An essential criterion for an image captioning model is that the captions generated are grammatically well-formed. Fully accurate assessment of grammaticality in a general context is itself a difficult task, but becomes more feasible in a very constrained context like our diagnostic language data. We take parseability with the English Resource Grammar BIBREF23 as a surrogate for grammaticality, meaning that a sentence is considered grammatically well-formed if we obtain a parse using the ERG. The ERG is a broad-coverage grammar based on the head-driven phrase structure grammar (HPSG) framework. It is linguistically precise: sentences only parse if they are valid according to its hand-built rules. It is designed to be general-purpose: verified coverage is around 80% for Wikipedia, and over 90% for corpora with shorter sentences and more limited vocabulary (for details see BIBREF24 flickinger2011accuracy). Since the ShapeWorld training data – the only language source for models to learn from – is generated using the same grammar, the ERG has $\sim $100% coverage of grammaticality in the model output space. GTD Evaluation Framework ::: Truthfulness The second aspect we investigate is truthfulness, that is, whether a candidate caption is compatible with the content of the image it is supposed to describe. We evaluate caption truthfulness on the basis of a linguistically-motivated approach using formal semantics. We convert the output of the ERG parse for a grammatical caption to a Dependency Minimal Recursion Semantics (DMRS) graph using the pydmrs tool BIBREF25. Each converted DMRS is a logical semantic graph representation corresponding to the caption. We construct a logical proposition from the DMRS graph, and evaluate it against the actual world model of the corresponding image. A caption can be said to agree with an image only if the proposition evaluates as true on the basis of the world model. By examining the logical agreement between a caption representation and a world model, we can check whether the semantics of this caption agrees with the visual content which the world model represents. Thus we do not rely on a set of captions as a surrogate for the content of an image, but instead leverage the fact that we have the ground truth, thus enabling the evaluation of true image-caption agreement. 
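The actual pipeline relies on ERG parsing and DMRS conversion; as a deliberately simplified illustration of the final step, the sketch below evaluates an existential proposition against a toy world model represented as a list of attribute dictionaries (the representation and helper names are assumptions, not the ShapeWorldICE implementation):

```python
# Toy world model: a list of entities with their attributes (shape, color, ...).
world_model = [
    {"shape": "square", "color": "red"},
    {"shape": "circle", "color": "blue"},
]

def entity_matches(entity, constraints):
    """True if the entity satisfies every attribute constraint."""
    return all(entity.get(attr) == value for attr, value in constraints.items())

def evaluate_existential(world_model, constraints):
    """Truthfulness of a caption like 'There is a red square': the proposition
    holds if at least one entity satisfies all constraints."""
    return any(entity_matches(e, constraints) for e in world_model)

# 'A square is red' -> True;  'A cross is green' -> False
assert evaluate_existential(world_model, {"shape": "square", "color": "red"})
assert not evaluate_existential(world_model, {"shape": "cross", "color": "green"})
```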
GTD Evaluation Framework ::: Diversity While grammaticality and truthfulness are essential requirements for image captions, these criteria alone can easily be “gamed” by specializing on a small set of generic statements which are true most of the time. In the context of abstract shapes, such captions include examples like “There is a shape” or “At least zero shapes are blue” (which is technically true even if there is no blue shape). This motivates the third fundamental requirement of captioning output to be diverse. As ShapeWorldICE exploits a limited size of open-class words, we emphasize the diversity in ShapeWorldICE at the sentence level rather than the word level. Since the ground-truth reference captions in ShapeWorld are randomly sampled, we take the sampled captions accompanying the test images as a proxy for optimal caption diversity, and compare it with the empirical output diversity of the evaluated model on these test images. Practically, we look at language constructions used and compute the corresponding diversity score as the ratio of observed number versus optimal number: Language constructions here correspond to reduced caption representations which only record whether an object is described by shape (e.g., “square”), color (e.g., “red shape”) or color-shape combination (e.g., “red square”). So the statement “A square is red” and “A circle is blue” are considered the same, while “A shape is red” is different. Experimental Setup ::: Datasets We develop a variety of ShapeWorldICE datasets, with a similar idea to the “skill tasks” in the bAbI framework BIBREF22. Table TABREF4 gives an overview for different ShapeWorldICE datasets we use in this paper. We consider three different types of captioning tasks, each of which focuses on a distinct aspect of reasoning abilities. Existential descriptions examine whether a certain object is present in an image. Spatial descriptions identify spatial relationships among visual objects. Quantification descriptions involve count-based and ratio-based statements, with an explicit focus on inspecting models for their counting ability. We develop two variants for each type of dataset to enable different levels of visual complexity or specific aspects of the same reasoning type. All the training and test captions sampled in this work are in English. Each dataset variant consists of around 200k training instances and 4,096 validation instances, plus 4,096 test instances. Each training instance consists of an image and a reference caption. At test time, only the test images are available to the evaluated models. Underlying world models are kept from the models and are used for later GTD evaluation. For each test instance, we sample ten reference captions of the same distribution as the training captions to enable the comparison of our proposed metrics to BLEU and SPICE. We fine-tune our model hyperparameters based on the performance on the validation set. All reported results are measured on the test split with the parameters yielding the best validation performance. Experimental Setup ::: Models We experiment with two image captioning models: the Show&Tell model BIBREF0 and the LRCN1u model BIBREF1. Both models follow the basic encoder-decoder architecture design that uses a CNN encoder to condense the visual information into an image embedding, which in turn conditions an LSTM decoder to generate a natural language caption. The main difference between the two models is the way they condition the decoder. 
The Show&Tell model feeds the image embedding as the “predecessor word embedding” to the first produced word, while the LRCN1u model concatenates the image features with the embedded previous word as the input to the sequence model at each time step. We follow the common practice in image captioning to use a CNN component pretrained on object detection and fine-tune its parameters on the image captioning task. The encoder and decoder components are jointly optimized with respect to the standard cross-entropy sequence loss on the respective ShapeWorldICE dataset. For all our experiments, we train models end-to-end for a fixed number of 100k iterations with a batch size of 64. We use Adam optimization BIBREF26 with a learning rate of 0.001. Word embeddings are randomly initialized and jointly trained during the training. Results We train and evaluate the Show&Tell and LRCN1u models on the ShapeWorldICE datasets. Here we discuss in detail the diagnostic results of these experiments. During training, we periodically record model output on the test images, to be able to analyze the development of our evaluation metrics throughout the process. We also compute BLEU-4 scores and SPICE scores of generated captions for comparison, using 10 reference captions per test image. LRCN1u exhibits clearly superior performance in terms of truthfulness. We start off by comparing performance of the Show&Tell model and the LRCN1u model, see Figure FIGREF8. While both models learn to produce grammatical sentences early on, it can be seen that LRCN1u is clearly superior in terms of truthfulness, achieving 100% halfway through training, whereas Show&Tell only slowly reaches around 90% by the end of 100k iterations. This indicates that incorporating visual features at every generation step is beneficial for producing true captions. The diversity ratios of captions generated by two models both increase substantially as the training progresses, with LRCN1u exhibiting a slightly greater caption diversity at the end of training. We observed similar results on other ShapeWorldICE datasets that we experimented with, validating the superiority of LRCN1u over Show&Tell on ShapeWorldICE. Consequently, we decided to focus on the LRCN1u architecture in subsequent evaluations, where we report detailed results with respect to the GTD framework on a variety of datasets. Correlation between the BLEU/SPICE scores and the ground truth. From the learning curves shown in Figure FIGREF9, we find low or no correlation between the BLEU/SPICE scores and caption truthfulness. On Existential-OneShape, the BLEU curve follows the trend of the truthfulness curve in general, indicating that BLEU is able to capture caption truthfulness well in this simple scenario. However, while BLEU reports equivalent model performance on Existential-MultiShapes and Spatial-MultiShapes, the truthfulness metric demonstrates very different results. The BLEU score for generated Existential-MultiShapes captions increases rapidly at the beginning of training and then plateaus despite the continuous increase in truthfulness ratio. Captions generated on Spatial-MultiShapes attain a relatively high BLEU score from an early stage of training, but exhibit low agreement ($<$0.6 truthfulness ratio) with ground-truth visual scenes. In the case of Spatial-MultiShapes, spatial descriptors for two objects are chosen from a fixed set (“above”, “below”, “to the left of” and “to the right of”). 
It is very likely for a generated spatial descriptor to match one of the descriptors mentioned in reference captions. In this particular case, the model is apt to infer a caption which has high n-gram overlaps with reference captions, resulting in a relatively high BLEU score. Thus an increased BLEU score does not necessarily indicate improved performance. While the truthfulness and BLEU scores in Figure FIGREF9 both increase rapidly early on and then stay stable at a high rate after training for 20k iterations, the SPICE curve instead shows a downward trend in the later stage of training. We examined the output SPICE score for each test instance. SPICE reports a precision score of 1.0 for most test instances after 20k iterations, which is consistent with the truthfulness and BLEU scores. However, SPICE forms the reference scene graph as the union of the scene graphs extracted from individual reference captions, thus introducing redundancies. SPICE uses the F1 score of scene graph matching between the candidate and reference and hence is lowered by imperfect recall. Comparing SPICE curves for three datasets shown in Figure FIGREF9-FIGREF9, they suggest an increase in task complexity, but they do not reflect the successively closing gap of caption truthfulness scores between two Existential datasets, or the substantial difference in caption truthfulness between captions on Existential-MultiShapes and Spatial-MultiShapes. In the remainder of the paper we discuss in detail the diagnostic results of the LRCN1u model demonstrated by the GTD evaluation framework. Perfect grammaticality for all caption types. As shown in Figure FIGREF15, generated captions for all types of ShapeWorldICE datasets attain quasi-perfect grammaticality scores in fewer than 5,000 iterations, suggesting that the model quickly learns to generate grammatically well-formed sentences. Failure to learn complex spatial relationships. While CNNs can produce rich visual representations that can be used for a variety of vision tasks BIBREF27, it remains an open question whether these condensed visual representations are rich enough for multimodal tasks that require higher-level abilities of scene understanding and visual reasoning. From Figure FIGREF16, we can see that while the model performs rather well on Existential datasets, it exhibits a worse performance on Spatial data. The caption agreement ratio in the simple Spatial-TwoShapes scenario is relatively high, but drops significantly on Spatial-MultiShapes, demonstrating the deficiencies of the model in learning spatial relationships from complex visual scenes. The counting task is non-trivial. Counting has long been considered to be a challenging task in multimodal reasoning BIBREF28, BIBREF29. To explore how well the LRCN1u model copes with counting tasks, we generated two Quantification datasets. The Quant-Count captions describe the number of objects with certain attributes that appear in an image (e.g. “Exactly four shapes are crosses”), while the Quant-Ratio captions describe the ratio of certain objects (e.g. “A third of the shapes are blue squares”). From Figure FIGREF16, we notice that the LRCN1u model performs poorly on these datasets in terms of truthfulness, reflected in the 0.50 and 0.46 scores achieved by the model on the Quant-Count and Quant-Ratio tasks respectively. The learning curve for Quant-Ratio exhibits a more gradual rise as the training progresses, suggesting a greater complexity for the ratio-based task. 
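The diversity results discussed next are based on the construction-level diversity ratio introduced earlier; the following minimal sketch shows how such a ratio could be computed under a deliberately simplified, single-object notion of a language construction (the shape and color vocabularies are illustrative assumptions):

```python
SHAPES = {"square", "circle", "cross", "triangle", "ellipse"}
COLORS = {"red", "blue", "green", "yellow"}

def construction(caption):
    """Reduce a caption to a coarse construction signature: whether it uses a
    concrete shape word and/or a color word. 'A square is red' and
    'A circle is blue' map to the same signature; 'A shape is red' differs."""
    tokens = caption.lower().replace(".", "").split()
    has_shape = any(t in SHAPES for t in tokens)
    has_color = any(t in COLORS for t in tokens)
    return (has_shape, has_color)

def diversity_ratio(generated_captions, reference_captions):
    """Ratio of distinct constructions observed in the model output to the
    distinct constructions in the sampled reference captions."""
    observed = {construction(c) for c in generated_captions}
    optimal = {construction(c) for c in reference_captions}
    return len(observed) / max(len(optimal), 1)
```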
Caption diversity benefits from varied language constructions in the training data. The diversity ratios of generated captions for different ShapeWorldICE datasets are illustrated in Figure FIGREF17. We can see that the diversity of inferred captions is largely sensitive to the caption variability in the dataset itself. For simple datasets (such as Existential-OneShape) where language constructions in the training set are less diverse, the output captions tend to have uniform sentence structures. The high diversity ratios of generated Spatial and Quantification captions suggest that caption diversity benefits from heterogeneous language constructions in complex datasets. Discussions and Conclusions Evaluation metrics are required as a proxy for performance in real applications. As such, they should, as far as possible, allow measurement of fundamental aspects of the performance of models on tasks. In this work, we propose the GTD evaluation framework as a supplement to standard image captioning evaluation which explicitly focuses on grammaticality, truthfulness and diversity. We developed the ShapeWorldICE evaluation suite to allow in-depth and fine-grained inspection of model behaviors. We have empirically verified that GTD captures different aspects of performance to existing metrics by evaluating image captioning models on the ShapeWorldICE suite. We hope that this framework will shed light on important aspects of model behaviour and that this will help guide future research efforts. While performing the evaluation experiments on the LRCN1u model, we noticed that caption agreement does not always improve as the training loss decreases. Ideally, the training objective should be in accordance with how a model is eventually evaluated. In future work, we plan to investigate the feasibility of deliberately encoding the GTD signal in the training process, for instance, by implementing a GTD-aware loss. We also plan to extend the existing ShapeWorldICE benchmark to include more linguistic constructions (such as relative clauses, compound sentences and coreference). By doing so, we hope to reveal how well existing image captioning models cope with complex generation tasks. Acknowledgments We thank the anonymous reviewers for their constructive feedback. HX is grateful for being supported by the CSC Cambridge Scholarship. TS is supported in part by the EPSRC Centre for Doctoral Training in Data Science, funded by the EPSRC (grant EP/L016427/1) and the University of Edinburgh. AK is grateful for being supported by a Qualcomm Research Studentship and an EPSRC Doctoral Training Studentship.
Are the images from a specific domain?
Yes
3,472
qasper
4k
Introduction This work is licenced under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/ In the spirit of the brevity of social media's messages and reactions, people have got used to expressing feelings minimally and symbolically, as with hashtags on Twitter and Instagram. On Facebook, people tend to be more wordy, but posts normally receive simple “likes” more often than longer comments. Since February 2016, Facebook users can express specific emotions in response to a post thanks to the newly introduced reaction feature (see Section SECREF2 ), so that now a post can be wordlessly marked with an expression of, say, “joy” or “surprise” rather than a generic “like”. It has been observed that this new feature helps Facebook to know much more about its users and to exploit this information for targeted advertising BIBREF0 , but interest in people's opinions and how they feel is not limited to commercial reasons, as it extends to social monitoring, too, including health care and education BIBREF1 . However, emotions and opinions are not always expressed this explicitly, so there is great interest in developing systems for their automatic detection. Creating manually annotated datasets large enough to train supervised models is not only costly, but also—especially in the case of opinions and emotions—difficult, due to the intrinsic subjectivity of the task BIBREF2 , BIBREF3 . Therefore, research has focused on unsupervised methods enriched with information derived from lexica, which are manually created BIBREF3 , BIBREF4 . Since go2009twitter have shown that happy and sad emoticons can be successfully used as signals for sentiment labels, distant supervision, i.e. using some reasonably safe signals as proxies for automatically labelling training data BIBREF5 , has also been used for emotion recognition, for example exploiting both emoticons and Twitter hashtags BIBREF6 , but mainly towards creating emotion lexica. mohammad2015using use hashtags, experimenting also with highly fine-grained emotion sets (up to almost 600 emotion labels), to create the large Hashtag Emotion Lexicon. Emoticons are used as proxies also by hallsmarmulti, who use distributed vector representations to find which words are interchangeable with emoticons and also which emoticons are used in similar contexts. We take advantage of distant supervision by using Facebook reactions as proxies for emotion labels, which to the best of our knowledge has not been done yet, and we train a set of Support Vector Machine models for emotion recognition. Our models, differently from existing ones, exploit information which is acquired entirely automatically, and achieve competitive or even state-of-the-art results for some of the emotion labels on existing, standard evaluation datasets. For explanatory purposes, related work is discussed further and in more detail when we describe the benchmarks for evaluation (Section SECREF3 ) and when we compare our models to existing ones (Section SECREF5 ). We also explore and discuss how choosing different sets of Facebook pages as training data provides an intrinsic domain-adaptation method. Facebook reactions as labels For years, Facebook users could leave comments on posts, and also “like” them, using a thumbs-up feature to explicitly express a generic, rather underspecified, approval.
A “like” could thus mean “I like what you said”, but also “I like that you bring up such a topic (though I find the content of the article you linked annoying)”. In February 2016, after a short trial, Facebook made a more explicit reaction feature available world-wide. Rather than allowing for the underspecified “like” as the only wordless response to a post, a set of six more specific reactions was introduced, as shown in Figure FIGREF1 : Like, Love, Haha, Wow, Sad and Angry. We use such reactions as proxies for emotion labels associated with posts. We collected Facebook posts and their corresponding reactions from public pages using the Facebook API, which we accessed via the Facebook-sdk python library. We chose different pages (and therefore domains and stances), aiming at a balanced and varied dataset, but we did so mainly based on intuition (see Section SECREF4 ) and with an eye to the nature of the datasets available for evaluation (see Section SECREF5 ). The choice of which pages to select posts from is far from trivial, and we believe this is actually an interesting aspect of our approach, as by using different Facebook pages one can intrinsically tackle the domain-adaptation problem (see Section SECREF6 for further discussion on this). The final collection of Facebook pages for the experiments described in this paper is as follows: FoxNews, CNN, ESPN, New York Times, Time magazine, Huffington Post Weird News, The Guardian, Cartoon Network, Cooking Light, Home Cooking Adventure, Justin Bieber, Nickelodeon, Spongebob, Disney. Note that the thankful reaction was only available during specific time spans related to certain events, such as Mother's Day in May 2016. For each page, we downloaded the latest 1000 posts, or the maximum available if there were fewer, from February 2016, retrieving the counts of reactions for each post. The output is a JSON file containing a list of dictionaries with a timestamp, the post and a reaction vector with frequency values, which indicate how many users used that reaction in response to the post (Figure FIGREF3 ). The resulting emotion vectors must then be turned into an emotion label. In the context of this experiment, we made the simple decision of associating with each post the emotion with the highest count, ignoring like as it is the default and most generic reaction people tend to use. Therefore, for example, to the first post in Figure FIGREF3 we would associate the label sad, as it has the highest score (284) among the meaningful emotions we consider, though it also has non-zero scores for other emotions. At this stage, we did not perform any other entropy-based selection of posts; this is to be investigated in future work. Emotion datasets Three datasets annotated with emotions are commonly used for the development and evaluation of emotion detection systems, namely the Affective Text dataset, the Fairy Tales dataset, and the ISEAR dataset. In order to compare our performance to state-of-the-art results, we have used them as well. In this Section, in addition to a description of each dataset, we provide an overview of the emotions used, their distribution, and how we mapped them to those we obtained from Facebook posts in Section SECREF7 . A summary is provided in Table TABREF8 , which also shows, in the bottom row, what role each dataset has in our experiments: apart from the development portion of the Affective Text, which we used to develop our models (Section SECREF4 ), all three have been used as benchmarks for our evaluation.
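As a brief illustration of the label-assignment step described above, the following is a minimal sketch (the dictionary keys are assumed to mirror the reaction names in the collected JSON; it is not the authors' actual script):

```python
EMOTION_REACTIONS = ["love", "haha", "wow", "sad", "angry"]  # 'like' is ignored

def label_post(reaction_counts):
    """Map a post's reaction counts to a single emotion label by taking the
    most frequent meaningful reaction; returns None if none of them occurs."""
    counts = {r: reaction_counts.get(r, 0) for r in EMOTION_REACTIONS}
    best, best_count = max(counts.items(), key=lambda kv: kv[1])
    return best if best_count > 0 else None

# e.g. a post with many generic likes but 284 'sad' reactions -> 'sad'
print(label_post({"like": 5000, "sad": 284, "angry": 61, "love": 12}))
```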
Affective Text dataset Task 14 at SemEval 2007 BIBREF7 was concerned with the classification of emotions and valence in news headlines. The headlines were collected from several news websites including Google News, The New York Times, BBC News and CNN. The emotion labels used were Anger, Disgust, Fear, Joy, Sadness, Surprise, in line with the six basic emotions of Ekman's standard model BIBREF8 . Valence was to be determined as positive or negative. Classification of emotion and classification of valence were treated as separate tasks. Emotion labels were not considered mutually exclusive, and each emotion was assigned a score from 0 to 100. Training/development data amounted to 250 annotated headlines (Affective development), while systems were evaluated on another 1000 (Affective test). Evaluation was done using two different methods: a fine-grained evaluation using Pearson's r to measure the correlation between the system scores and the gold standard; and a coarse-grained method where each emotion score was converted to a binary label, and precision, recall, and f-score were computed to assess performance. As is done in most works that use this dataset BIBREF3 , BIBREF4 , BIBREF9 , we also treat this as a classification problem (coarse-grained). This dataset has been extensively used for the evaluation of various unsupervised methods BIBREF2 , but also for testing different supervised learning techniques and feature portability BIBREF10 . Fairy Tales dataset This is a dataset collected by alm2008affect, where about 1,000 sentences from fairy tales (by B. Potter, H.C. Andersen and Grimm) were annotated with the same six emotions of the Affective Text dataset, though with different names: Angry, Disgusted, Fearful, Happy, Sad, and Surprised. In most works that use this dataset BIBREF3 , BIBREF4 , BIBREF9 , only sentences where all annotators agreed are used, and the labels angry and disgusted are merged. We adopt the same choices. ISEAR The ISEAR (International Survey on Emotion Antecedents and Reactions BIBREF11 , BIBREF12 ) is a dataset created in the context of a psychology project of the 1990s, by collecting questionnaires answered by people with different cultural backgrounds. The main aim of this project was to gather insights into cross-cultural aspects of emotional reactions. Student respondents, both psychologists and non-psychologists, were asked to report situations in which they had experienced all of seven major emotions (joy, fear, anger, sadness, disgust, shame and guilt). In each case, the questions covered the way they had appraised a given situation and how they reacted. The final dataset contains reports by approximately 3000 respondents from all over the world, for a total of 7665 sentences labelled with an emotion, making this the largest of the three datasets we use. Overview of datasets and emotions We summarise the datasets and emotion distributions from two viewpoints. First, because there are different sets of emotion labels in the datasets and in the Facebook data, we need to provide a mapping and derive a subset of emotions that we are going to use for the experiments. This is shown in Table TABREF8 , where in the “Mapped” column we report the final emotions we use in this paper: anger, joy, sadness, surprise. All labels in each dataset are mapped to these final emotions, which are therefore the labels we use for training and testing our models. Second, the distribution of the emotions in each dataset is different, as can be seen in Figure FIGREF9 .
In Figure FIGREF9 we also provide the distribution of the emotions anger, joy, sadness, surprise per Facebook page, in terms of number of posts (recall that we assign to a post the label corresponding to the majority emotion associated to it, see Section SECREF2 ). We can observe that for example pages about news tend to have more sadness and anger posts, while pages about cooking and tv-shows have a high percentage of joy posts. We will use this information to find the best set of pages for a given target domain (see Section SECREF5 ). Model There are two main decisions to be taken in developing our model: (i) which Facebook pages to select as training data, and (ii) which features to use to train the model, which we discuss below. Specifically, we first set on a subset of pages and then experiment with features. Further exploration of the interaction between choice of pages and choice of features is left to future work, and partly discussed in Section SECREF6 . For development, we use a small portion of the Affective data set described in Section SECREF4 , that is the portion that had been released as development set for SemEval's 2007 Task 14 BIBREF7 , which contains 250 annotated sentences (Affective development, Section SECREF4 ). All results reported in this section are on this dataset. The test set of Task 14 as well as the other two datasets described in Section SECREF3 will be used to evaluate the final models (Section SECREF4 ). Selecting Facebook pages Although page selection is a crucial ingredient of this approach, which we believe calls for further and deeper, dedicated investigation, for the experiments described here we took a rather simple approach. First, we selected the pages that would provide training data based on intuition and availability, then chose different combinations according to results of a basic model run on development data, and eventually tested feature combinations, still on the development set. For the sake of simplicity and transparency, we first trained an SVM with a simple bag-of-words model and default parameters as per the Scikit-learn implementation BIBREF13 on different combinations of pages. Based on results of the attempted combinations as well as on the distribution of emotions in the development dataset (Figure FIGREF9 ), we selected a best model (B-M), namely the combined set of Time, The Guardian and Disney, which yields the highest results on development data. Time and The Guardian perform well on most emotions but Disney helps to boost the performance for the Joy class. Features In selecting appropriate features, we mainly relied on previous work and intuition. We experimented with different combinations, and all tests were still done on Affective development, using the pages for the best model (B-M) described above as training data. Results are in Table TABREF20 . Future work will further explore the simultaneous selection of features and page combinations. We use a set of basic text-based features to capture the emotion class. These include a tf-idf bag-of-words feature, word (2-3) and character (2-5) ngrams, and features related to the presence of negation words, and to the usage of punctuation. This feature is used in all unsupervised models as a source of information, and we mainly include it to assess its contribution, but eventually do not use it in our final model. 
We used the NRC10 Lexicon, which is built around the emotions anger, anticipation, disgust, fear, joy, sadness, and surprise, and the valence values positive and negative, because it performed best in the experiments by BIBREF10 . For each word in the lexicon, a boolean value indicating presence or absence is associated with each emotion. For a whole sentence, a global score per emotion can be obtained by summing the vectors for all content words of that sentence included in the lexicon, and used as a feature. As additional features, we also included Word Embeddings, namely distributed representations of words in a vector space, which have been exceptionally successful in boosting performance in a plethora of NLP tasks. We use three different embeddings: Google embeddings: pre-trained embeddings trained on Google News and obtained with the skip-gram architecture described in BIBREF14 . This model contains 300-dimensional vectors for 3 million words and phrases. Facebook embeddings: embeddings that we trained on our scraped Facebook pages, for a total of 20,000 sentences. Using the gensim library BIBREF15 , we trained the embeddings with the following parameters: window size of 5, learning rate of 0.01 and dimensionality of 100. We filtered out words with fewer than 2 occurrences. Retrofitted embeddings: Retrofitting BIBREF16 has been shown to be a simple but effective way of informing trained embeddings with additional information derived from a lexical resource, rather than including it directly at the training stage, as is done for example to create sense-aware BIBREF17 or sentiment-aware BIBREF18 embeddings. In this work, we retrofit general embeddings to include information about emotions, so that emotion-similar words can get closer in space. Both the Google and our Facebook embeddings were retrofitted with lexical information obtained from the NRC10 Lexicon mentioned above, which provides emotion similarity for each token. Note that, differently from the previous two types of embeddings, the retrofitted ones do rely on handcrafted information in the form of a lexical resource. Results on development set We report precision, recall, and f-score on the development set. The average f-score is reported as a micro-average, to better account for the skewed distribution of the classes, as well as in accordance with what is usually reported for this task BIBREF19 . From Table TABREF20 we draw three main observations. First, a simple tf-idf bag-of-words model already works very well, to the point that the other textual and lexicon-based features do not seem to contribute to the overall f-score (0.368), although there is a rather substantial variation of scores per class. Second, Google embeddings perform a lot better than Facebook embeddings, and this is likely due to the size of the corpus used for training. Retrofitting does not seem to help at all for the Google embeddings, but it does boost the Facebook embeddings, suggesting that with little data more accurate task-related information helps, but corpus size matters most. Third, in combination with embeddings, all features work better than just using tf-idf, but removing the Lexicon feature, which is the only one based on hand-crafted resources, yields even better results. Thus, our best model (B-M) on development data relies entirely on automatically obtained information, both in terms of training data and features.
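To make the feature setup above concrete, the following is a minimal scikit-learn sketch of how such a feature combination could be assembled; the specific vectorizer settings, the mean-embedding transformer and the use of LinearSVC are illustrative assumptions rather than the authors' exact configuration:

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.svm import LinearSVC

class MeanEmbedding(BaseEstimator, TransformerMixin):
    """Average pre-trained word vectors over the tokens of each text.
    `vectors` is any mapping from word to numpy array (e.g. Google vectors)."""
    def __init__(self, vectors, dim):
        self.vectors, self.dim = vectors, dim
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        rows = []
        for text in X:
            vecs = [self.vectors[w] for w in text.lower().split() if w in self.vectors]
            rows.append(np.mean(vecs, axis=0) if vecs else np.zeros(self.dim))
        return np.vstack(rows)

def build_model(word_vectors, dim=300):
    features = FeatureUnion([
        ("tfidf", TfidfVectorizer()),                                   # bag of words
        ("word_ngrams", CountVectorizer(ngram_range=(2, 3))),           # word 2-3 grams
        ("char_ngrams", CountVectorizer(analyzer="char", ngram_range=(2, 5))),
        ("embeddings", MeanEmbedding(word_vectors, dim)),               # averaged vectors
    ])
    return Pipeline([("features", features), ("svm", LinearSVC())])
```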
Results In Table TABREF26 we report the results of our model on the three datasets standardly used for the evaluation of emotion classification, which we have described in Section SECREF3 . Our B-M model relies on subsets of Facebook pages for training, which were chosen according to their performance on the development set as well as on the observation of emotions distribution on different pages and in the different datasets, as described in Section SECREF4 . The feature set we use is our best on the development set, namely all the features plus Google-based embeddings, but excluding the lexicon. This makes our approach completely independent of any manual annotation or handcrafted resource. Our model's performance is compared to the following systems, for which results are reported in the referred literature. Please note that no other existing model was re-implemented, and results are those reported in the respective papers. Discussion, conclusions and future work We have explored the potential of using Facebook reactions in a distant supervised setting to perform emotion classification. The evaluation on standard benchmarks shows that models trained as such, especially when enhanced with continuous vector representations, can achieve competitive results without relying on any handcrafted resource. An interesting aspect of our approach is the view to domain adaptation via the selection of Facebook pages to be used as training data. We believe that this approach has a lot of potential, and we see the following directions for improvement. Feature-wise, we want to train emotion-aware embeddings, in the vein of work by tang:14, and iacobacci2015sensembed. Retrofitting FB-embeddings trained on a larger corpus might also be successful, but would rely on an external lexicon. The largest room for yielding not only better results but also interesting insights on extensions of this approach lies in the choice of training instances, both in terms of Facebook pages to get posts from, as well as in which posts to select from the given pages. For the latter, one could for example only select posts that have a certain length, ignore posts that are only quotes or captions to images, or expand posts by including content from linked html pages, which might provide larger and better contexts BIBREF23 . Additionally, and most importantly, one could use an entropy-based measure to select only posts that have a strong emotion rather than just considering the majority emotion as training label. For the former, namely the choice of Facebook pages, which we believe deserves the most investigation, one could explore several avenues, especially in relation to stance-based issues BIBREF24 . In our dataset, for example, a post about Chile beating Colombia in a football match during the Copa America had very contradictory reactions, depending on which side readers would cheer for. Similarly, the very same political event, for example, would get very different reactions from readers if it was posted on Fox News or The Late Night Show, as the target audience is likely to feel very differently about the same issue. This also brings up theoretical issues related more generally to the definition of the emotion detection task, as it's strongly dependent on personal traits of the audience. Also, in this work, pages initially selected on availability and intuition were further grouped into sets to make training data according to performance on development data, and label distribution. 
Another criterion to be exploited would be vocabulary overlap between the pages and the datasets. Lastly, we could develop single models for each emotion, treating the problem as a multi-label task. This would even better reflect the ambiguity and subjectivity intrinsic to assigning emotions to text, where content could be at same time joyful or sad, depending on the reader. Acknowledgements In addition to the anonymous reviewers, we want to thank Lucia Passaro and Barbara Plank for insightful discussions, and for providing comments on draft versions of this paper.
What was their performance on emotion detection?
Answer with content missing: (Table 3) Best author's model B-M average micro f-score is 0.409, 0.459, 0.411 on Affective, Fairy Tales and ISEAR datasets respectively.
3,410
qasper
4k
Introduction We possess a wealth of prior knowledge about many natural language processing tasks. For example, in text categorization, we know that words such as NBA, player, and basketball are strong indicators of the sports category BIBREF0 , and words like terrible, boring, and messing indicate a negative polarity while words like perfect, exciting, and moving suggest a positive polarity in sentiment classification. A key problem arising here is how to leverage such knowledge to guide the learning process, an interesting problem for both the NLP and machine learning communities. Previous studies addressing the problem fall into several lines. First, to leverage prior knowledge to label data BIBREF1 , BIBREF2 . Second, to encode prior knowledge with a prior on parameters, as is commonly seen in many Bayesian approaches BIBREF3 , BIBREF4 . Third, to formalise prior knowledge with additional variables and dependencies BIBREF5 . Last, to use prior knowledge to control the distributions over latent output variables BIBREF6 , BIBREF7 , BIBREF8 , which makes the output variables easily interpretable. However, a crucial problem, which has rarely been addressed, is the bias in the prior knowledge that we supply to the learning model. Would the model be robust or sensitive to the prior knowledge? Or, which kind of knowledge is appropriate for the task? Consider an example: we may be baseball fans but unfamiliar with hockey, so for a baseball-hockey classification task we can provide a number of feature words for baseball, but far fewer for hockey. Such prior knowledge may mislead the model with a heavy bias towards baseball. If the model cannot handle this situation appropriately, the performance may be undesirable. In this paper, we investigate this problem in the framework of Generalized Expectation Criteria BIBREF7 . The study aims to reveal the factors that reduce the sensitivity to the prior knowledge and therefore make the model more robust and practical. To this end, we introduce auxiliary regularization terms in which our prior knowledge is formalized as distributions over output variables. Recall the example just mentioned: though we do not have enough knowledge to provide features for the class hockey, it is easy for us to provide some neutral words, namely words that are not strong indicators of any class, like player here. As one of the factors revealed in this paper, supplying neutral feature words can boost the performance remarkably, making the model more robust. More attractively, we do not need manual annotation to label these neutral feature words in our proposed approach. More specifically, we explore three regularization terms to address the problem: (1) a regularization term associated with neutral features; (2) the maximum entropy of class distribution regularization term; and (3) the KL divergence between reference and predicted class distribution. For the first term, we simply use the most common features as neutral features and assume the neutral features are distributed uniformly over class labels. For the second and third terms, we assume we have some knowledge about the class distribution, which will be detailed later. To summarize, the main contributions of this work are as follows: The rest of the paper is structured as follows: In Section 2, we briefly describe the generalized expectation criteria and present the proposed regularization terms. In Section 3, we conduct extensive experiments to justify the proposed methods.
We survey related work in Section 4, and summarize our work in Section 5. Method We address the robustness problem on top of GE-FL BIBREF0 , a GE method which leverages labeled features as prior knowledge. A labeled feature is a strong indicator of a specific class and is manually provided to the classifier. For example, words like amazing, exciting can be labeled features for class positive in sentiment classification. Generalized Expectation Criteria Generalized expectation (GE) criteria BIBREF7 provides us a natural way to directly constrain the model in the preferred direction. For example, when we know the proportion of each class of the dataset in a classification task, we can guide the model to predict out a pre-specified class distribution. Formally, in a parameter estimation objective function, a GE term expresses preferences on the value of some constraint functions about the model's expectation. Given a constraint function $G({\rm x}, y)$ , a conditional model distribution $p_\theta (y|\rm x)$ , an empirical distribution $\tilde{p}({\rm x})$ over input samples and a score function $S$ , a GE term can be expressed as follows: $$S(E_{\tilde{p}({\rm x})}[E_{p_\theta (y|{\rm x})}[G({\rm x}, y)]])$$ (Eq. 4) Learning from Labeled Features Druck et al. ge-fl proposed GE-FL to learn from labeled features using generalized expectation criteria. When given a set of labeled features $K$ , the reference distribution over classes of these features is denoted by $\hat{p}(y| x_k), k \in K$ . GE-FL introduces the divergence between this reference distribution and the model predicted distribution $p_\theta (y | x_k)$ , as a term of the objective function: $$\mathcal {O} = \sum _{k \in K} KL(\hat{p}(y|x_k) || p_\theta (y | x_k)) + \sum _{y,i} \frac{\theta _{yi}^2}{2 \sigma ^2}$$ (Eq. 6) where $\theta _{yi}$ is the model parameter which indicates the importance of word $i$ to class $y$ . The predicted distribution $p_\theta (y | x_k)$ can be expressed as follows: $ p_\theta (y | x_k) = \frac{1}{C_k} \sum _{\rm x} p_\theta (y|{\rm x})I(x_k) $ in which $I(x_k)$ is 1 if feature $k$ occurs in instance ${\rm x}$ and 0 otherwise, $C_k = \sum _{\rm x} I(x_k)$ is the number of instances with a non-zero value of feature $k$ , and $p_\theta (y|{\rm x})$ takes a softmax form as follows: $ p_\theta (y|{\rm x}) = \frac{1}{Z(\rm x)}\exp (\sum _i \theta _{yi}x_i). $ To solve the optimization problem, L-BFGS can be used for parameter estimation. In the framework of GE, this term can be obtained by setting the constraint function $G({\rm x}, y) = \frac{1}{C_k} \vec{I} (y)I(x_k)$ , where $\vec{I}(y)$ is an indicator vector with 1 at the index corresponding to label $y$ and 0 elsewhere. Regularization Terms GE-FL reduces the heavy load of instance annotation and performs well when we provide prior knowledge with no bias. In our experiments, we observe that comparable numbers of labeled features for each class have to be supplied. But as mentioned before, it is often the case that we are not able to provide enough knowledge for some of the classes. For the baseball-hockey classification task, as shown before, GE-FL will predict most of the instances as baseball. In this section, we will show three terms to make the model more robust. Neutral features are features that are not informative indicator of any classes, for instance, word player to the baseball-hockey classification task. Such features are usually frequent words across all categories. 
When we set the preference distribution of the neutral features to be uniform distributed, these neutral features will prevent the model from biasing to the class that has a dominate number of labeled features. Formally, given a set of neutral features $K^{^{\prime }}$ , the uniform distribution is $\hat{p}_u(y|x_k) = \frac{1}{|C|}, k \in K^{^{\prime }}$ , where $|C|$ is the number of classes. The objective function with the new term becomes $$\mathcal {O}_{NE} = \mathcal {O} + \sum _{k \in K^{^{\prime }}} KL(\hat{p}_u(y|x_k) || p_\theta (y | x_k)).$$ (Eq. 9) Note that we do not need manual annotation to provide neutral features. One simple way is to take the most common features as neutral features. Experimental results show that this strategy works successfully. Another way to prevent the model from drifting from the desired direction is to constrain the predicted class distribution on unlabeled data. When lacking knowledge about the class distribution of the data, one feasible way is to take maximum entropy principle, as below: $$\mathcal {O}_{ME} = \mathcal {O} + \lambda \sum _{y} p(y) \log p(y)$$ (Eq. 11) where $p(y)$ is the predicted class distribution, given by $ p(y) = \frac{1}{|X|} \sum _{\rm x} p_\theta (y | \rm x). $ To control the influence of this term on the overall objective function, we can tune $\lambda $ according to the difference in the number of labeled features of each class. In this paper, we simply set $\lambda $ to be proportional to the total number of labeled features, say $\lambda = \beta |K|$ . This maximum entropy term can be derived by setting the constraint function to $G({\rm x}, y) = \vec{I}(y)$ . Therefore, $E_{p_\theta (y|{\rm x})}[G({\rm x}, y)]$ is just the model distribution $p_\theta (y|{\rm x})$ and its expectation with the empirical distribution $\tilde{p}(\rm x)$ is simply the average over input samples, namely $p(y)$ . When $S$ takes the maximum entropy form, we can derive the objective function as above. Sometimes, we have already had much knowledge about the corpus, and can estimate the class distribution roughly without labeling instances. Therefore, we introduce the KL divergence between the predicted and reference class distributions into the objective function. Given the preference class distribution $\hat{p}(y)$ , we modify the objective function as follows: $$\mathcal {O}_{KL} &= \mathcal {O} + \lambda KL(\hat{p}(y) || p(y))$$ (Eq. 13) Similarly, we set $\lambda = \beta |K|$ . This divergence term can be derived by setting the constraint function to $G({\rm x}, y) = \vec{I}(y)$ and setting the score function to $S(\hat{p}, p) = \sum _i \hat{p}_i \log \frac{\hat{p}_i}{p_i}$ , where $p$ and $\hat{p}$ are distributions. Note that this regularization term involves the reference class distribution which will be discussed later. Experiments In this section, we first justify the approach when there exists unbalance in the number of labeled features or in class distribution. Then, to test the influence of $\lambda $ , we conduct some experiments with the method which incorporates the KL divergence of class distribution. Last, we evaluate our approaches in 9 commonly used text classification datasets. We set $\lambda = 5|K|$ by default in all experiments unless there is explicit declaration. The baseline we choose here is GE-FL BIBREF0 , a method based on generalization expectation criteria. Data Preparation We evaluate our methods on several commonly used datasets whose themes range from sentiment, web-page, science to medical and healthcare. 
We use bag-of-words feature and remove stopwords in the preprocess stage. Though we have labels of all documents, we do not use them during the learning process, instead, we use the label of features. The movie dataset, in which the task is to classify the movie reviews as positive or negtive, is used for testing the proposed approaches with unbalanced labeled features, unbalanced datasets or different $\lambda $ parameters. All unbalanced datasets are constructed based on the movie dataset by randomly removing documents of the positive class. For each experiment, we conduct 10-fold cross validation. As described in BIBREF0 , there are two ways to obtain labeled features. The first way is to use information gain. We first calculate the mutual information of all features according to the labels of the documents and select the top 20 as labeled features for each class as a feature pool. Note that using information gain requires the document label, but this is only to simulate how we human provide prior knowledge to the model. The second way is to use LDA BIBREF9 to select features. We use the same selection process as BIBREF0 , where they first train a LDA on the dataset, and then select the most probable features of each topic (sorted by $P(w_i|t_j)$ , the probability of word $w_i$ given topic $t_j$ ). Similar to BIBREF10 , BIBREF0 , we estimate the reference distribution of the labeled features using a heuristic strategy. If there are $|C|$ classes in total, and $n$ classes are associated with a feature $k$ , the probability that feature $k$ is related with any one of the $n$ classes is $\frac{0.9}{n}$ and with any other class is $\frac{0.1}{|C| - n}$ . Neutral features are the most frequent words after removing stop words, and their reference distributions are uniformly distributed. We use the top 10 frequent words as neutral features in all experiments. With Unbalanced Labeled Features In this section, we evaluate our approach when there is unbalanced knowledge on the categories to be classified. The labeled features are obtained through information gain. Two settings are chosen: (a) We randomly select $t \in [1, 20]$ features from the feature pool for one class, and only one feature for the other. The original balanced movie dataset is used (positive:negative=1:1). (b) Similar to (a), but the dataset is unbalanced, obtained by randomly removing 75% positive documents (positive:negative=1:4). As shown in Figure 1 , Maximum entropy principle shows improvement only on the balanced case. An obvious reason is that maximum entropy only favors uniform distribution. Incorporating Neutral features performs similarly to maximum entropy since we assume that neutral words are uniformly distributed. Its accuracy decreases slowly when the number of labeled features becomes larger ( $t>4$ ) (Figure 1 (a)), suggesting that the model gradually biases to the class with more labeled features, just like GE-FL. Incorporating the KL divergence of class distribution performs much better than GE-FL on both balanced and unbalanced datasets. This shows that it is effective to control the unbalance in labeled features and in the dataset. With Balanced Labeled Features We also compare with the baseline when the labeled features are balanced. Similar to the experiment above, the labeled features are obtained by information gain. 
Two settings are experimented with: (a) We randomly select $t \in [1, 20]$ features from the feature pool for each class, and conduct comparisons on the original balanced movie dataset (positive:negtive=1:1). (b) Similar to (a), but the class distribution is unbalanced, by randomly removing 75% positive documents (positive:negative=1:4). Results are shown in Figure 2 . When the dataset is balanced (Figure 2 (a)), there is little difference between GE-FL and our methods. The reason is that the proposed regularization terms provide no additional knowledge to the model and there is no bias in the labeled features. On the unbalanced dataset (Figure 2 (b)), incorporating KL divergence is much better than GE-FL since we provide additional knowledge(the true class distribution), but maximum entropy and neutral features are much worse because forcing the model to approach the uniform distribution misleads it. With Unbalanced Class Distributions Our methods are also evaluated on datasets with different unbalanced class distributions. We manually construct several movie datasets with class distributions of 1:2, 1:3, 1:4 by randomly removing 50%, 67%, 75% positive documents. The original balanced movie dataset is used as a control group. We test with both balanced and unbalanced labeled features. For the balanced case, we randomly select 10 features from the feature pool for each class, and for the unbalanced case, we select 10 features for one class, and 1 feature for the other. Results are shown in Figure 3 . Figure 3 (a) shows that when the dataset and the labeled features are both balanced, there is little difference between our methods and GE-FL(also see Figure 2 (a)). But when the class distribution becomes more unbalanced, the difference becomes more remarkable. Performance of neutral features and maximum entropy decrease significantly but incorporating KL divergence increases remarkably. This suggests if we have more accurate knowledge about class distribution, KL divergence can guide the model to the right direction. Figure 3 (b) shows that when the labeled features are unbalanced, our methods significantly outperforms GE-FL. Incorporating KL divergence is robust enough to control unbalance both in the dataset and in labeled features while the other three methods are not so competitive. The Influence of λ\lambda We present the influence of $\lambda $ on the method that incorporates KL divergence in this section. Since we simply set $\lambda = \beta |K|$ , we just tune $\beta $ here. Note that when $\beta = 0$ , the newly introduced regularization term is disappeared, and thus the model is actually GE-FL. Again, we test the method with different $\lambda $ in two settings: (a) We randomly select $t \in [1, 20]$ features from the feature pool for one class, and only one feature for the other class. The original balanced movie dataset is used (positive:negative=1:1). (b) Similar to (a), but the dataset is unbalanced, obtained by randomly removing 75% positive documents (positive:negative=1:4). Results are shown in Figure 4 . As expected, $\lambda $ reflects how strong the regularization is. The model tends to be closer to our preferences with the increasing of $\lambda $ on both cases. Using LDA Selected Features We compare our methods with GE-FL on all the 9 datasets in this section. Instead of using features obtained by information gain, we use LDA to select labeled features. Unlike information gain, LDA does not employ any instance labels to find labeled features. 
In this setting, we can build classification models without any instance annotation, but just with labeled features. Table 1 shows that our three methods significantly outperform GE-FL. Incorporating neutral features performs better than GE-FL on 7 of the 9 datasets, maximum entropy is better on 8 datasets, and KL divergence better on 7 datasets. LDA selects out the most predictive features as labeled features without considering the balance among classes. GE-FL does not exert any control on such an issue, so the performance is severely suffered. Our methods introduce auxiliary regularization terms to control such a bias problem and thus promote the model significantly. Related Work There have been much work that incorporate prior knowledge into learning, and two related lines are surveyed here. One is to use prior knowledge to label unlabeled instances and then apply a standard learning algorithm. The other is to constrain the model directly with prior knowledge. Liu et al.text manually labeled features which are highly predictive to unsupervised clustering assignments and use them to label unlabeled data. Chang et al.guiding proposed constraint driven learning. They first used constraints and the learned model to annotate unlabeled instances, and then updated the model with the newly labeled data. Daumé daume2008cross proposed a self training method in which several models are trained on the same dataset, and only unlabeled instances that satisfy the cross task knowledge constraints are used in the self training process. MaCallum et al.gec proposed generalized expectation(GE) criteria which formalised the knowledge as constraint terms about the expectation of the model into the objective function.Graça et al.pr proposed posterior regularization(PR) framework which projects the model's posterior onto a set of distributions that satisfy the auxiliary constraints. Druck et al.ge-fl explored constraints of labeled features in the framework of GE by forcing the model's predicted feature distribution to approach the reference distribution. Andrzejewski et al.andrzejewski2011framework proposed a framework in which general domain knowledge can be easily incorporated into LDA. Altendorf et al.altendorf2012learning explored monotonicity constraints to improve the accuracy while learning from sparse data. Chen et al.chen2013leveraging tried to learn comprehensible topic models by leveraging multi-domain knowledge. Mann and McCallum simple,generalized incorporated not only labeled features but also other knowledge like class distribution into the objective function of GE-FL. But they discussed only from the semi-supervised perspective and did not investigate into the robustness problem, unlike what we addressed in this paper. There are also some active learning methods trying to use prior knowledge. Raghavan et al.feedback proposed to use feedback on instances and features interlacedly, and demonstrated that feedback on features boosts the model much. Druck et al.active proposed an active learning method which solicits labels on features rather than on instances and then used GE-FL to train the model. Conclusion and Discussions This paper investigates into the problem of how to leverage prior knowledge robustly in learning models. We propose three regularization terms on top of generalized expectation criteria. As demonstrated by the experimental results, the performance can be considerably improved when taking into account these factors. 
Comparative results show that our proposed methods are more effective and work more robustly than the baselines. To the best of our knowledge, this is the first work to address the robustness problem of leveraging knowledge, and it may inspire other research. We now present more detailed discussions about the three regularization methods. Incorporating neutral features is the simplest form of regularization: it doesn't require any modification of GE-FL, only finding some common features. But as Figure 1 (a) shows, using only neutral features is not strong enough to handle extremely unbalanced labeled features. The maximum entropy regularization term shows a strong ability to control unbalance. This method doesn't need any extra knowledge, and is thus suitable when we know nothing about the corpus. But it assumes that the categories are uniformly distributed, which may not be the case in practice, and its performance degrades if the assumption is violated (see Figure 1 (b), Figure 2 (b), Figure 3 (a)). The KL divergence performs much better on unbalanced corpora than the other methods. The reason is that KL divergence utilizes the reference class distribution and doesn't make any uniformity assumption. This suggests that additional knowledge does benefit the model. However, the KL divergence term requires providing the true class distribution. Sometimes we may have exact knowledge about the true distribution, and sometimes we may not. Fortunately, the model is insensitive to the true distribution, and therefore a rough estimate is sufficient. In our experiments, when the true class distribution is 1:2 and the reference class distribution is set to 1:1.5/1:2/1:2.5, the accuracy is 0.755/0.756/0.760 respectively. This makes it possible to obtain the distribution with simple computation on the corpus in practice. Alternatively, we can set the distribution roughly with domain expertise.
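As a closing illustration of this discussion, the sketch below (our own, with hypothetical names) shows how a rough reference class distribution enters the term $\lambda \, KL(\hat{p}(y) || p(y))$ with $\lambda = \beta |K|$; because the method is reported to be insensitive to the exact ratio, the reference may come from a coarse corpus estimate or from domain expertise.

```python
import numpy as np

def kl_class_regularizer(p_model, ref_ratio, beta, num_labeled_features):
    # p_model: (n_instances, n_classes) predicted probabilities on unlabeled data
    # ref_ratio: rough class proportions, e.g. [1, 2] for an assumed 1:2 corpus
    p_hat = np.asarray(ref_ratio, dtype=float)
    p_hat /= p_hat.sum()
    p_y = p_model.mean(axis=0)                        # predicted class distribution p(y)
    lam = beta * num_labeled_features                 # lambda = beta * |K|
    return lam * np.sum(p_hat * np.log(p_hat / np.clip(p_y, 1e-12, None)))
```

Approximating a true 1:2 distribution with 1:1.5, 1:2 or 1:2.5 only mildly perturbs this term, which is in line with the robustness to the reference distribution reported above.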
How do they define robustness of a model?
ability to accurately classify texts even when the amount of prior knowledge for different classes is unbalanced, and when the class distribution of the dataset is unbalanced
3,609
qasper
4k
Introduction In this publication, we present Sentence-BERT (SBERT), a modification of the BERT network using siamese and triplet networks that is able to derive semantically meaningful sentence embeddings. This enables BERT to be used for certain new tasks, which up to now were not applicable to BERT. These tasks include large-scale semantic similarity comparison, clustering, and information retrieval via semantic search. BERT set new state-of-the-art performance on various sentence classification and sentence-pair regression tasks. BERT uses a cross-encoder: Two sentences are passed to the transformer network and the target value is predicted. However, this setup is unsuitable for various pair regression tasks due to too many possible combinations. Finding the pair with the highest similarity in a collection of $n=10\,000$ sentences requires $n\cdot (n-1)/2=49\,995\,000$ inference computations with BERT. On a modern V100 GPU, this requires about 65 hours. Similarly, finding which of the over 40 million existing questions on Quora is the most similar to a new question could be modeled as a pair-wise comparison with BERT; however, answering a single query would require over 50 hours. A common method to address clustering and semantic search is to map each sentence to a vector space such that semantically similar sentences are close. Researchers have started to input individual sentences into BERT and to derive fixed-size sentence embeddings. The most commonly used approaches are to average the BERT output layer (known as BERT embeddings) or to use the output of the first token (the [CLS] token). As we will show, this common practice yields rather bad sentence embeddings, often worse than averaging GloVe embeddings BIBREF2. To alleviate this issue, we developed SBERT. The siamese network architecture enables fixed-sized vectors to be derived for input sentences. Using a similarity measure like cosine-similarity or Manhattan / Euclidean distance, semantically similar sentences can be found. These similarity measures can be computed extremely efficiently on modern hardware, allowing SBERT to be used for semantic similarity search as well as for clustering. The complexity for finding the most similar sentence pair in a collection of 10,000 sentences is reduced from 65 hours with BERT to the computation of 10,000 sentence embeddings (5 seconds with SBERT) and computing cosine-similarity (0.01 seconds). By using optimized index structures, finding the most similar Quora question can be reduced from 50 hours to a few milliseconds BIBREF3. We fine-tune SBERT on NLI data, which creates sentence embeddings that significantly outperform other state-of-the-art sentence embedding methods like InferSent BIBREF4 and Universal Sentence Encoder BIBREF5. On seven Semantic Textual Similarity (STS) tasks, SBERT achieves an improvement of 11.7 points compared to InferSent and 5.5 points compared to Universal Sentence Encoder. On SentEval BIBREF6, an evaluation toolkit for sentence embeddings, we achieve an improvement of 2.1 and 2.6 points, respectively. SBERT can be adapted to a specific task. It sets new state-of-the-art performance on a challenging argument similarity dataset BIBREF7 and on a triplet dataset to distinguish sentences from different sections of a Wikipedia article BIBREF8. The paper is structured in the following way: Section SECREF3 presents SBERT, section SECREF4 evaluates SBERT on common STS tasks and on the challenging Argument Facet Similarity (AFS) corpus BIBREF7.
Section SECREF5 evaluates SBERT on SentEval. In section SECREF6, we perform an ablation study to test some design aspect of SBERT. In section SECREF7, we compare the computational efficiency of SBERT sentence embeddings in contrast to other state-of-the-art sentence embedding methods. Related Work We first introduce BERT, then, we discuss state-of-the-art sentence embedding methods. BERT BIBREF0 is a pre-trained transformer network BIBREF9, which set for various NLP tasks new state-of-the-art results, including question answering, sentence classification, and sentence-pair regression. The input for BERT for sentence-pair regression consists of the two sentences, separated by a special [SEP] token. Multi-head attention over 12 (base-model) or 24 layers (large-model) is applied and the output is passed to a simple regression function to derive the final label. Using this setup, BERT set a new state-of-the-art performance on the Semantic Textual Semilarity (STS) benchmark BIBREF10. RoBERTa BIBREF1 showed, that the performance of BERT can further improved by small adaptations to the pre-training process. We also tested XLNet BIBREF11, but it led in general to worse results than BERT. A large disadvantage of the BERT network structure is that no independent sentence embeddings are computed, which makes it difficult to derive sentence embeddings from BERT. To bypass this limitations, researchers passed single sentences through BERT and then derive a fixed sized vector by either averaging the outputs (similar to average word embeddings) or by using the output of the special CLS token (for example: bertsentenceembeddings1,bertsentenceembeddings2,bertsentenceembeddings3). These two options are also provided by the popular bert-as-a-service-repository. Up to our knowledge, there is so far no evaluation if these methods lead to useful sentence embeddings. Sentence embeddings are a well studied area with dozens of proposed methods. Skip-Thought BIBREF12 trains an encoder-decoder architecture to predict the surrounding sentences. InferSent BIBREF4 uses labeled data of the Stanford Natural Language Inference dataset BIBREF13 and the Multi-Genre NLI dataset BIBREF14 to train a siamese BiLSTM network with max-pooling over the output. Conneau et al. showed, that InferSent consistently outperforms unsupervised methods like SkipThought. Universal Sentence Encoder BIBREF5 trains a transformer network and augments unsupervised learning with training on SNLI. hill-etal-2016-learning showed, that the task on which sentence embeddings are trained significantly impacts their quality. Previous work BIBREF4, BIBREF5 found that the SNLI datasets are suitable for training sentence embeddings. yang-2018-learning presented a method to train on conversations from Reddit using siamese DAN and siamese transformer networks, which yielded good results on the STS benchmark dataset. polyencoders addresses the run-time overhead of the cross-encoder from BERT and present a method (poly-encoders) to compute a score between $m$ context vectors and pre-computed candidate embeddings using attention. This idea works for finding the highest scoring sentence in a larger collection. However, poly-encoders have the drawback that the score function is not symmetric and the computational overhead is too large for use-cases like clustering, which would require $O(n^2)$ score computations. Previous neural sentence embedding methods started the training from a random initialization. 
In this publication, we use the pre-trained BERT and RoBERTa networks and only fine-tune them to yield useful sentence embeddings. This significantly reduces the needed training time: SBERT can be tuned in less than 20 minutes, while yielding better results than comparable sentence embedding methods. Model SBERT adds a pooling operation to the output of BERT / RoBERTa to derive a fixed-sized sentence embedding. We experiment with three pooling strategies: Using the output of the CLS-token, computing the mean of all output vectors (MEAN-strategy), and computing a max-over-time of the output vectors (MAX-strategy). The default configuration is MEAN. In order to fine-tune BERT / RoBERTa, we create siamese and triplet networks BIBREF15 to update the weights such that the produced sentence embeddings are semantically meaningful and can be compared with cosine-similarity. The network structure depends on the available training data. We experiment with the following structures and objective functions. Classification Objective Function. We concatenate the sentence embeddings $u$ and $v$ with the element-wise difference $|u-v|$ and multiply it with the trainable weight $W_t \in \mathbb {R}^{3n \times k}$: where $n$ is the dimension of the sentence embeddings and $k$ the number of labels. We optimize cross-entropy loss. This structure is depicted in Figure FIGREF4. Regression Objective Function. The cosine-similarity between the two sentence embeddings $u$ and $v$ is computed (Figure FIGREF5). We use mean-squared-error loss as the objective function. Triplet Objective Function. Given an anchor sentence $a$, a positive sentence $p$, and a negative sentence $n$, triplet loss tunes the network such that the distance between $a$ and $p$ is smaller than the distance between $a$ and $n$. Mathematically, we minimize the following loss function: with $s_x$ the sentence embedding for $a$/$n$/$p$, $||\cdot ||$ a distance metric and margin $\epsilon $. Margin $\epsilon $ ensures that $s_p$ is at least $\epsilon $ closer to $s_a$ than $s_n$. As the metric we use Euclidean distance and we set $\epsilon =1$ in our experiments. Model ::: Training Details We train SBERT on the combination of the SNLI BIBREF13 and the Multi-Genre NLI BIBREF14 datasets. The SNLI is a collection of 570,000 sentence pairs annotated with the labels contradiction, entailment, and neutral. MultiNLI contains 430,000 sentence pairs and covers a range of genres of spoken and written text. We fine-tune SBERT with a 3-way softmax-classifier objective function for one epoch. We use a batch-size of 16, the Adam optimizer with learning rate $2\mathrm {e}{-5}$, and a linear learning rate warm-up over 10% of the training data. Our default pooling strategy is MEAN. Evaluation - Semantic Textual Similarity We evaluate the performance of SBERT for common Semantic Textual Similarity (STS) tasks. State-of-the-art methods often learn a (complex) regression function that maps sentence embeddings to a similarity score. However, these regression functions work pair-wise and, due to the combinatorial explosion, they are often not scalable if the collection of sentences reaches a certain size. Instead, we always use cosine-similarity to compare the similarity between two sentence embeddings. We also ran our experiments with negative Manhattan and negative Euclidean distances as similarity measures, but the results for all approaches remained roughly the same.
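For concreteness, the pooling and the three objective functions described above can be sketched in a few lines of PyTorch. This is our own illustration with hypothetical names, not the released SBERT code; it shows MEAN pooling, the softmax classification head over $(u, v, |u-v|)$, the cosine-similarity regression loss, and the triplet loss with Euclidean distance and margin $\epsilon = 1$.

```python
import torch
import torch.nn.functional as F

def mean_pool(token_embeddings, attention_mask):
    # MEAN pooling: average token vectors, ignoring padding positions
    mask = attention_mask.unsqueeze(-1).float()
    return (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

def classification_loss(u, v, W_t, labels):
    # softmax classifier over (u, v, |u - v|), trained with cross-entropy
    features = torch.cat([u, v, torch.abs(u - v)], dim=-1)    # shape (batch, 3n)
    logits = features @ W_t                                   # W_t: (3n, k)
    return F.cross_entropy(logits, labels)

def regression_loss(u, v, gold_scores):
    # mean-squared error between cosine-similarity and the gold similarity score
    return F.mse_loss(F.cosine_similarity(u, v), gold_scores)

def triplet_loss(a, p, n, margin=1.0):
    # keep the anchor at least `margin` closer to the positive than to the negative
    d_pos = torch.norm(a - p, dim=-1)   # Euclidean distance, as in the paper
    d_neg = torch.norm(a - n, dim=-1)
    return torch.relu(d_pos - d_neg + margin).mean()
```

At inference time only the pooled embeddings $u$ and $v$ are used, compared with cosine-similarity.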
Evaluation - Semantic Textual Similarity ::: Unsupervised STS We evaluate the performance of SBERT for STS without using any STS specific training data. We use the STS tasks 2012 - 2016 BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, the STS benchmark BIBREF10, and the SICK-Relatedness dataset BIBREF21. These datasets provide labels between 0 and 5 on the semantic relatedness of sentence pairs. We showed in BIBREF22 that Pearson correlation is badly suited for STS. Instead, we compute the Spearman's rank correlation between the cosine-similarity of the sentence embeddings and the gold labels. The setup for the other sentence embedding methods is equivalent, the similarity is computed by cosine-similarity. The results are depicted in Table TABREF6. The results shows that directly using the output of BERT leads to rather poor performances. Averaging the BERT embeddings achieves an average correlation of only 54.81, and using the CLS-token output only achieves an average correlation of 29.19. Both are worse than computing average GloVe embeddings. Using the described siamese network structure and fine-tuning mechanism substantially improves the correlation, outperforming both InferSent and Universal Sentence Encoder substantially. The only dataset where SBERT performs worse than Universal Sentence Encoder is SICK-R. Universal Sentence Encoder was trained on various datasets, including news, question-answer pages and discussion forums, which appears to be more suitable to the data of SICK-R. In contrast, SBERT was pre-trained only on Wikipedia (via BERT) and on NLI data. While RoBERTa was able to improve the performance for several supervised tasks, we only observe minor difference between SBERT and SRoBERTa for generating sentence embeddings. Evaluation - Semantic Textual Similarity ::: Supervised STS The STS benchmark (STSb) BIBREF10 provides is a popular dataset to evaluate supervised STS systems. The data includes 8,628 sentence pairs from the three categories captions, news, and forums. It is divided into train (5,749), dev (1,500) and test (1,379). BERT set a new state-of-the-art performance on this dataset by passing both sentences to the network and using a simple regression method for the output. We use the training set to fine-tune SBERT using the regression objective function. At prediction time, we compute the cosine-similarity between the sentence embeddings. All systems are trained with 10 random seeds to counter variances BIBREF23. The results are depicted in Table TABREF10. We experimented with two setups: Only training on STSb, and first training on NLI, then training on STSb. We observe that the later strategy leads to a slight improvement of 1-2 points. This two-step approach had an especially large impact for the BERT cross-encoder, which improved the performance by 3-4 points. We do not observe a significant difference between BERT and RoBERTa. Evaluation - Semantic Textual Similarity ::: Argument Facet Similarity We evaluate SBERT on the Argument Facet Similarity (AFS) corpus by MisraEW16. The AFS corpus annotated 6,000 sentential argument pairs from social media dialogs on three controversial topics: gun control, gay marriage, and death penalty. The data was annotated on a scale from 0 (“different topic") to 5 (“completely equivalent"). The similarity notion in the AFS corpus is fairly different to the similarity notion in the STS datasets from SemEval. STS data is usually descriptive, while AFS data are argumentative excerpts from dialogs. 
To be considered similar, arguments must not only make similar claims, but also provide a similar reasoning. Further, the lexical gap between the sentences in AFS is much larger. Hence, simple unsupervised methods as well as state-of-the-art STS systems perform badly on this dataset BIBREF24. We evaluate SBERT on this dataset in two scenarios: 1) As proposed by Misra et al., we evaluate SBERT using 10-fold cross-validation. A draw-back of this evaluation setup is that it is not clear how well approaches generalize to different topics. Hence, 2) we evaluate SBERT in a cross-topic setup. Two topics serve for training and the approach is evaluated on the left-out topic. We repeat this for all three topics and average the results. SBERT is fine-tuned using the Regression Objective Function. The similarity score is computed using cosine-similarity based on the sentence embeddings. We also provide the Pearson correlation $r$ to make the results comparable to Misra et al. However, we showed BIBREF22 that Pearson correlation has some serious drawbacks and should be avoided for comparing STS systems. The results are depicted in Table TABREF12. Unsupervised methods like tf-idf, average GloVe embeddings or InferSent perform rather badly on this dataset with low scores. Training SBERT in the 10-fold cross-validation setup gives a performance that is nearly on-par with BERT. However, in the cross-topic evaluation, we observe a performance drop of SBERT by about 7 points Spearman correlation. To be considered similar, arguments should address the same claims and provide the same reasoning. BERT is able to use attention to compare directly both sentences (e.g. word-by-word comparison), while SBERT must map individual sentences from an unseen topic to a vector space such that arguments with similar claims and reasons are close. This is a much more challenging task, which appears to require more than just two topics for training to work on-par with BERT. Evaluation - Semantic Textual Similarity ::: Wikipedia Sections Distinction ein-dor-etal-2018-learning use Wikipedia to create a thematically fine-grained train, dev and test set for sentence embeddings methods. Wikipedia articles are separated into distinct sections focusing on certain aspects. Dor et al. assume that sentences in the same section are thematically closer than sentences in different sections. They use this to create a large dataset of weakly labeled sentence triplets: The anchor and the positive example come from the same section, while the negative example comes from a different section of the same article. For example, from the Alice Arnold article: Anchor: Arnold joined the BBC Radio Drama Company in 1988., positive: Arnold gained media attention in May 2012., negative: Balding and Arnold are keen amateur golfers. We use the dataset from Dor et al. We use the Triplet Objective, train SBERT for one epoch on the about 1.8 Million training triplets and evaluate it on the 222,957 test triplets. Test triplets are from a distinct set of Wikipedia articles. As evaluation metric, we use accuracy: Is the positive example closer to the anchor than the negative example? Results are presented in Table TABREF14. Dor et al. fine-tuned a BiLSTM architecture with triplet loss to derive sentence embeddings for this dataset. As the table shows, SBERT clearly outperforms the BiLSTM approach by Dor et al. Evaluation - SentEval SentEval BIBREF6 is a popular toolkit to evaluate the quality of sentence embeddings. 
Sentence embeddings are used as features for a logistic regression classifier. The logistic regression classifier is trained on various tasks in a 10-fold cross-validation setup and the prediction accuracy is computed for the test-fold. The purpose of SBERT sentence embeddings is not to be used for transfer learning to other tasks. Here, we think fine-tuning BERT as described by devlin2018bert for new tasks is the more suitable method, as it updates all layers of the BERT network. However, SentEval can still give an impression of the quality of our sentence embeddings for various tasks. We compare the SBERT sentence embeddings to other sentence embedding methods on the following seven SentEval transfer tasks: MR: Sentiment prediction for movie review snippets on a five star scale BIBREF25. CR: Sentiment prediction of customer product reviews BIBREF26. SUBJ: Subjectivity prediction of sentences from movie reviews and plot summaries BIBREF27. MPQA: Phrase-level opinion polarity classification from newswire BIBREF28. SST: Stanford Sentiment Treebank with binary labels BIBREF29. TREC: Fine-grained question-type classification from TREC BIBREF30. MRPC: Microsoft Research Paraphrase Corpus from parallel news sources BIBREF31. The results can be found in Table TABREF15. SBERT is able to achieve the best performance in 5 out of 7 tasks. The average performance increases by about 2 percentage points compared to InferSent as well as the Universal Sentence Encoder. Even though transfer learning is not the purpose of SBERT, it outperforms other state-of-the-art sentence embedding methods on this task. It appears that the sentence embeddings from SBERT capture sentiment information well: We observe large improvements for all sentiment tasks (MR, CR, and SST) from SentEval in comparison to InferSent and Universal Sentence Encoder. The only dataset where SBERT is significantly worse than Universal Sentence Encoder is the TREC dataset. Universal Sentence Encoder was pre-trained on question-answering data, which appears to be beneficial for the question-type classification task of the TREC dataset. Average BERT embeddings or using the CLS-token output from a BERT network achieved bad results for various STS tasks (Table TABREF6), worse than average GloVe embeddings. However, for SentEval, average BERT embeddings and the BERT CLS-token output achieve decent results (Table TABREF15), outperforming average GloVe embeddings. The reason for this lies in the different setups. For the STS tasks, we used cosine-similarity to estimate the similarities between sentence embeddings. Cosine-similarity treats all dimensions equally. In contrast, SentEval fits a logistic regression classifier to the sentence embeddings. This allows certain dimensions to have a higher or lower impact on the classification result. We conclude that average BERT embeddings / CLS-token output from BERT return sentence embeddings that are infeasible to be used with cosine-similarity or with Manhattan / Euclidean distance. For transfer learning, they yield slightly worse results than InferSent or Universal Sentence Encoder. However, using the described fine-tuning setup with a siamese network structure on NLI datasets yields sentence embeddings that achieve a new state-of-the-art for the SentEval toolkit. Ablation Study We have demonstrated strong empirical results for the quality of SBERT sentence embeddings.
In this section, we perform an ablation study of different aspects of SBERT in order to get a better understanding of their relative importance. We evaluated different pooling strategies (MEAN, MAX, and CLS). For the classification objective function, we evaluate different concatenation methods. For each possible configuration, we train SBERT with 10 different random seeds and average the performances. The objective function (classification vs. regression) depends on the annotated dataset. For the classification objective function, we train SBERT-base on the SNLI and the Multi-NLI dataset. For the regression objective function, we train on the training set of the STS benchmark dataset. Performances are measured on the development split of the STS benchmark dataset. Results are shown in Table TABREF23. When trained with the classification objective function on NLI data, the pooling strategy has a rather minor impact. The impact of the concatenation mode is much larger. InferSent BIBREF4 and Universal Sentence Encoder BIBREF5 both use $(u, v, |u-v|, u*v)$ as input for a softmax classifier. However, in our architecture, adding the element-wise $u*v$ decreased the performance. The most important component is the element-wise difference $|u-v|$. Note, that the concatenation mode is only relevant for training the softmax classifier. At inference, when predicting similarities for the STS benchmark dataset, only the sentence embeddings $u$ and $v$ are used in combination with cosine-similarity. The element-wise difference measures the distance between the dimensions of the two sentence embeddings, ensuring that similar pairs are closer and dissimilar pairs are further apart. When trained with the regression objective function, we observe that the pooling strategy has a large impact. There, the MAX strategy perform significantly worse than MEAN or CLS-token strategy. This is in contrast to BIBREF4, who found it beneficial for the BiLSTM-layer of InferSent to use MAX instead of MEAN pooling. Computational Efficiency Sentence embeddings need potentially be computed for Millions of sentences, hence, a high computation speed is desired. In this section, we compare SBERT to average GloVe embeddings, InferSent BIBREF4, and Universal Sentence Encoder BIBREF5. For our comparison we use the sentences from the STS benchmark BIBREF10. We compute average GloVe embeddings using a simple for-loop with python dictionary lookups and NumPy. InferSent is based on PyTorch. For Universal Sentence Encoder, we use the TensorFlow Hub version, which is based on TensorFlow. SBERT is based on PyTorch. For improved computation of sentence embeddings, we implemented a smart batching strategy: Sentences with similar lengths are grouped together and are only padded to the longest element in a mini-batch. This drastically reduces computational overhead from padding tokens. Performances were measured on a server with Intel i7-5820K CPU @ 3.30GHz, Nvidia Tesla V100 GPU, CUDA 9.2 and cuDNN. The results are depicted in Table TABREF26. On CPU, InferSent is about 65% faster than SBERT. This is due to the much simpler network architecture. InferSent uses a single BiLSTM layer, while BERT uses 12 stacked transformer layers. However, an advantage of transformer networks is the computational efficiency on GPUs. There, SBERT with smart batching is about 9% faster than InferSent and about 55% faster than Universal Sentence Encoder. Smart batching achieves a speed-up of 89% on CPU and 48% on GPU. 
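The smart batching strategy described above can be sketched as follows; `tokenize` is a placeholder for any tokenizer returning a list of token ids, and this is our own simplification rather than the released implementation.

```python
def smart_batches(sentences, tokenize, batch_size=32, pad_id=0):
    # `tokenize` is a placeholder callable mapping a sentence to a list of token ids
    encoded = [tokenize(s) for s in sentences]
    # sort indices by length so that each mini-batch groups similarly long sentences
    order = sorted(range(len(encoded)), key=lambda i: len(encoded[i]))
    for start in range(0, len(order), batch_size):
        idx = order[start:start + batch_size]
        longest = max(len(encoded[i]) for i in idx)
        # pad only up to the longest sentence in this mini-batch, not the whole corpus
        batch = [encoded[i] + [pad_id] * (longest - len(encoded[i])) for i in idx]
        yield idx, batch
```

Because each mini-batch is padded only to its own longest sentence, little computation is spent on padding tokens, which is where the reported 89% (CPU) and 48% (GPU) speed-ups come from.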
Average GloVe embeddings are, by a large margin, the fastest method to compute sentence embeddings. Conclusion We showed that BERT out-of-the-box maps sentences to a vector space that is rather unsuitable to be used with common similarity measures like cosine-similarity. The performance for seven STS tasks was below the performance of average GloVe embeddings. To overcome this shortcoming, we presented Sentence-BERT (SBERT). SBERT fine-tunes BERT in a siamese / triplet network architecture. We evaluated the quality on various common benchmarks, where it achieved a significant improvement over state-of-the-art sentence embedding methods. Replacing BERT with RoBERTa did not yield a significant improvement in our experiments. SBERT is computationally efficient. On a GPU, it is about 9% faster than InferSent and about 55% faster than Universal Sentence Encoder. SBERT can be used for tasks which are computationally not feasible to be modeled with BERT. For example, clustering of 10,000 sentences with hierarchical clustering requires about 65 hours with BERT, as around 50 million sentence combinations must be computed. With SBERT, we were able to reduce the effort to about 5 seconds. Acknowledgments This work has been supported by the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1 and grant GU 798/17-1). It has been co-funded by the German Federal Ministry of Education and Research (BMBF) under the promotional references 03VP02540 (ArgumenText).
What other sentence embeddings methods are evaluated?
GloVe, BERT, Universal Sentence Encoder, TF-IDF, InferSent
3,862
qasper
4k
Introduction Data imbalance is a common issue in a variety of NLP tasks such as tagging and machine reading comprehension. Table TABREF3 gives concrete examples: for the Named Entity Recognition (NER) task BIBREF2, BIBREF3, most tokens are backgrounds with tagging class $O$. Specifically, the number of tokens tagging class $O$ is 5 times as many as those with entity labels for the CoNLL03 dataset and 8 times for the OntoNotes5.0 dataset; Data-imbalanced issue is more severe for MRC tasks BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8 with the value of negative-positive ratio being 50-200. Data imbalance results in the following two issues: (1) the training-test discrepancy: Without balancing the labels, the learning process tends to converge to a point that strongly biases towards class with the majority label. This actually creates a discrepancy between training and test: at training time, each training instance contributes equally to the objective function while at test time, F1 score concerns more about positive examples; (2) the overwhelming effect of easy-negative examples. As pointed out by meng2019dsreg, significantly large number of negative examples also means that the number of easy-negative example is large. The huge number of easy examples tends to overwhelm the training, making the model not sufficiently learned to distinguish between positive examples and hard-negative examples. The cross-entropy objective (CE for short) or maximum likelihood (MLE) objective, which is widely adopted as the training objective for data-imbalanced NLP tasks BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, handles neither of the issues. To handle the first issue, we propose to replace CE or MLE with losses based on the Sørensen–Dice coefficient BIBREF0 or Tversky index BIBREF1. The Sørensen–Dice coefficient, dice loss for short, is the harmonic mean of precision and recall. It attaches equal importance to false positives (FPs) and false negatives (FNs) and is thus more immune to data-imbalanced datasets. Tversky index extends dice loss by using a weight that trades precision and recall, which can be thought as the approximation of the $F_{\beta }$ score, and thus comes with more flexibility. Therefore, We use dice loss or Tversky index to replace CE loss to address the first issue. Only using dice loss or Tversky index is not enough since they are unable to address the dominating influence of easy-negative examples. This is intrinsically because dice loss is actually a hard version of the F1 score. Taking the binary classification task as an example, at test time, an example will be classified as negative as long as its probability is smaller than 0.5, but training will push the value to 0 as much as possible. This gap isn't a big issue for balanced datasets, but is extremely detrimental if a big proportion of training examples are easy-negative ones: easy-negative examples can easily dominate training since their probabilities can be pushed to 0 fairly easily. Meanwhile, the model can hardly distinguish between hard-negative examples and positive ones. Inspired by the idea of focal loss BIBREF16 in computer vision, we propose a dynamic weight adjusting strategy, which associates each training example with a weight in proportion to $(1-p)$, and this weight dynamically changes as training proceeds. 
This strategy helps to deemphasize confident examples during training as their $p$ approaches the value of 1, makes the model attentive to hard-negative examples, and thus alleviates the dominating effect of easy-negative examples. Combing both strategies, we observe significant performance boosts on a wide range of data imbalanced NLP tasks. Notably, we are able to achieve SOTA results on CTB5 (97.92, +1.86), CTB6 (96.57, +1.80) and UD1.4 (96.98, +2.19) for the POS task; SOTA results on CoNLL03 (93.33, +0.29), OntoNotes5.0 (92.07, +0.96)), MSRA 96.72(+0.97) and OntoNotes4.0 (84.47,+2.36) for the NER task; along with competitive results on the tasks of machine reading comprehension and paraphrase identification. The rest of this paper is organized as follows: related work is presented in Section 2. We describe different training objectives in Section 3. Experimental results are presented in Section 4. We perform ablation studies in Section 5, followed by a brief conclusion in Section 6. Related Work ::: Data Resample The idea of weighting training examples has a long history. Importance sampling BIBREF17 assigns weights to different samples and changes the data distribution. Boosting algorithms such as AdaBoost BIBREF18 select harder examples to train subsequent classifiers. Similarly, hard example mining BIBREF19 downsamples the majority class and exploits the most difficult examples. Oversampling BIBREF20, BIBREF21 is used to balance the data distribution. Another line of data resampling is to dynamically control the weights of examples as training proceeds. For example, focal loss BIBREF16 used a soft weighting scheme that emphasizes harder examples during training. In self-paced learning BIBREF22, example weights are obtained through optimizing the weighted training loss which encourages learning easier examples first. At each training step, self-paced learning algorithm optimizes model parameters and example weights jointly. Other works BIBREF23, BIBREF24 adjusted the weights of different training examples based on training loss. Besides, recent work BIBREF25, BIBREF26 proposed to learn a separate network to predict sample weights. Related Work ::: Data Imbalance Issue in Object Detection The background-object label imbalance issue is severe and thus well studied in the field of object detection BIBREF27, BIBREF28, BIBREF29, BIBREF30, BIBREF31. The idea of hard negative mining (HNM) BIBREF30 has gained much attention recently. shrivastava2016ohem proposed the online hard example mining (OHEM) algorithm in an iterative manner that makes training progressively more difficult, and pushes the model to learn better. ssd2016liu sorted all of the negative samples based on the confidence loss and picking the training examples with the negative-positive ratio at 3:1. pang2019rcnn proposed a novel method called IoU-balanced sampling and aploss2019chen designed a ranking model to replace the conventional classification task with a average-precision loss to alleviate the class imbalance issue. The efforts made on object detection have greatly inspired us to solve the data imbalance issue in NLP. Losses ::: Notation For illustration purposes, we use the binary classification task to demonstrate how different losses work. The mechanism can be easily extended to multi-class classification. Let $\lbrace x_i\rbrace $ denote a set of instances. 
Each $x_i$ is associated with a golden label vector $y_i = [y_{i0},y_{i1} ]$, where $y_{i1}\in \lbrace 0,1\rbrace $ and $y_{i0}\in \lbrace 0,1\rbrace $ respectively denote the positive and negative classes, and thus $y_i$ can be either $[0,1]$ or $[1,0]$. Let $p_i = [p_{i0},p_{i1} ]$ denote the probability vector, where $p_{i1}$ and $p_{i0}$ respectively denote the probability that a model assigns the positive and negative label to $x_i$. Losses ::: Cross Entropy Loss The vanilla cross entropy (CE) loss is given by: As can be seen from Eq.DISPLAY_FORM8, each $x_i$ contributes equally to the final objective. Two strategies are normally used to address the case where we do not wish all $x_i$ to be treated equally: associating different classes with different weighting factors $\alpha $ or resampling the datasets. For the former, Eq.DISPLAY_FORM8 is adjusted as follows: where $\alpha _i\in [0,1]$ may be set by the inverse class frequency or treated as a hyperparameter set by cross-validation. In this work, we use $\lg (\frac{n-n_t}{n_t}+K)$ to calculate the coefficient $\alpha $, where $n_t$ is the number of samples with class $t$ and $n$ is the total number of samples in the training set. $K$ is a hyperparameter to tune. The data resampling strategy constructs a new dataset by sampling training examples from the original dataset based on human-designed criteria, e.g., extracting an equal number of training samples from each class. Both strategies are equivalent to changing the data distribution and thus are of the same nature. Empirically, these two methods are not widely used due to the trickiness of selecting $\alpha $, especially for multi-class classification tasks, and the fact that inappropriate selection can easily bias towards rare classes BIBREF32. Losses ::: Dice coefficient and Tversky index The Sørensen–Dice coefficient BIBREF0, BIBREF33, dice coefficient (DSC) for short, is an F1-oriented statistic used to gauge the similarity of two sets. Given two sets $A$ and $B$, the dice coefficient between them is given as follows: In our case, $A$ is the set of all positive examples predicted by a specific model, and $B$ is the set of all golden positive examples in the dataset. When applied to boolean data with the definition of true positive (TP), false positive (FP), and false negative (FN), it can then be written as follows: For an individual example $x_i$, its corresponding DSC loss is given as follows: As can be seen, for a negative example with $y_{i1}=0$, it does not contribute to the objective. For smoothing purposes, it is common to add a $\gamma $ factor to both the numerator and the denominator, leading to the following form: As can be seen, negative examples, with $y_{i1}$ being 0 and DSC being $\frac{\gamma }{ p_{i1}+\gamma }$, then also contribute to the training. Additionally, milletari2016v proposed to change the denominator to the square form for faster convergence, which leads to the following dice loss (DL): Another version of DL is to directly compute the set-level dice coefficient instead of the sum of individual dice coefficients. We choose the latter due to ease of optimization. The Tversky index (TI), which can be thought of as an approximation of the $F_{\beta }$ score, extends the dice coefficient to a more general case. Given two sets $A$ and $B$, the Tversky index is computed as follows: The Tversky index offers flexibility in controlling the tradeoff between false-negatives and false-positives. It degenerates to DSC if $\alpha =\beta =0.5$.
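The smoothed per-example DSC loss, the squared-denominator dice loss (DL), and the Tversky index defined above can be written down directly. The sketch below is our own PyTorch illustration over per-example positive-class probabilities; the names and the `1 - coefficient` convention for turning a coefficient into a loss are ours.

```python
import torch

def dsc_loss(p_pos, y_pos, gamma=1.0):
    # smoothed dice coefficient (2*p*y + gamma) / (p + y + gamma), used as 1 - DSC
    coeff = (2.0 * p_pos * y_pos + gamma) / (p_pos + y_pos + gamma)
    return (1.0 - coeff).mean()

def dice_loss(p_pos, y_pos, gamma=1.0):
    # squared-denominator variant (DL), proposed for faster convergence
    coeff = (2.0 * p_pos * y_pos + gamma) / (p_pos ** 2 + y_pos ** 2 + gamma)
    return (1.0 - coeff).mean()

def tversky_index(p_pos, y_pos, alpha=0.5, beta=0.5, gamma=1.0):
    # TP / (TP + alpha*FP + beta*FN); alpha = beta = 0.5 recovers the dice
    # coefficient (up to the placement of the smoothing term gamma)
    tp = p_pos * y_pos
    fp = p_pos * (1.0 - y_pos)
    fn = (1.0 - p_pos) * y_pos
    return (tp + gamma) / (tp + alpha * fp + beta * fn + gamma)
```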
The Tversky loss (TL) for the training set $\lbrace x_i,y_i\rbrace $ is thus as follows: Losses ::: Self-adjusting Dice Loss Consider a simple case where the dataset consists of only one example $x_i$, which is classified as positive as long as $p_{i1}$ is larger than 0.5. The computation of the $F1$ score is actually as follows: Comparing Eq.DISPLAY_FORM14 with Eq.DISPLAY_FORM22, we can see that Eq.DISPLAY_FORM14 is actually a soft form of $F1$, using a continuous $p$ rather than the binary $\mathbb {I}( p_{i1}>0.5)$. This gap isn't a big issue for balanced datasets, but is extremely detrimental if a big proportion of training examples are easy-negative ones: easy-negative examples can easily dominate training since their probabilities can be pushed to 0 fairly easily. Meanwhile, the model can hardly distinguish between hard-negative examples and positive ones, which has a huge negative effect on the final F1 performance. To address this issue, we propose to multiply the soft probability $p$ with a decaying factor $(1-p)$, changing Eq.DISPLAY_FORM22 to the following form: One can think of $(1-p_{i1})$ as a weight associated with each example, which changes as training proceeds. The intuition of changing $p_{i1}$ to $(1-p_{i1}) p_{i1}$ is to push down the weight of easy examples. For easy examples whose probabilities approach 0 or 1, $(1-p_{i1}) p_{i1}$ makes the model attach significantly less focus to them. Figure FIGREF23 gives an explanation from the derivative perspective: the derivative of $\frac{(1-p)p}{1+(1-p)p}$ with respect to $p$ approaches 0 immediately after $p$ approaches 0, which means the model attends less to examples once they are correctly classified. A close look at Eq.DISPLAY_FORM14 reveals that it actually mimics the idea of focal loss (FL for short) BIBREF16 for object detection in vision. Focal loss was proposed for one-stage object detectors to handle the foreground-background tradeoff encountered during training. It down-weights the loss assigned to well-classified examples by adding a $(1-p)^{\beta }$ factor, leading the final loss to be $(1-p)^{\beta }\log p$. In Table TABREF18, we show the losses used in our experiments, which are described in the next section. Experiments We evaluate the proposed method on four NLP tasks: part-of-speech tagging, named entity recognition, machine reading comprehension and paraphrase identification. Baselines in our experiments are optimized by using the standard cross-entropy training objective. Experiments ::: Part-of-Speech Tagging Part-of-speech tagging (POS) is the task of assigning a label (e.g., noun, verb, adjective) to each word in a given text. In this paper, we choose BERT as the backbone and conduct experiments on three Chinese POS datasets. We report the span-level micro-averaged precision, recall and F1 for evaluation. Hyperparameters are tuned on the corresponding development set of each dataset. Experiments ::: Part-of-Speech Tagging ::: Datasets We conduct experiments on the widely used Chinese Treebank 5.0 and 6.0 as well as UD1.4. CTB5 is a Chinese dataset for tagging and parsing, which contains 507,222 words, 824,983 characters and 18,782 sentences extracted from newswire sources. CTB6 is an extension of CTB5, containing 781,351 words, 1,285,149 characters and 28,295 sentences. UD is the abbreviation of Universal Dependencies, which is a framework for consistent annotation of grammar (parts of speech, morphological features, and syntactic dependencies) across different human languages.
In this work, we use UD1.4 for Chinese POS tagging. Experiments ::: Part-of-Speech Tagging ::: Baselines We use the following baselines: Joint-POS: shao2017character jointly learns Chinese word segmentation and POS. Lattice-LSTM: lattice2018zhang constructs a word-character lattice. Bert-Tagger: devlin2018bert treats part-of-speech as a tagging task. Experiments ::: Part-of-Speech Tagging ::: Results Table presents the experimental results on the POS task. As can be seen, the proposed DSC loss outperforms the best baseline results by a large margin, i.e., outperforming BERT-tagger by +1.86 in terms of F1 score on CTB5, +1.80 on CTB6 and +2.19 on UD1.4. As far as we are concerned, we are achieving SOTA performances on the three datasets. Weighted cross entropy and focal loss only gain a little performance improvement on CTB5 and CTB6, and the dice loss obtains huge gain on CTB5 but not on CTB6, which indicates the three losses are not consistently robust in resolving the data imbalance issue. The proposed DSC loss performs robustly on all the three datasets. Experiments ::: Named Entity Recognition Named entity recognition (NER) refers to the task of detecting the span and semantic category of entities from a chunk of text. Our implementation uses the current state-of-the-art BERT-MRC model proposed by xiaoya2019ner as a backbone. For English datasets, we use BERT$_\text{Large}$ English checkpoints, while for Chinese we use the official Chinese checkpoints. We report span-level micro-averaged precision, recall and F1-score. Hyperparameters are tuned on the development set of each dataset. Experiments ::: Named Entity Recognition ::: Datasets For the NER task, we consider both Chinese datasets, i.e., OntoNotes4.0 BIBREF34 and MSRA BIBREF35, and English datasets, i.e., CoNLL2003 BIBREF36 and OntoNotes5.0 BIBREF37. CoNLL2003 is an English dataset with 4 entity types: Location, Organization, Person and Miscellaneous. We followed data processing protocols in BIBREF14. English OntoNotes5.0 consists of texts from a wide variety of sources and contains 18 entity types. We use the standard train/dev/test split of CoNLL2012 shared task. Chinese MSRA performs as a Chinese benchmark dataset containing 3 entity types. Data in MSRA is collected from news domain. Since the development set is not provided in the original MSRA dataset, we randomly split the training set into training and development splits by 9:1. We use the official test set for evaluation. Chinese OntoNotes4.0 is a Chinese dataset and consists of texts from news domain, which has 18 entity types. In this paper, we take the same data split as wu2019glyce did. Experiments ::: Named Entity Recognition ::: Baselines We use the following baselines: ELMo: a tagging model from peters2018deep. Lattice-LSTM: lattice2018zhang constructs a word-character lattice, only used in Chinese datasets. CVT: from kevin2018cross, which uses Cross-View Training(CVT) to improve the representations of a Bi-LSTM encoder. Bert-Tagger: devlin2018bert treats NER as a tagging task. Glyce-BERT: wu2019glyce combines glyph information with BERT pretraining. BERT-MRC: The current SOTA model for both Chinese and English NER datasets proposed by xiaoya2019ner, which formulate NER as machine reading comprehension task. Experiments ::: Named Entity Recognition ::: Results Table shows experimental results on NER datasets. For English datasets including CoNLL2003 and OntoNotes5.0, our proposed method outperforms BERT-MRCBIBREF38 by +0.29 and +0.96 respectively. 
We observe huge performance boosts on the Chinese datasets, achieving F1 improvements of +0.97 and +2.36 on MSRA and OntoNotes4.0, respectively. To the best of our knowledge, these are new SOTA results on all four NER datasets. Experiments ::: Machine Reading Comprehension Machine reading comprehension (MRC) BIBREF39, BIBREF40, BIBREF41, BIBREF40, BIBREF42, BIBREF15 has become a central task in natural language understanding. SQuAD-style MRC requires predicting the answer span in a passage given a question and the passage. In this paper, we choose the SQuAD-style MRC task and report Exact Match (EM) in addition to F1 score on the validation set. All hyperparameters are tuned on the development set of each dataset. Experiments ::: Machine Reading Comprehension ::: Datasets The following three datasets are used for the MRC task: SQuAD v1.1, SQuAD v2.0 BIBREF4, BIBREF6 and Quoref BIBREF8. SQuAD v1.1 and SQuAD v2.0 are the most widely used QA benchmarks. SQuAD1.1 is a collection of 100K crowdsourced question-answer pairs, and SQuAD2.0 extends SQuAD1.1 by allowing questions for which no short answer exists in the provided passage. Quoref is a QA dataset which tests the coreferential reasoning capability of reading comprehension systems, containing 24K questions over 4.7K paragraphs from Wikipedia. Experiments ::: Machine Reading Comprehension ::: Baselines We use the following baselines: QANet: qanet2018 builds a model based on convolutions and self-attention, using convolutions to model local interactions and self-attention to model global interactions. BERT: devlin2018bert fine-tunes deep bidirectional Transformer representations to predict the answer span. XLNet: xlnet2019 proposes a generalized autoregressive pretraining method that enables learning bidirectional contexts. Experiments ::: Machine Reading Comprehension ::: Results Table shows the experimental results for the MRC task. With either BERT or XLNet, our proposed DSC loss obtains a significant performance boost on both EM and F1. For SQuAD v1.1, our proposed method outperforms XLNet by +1.25 in terms of F1 score and +0.84 in terms of EM, and it achieves 87.65 EM and 89.51 F1 on SQuAD v2.0. Moreover, on Quoref, the proposed method surpasses the XLNet results by +1.46 on EM and +1.41 on F1. Another observation is that XLNet outperforms BERT by a large margin, and the proposed DSC loss can obtain a further improvement of more than 1.0 on average in terms of both EM and F1, which indicates that the DSC loss is complementary to the model architectures. Experiments ::: Paraphrase Identification Paraphrases are textual expressions that convey the same meaning using different surface words. Paraphrase identification (PI) is the task of identifying whether two sentences have the same meaning or not. We use BERT BIBREF11 and XLNet BIBREF43 as backbones and report F1 score for comparison. Hyperparameters are tuned on the development set of each dataset. Experiments ::: Paraphrase Identification ::: Datasets We conduct experiments on two widely used datasets for the PI task: MRPC BIBREF44 and QQP. MRPC is a corpus of sentence pairs automatically extracted from online news sources, with human annotations of whether the sentence pairs are semantically equivalent. The MRPC dataset has imbalanced classes (68% positive, 32% negative). QQP is a collection of question pairs from the community question-answering website Quora. The class distribution in QQP is also unbalanced (37% positive, 63% negative). Experiments ::: Paraphrase Identification ::: Results Table shows the results for the PI task.
We find that replacing the training objective with DSC introduces a performance boost for both BERT and XLNet. Using the DSC loss improves the F1 score by +0.58 for MRPC and +0.73 for QQP. Ablation Studies ::: The Effect of Dice Loss on Accuracy-oriented Tasks We argue that the most commonly used cross-entropy objective is actually accuracy-oriented, whereas the proposed dice loss (DL) performs as a soft version of the F1 score. To explore the effect of the dice loss on accuracy-oriented tasks such as text classification, we conduct experiments on the Stanford Sentiment Treebank sentiment classification datasets, SST-2 and SST-5. We fine-tune BERT$_\text{Large}$ with different training objectives. Experiment results for SST are shown in Table . For SST-5, BERT with CE achieves 55.57 in terms of accuracy, while DL and DSC slightly degrade accuracy, achieving 54.63 and 55.19, respectively. For SST-2, BERT with CE achieves 94.9 in terms of accuracy. As with SST-5, we observe a slight performance drop with DL and DSC, which means that the dice loss works well for F1 but not for accuracy. Ablation Studies ::: The Effect of Hyperparameters in Tversky index As mentioned in Section SECREF10, the Tversky index (TI) offers flexibility in controlling the tradeoff between false negatives and false positives. In this subsection, we explore the effect of the hyperparameters (i.e., $\alpha $ and $\beta $) in TI to test how they manipulate the tradeoff. We conduct experiments on the Chinese OntoNotes4.0 NER dataset and the English Quoref MRC dataset to examine the influence of the tradeoff between precision and recall. Experiment results are shown in Table . The highest F1 for Chinese OntoNotes4.0 is 84.67 when $\alpha $ is set to 0.6, while for Quoref, the highest F1 is 68.44 when $\alpha $ is set to 0.4. In addition, we observe that the performance varies considerably with $\alpha $ across datasets, which shows that the hyperparameters $\alpha $ and $\beta $ play an important role in the proposed method. Conclusion In this paper, we alleviate the severe data imbalance issue in NLP tasks. We propose to use the dice loss in place of the standard cross-entropy loss, which performs as a soft version of the F1 score. Using the dice loss can help narrow the gap between training objectives and evaluation metrics. Empirically, we show that the proposed training objective leads to significant performance boosts on part-of-speech tagging, named entity recognition, machine reading comprehension and paraphrase identification tasks.
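To make the loss formulations above concrete, here is a minimal sketch of the dice-style loss and its self-adjusting variant (DSC) for binary classification. It is an illustration only: the smoothing constant `gamma`, the per-example formulation and the mean reduction are assumptions based on the description above, not the authors' released implementation.

```python
import torch

def self_adjusting_dice_loss(logits, targets, gamma=1.0, self_adjusting=True):
    """Sketch of a soft-dice-style loss for binary classification.

    logits:  shape (N,), raw scores for the positive class
    targets: shape (N,), gold 0/1 labels
    gamma:   assumed smoothing constant that keeps the ratio defined when p = y = 0
    """
    p = torch.sigmoid(logits)      # soft probability of the positive class
    y = targets.float()
    if self_adjusting:
        # decaying factor (1 - p) down-weights easy examples, as described above
        p = (1.0 - p) * p
    # per-example soft F1-style overlap between prediction and gold label
    dsc = (2.0 * p * y + gamma) / (p + y + gamma)
    return 1.0 - dsc.mean()        # minimize one minus the soft dice score

# toy usage: well-classified negatives contribute almost nothing to the loss
logits = torch.tensor([3.0, -4.0, 0.2, -0.1], requires_grad=True)
targets = torch.tensor([1, 0, 1, 0])
loss = self_adjusting_dice_loss(logits, targets)
loss.backward()
```

With this form, a well-classified negative example (probability pushed towards 0) yields a per-example score close to 1 and therefore contributes little gradient, which is the behaviour the decaying factor is intended to encourage.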
What are the method's F1 improvements on the NER task for the English and Chinese datasets?
On the English datasets, CoNLL2003 and OntoNotes5.0, the proposed method outperforms BERT-MRC BIBREF38 by +0.29 and +0.96 F1 respectively; on the Chinese datasets, it achieves F1 improvements of +0.97 and +2.36 on MSRA and OntoNotes4.0, respectively
3,586
qasper
4k
Introduction In the field of natural language processing (NLP), the most prevalent neural approach to obtaining sentence representations is to use recurrent neural networks (RNNs), where words in a sentence are processed in a sequential and recurrent manner. Along with their intuitive design, RNNs have shown outstanding performance across various NLP tasks e.g. language modeling BIBREF0 , BIBREF1 , machine translation BIBREF2 , BIBREF3 , BIBREF4 , text classification BIBREF5 , BIBREF6 , and parsing BIBREF7 , BIBREF8 . Among several variants of the original RNN BIBREF9 , gated recurrent architectures such as long short-term memory (LSTM) BIBREF10 and gated recurrent unit (GRU) BIBREF2 have been accepted as de-facto standard choices for RNNs due to their capability of addressing the vanishing and exploding gradient problem and considering long-term dependencies. Gated RNNs achieve these properties by introducing additional gating units that learn to control the amount of information to be transferred or forgotten BIBREF11 , and are proven to work well without relying on complex optimization algorithms or careful initialization BIBREF12 . Meanwhile, the common practice for further enhancing the expressiveness of RNNs is to stack multiple RNN layers, each of which has distinct parameter sets (stacked RNN) BIBREF13 , BIBREF14 . In stacked RNNs, the hidden states of a layer are fed as input to the subsequent layer, and they are shown to work well due to increased depth BIBREF15 or their ability to capture hierarchical time series BIBREF16 which are inherent to the nature of the problem being modeled. However this setting of stacking RNNs might hinder the possibility of more sophisticated recurrence-based structures since the information from lower layers is simply treated as input to the next layer, rather than as another class of state that participates in core RNN computations. Especially for gated RNNs such as LSTMs and GRUs, this means that layer-to-layer connections cannot fully benefit from the carefully constructed gating mechanism used in temporal transitions. Some recent work on stacking RNNs suggests alternative methods that encourage direct and effective interaction between RNN layers by adding residual connections BIBREF17 , BIBREF18 , by shortcut connections BIBREF18 , BIBREF19 , or by using cell states of LSTMs BIBREF20 , BIBREF21 . In this paper, we propose a method of constructing multi-layer LSTMs where cell states are used in controlling the vertical information flow. This system utilizes states from the left and the lower context equally in computation of the new state, thus the information from lower layers is elaborately filtered and reflected through a soft gating mechanism. Our method is easy-to-implement, effective, and can replace conventional stacked LSTMs without much modification of the overall architecture. We call the proposed architecture Cell-aware Stacked LSTM, or CAS-LSTM, and evaluate our method on multiple benchmark datasets: SNLI BIBREF22 , MultiNLI BIBREF23 , Quora Question Pairs BIBREF24 , and SST BIBREF25 . From experiments we show that the CAS-LSTMs consistently outperform typical stacked LSTMs, opening the possibility of performance improvement of architectures that use stacked LSTMs. Our contribution is summarized as follows. This paper is organized as follows. We give a detailed description about the proposed method in § SECREF2 . Experimental results are given in § SECREF3 . 
We study prior work related to our objective in § SECREF4 and conclude in § SECREF5 . Model Description In this section, we give a detailed formulation of the architectures used in experiments. Notation Throughout this paper, we denote matrices as boldface capital letters ( INLINEFORM0 ), vectors as boldface lowercase letters ( INLINEFORM1 ), and scalars as normal italic letters ( INLINEFORM2 ). For LSTM states, we denote a hidden state as INLINEFORM3 and a cell state as INLINEFORM4 . Also, a layer index of INLINEFORM5 or INLINEFORM6 is denoted by superscript and a time index is denoted by a subscript, i.e. INLINEFORM7 indicates the hidden state at time INLINEFORM8 and layer INLINEFORM9 . INLINEFORM10 means the element-wise multiplication between two vectors. We write INLINEFORM11 -th component of vector INLINEFORM12 as INLINEFORM13 . All vectors are assumed to be column vectors. Stacked LSTMs While there exist various versions of LSTM formulation, in this work we use the following, one of the most common versions: DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 , INLINEFORM3 , INLINEFORM4 are trainable parameters. INLINEFORM5 and INLINEFORM6 are the sigmoid activation and the hyperbolic tangent activation function respectively. Also we assume that INLINEFORM7 where INLINEFORM8 is the INLINEFORM9 -th input to the network. The input gate INLINEFORM0 and the forget gate INLINEFORM1 control the amount of information transmitted from INLINEFORM2 and INLINEFORM3 , the candidate cell state and the previous cell state, to the new cell state INLINEFORM4 . Similarly the output gate INLINEFORM5 soft-selects which portion of the cell state INLINEFORM6 is to be used in the final hidden state. We can clearly see that cell states ( INLINEFORM0 , INLINEFORM1 , INLINEFORM2 ) play a crucial role in forming horizontal recurrence. However the current formulation does not consider INLINEFORM3 , the cell state from INLINEFORM4 -th layer, in computation and thus the lower context is reflected only through the rudimentary way, hindering the possibility of controlling vertical information flow. Cell-aware Stacked LSTMs Now we extend the stacked LSTM formulation defined above to address the problem noted in the previous subsection. To enhance the interaction between layers in a way similar to how LSTMs keep and forget the information from the previous time step, we introduce the additional forget gate INLINEFORM0 that determines whether to accept or ignore the signals coming from the previous layer. Therefore the proposed Cell-aware Stacked LSTM is formulated as follows: DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 and INLINEFORM1 . INLINEFORM2 can either be a vector of constants or parameters. When INLINEFORM3 , the equations defined in the previous subsection are used. Therefore, it can be said that each non-bottom layer of CAS-LSTM accepts two sets of hidden and cell states—one from the left context and the other from the below context. The left and the below context participate in computation with the equivalent procedure so that the information from lower layers can be efficiently propagated. Fig. FIGREF1 compares CAS-LSTM to the conventional stacked LSTM architecture, and Fig. FIGREF8 depicts the computation flow of the CAS-LSTM. We argue that considering INLINEFORM0 in computation is beneficial for the following reasons. First, INLINEFORM1 contains additional information compared to INLINEFORM2 since it is not filtered by INLINEFORM3 . 
Thus a model that directly uses INLINEFORM4 does not rely solely on INLINEFORM5 for extracting information, due to the fact that it has access to the raw information INLINEFORM6 , as in temporal connections. In other words, INLINEFORM7 no longer has to take all responsibility for selecting useful features for both horizontal and vertical transitions, and the burden of selecting information is shared with INLINEFORM8 . Another advantage of using the INLINEFORM0 lies in the fact that it directly connects INLINEFORM1 and INLINEFORM2 . This direct connection helps and stabilizes training, since the terminal error signals can be easily backpropagated to model parameters. Fig. FIGREF23 illustrates paths between the two cell states. We find experimentally that there is little difference between letting INLINEFORM0 be constant and letting it be trainable parameters, thus we set INLINEFORM1 in all experiments. We also experimented with the architecture without INLINEFORM2 i.e. two cell states are combined by unweighted summation similar to multidimensional RNNs BIBREF27 , and found that it leads to performance degradation and unstable convergence, likely due to mismatch in the range of cell state values between layers ( INLINEFORM3 for the first layer and INLINEFORM4 for the others). Experimental results on various INLINEFORM5 are presented in § SECREF3 . The idea of having multiple states is also related to tree-structured RNNs BIBREF29 , BIBREF30 . Among them, tree-structured LSTMs (Tree-LSTMs) BIBREF31 , BIBREF32 , BIBREF33 are similar to ours in that they use both hidden and cell states from children nodes. In Tree-LSTMs, states for all children nodes are regarded as input, and they participate in the computation equally through weight-shared (in Child-Sum Tree-LSTMs) or weight-unshared (in INLINEFORM0 -ary Tree-LSTMs) projection. From this perspective, each CAS-LSTM layer (where INLINEFORM1 ) can be seen as a binary Tree-LSTM where the structures it operates on are fixed to right-branching trees. The use of cell state in computation could be one reason that Tree-LSTMs perform better than sequential LSTMs even when trivial trees (strictly left- or right-branching) are given BIBREF34 . Multidimensional RNNs (MDRNN) are an extension of 1D sequential RNNs that can accept multidimensional input e.g. images, and have been successfully applied to image segmentation BIBREF26 and handwriting recognition BIBREF27 . Notably multidimensional LSTMs (MDLSTM) BIBREF27 have an analogous formulation to ours except the INLINEFORM0 term and the fact that we use distinct weights per column (or `layer' in our case). From this view, CAS-LSTM can be seen as a certain kind of MDLSTM that accepts a 2D input INLINEFORM1 . Grid LSTMs BIBREF21 also take INLINEFORM2 inputs but emit INLINEFORM3 outputs, which is different from our case where a single set of hidden and cell states is produced. Sentence Encoders The sentence encoder network we use in our experiments takes INLINEFORM0 words (assumed to be one-hot vectors) as input. The words are projected to corresponding word representations: INLINEFORM1 where INLINEFORM2 . Then INLINEFORM3 is fed to a INLINEFORM4 -layer CAS-LSTM model, resulting in the representations INLINEFORM5 . The sentence representation, INLINEFORM6 , is computed by max-pooling INLINEFORM7 over time as in the work of BIBREF35 . Similar to their results, from preliminary experiments we found that the max-pooling performs consistently better than mean- and last-pooling. 
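To make the layer-to-layer gating described above concrete, here is a minimal sketch of a single CAS-LSTM cell step. It is a sketch only: computing all gates from the concatenation of the lower-layer hidden state and the previous hidden state, and balancing the two forget-gated cell states with a constant weight `lam`, follow the description above but may not match the paper's exact parameterization.

```python
import torch
import torch.nn as nn

class CASLSTMCell(nn.Module):
    """Sketch of a Cell-aware Stacked LSTM cell (one non-bottom layer, one time step).

    h_below, c_below: hidden and cell state from the lower layer at time t
    h_prev,  c_prev:  hidden and cell state of this layer at time t-1
    The lower-layer states are treated like the temporal states, each filtered
    by its own forget gate; `lam` balances the two cell-state contributions.
    """
    def __init__(self, hidden_size, lam=0.5):
        super().__init__()
        self.lam = lam
        # five gates/candidates: input, temporal forget, vertical forget, output, candidate
        self.linear = nn.Linear(2 * hidden_size, 5 * hidden_size)

    def forward(self, h_below, c_below, h_prev, c_prev):
        z = self.linear(torch.cat([h_below, h_prev], dim=-1))
        i, f, f_vert, o, g = z.chunk(5, dim=-1)
        i, f, f_vert, o = map(torch.sigmoid, (i, f, f_vert, o))
        g = torch.tanh(g)
        # left (temporal) and below (vertical) cell states are filtered separately
        c = self.lam * f * c_prev + (1.0 - self.lam) * f_vert * c_below + i * g
        h = o * torch.tanh(c)
        return h, c
```

Stacking such cells gives every non-bottom layer gated access to both the left (temporal) and the below (vertical) context; as noted above, a particular setting of the constant recovers the conventional stacked LSTM.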
To make models more expressive, a bidirectional CAS-LSTM network may also be used. In the bidirectional case, the forward representations INLINEFORM0 and the backward representations INLINEFORM1 are concatenated and max-pooled to yield the sentence representation INLINEFORM2 . We call this bidirectional architecture Bi-CAS-LSTM in experiments. Top-layer Classifiers For the natural language inference experiments, we use the following heuristic function proposed by BIBREF36 in feature extraction: DISPLAYFORM0 where INLINEFORM0 means vector concatenation, and INLINEFORM1 and INLINEFORM2 are applied element-wise. And we use the following function in paraphrase identification experiments: DISPLAYFORM0 as in the work of BIBREF37 . For sentiment classification, we use the sentence representation itself. DISPLAYFORM0 We feed the feature extracted from INLINEFORM0 as input to the MLP classifier with ReLU activation followed by the fully-connected softmax layer to predict the label distribution: DISPLAYFORM0 where INLINEFORM0 , INLINEFORM1 is the number of label classes, and INLINEFORM2 the dimension of the MLP output, Experiments We evaluate our method on natural language inference (NLI), paraphrase identification (PI), and sentiment classification. We also conduct analysis on gate values and experiments on model variants. For detailed experimental settings, we refer readers to the supplemental material. For the NLI and PI tasks, there exists recent work specializing in sentence pair classification. However in this work we confine our model to the architecture that encodes each sentence using a shared encoder without any inter-sentence interaction, in order to focus on the effectiveness of the models in extracting semantics. But note that the applicability of CAS-LSTM is not limited to sentence encoding based approaches. Natural Language Inference For the evaluation of performance of the proposed method on the NLI task, SNLI BIBREF22 and MultiNLI BIBREF23 datasets are used. The objective of both datasets is to predict the relationship between a premise and a hypothesis sentence: entailment, contradiction, and neutral. SNLI and MultiNLI datasets are composed of about 570k and 430k premise-hypothesis pairs respectively. GloVe pretrained word embeddings BIBREF49 are used and remain fixed during training. The dimension of encoder states ( INLINEFORM0 ) is set to 300 and a 1024D MLP with one or two hidden layers is used. We apply dropout BIBREF50 to the word embeddings and the MLP layers. The features used as input to the MLP classifier are extracted following Eq. EQREF28 . Table TABREF32 and TABREF33 contain results of the models on SNLI and MultiNLI datasets. In SNLI, our best model achieves the new state-of-the-art accuracy of 87.0% with relatively fewer parameters. Similarly in MultiNLI, our models match the accuracy of state-of-the-art models in both in-domain (matched) and cross-domain (mismatched) test sets. Note that only the GloVe word vectors are used as word representations, as opposed to some models that introduce character-level features. It is also notable that our proposed architecture does not restrict the selection of pooling method; the performance could further be improved by replacing max-pooling with other advanced algorithms e.g. intra-sentence attention BIBREF39 and generalized pooling BIBREF19 . Paraphrase Identification We use Quora Question Pairs dataset BIBREF24 in evaluating the performance of our method on the PI task. 
The dataset consists of over 400k question pairs, and each pair is annotated with whether the two sentences are paraphrase of each other or not. Similar to the NLI experiments, GloVe pretrained vectors, 300D encoders, and 1024D MLP are used. The number of CAS-LSTM layers is fixed to 2 in PI experiments. Two sentence vectors are aggregated using Eq. EQREF29 and fed as input to the MLP. The results on the Quora Question Pairs dataset are summarized in Table TABREF34 . Again we can see that our models outperform other models by large margin, achieving the new state of the art. Sentiment Classification In evaluating sentiment classification performance, the Stanford Sentiment Treebank (SST) BIBREF25 is used. It consists of about 12,000 binary-parsed sentences where constituents (phrases) of each parse tree are annotated with a sentiment label (very positive, positive, neutral, negative, very negative). Following the convention of prior work, all phrases and their labels are used in training but only the sentence-level data are used in evaluation. In evaluation we consider two settings, namely SST-2 and SST-5, the two differing only in their level of granularity with regard to labels. In SST-2, data samples annotated with `neutral' are ignored from training and evaluation. The two positive labels (very positive, positive) are considered as the same label, and similarly for the two negative labels. As a result 98,794/872/1,821 data samples are used in training/validation/test, and the task is considered as a binary classification problem. In SST-5, data are used as-is and thus the task is a 5-class classification problem. All 318,582/1,101/2,210 data samples for training/validation/test are used in the SST-5 setting. We use 300D GloVe vectors, 2-layer 150D or 300D encoders, and a 300D MLP classifier for the models, however unlike previous experiments we tune the word embeddings during training. The results on SST are listed in Table TABREF35 . Our models achieve the new state-of-the-art accuracy on SST-2 and competitive accuracy on SST-5, without utilizing parse tree information. Forget Gate Analysis To inspect the effect of the additional forget gate, we investigate how the values of vertical forget gates are distributed. We sample 1,000 random sentences from the development set of the SNLI dataset, and use the 3-layer CAS-LSTM model trained on the SNLI dataset to compute gate values. If all values from a vertical forget gate INLINEFORM0 were to be 0, this would mean that the introduction of the additional forget gate is meaningless and the model would reduce to a plain stacked LSTM. On the contrary if all values were 1, meaning that the vertical forget gates were always open, it would be impossible to say that the information is modulated effectively. Fig. FIGREF40 and FIGREF40 represent histograms of the vertical forget gate values from the second and the third layer. From the figures we can validate that the trained model does not fall into the degenerate case where vertical forget gates are ignored. Also the figures show that the values are right-skewed, which we conjecture to be a result of focusing more on a strong interaction between adjacent layers. To further verify that the gate values are diverse enough within each time step, we compute the distribution of the range of values per time step, INLINEFORM0 , where INLINEFORM1 . We plot the histograms in Fig. FIGREF40 and FIGREF40 . 
From the figure we see that a vertical forget gate controls the amount of information flow effectively, making the decision of retaining or discarding signals. Finally, to investigate the argument presented in § SECREF2 that the additional forget gate helps the previous output gate with reducing the burden of extracting all needed information, we inspect the distribution of the values from INLINEFORM0 . This distribution indicates how differently the vertical forget gate and the previous output gate select information from INLINEFORM1 . From Fig. FIGREF40 and FIGREF40 we can see that the two gates make fairly different decisions, from which we demonstrate that the direct path between INLINEFORM2 and INLINEFORM3 enables a model to utilize signals overlooked by INLINEFORM4 . Model Variations In this subsection, we see the influence of each component of a model on performance by removing or replacing its components. the SNLI dataset is used for experiments, and the best performing configuration is used as a baseline for modifications. We consider the following variants: (i) models that use plain stacked LSTMs, (ii) models with different INLINEFORM0 , (iii) models without INLINEFORM1 , and (iv) models that integrate lower contexts via peephole connections. Variant (iv) integrates lower contexts via the following equations: DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 represent peephole weights that take cell states into account. Among the above equations, those that use the lower cell state INLINEFORM1 are Eq. EQREF52 and EQREF55 . We can see that INLINEFORM2 affects the value of INLINEFORM3 only via peephole connections, which makes INLINEFORM4 independent of INLINEFORM5 . Table TABREF36 summarizes the results of model variants. We can again see that the use of cell states clearly improves sentence modeling performance (baseline vs. (i) and (iv) vs. (i)). Also from the results of baseline and (ii), we validate that the selection of INLINEFORM0 does not significantly affect performance but introducing INLINEFORM1 is beneficial (baseline vs. (iii)) possibly due to its effect on normalizing information from multiple sources, as mentioned in § SECREF2 . Finally, from the comparison between baseline and (iv), we show that the proposed way of combining the left and the lower contexts leads to better modeling of sentence representations than that of BIBREF20 in encoding sentences. Conclusion In this paper, we proposed a method of stacking multiple LSTM layers for modeling sentences, dubbed CAS-LSTM. It uses not only hidden states but also cell states from the previous layer, for the purpose of controlling the vertical information flow in a more elaborate way. We evaluated the proposed method on various benchmark tasks: natural language inference, paraphrase identification, and sentiment classification. Our models achieve the new state-of-the-art accuracy on SNLI and Quora Question Pairs datasets and obtain comparable results on MultiNLI and SST datasets. The proposed architecture can replace any stacked LSTM under one weak restriction—the size of states should be identical across all layers. For future work we plan to apply the CAS-LSTM architecture beyond sentence modeling tasks. Various problems e.g. sequence labeling, sequence generation, and language modeling might benefit from sophisticated modulation on context integration. Aggregating diverse contexts from sequential data, e.g. those from forward and backward reading of text, could also be an intriguing research direction. 
Acknowledgments We thank Dan Edmiston for the review of the manuscript.
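As a concrete illustration of the sentence-pair pipeline described in the Sentence Encoders and Top-layer Classifiers sections, here is a minimal sketch of the feature extraction and the MLP classifier for NLI. The composition of the matching features (concatenation, element-wise absolute difference and element-wise product) is an assumption, since Eq. EQREF28 itself is not shown in the text; the 300D sentence vectors and 1024D MLP follow the reported experimental settings.

```python
import torch
import torch.nn as nn

def pair_features(u, v):
    """Assumed matching-feature heuristic over two sentence vectors u and v."""
    return torch.cat([u, v, torch.abs(u - v), u * v], dim=-1)

class PairClassifier(nn.Module):
    """MLP classifier with ReLU over the extracted pair features (softmax in the loss)."""
    def __init__(self, sent_dim=300, mlp_dim=1024, num_classes=3):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4 * sent_dim, mlp_dim),
            nn.ReLU(),
            nn.Linear(mlp_dim, num_classes),
        )

    def forward(self, u, v):
        return self.mlp(pair_features(u, v))  # class logits

# toy usage with random premise/hypothesis vectors from the encoder
u, v = torch.randn(2, 300), torch.randn(2, 300)
logits = PairClassifier()(u, v)
```

In the experiments the sentence vectors would come from max-pooling the CAS-LSTM (or Bi-CAS-LSTM) outputs over time, and the label set would be entailment, contradiction and neutral for NLI or a binary label for paraphrase identification.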
Which models did they experiment with?
Stacked LSTMs, Cell-aware Stacked LSTMs, Sentence Encoders, Top-layer Classifiers
3,210
qasper
4k
Introduction Massive Open Online Courses (MOOCs) have strived to bridge the social gap in higher education by bringing quality education from reputed universities to students at large. Such massive scaling through online classrooms, however, disrupts co-located, synchronous two-way communication between the students and the instructor. MOOC platforms provide discussion forums for students to talk to their classmates about the lectures, homework and quizzes, and to socialise. Instructors (defined here as the course instructors, their teaching assistants and the MOOC platform's technical staff) monitor the discussion forum and post replies in discussion threads among students. We refer to this posting as intervention, following prior work BIBREF0 . However, due to large student enrolment, the student–instructor ratio in MOOCs is very high. Therefore, instructors are not able to monitor and participate in all student discussions. To address this problem, a number of works have proposed systems, e.g., BIBREF0 , BIBREF1 , to aid instructors in selectively intervening on student discussions where they are needed the most. In this paper, we improve the state of the art for instructor intervention in MOOC forums. We propose the first neural models for this prediction problem. We show that modelling the thread structure and the sequence of posts explicitly improves performance. Instructors in different MOOCs from different subject areas intervene differently. For example, on a Science, Technology, Engineering and Mathematics (STEM) MOOC, instructors may often intervene as early as possible to resolve misunderstandings of the subject material and prevent confusion. However, in a Humanities MOOC, instructors allow students to explore open-ended discussions and debate among themselves. Such instructors may prefer to intervene later in the discussion to encourage further discussion or resolve conflicts among students. We therefore propose attention models to infer the latent context, i.e., the series of posts that trigger an intervention. Earlier studies on MOOC forum intervention either model the entire context or require the context size to be specified explicitly. Problem Statement A thread INLINEFORM0 consists of a series of posts INLINEFORM1 through INLINEFORM2 where INLINEFORM3 is an instructor's post when INLINEFORM4 is intervened, if applicable. INLINEFORM5 is considered intervened if an instructor has posted at least once. The problem of predicting instructor intervention is cast as a binary classification problem. Intervened threads are predicted as 1 while non-intervened threads are predicted as 0, given posts INLINEFORM6 through INLINEFORM7 . The primary problem leads to a secondary problem of inferring the appropriate amount of context for an intervention. We define a context INLINEFORM0 of a post INLINEFORM1 as a series of linear contiguous posts INLINEFORM2 through INLINEFORM3 where INLINEFORM4 . The problem of inferring context is to identify the context INLINEFORM5 from a set of candidate contexts INLINEFORM6 . Modelling Context in Forums Context has been used and modelled in various ways for different problems in discussion forums. In work on the closely related problem of forum thread retrieval, BIBREF2 models context using inter-post discourse, e.g., Question-Answer. BIBREF3 models the structural dependencies and relationships between forum posts using a conditional random field in their problem of inferring the reply structure.
Unlike BIBREF2 , BIBREF3 can be used to model any structural dependency and is, therefore, more general. In this paper, we seek to infer general dependencies between a reply and its previous context, whereas BIBREF3 's inference is limited to pairs of posts. More recently, BIBREF4 proposed a context-based model which factorises attention over threads of different lengths. In contrast, we do not model length but the context before a post. However, our attention models cater to threads of all lengths. BIBREF5 proposed a graph-structured LSTM to model the explicit reply structure in Reddit forums. Our work does not assume access to such a reply structure because 1) Coursera forums do not provide one and 2) forum participants often err by posting their reply to a different post than the one they intended. At the other end of the spectrum are document classification models that do not assume structure in the document layout but try to infer inherent structure in the natural language, viz., words, sentences, paragraphs and documents. Hierarchical attention BIBREF6 is a well-known recent work that classifies documents using multi-level LSTMs with an attention mechanism to select important units at each hierarchical level. In contrast, we propose a hierarchical model that encodes the layout hierarchy between a post and a thread but also infers the reply structure using an attention mechanism, since the layout does not reliably encode it. Instructor Intervention in MOOC forums The problem of predicting instructor intervention in MOOCs was proposed by BIBREF0 . Later, BIBREF7 evaluated baseline models by BIBREF0 over a larger corpus and found the results to vary widely across MOOCs. Since then, subsequent works have used similar diverse evaluations on the same prediction problem BIBREF1 , BIBREF8 . BIBREF1 proposed models with discourse features to enable better prediction over unseen MOOCs. BIBREF8 recently showed interventions on Coursera forums to be biased by the position at which a thread appears to an instructor viewing the forum interface and proposed methods for debiased prediction. While all works since BIBREF0 address key limitations in this line of research, they have not investigated the role of structure and sequence in the threaded discussion in predicting instructor interventions. BIBREF0 proposed probabilistic graphical models to model structure and sequence. They inferred vocabulary-dependent latent post categories to model the thread sequence and infer states that triggered intervention. Their model, however, requires a hyperparameter for the number of latent states. It is likely that their empirically reported setting will not generalise due to their weak evaluation BIBREF7 . In this paper, we propose models that infer the context that triggers instructor intervention and that do not require context lengths to be set a priori. All our proposed models generalise over modelling assumptions made by BIBREF0 . For comparison against the state of the art and competing baselines, we choose BIBREF7 since BIBREF0 's system and data are not available for replication. Data and Preprocessing We evaluate our proposed models over a corpus of 12 MOOC iterations (offerings) on Coursera.org. In partnership with Coursera and in line with its Terms of Service, we obtained the data for use in our academic research. Following prior work BIBREF7 , we evaluate over a diverse dataset to represent MOOCs of varying sizes, instructor styles, instructor team sizes and numbers of intervened threads.
We only include threads from sub-forums on Lecture, Homework, Quiz and Exam. We also normalise and label sub-forums with other non-standard names (e.g., Assignments instead of Homework) into one of the four said sub-forums. Threads on general discussion, meet and greet, and other custom sub-forums for social chitchat are omitted as our focus is to aid instructors in intervening on discussions of the subject matter. We also exclude announcement threads and other threads started by instructors since they are not interventions. We preprocess each thread by replacing URLs, equations and other mathematical formulae, and references to timestamps in lecture videos by the tokens INLINEFORM0 URL INLINEFORM1 , INLINEFORM2 MATH INLINEFORM3 , INLINEFORM4 TIMEREF INLINEFORM5 respectively. We also truncate intervened threads to only include posts before the first instructor post, since the instructor's post and subsequent posts would bias the prediction. Model The key innovation of our work is to decompose the intervention prediction problem into a two-stage model that first explicitly tries to discover the proper context to which a potential intervention could be replying, and then predicts the intervention status. This model implicitly assesses the importance (or urgency) of the existing thread's context to decide whether an intervention is necessary. For example, in Figure SECREF1 , prior to the instructor's intervention, the ultimate post (Post #6) by Student 2 already acknowledged the OP's gratitude for his answer. In this regard, the instructor may have decided to use this point to summarize the entire thread to consolidate all the pertinent positions. Here, we might assume that the instructor's reply takes the entire thread (Posts #1–6) as the context for her reply. This subproblem of inferring the context scope is where our innovation centers. To be clear, in order to make the prediction that an instructor intervention is now necessary on a thread, the instructor's reply is not yet available — the model predicts whether a reply is necessary — so in the example, only Posts #1–6 are available in the problem setting. To infer the context, we have to decide which subsequence of posts is the most plausible motivation for an intervention. Recent work in deep neural modeling has used an attention mechanism as a focusing query to highlight specific items within the input history that significantly influence the current decision point. Our work employs this mechanism – but with a twist: because the actual instructor intervention is not (yet) available at decision time, we cannot use any actual intervention to decide the context. To employ attention, we must then employ a surrogate text as the query to train our prediction model. Our model variants assess the suitability of such surrogate texts as the basis for the attention mechanism. Congruent with the representation of the input forums, in all our proposed models, we encode the discussion thread hierarchically. We first build representations for each post by passing pre-trained word vector representations from GloVe BIBREF9 for each word through an LSTM BIBREF10 , INLINEFORM0 . We use the last layer output of the LSTM as a representation of the post. We refer to this as the post vector INLINEFORM1 . Then each post INLINEFORM0 is passed through another LSTM, INLINEFORM1 , whose last layer output forms the encoding of the entire thread.
Hidden unit outputs of INLINEFORM2 represent the contexts INLINEFORM3 ; that is, snapshots of the threads after each post, as shown in Figure FIGREF1 . The INLINEFORM0 and INLINEFORM1 together constitute the hierarchical LSTM (hLSTM) model. This general hLSTM model serves as the basis for our model exploration in the rest of this section. Contextual Attention Models When they intervene, instructors pay attention either to a specific post or to a series of posts, which triggers their reply. However, instructors rarely indicate explicitly which post(s) their intervention relates to. This is the case in our corpus, partly due to Coursera's user interface, which only allows for single-level comments (see Figure FIGREF2 ). Based solely on the binary, thread-level intervention signal, our secondary objective seeks to infer the appropriate context – represented by a sequence of posts – as the basis for the intervention. We only consider linear contiguous series of posts starting with the thread's original post to constitute a context; e.g., INLINEFORM0 . This is reasonable as MOOC forum posts always reply to the original post or to a subsequent post, which in turn replies to the original post. This is in contrast to forums such as Reddit that have a tree- or graph-like structure that requires the forum structure to be modelled explicitly, such as in BIBREF5 . We propose three neural attention BIBREF11 variants based on how an instructor might attend and reply to a context in a thread: the ultimate, penultimate and any post attention models. We review each of these in turn. Ultimate Post Attention (UPA) Model. In this model we attend to the context represented by the hidden states of the INLINEFORM0 . We use the post prior to the instructor's reply as a query over the contexts INLINEFORM1 to compute attention weights INLINEFORM2 , which are then used to compute the attended context representation INLINEFORM3 (recall again that the intervention text itself is not available for this purpose). This attention formulation draws an equivalence between the final INLINEFORM4 post and the prospective intervention, using Post INLINEFORM5 as the query for finding the appropriate context INLINEFORM6 , inclusive of itself INLINEFORM7 . Said another way, UPA uses the most recent content in the thread as the attentional query for context. For example, if post INLINEFORM0 is the instructor's reply, post INLINEFORM1 will query over the contexts INLINEFORM2 and INLINEFORM3 . The model schematic is shown in Figure FIGREF12 . The attended context representations are computed as: DISPLAYFORM0 The INLINEFORM0 representation is then passed through a fully connected softmax layer to yield the binary prediction. Penultimate Post Attention (PPA) Model. While the UPA model uses the most recent text and makes the ultimate post itself available as potential context, the ultimate post may be better modeled as having any of its prior posts as potential context. Our Penultimate Post Attention (PPA) variant does this. The schematic and the equations for the PPA model are obtained by summing over contexts INLINEFORM0 in Equation EQREF10 and Figure FIGREF12 . While we could properly model such a context inference decision with any post INLINEFORM1 and prospective contexts INLINEFORM2 (where INLINEFORM3 is a random post), it makes sense to use the penultimate post, as we can make the most information available to the model for the context inference.
The attended context representations are computed as: DISPLAYFORM0 Any Post Attention (APA) Model. APA further relaxes both UPA and PPA, allowing the model to generalize and hypothesize that the prospective instructor intervention is based on the context that any previous post INLINEFORM0 replied to. In this model, each post INLINEFORM1 is set as a query to attend to its previous context INLINEFORM2 . For example, INLINEFORM3 will attend to INLINEFORM4 . Different from standard attention mechanisms, the APA attention weights INLINEFORM5 are obtained by normalising the interaction matrix over the different queries. In APA, the attention context INLINEFORM0 is computed via: DISPLAYFORM0 Evaluation The baseline and the models are evaluated on a corpus of 12 MOOC discussion forums. We train on 80% of the data and report evaluation results on the held-out 20% test split. We report INLINEFORM0 scores on the positive class (interventions), in line with prior work. We also argue that recall of the positive class is more important than precision, since it is costlier for instructors to miss intervening on a thread than to spend time intervening on less critical threads due to false positives. Model hyperparameter settings. All proposed and baseline neural models are trained using the Adam optimizer with a learning rate of 0.001. We used cross-entropy as the loss function. Importantly, we updated the model parameters during training after each instance, as in vanilla stochastic gradient descent; this setting was practical since the data for most courses had only a few hundred instances, enabling convergence within a reasonable training time of a few hours (see Table TABREF15 , column 2). Models were trained for a single epoch as most of our courses, with a few hundred threads each, converged after a single epoch. We used 300-dimensional GloVe vectors and permitted the embeddings to be updated during the model's end-to-end training. The hidden dimension size of both INLINEFORM0 and INLINEFORM1 is set to 128 for all the models. Baselines. We compare our models against a neural baseline model, the hierarchical LSTM (hLSTM), with the attention ablated but with access to the complete context, and a strong, open-sourced feature-rich baseline BIBREF7 . We choose BIBREF7 over other prior works such as BIBREF0 since we do not have access to the dataset or the system used in their papers for replication. BIBREF7 is a logistic regression classifier with features including a bag-of-words representation of the unigrams, thread length, normalised counts of agreements to previous posts, counts of non-lexical reference items such as URLs, and the Coursera forum type in which a thread appeared. We also report aggregated results from an hLSTM model with access only to the last post as context, for comparison. Table TABREF17 compares the performance of these baselines against our proposed methods. Results Table TABREF15 shows the performance of all our proposed models and the neural baseline over our 12-MOOC dataset. Our UPA and PPA models individually better the baseline by 5% and 2% on INLINEFORM0 and by 3% and 6% on recall, respectively. UPA performs the best in terms of INLINEFORM1 on average while PPA performs the best in terms of recall on average. At the individual course level, however, the results are mixed. UPA performs the best on INLINEFORM2 on 5 out of 12 courses, PPA on 3 out of 12 courses, APA on 1 out of 12 courses and the baseline hLSTM on 1. PPA performs the best on recall on 7 out of the 12 courses.
We also note that course-level performance differences correlate with the course size and the intervention ratio (hereafter, i.ratio), which is the ratio of intervened to non-intervened threads. UPA performs better than PPA and APA on low-intervention courses (i.ratio INLINEFORM3 0.25), mainly because the performance of PPA and APA drops steeply when the i.ratio drops (see the parentheses in col. 2 and INLINEFORM4 of PPA and APA). All the proposed models beat the baseline on every course except casebased-2. On medicalneuro-2 and compilers-4, which have the lowest i.ratio among the 12 courses, none of the neural models better the reported baseline BIBREF7 (course-level scores not shown in this paper). The effect is pronounced in the compilers-4 course, where none of the neural models were able to predict any intervened threads. This is due to the inherent weakness of standard neural models, which are unable to learn features well enough when faced with sparse data. The best performance of UPA indicates that the reply context of the instructor's post INLINEFORM0 correlates strongly with that of the previous post INLINEFORM1 . This is not surprising since normal conversations are typically structured that way. Discussion In order to further understand the models' ability to infer the context and its effect on intervention prediction, we investigate the following research questions. RQ1. Does context inference help intervention prediction? In order to understand if context inference is useful for intervention prediction, we ablate the attention components and experiment with the vanilla hierarchical LSTM model. Row 3 of Table TABREF17 shows the macro-averaged result from this experiment. The UPA and PPA attention models better the vanilla hLSTM by 5% and 2% on average in INLINEFORM0 , respectively. Recall that the vanilla hLSTM already has access to a context consisting of all posts (from INLINEFORM1 through INLINEFORM2 ). In contrast, the UPA and PPA models selectively infer a context for the INLINEFORM3 and INLINEFORM4 posts, respectively, and use it to predict intervention. The improved performance of our attention models, which actively select their optimal context, over a model with the complete thread as context (hLSTM) shows that context inference improves intervention prediction over using the default full context. RQ2. How well do the models perform across threads of different lengths? To understand the models' prediction performance across threads of different lengths, we bin threads by length and study the models' recall. We choose three courses, ml-5, rprog-3 and calc-1, from our corpus of 12 with the highest number of positive instances ( INLINEFORM0 100 threads). We limit our analysis to these since binning renders courses with fewer positive instances sparse. Figure FIGREF18 shows performance across thread lengths from 1 through 7 posts and INLINEFORM1 posts. Clearly, the UPA model performs much better on shorter threads than on longer threads, while PPA and APA work better on longer threads. Although UPA is the best performing model in terms of overall INLINEFORM2 , its performance drops steeply on threads of length INLINEFORM3 . UPA's overall best performance is because most of the interventions in the corpus happen after one post. To highlight the performance of APA, we show an example thread from smac-1 with nine posts in Figure FIGREF22 , which was predicted correctly as intervened by APA but not by the other models. The thread shows students confused over a missing figure in a homework.
The instructor finally shows up, though late, to resolve the confusion. RQ3. Do models trained with different context lengths perform better than when trained on a single context length? We find that context length has a regularising effect on the model's performance at test time. This is not surprising since models trained with threads of a single context length will not generalise to infer different context lengths. Row 4 of Table TABREF17 shows a steep performance drop when the classifier is trained with all threads truncated to a context of just one post, INLINEFORM0 , the post immediately preceding the intervened post. We also conducted an experiment with a multi-objective loss function with an additive cross-entropy term, where each term computes the loss from a model with context limited to a length of 3. We chose 3 since intervened threads in all the courses had a median length between 3 and 4. We achieved an INLINEFORM1 of 0.45 with a precision of 0.47 and a recall of 0.43. This performance is comparable to that of BIBREF7 with the context length set to 3. This approach of using one loss term for every context length from 1 through the maximum thread length in a course is naive and not practical. We only use this model to show the importance of training the model with loss from threads of different lengths, to prevent it from overfitting to threads of specific context lengths. Conclusion We predict instructor intervention on student discussions by first inferring the optimal size of the context needed to make the intervention decision. We first show that a structured representation of the complete thread as the context is better than a bag-of-words, feature-rich representation. We then propose attention-based models to infer and select a context – defined as a contiguous subsequence of student posts – to improve over a model that always takes the complete thread as context to predict intervention. Our Any Post Attention (APA) model enables instructors to tune the model to predict intervention early or late. We posit our APA model will enable MOOC instructors employing varying pedagogical styles to use the model equally well. We introspect the attention models' performance across threads of varying lengths and show that APA better predicts intervention on longer threads, which possess more candidate contexts. We note that the recall of the predictive models for longer threads (that is, threads of length greater than 2) can still be improved. Models perform differently on shorter and longer threads. An ensemble model or a multi-objective loss function is thus planned in our future work to improve prediction on such longer threads.
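To make the hierarchical encoder and the Ultimate Post Attention mechanism concrete, here is a minimal sketch of scoring a single thread for intervention. The dot-product form of the attention, the tensor shapes and the hyperparameter values are illustrative assumptions; Equation EQREF10 is not fully shown in the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UPAIntervention(nn.Module):
    """Sketch of the hierarchical LSTM with Ultimate Post Attention (UPA)."""
    def __init__(self, emb_dim=300, hidden=128):
        super().__init__()
        self.word_lstm = nn.LSTM(emb_dim, hidden, batch_first=True)  # LSTM1: words -> post vector
        self.post_lstm = nn.LSTM(hidden, hidden, batch_first=True)   # LSTM2: posts -> context snapshots
        self.classifier = nn.Linear(hidden, 2)

    def forward(self, posts):
        # posts: (num_posts, max_words, emb_dim), one pre-embedded thread
        _, (h_n, _) = self.word_lstm(posts)                 # (1, num_posts, hidden)
        post_vecs = h_n.squeeze(0)                          # post vectors p_1 .. p_k
        contexts, _ = self.post_lstm(post_vecs.unsqueeze(0))
        contexts = contexts.squeeze(0)                      # context snapshots c_1 .. c_k
        query = post_vecs[-1]                               # ultimate post as the attention query
        alpha = F.softmax(contexts @ query, dim=0)          # attention weights over contexts
        attended = (alpha.unsqueeze(1) * contexts).sum(dim=0)
        return self.classifier(attended)                    # logits: intervene / do not intervene

# toy usage: a thread of 6 posts, 20 words each, with 300-d word embeddings
thread = torch.randn(6, 20, 300)
logits = UPAIntervention()(thread)
```

The PPA and APA variants differ only in which post serves as the attention query and which context snapshots it is allowed to attend over.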
What was the previous state of the art for this task?
hLSTM
3,725
qasper
4k
Introduction Natural text generation, as a key task in NLP, has been advanced substantially thanks to the flourish of neural models BIBREF0 , BIBREF1 . Typical frameworks such as sequence-to-sequence (seq2seq) have been applied to various generation tasks, including machine translation BIBREF2 and dialogue generation BIBREF3 . The standard paradigm to train such neural models is maximum likelihood estimation (MLE), which maximizes the log-likelihood of observing each word in the text given the ground-truth proceeding context BIBREF4 . Although widely used, MLE suffers from the exposure bias problem BIBREF5 , BIBREF6 : during test, the model sequentially predicts the next word conditioned on its previous generated words while during training conditioned on ground-truth words. To tackle this problem, generative adversarial networks (GAN) with reinforcement learning (RL) training approaches have been introduced to text generation tasks BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , where the discriminator is trained to distinguish real and generated text samples to provide reward signals for the generator, and the generator is optimized via policy gradient BIBREF7 . However, recent studies have shown that potential issues of training GANs on discrete data are more severe than exposure bias BIBREF14 , BIBREF15 . One of the fundamental issues when generating discrete text samples with GANs is training instability. Updating the generator with policy gradient always leads to an unstable training process because it's difficult for the generator to derive positive and stable reward signals from the discriminator even with careful pre-training BIBREF8 . As a result, the generator gets lost due to the high variance of reward signals and the training process may finally collapse BIBREF16 . In this paper, we propose a novel adversarial training framework called Adversarial Reward Augmented Maximum Likelihood (ARAML) to deal with the instability issue of training GANs for text generation. At each iteration of adversarial training, we first train the discriminator to assign higher rewards to real data than to generated samples. Then, inspired by reward augmented maximum likelihood (RAML) BIBREF17 , the generator is updated on the samples acquired from a stationary distribution with maximum likelihood estimation (MLE), weighted by the discriminator's rewards. This stationary distribution is designed to guarantee that training samples are surrounding the real data, thus the exploration space of our generator is indeed restricted by the MLE training objective, resulting in more stable training. Compared to other text GANs with RL training techniques, our framework acquires samples from the stationary distribution rather than the generator's distribution, and uses RAML training paradigm to optimize the generator instead of policy gradient. Our contributions are mainly as follows: Related Work Recently, text generation has been widely studied with neural models trained with maximum likelihood estimation BIBREF4 . However, MLE tends to generate universal text BIBREF18 . Various methods have been proposed to enhance the generation quality by refining the objective function BIBREF18 , BIBREF19 or modifying the generation distribution with external information like topic BIBREF20 , sentence type BIBREF21 , emotion BIBREF22 and knowledge BIBREF23 . As mentioned above, MLE suffers from the exposure bias problem BIBREF5 , BIBREF6 . 
Thus, reinforcement learning has been introduced to text generation tasks such as policy gradient BIBREF6 and actor-critic BIBREF24 . BIBREF17 proposed an efficient and stable approach called Reward Augmented Maximum Likelihood (RAML), which connects the log-likelihood and expected rewards to incorporate MLE training objective into RL framework. Since some text generation tasks have no explicit metrics to be directly optimized, adversarial training has been applied to generating discrete text samples with a discriminator to learn a proper reward. For instance, SeqGAN BIBREF7 devised a discriminator to distinguish the real data and generated samples, and a generator to maximize the reward from the discriminator via policy gradient. Other variants of GANs have been proposed to improve the generator or the discriminator. To improve the generator, MaliGAN BIBREF8 developed a normalized maximum likelihood optimization target for the generator to stably model the discrete sequences. LeakGAN BIBREF11 guided the generator with reward signals leaked from the discriminator at all generation steps to deal with long text generation task. MaskGAN BIBREF10 employed an actor-critic architecture to make the generator fill in missing text conditioned on the surrounding context, which is expected to mitigate the problem of mode collapse. As for the discriminator, RankGAN BIBREF9 replaced traditional discriminator with a ranker to learn the relative ranking information between the real texts and generated ones. Inverse reinforcement learning BIBREF12 used a trainable reward approximator as the discriminator to provide dense reward signals at each generation step. DPGAN BIBREF13 introduced a language model based discriminator and regarded cross-entropy as rewards to promote the diversity of generation results. The most similar works to our model are RAML BIBREF17 and MaliGAN BIBREF8 : 1) Compared with RAML, our model adds a discriminator to learn the reward signals instead of choosing existing metrics as rewards. We believe that our model can adapt to various text generation tasks, particularly those without explicit evaluation metrics. 2) Unlike MaliGAN, we acquire samples from a fixed distribution near the real data rather than the generator's distribution, which is expected to make the training process more stable. Task Definition and Model Overview Text generation can be formulated as follows: given the real data distribution INLINEFORM0 , the task is to train a generative model INLINEFORM1 where INLINEFORM2 can fit INLINEFORM3 well. In this formulation, INLINEFORM4 and INLINEFORM5 denotes a word in the vocabulary INLINEFORM6 . Figure FIGREF3 shows the overview of our model ARAML. This adversarial training framework consists of two phases: 1) The discriminator is trained to assign higher rewards to real data than to generated data. 2) The generator is trained on the samples acquired from a stationary distribution with reward augmented MLE training objective. This training paradigm of the generator indeed constrains the search space with the MLE training objective, which alleviates the issue of unstable training. Discriminator The discriminator INLINEFORM0 aims to distinguish real data and generated data like other GANs. Inspired by Least-Square GAN BIBREF25 , we devise the loss function as follows: DISPLAYFORM0 This loss function forces the discriminator to assign higher rewards to real data than to generated data, so the discriminator can learn to provide more proper rewards as the training proceeds. 
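As an illustration of the discriminator objective described above, here is a minimal least-squares-style sketch. The target values of 1 for real data and 0 for generated data are assumptions, since Eq. EQREF6 itself is not shown in the text.

```python
import torch

def discriminator_loss(d_real, d_fake):
    """Least-squares GAN style loss sketch for the ARAML discriminator.

    d_real: discriminator outputs on real sentences, shape (batch,)
    d_fake: discriminator outputs on generated sentences, shape (batch,)
    """
    return ((d_real - 1.0) ** 2).mean() + (d_fake ** 2).mean()

# toy usage: push outputs on real data towards 1 and on generated data towards 0
loss = discriminator_loss(torch.rand(8), torch.rand(8))
```

The discriminator's output on a sample is later reused as part of the reward that weights the generator's maximum likelihood update.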
Generator The training objective of our generator INLINEFORM0 is derived from the objective of other discrete GANs with RL training method: DISPLAYFORM0 where INLINEFORM0 denotes the rewards from the discriminator INLINEFORM1 and the entropy regularized term INLINEFORM2 encourages INLINEFORM3 to generate diverse text samples. INLINEFORM4 is a temperature hyper-parameter to balance these two terms. As mentioned above, discrete GANs suffer from the instability issue due to policy gradient, thus they are consequently difficult to train. Inspired by RAML BIBREF17 , we introduce an exponential payoff distribution INLINEFORM0 to connect RL loss with RAML loss: DISPLAYFORM0 where INLINEFORM0 . Thus, we can rewrite INLINEFORM1 with INLINEFORM2 and INLINEFORM3 as follows: DISPLAYFORM0 Following RAML, we remove the constant term and optimize the KL divergence in the opposite direction: DISPLAYFORM0 where INLINEFORM0 is a constant in the training phase of the generator. It has been proved that INLINEFORM1 and INLINEFORM2 are equivalent up to their first order Taylor approximations, and they have the same global optimum BIBREF17 . INLINEFORM3 can be trained in a MLE-like fashion but sampling from the distribution INLINEFORM4 is intractable in the adversarial setting, because INLINEFORM5 varies with the discriminator INLINEFORM6 . Thus, we introduce importance sampling to separate sampling process from INLINEFORM7 and obtain the final loss function: DISPLAYFORM0 where INLINEFORM0 denotes a stationary distribution and INLINEFORM1 . To optimize this loss function, we first construct the fixed distribution INLINEFORM2 to get samples, and devise the proper reward function INLINEFORM3 to train the generator in a stable and effective way. We construct the distribution INLINEFORM0 based on INLINEFORM1 : DISPLAYFORM0 In this way, INLINEFORM0 can be designed to guarantee that INLINEFORM1 is near INLINEFORM2 , leading to a more stable training process. To obtain a new sample INLINEFORM3 from a real data sample INLINEFORM4 , we can design three steps which contain sampling an edit distance INLINEFORM5 , the positions INLINEFORM6 for substitution and the new words INLINEFORM7 filled into the corresponding positions. Thus, INLINEFORM8 can be decomposed into three terms: DISPLAYFORM0 The first step is to sample an edit distance based on a real data sample INLINEFORM0 , where INLINEFORM1 is a sequence of length INLINEFORM2 . The number of sentences which have the edit distance INLINEFORM3 to some input sentence can be computed approximately as below: DISPLAYFORM0 where INLINEFORM0 denotes the number of sentences which have an edit distance INLINEFORM1 to a sentence of length INLINEFORM2 , and INLINEFORM3 indicates the size of vocabulary. We then follow BIBREF17 to re-scale the counts by INLINEFORM4 and do normalization, so that we can sample an edit distance INLINEFORM5 from: DISPLAYFORM0 where INLINEFORM0 , as a temperature hyper-parameter, restricts the search space surrounding the original sentence. Larger INLINEFORM1 brings more samples with long edit distances. The next step is to select positions for substitution based on the sampled edit distance INLINEFORM0 . Intuitively, we can randomly choose INLINEFORM1 distinct positions in INLINEFORM2 to be replaced by new words. The probability of choosing the position INLINEFORM3 is calculated as follows: DISPLAYFORM0 Following this sampling strategy, we can obtain the position set INLINEFORM0 . 
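To make the first two sampling steps concrete (the word-substitution step is described next in the text), here is a small NumPy sketch. The exp(-e/tau) re-scaling of the counts follows the RAML convention and, like the uniform choice of positions, is an illustrative assumption where the text gives the formula only through placeholders.

```python
import numpy as np
from math import comb, log

def sample_edit_distance_and_positions(sentence_len, vocab_size, tau=0.85,
                                       rng=np.random.default_rng()):
    m, V = sentence_len, vocab_size
    # Step 1: the number of length-m sentences at substitution distance e is
    # roughly C(m, e) * (V - 1)^e; re-scale by exp(-e / tau) and normalize.
    log_w = np.array([log(comb(m, e)) + e * log(V - 1) - e / tau
                      for e in range(m + 1)])
    probs = np.exp(log_w - log_w.max())
    probs /= probs.sum()
    e = int(rng.choice(m + 1, p=probs))
    # Step 2: choose e distinct positions of the sentence uniformly at random.
    positions = rng.choice(m, size=e, replace=False)
    return e, positions
```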
This strategy approximately guarantees that the edit distance between a new sentence and the original sentence is INLINEFORM1 . At the final step, our model determines new words for substitution at each sampled position INLINEFORM0 . We can formulate this sampling process from the original sequence INLINEFORM1 to a new sample INLINEFORM2 as a sequential transition INLINEFORM3 . At each step from INLINEFORM4 to INLINEFORM5 INLINEFORM6 , we first sample a new word INLINEFORM7 from the distribution INLINEFORM8 , then replace the old word at position INLINEFORM9 of INLINEFORM10 to obtain INLINEFORM11 . The whole sampling process can be decomposed as follows: DISPLAYFORM0 There are two common sampling strategies to model INLINEFORM0 , i.e. random sampling and constrained sampling. Random sampling strategy samples a new word INLINEFORM1 according to the uniform distribution over the vocabulary INLINEFORM2 BIBREF17 , while constrained sampling strategy samples INLINEFORM3 to maximize the language model score of the target sentence INLINEFORM4 BIBREF26 , BIBREF27 . Here, we adopt constrained sampling in our model and compare the performances of two strategies in the experiment. We devise the reward function INLINEFORM0 according to the discriminator's output INLINEFORM1 and the stationary distribution INLINEFORM2 : DISPLAYFORM0 Intuitively, this reward function encourages the generator to generate sentences with large sampling probability and high rewards from the discriminator. Thus, the weight of samples INLINEFORM0 can be calculated as follows: DISPLAYFORM0 So far, we can successfully optimize the generator's loss INLINEFORM0 via Equation EQREF12 . This training paradigm makes our generator avoid possible variances caused by policy gradient and get more stable reward signals from the discriminator, because our generator is restricted to explore the training samples near the real data. [htb] Adversarial Reward Augmented Maximum Likelihood [1] Total adversarial training iterations: INLINEFORM0 Steps of training generator: INLINEFORM0 Steps of training discriminator: INLINEFORM0 Pre-train the generator INLINEFORM0 with MLE loss Generate samples from INLINEFORM1 Pre-train the discriminator INLINEFORM2 via Eq.( EQREF6 ) Construct INLINEFORM3 via Eq.( EQREF14 ) - Eq.( EQREF19 ) each INLINEFORM4 each INLINEFORM5 Update INLINEFORM6 via Eq.( EQREF12 ) each INLINEFORM7 Update INLINEFORM8 via Eq.( EQREF6 ) Extension to Conditional Text Generation We have shown our adversarial training framework for text generation tasks without an input. Actually, it can also be extended to conditional text generation tasks like dialogue generation. Given the data distribution INLINEFORM0 where INLINEFORM1 denote contexts and responses respectively, the objective function of ARAML's generator can be modified as below: DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 is trained to distinguish whether INLINEFORM2 is the true response to INLINEFORM3 . Comparison with RAML and MaliGAN The most similar works to our framework are RAML BIBREF17 and MaliGAN BIBREF8 . The main difference among them is the training objective of their generators. We have shown different objective functions in Table TABREF26 . For comparison, we use the form with no input for all the three models. Our model is greatly inspired by RAML, which gets samples from a non-parametric distribution INLINEFORM0 constructed based on a specific reward. 
Compared to RAML, our reward comes from a learnable discriminator which varies as the adversarial training proceeds rather than a specific reward function. This difference equips our framework with the ability to adapt to the text generation tasks with no explicit evaluation metrics as rewards. Our model is also similar to MaliGAN, which gets samples from the generator's distribution. In MaliGAN's training objective, INLINEFORM0 also indicates the generator's distribution but it's used in the sampling phase and fixed at each optimization step. The weight of samples INLINEFORM1 . Different from our model, MaliGAN acquires samples from the generator's distribution INLINEFORM2 , which usually brings samples with low rewards even with careful pre-training for the generator, leading to training instability. Instead, our framework gets samples from a stationary distribution INLINEFORM3 around real data, thus our training process is more stable. Datasets We evaluated ARAML on three datasets: COCO image caption dataset BIBREF28 , EMNLP2017 WMT dataset and WeiboDial single-turn dialogue dataset BIBREF29 . COCO and EMNLP2017 WMT are the common benchmarks with no input to evaluate the performance of discrete GANs, and we followed the existing works to preprocess these datasets BIBREF12 , BIBREF11 . WeiboDial, as a dialogue dataset, was applied to test the performance of our model with input trigger. We simply removed post-response pairs containing low-frequency words and randomly selected a subset for our training/test set. The statistics of three datasets are presented in Table TABREF28 . Baselines We compared our model with MLE, RL and GAN baselines. Since COCO and EMNLP2017 WMT don't have input while WeiboDial regards posts as input, we chose the following baselines respectively: MLE: a RNN model trained with MLE objective BIBREF4 . Its extension, Seq2Seq, can work on the dialogue dataset BIBREF2 . SeqGAN: The first text GAN model that updates the generator with policy gradient based on the rewards from the discriminator BIBREF7 . LeakGAN: A variant of SeqGAN that provides rewards based on the leaked information of the discriminator for the generator BIBREF11 . MaliGAN: A variant of SeqGAN that optimizes the generator with a normalized maximum likelihood objective BIBREF8 . IRL: This inverse reinforcement learning method replaces the discriminator with a reward approximator to provide dense rewards BIBREF12 . RAML: A RL approach to incorporate MLE objective into RL training framework, which regards BLEU as rewards BIBREF17 . DialogGAN: An extension of SeqGAN tuned to dialogue generation task with MLE objective added to the adversarial objective BIBREF16 . DPGAN: A variant of DialogGAN which uses a language model based discriminator and regards cross-entropy as rewards BIBREF13 . Note that MLE, SeqGAN, LeakGAN, MaliGAN and IRL are the baselines on COCO and EMNLP2017 WMT, while MLE, RAML, DialogGAN, and DPGAN on WeiboDial. The original codes are used to test the baselines. Implementation Details The implementation details of our model are shown in Table TABREF31 . For COCO / EMNLP2017, the generator is a LSTM unit BIBREF30 with 128 cells, and the discriminator is implemented based on BIBREF7 . For WeiboDial, the generator is an encoder-decoder structure with attention mechanism, where both the encoder and the decoder consist of a two-layer GRU BIBREF31 with 128 cells. The discriminator is implemented based on BIBREF32 . 
The language model used in the constrained sampling of ARAML is implemented in the same setting as the generators, and is pre-trained on the training set of each dataset. The codes and the datasets are available at https://github.com/kepei1106/ARAML. As for the details of the baselines, the generators of all the baselines except LeakGAN are the same as ours. Note that the generator of LeakGAN consists of a hierarchical LSTM unit, thus we followed the implementation in the original paper. In terms of the differences, the discriminators of GAN baselines are implemented based on the original papers. Other hyper-parameters of baselines including batch size, learning rate, and pre-training epochs, were set based on the original codes, because the convergence of baselines is sensitive to these hyper-parameters. Language Generation on COCO and EMNLP2017 WMT We adopted forward/reverse perplexity BIBREF33 and Self-BLEU BIBREF34 to evaluate the quality of generated texts. Forward perplexity (PPL-F) indicates the perplexity on the generated data provided by a language model trained on real data to measure the fluency of generated samples. Reverse perplexity (PPL-R) switches the roles of generated data and real data to reflect the discrepancy between the generated distribution and the data distribution. Self-BLEU (S-BLEU) regards each sentence in the generated collection as hypothesis and the others as reference to obtain BLEU scores, which evaluates the diversity of generated results. Results are shown in Table TABREF33 . LeakGAN performs best on forward perplexity because it can generate more fluent samples. As for reverse perplexity, our model ARAML beats other baselines, showing that our model can fit the data distribution better. Other GANs, particularly LeakGAN, obtain high reverse perplexity due to mode collapse BIBREF12 , thus they only capture limited fluent expressions, resulting in large discrepancy between the generated distribution and data distribution. ARAML also outperforms the baselines in terms of Self-BLEU, indicating that our model doesn't fall into mode collapse with the help of the MLE training objective and has the ability to generate more diverse sentences. We also provide standard deviation of each metric in Table TABREF33 , reflecting the stability of each model's performance. Our model ARAML nearly achieves the smallest standard deviation in all the metrics, indicating that our framework outperforms policy gradient in the stability of adversarial training. Dialogue Generation on WeiboDial Dialogue evaluation is an open problem and existing works have found that automatic metrics have low correlation to human evaluation BIBREF35 , BIBREF36 , BIBREF37 . Thus, we resorted to manual evaluation to assess the generation quality on WeiboDial. We randomly sampled 200 posts from the test set and collected the generated results from all the models. For each pair of responses (one from ARAML and the other from a baseline, given the same input post), five annotators were hired to label which response is better (i.e. win, lose or tie) in terms of grammaticality (whether a response itself is grammatical and logical) and relevance (whether a response is appropriate and relevant to the post). The two metrics were evaluated independently. The evaluation results are shown in Table TABREF35 . To measure the inter-annotator agreement, we calculated Fleiss' kappa BIBREF38 for each pair-wise comparison where results show moderate agreement ( INLINEFORM0 ). 
We also conducted sign test to check the significance of the differences. As shown in Table TABREF35 , ARAML performs significantly better than other baselines in all the cases. This result indicates that the samples surrounding true responses provide stable rewards for the generator, and stable RAML training paradigm significantly enhances the performance in both metrics. Further Analysis on Stability To verify the training stability, we conducted experiments on COCO many times and chose the best 5 trials for SeqGAN, LeakGAN, IRL, MaliGAN and ARAML, respectively. Then, we presented the forward/reverse perplexity in the training process in Figure FIGREF38 . We can see that our model with smaller standard deviation is more stable than other GAN baselines in both metrics. Although LeakGAN reaches the best forward perplexity, its standard deviation is extremely large and it performs badly in reverse perplexity, indicating that it generates limited expressions that are grammatical yet divergent from the data distribution. Ablation Study The temperature INLINEFORM0 controls the search space surrounding the real data as we analyze in Section UID13 . To investigate its impact on the performance of our model, we fixed all the other hyper-parameters and test ARAML with different temperatures on COCO. The experimental results are shown in Figure FIGREF41 . We can see that as the temperature becomes larger, forward perplexity increases gradually while Self-BLEU decreases. As mentioned in Section UID13 , large temperatures encourage our generator to explore the samples that are distant from real data distribution, thus the diversity of generated results will be improved. However, these samples distant from the data distribution are more likely to be poor in fluency, leading to worse forward perplexity. Reverse perplexity is influenced by both generation quality and diversity, so the correlation between temperature and reverse perplexity is not intuitive. We can observe that the model with INLINEFORM0 reaches the best reverse perplexity. We have mentioned two common sampling strategies in Section UID13 , i.e. random sampling and constrained sampling. To analyze their impact, we keep all the model structures and hyper-parameters fixed and test ARAML with these two strategies on COCO. Table TABREF45 shows the results. It's obvious that random sampling hurts the model performance except Self-BLEU-1, because it indeed allows low-quality samples available to the generator. Exploring these samples degrades the quality and diversity of generated results. Despite the worse performance on automatic metrics, random sampling doesn't affect the training stability of our framework. The standard deviation of ARAML-R is still smaller than other GAN baselines. Case Study Table TABREF47 presents the examples generated by the models on COCO. We can find that other baselines suffer from grammatical errors (e.g. “in front of flying her kite" from MLE), repetitive expressions (e.g. “A group of people" from IRL) and incoherent statements (e.g. “A group of people sitting on a cell phone” from IRL). By contrast, our model performs well in these sentences and has the ability to generate grammatical and coherent results. Table TABREF48 shows the generated examples on WeiboDial. It's obvious that other baselines don't capture the topic word “late" in the post, thus generate irrelevant responses. ARAML can provide a response that is grammatical and closely relevant to the post. 
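For reference, the Self-BLEU diversity metric used in the automatic evaluation above can be sketched as follows; the n-gram order and the smoothing method are illustrative assumptions.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(generated, max_n=4):
    """generated: list of tokenized sentences (each a list of tokens).
    Each sentence is scored against all the others; lower is more diverse."""
    weights = tuple(1.0 / max_n for _ in range(max_n))
    smooth = SmoothingFunction().method1
    scores = []
    for i, hyp in enumerate(generated):
        refs = generated[:i] + generated[i + 1:]
        scores.append(sentence_bleu(refs, hyp, weights=weights,
                                    smoothing_function=smooth))
    return sum(scores) / len(scores)
```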
Conclusion We propose a novel adversarial training framework to deal with the instability problem of current GANs for text generation. To address the instability issue caused by policy gradient, we incorporate RAML into the adversarial training paradigm so that our generator acquires stable rewards. Experiments show that our model outperforms several state-of-the-art GAN baselines with lower training variance on three text generation tasks. Acknowledgments This work was supported by the National Science Foundation of China (Grant No. 61936010/61876096) and the National Key R&D Program of China (Grant No. 2018YFC0830200). We would like to thank THUNUS NExT Joint-Lab for the support.
How much improvement is gained from Adversarial Reward Augmented Maximum Likelihood (ARAML)?
ARAML achieves improvements over all baseline methods on the reverse perplexity and Self-BLEU metrics. The largest reverse perplexity improvement, 936.16, is obtained on the EMNLP2017 WMT dataset, and 48.44 on the COCO dataset.
Introduction Privacy policies are the documents which disclose the ways in which a company gathers, uses, shares and manages a user's data. As legal documents, they function using the principle of notice and choice BIBREF0, where companies post their policies, and theoretically, users read the policies and decide to use a company's products or services only if they find the conditions outlined in its privacy policy acceptable. Many legal jurisdictions around the world accept this framework, including the United States and the European Union BIBREF1, BIBREF2. However, the legitimacy of this framework depends upon users actually reading and understanding privacy policies to determine whether company practices are acceptable to them BIBREF3. In practice this is seldom the case BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. This is further complicated by the highly individual and nuanced compromises that users are willing to make with their data BIBREF11, discouraging a `one-size-fits-all' approach to notice of data practices in privacy documents. With devices constantly monitoring our environment, including our personal space and our bodies, lack of awareness of how our data is being used easily leads to problematic situations where users are outraged by information misuse, but companies insist that users have consented. The discovery of increasingly egregious uses of data by companies, such as the scandals involving Facebook and Cambridge Analytica BIBREF12, have further brought public attention to the privacy concerns of the internet and ubiquitous computing. This makes privacy a well-motivated application domain for NLP researchers, where advances in enabling users to quickly identify the privacy issues most salient to them can potentially have large real-world impact. [1]https://play.google.com/store/apps/details?id=com.gotokeep.keep.intl [2]https://play.google.com/store/apps/details?id=com.viber.voip [3]A question might not have any supporting evidence for an answer within the privacy policy. Motivated by this need, we contribute PrivacyQA, a corpus consisting of 1750 questions about the contents of privacy policies, paired with over 3500 expert annotations. The goal of this effort is to kickstart the development of question-answering methods for this domain, to address the (unrealistic) expectation that a large population should be reading many policies per day. In doing so, we identify several understudied challenges to our ability to answer these questions, with broad implications for systems seeking to serve users' information-seeking intent. By releasing this resource, we hope to provide an impetus to develop systems capable of language understanding in this increasingly important domain. Related Work Prior work has aimed to make privacy policies easier to understand. Prescriptive approaches towards communicating privacy information BIBREF21, BIBREF22, BIBREF23 have not been widely adopted by industry. Recently, there have been significant research effort devoted to understanding privacy policies by leveraging NLP techniques BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28, especially by identifying specific data practices within a privacy policy. We adopt a personalized approach to understanding privacy policies, that allows users to query a document and selectively explore content salient to them. Most similar is the PolisisQA corpus BIBREF29, which examines questions users ask corporations on Twitter. 
Our approach differs in several ways: 1) The PrivacyQA dataset is larger, containing 10x as many questions and answers. 2) Answers are formulated by domain experts with legal training. 3) PrivacyQA includes diverse question types, including unanswerable and subjective questions. Our work is also related to reading comprehension in the open domain, which is frequently based upon Wikipedia passages BIBREF16, BIBREF17, BIBREF15, BIBREF30 and news articles BIBREF20, BIBREF31, BIBREF32. Table.TABREF4 presents the desirable attributes our dataset shares with past approaches. This work is also tied into research in applying NLP approaches to legal documents BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF37, BIBREF38, BIBREF39. While privacy policies have legal implications, their intended audience consists of the general public rather than individuals with legal expertise. This arrangement is problematic because the entities that write privacy policies often have different goals than the audience. feng2015applying, tan-EtAl:2016:P16-1 examine question answering in the insurance domain, another specialized domain similar to privacy, where the intended audience is the general public. Data Collection We describe the data collection methodology used to construct PrivacyQA. With the goal of achieving broad coverage across application types, we collect privacy policies from 35 mobile applications representing a number of different categories in the Google Play Store. One of our goals is to include both policies from well-known applications, which are likely to have carefully-constructed privacy policies, and lesser-known applications with smaller install bases, whose policies might be considerably less sophisticated. Thus, setting 5 million installs as a threshold, we ensure each category includes applications with installs on both sides of this threshold. All policies included in the corpus are in English, and were collected before April 1, 2018, predating many companies' GDPR-focused BIBREF41 updates. We leave it to future studies BIBREF42 to look at the impact of the GDPR (e.g., to what extent GDPR requirements contribute to making it possible to provide users with more informative answers, and to what extent their disclosures continue to omit issues that matter to users). Data Collection ::: Crowdsourced Question Elicitation The intended audience for privacy policies consists of the general public. This informs the decision to elicit questions from crowdworkers on the contents of privacy policies. We choose not to show the contents of privacy policies to crowdworkers, a procedure motivated by a desire to avoid inadvertent biases BIBREF43, BIBREF44, BIBREF45, BIBREF46, BIBREF47, and encourage crowdworkers to ask a variety of questions beyond only asking questions based on practices described in the document. Instead, crowdworkers are presented with public information about a mobile application available on the Google Play Store including its name, description and navigable screenshots. Figure FIGREF9 shows an example of our user interface. Crowdworkers are asked to imagine they have access to a trusted third-party privacy assistant, to whom they can ask any privacy question about a given mobile application. We use the Amazon Mechanical Turk platform and recruit crowdworkers who have been conferred “master” status and are located within the United States of America. 
Turkers are asked to provide five questions per mobile application, and are paid $2 per assignment, taking ~eight minutes to complete the task. Data Collection ::: Answer Selection To identify legally sound answers, we recruit seven experts with legal training to construct answers to Turker questions. Experts identify relevant evidence within the privacy policy, as well as provide meta-annotation on the question's relevance, subjectivity, OPP-115 category BIBREF49, and how likely any privacy policy is to contain the answer to the question asked. Data Collection ::: Analysis Table.TABREF17 presents aggregate statistics of the PrivacyQA dataset. 1750 questions are posed to our imaginary privacy assistant over 35 mobile applications and their associated privacy documents. As an initial step, we formulate the problem of answering user questions as an extractive sentence selection task, ignoring for now background knowledge, statistical data and legal expertise that could otherwise be brought to bear. The dataset is partitioned into a training set featuring 27 mobile applications and 1350 questions, and a test set consisting of 400 questions over 8 policy documents. This ensures that documents in training and test splits are mutually exclusive. Every question is answered by at least one expert. In addition, in order to estimate annotation reliability and provide for better evaluation, every question in the test set is answered by at least two additional experts. Table TABREF14 describes the distribution over first words of questions posed by crowdworkers. We also observe low redundancy in the questions posed by crowdworkers over each policy, with each policy receiving ~49.94 unique questions despite crowdworkers independently posing questions. Questions are on average 8.4 words long. As declining to answer a question can be a legally sound response but is seldom practically useful, answers to questions where a minority of experts abstain to answer are filtered from the dataset. Privacy policies are ~3000 words long on average. The answers to the question asked by the users typically have ~100 words of evidence in the privacy policy document. Data Collection ::: Analysis ::: Categories of Questions Questions are organized under nine categories from the OPP-115 Corpus annotation scheme BIBREF49: First Party Collection/Use: What, why and how information is collected by the service provider Third Party Sharing/Collection: What, why and how information shared with or collected by third parties Data Security: Protection measures for user information Data Retention: How long user information will be stored User Choice/Control: Control options available to users User Access, Edit and Deletion: If/how users can access, edit or delete information Policy Change: Informing users if policy information has been changed International and Specific Audiences: Practices pertaining to a specific group of users Other: General text, contact information or practices not covered by other categories. For each question, domain experts indicate one or more relevant OPP-115 categories. We mark a category as relevant to a question if it is identified as such by at least two annotators. If no such category exists, the category is marked as `Other' if atleast one annotator has identified the `Other' category to be relevant. If neither of these conditions is satisfied, we label the question as having no agreement. The distribution of questions in the corpus across OPP-115 categories is as shown in Table.TABREF16. 
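A small sketch of the category-aggregation rule just described, assuming each expert's annotation is given as a set of OPP-115 category names:

```python
from collections import Counter

def aggregate_categories(expert_annotations):
    """expert_annotations: list of sets of OPP-115 categories, one per expert.
    Returns the relevant categories, {'Other'}, or None for no agreement."""
    counts = Counter(c for ann in expert_annotations for c in ann)
    relevant = {c for c, n in counts.items() if n >= 2}
    if relevant:
        return relevant
    if counts.get('Other', 0) >= 1:
        return {'Other'}
    return None
```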
First party and third party related questions are the largest categories, forming nearly 66.4% of all questions asked to the privacy assistant. Data Collection ::: Analysis ::: Answer Validation When do experts disagree? We would like to analyze the reasons for potential disagreement on the annotation task, to ensure disagreements arise due to valid differences in opinion rather than lack of adequate specification in annotation guidelines. It is important to note that the annotators are experts rather than crowdworkers. Accordingly, their judgements can be considered valid, legally-informed opinions even when their perspectives differ. For the sake of this question we randomly sample 100 instances in the test data and analyze them for likely reasons for disagreements. We consider a disagreement to have occurred when more than one expert does not agree with the majority consensus. By disagreement we mean there is no overlap between the text identified as relevant by one expert and another. We find that the annotators agree on the answer for 74% of the questions, even if the supporting evidence they identify is not identical i.e full overlap. They disagree on the remaining 26%. Sources of apparent disagreement correspond to situations when different experts: have differing interpretations of question intent (11%) (for example, when a user asks 'who can contact me through the app', the questions admits multiple interpretations, including seeking information about the features of the app, asking about first party collection/use of data or asking about third party collection/use of data), identify different sources of evidence for questions that ask if a practice is performed or not (4%), have differing interpretations of policy content (3%), identify a partial answer to a question in the privacy policy (2%) (for example, when the user asks `who is allowed to use the app' a majority of our annotators decline to answer, but the remaining annotators highlight partial evidence in the privacy policy which states that children under the age of 13 are not allowed to use the app), and other legitimate sources of disagreement (6%) which include personal subjective views of the annotators (for example, when the user asks `is my DNA information used in any way other than what is specified', some experts consider the boilerplate text of the privacy policy which states that it abides to practices described in the policy document as sufficient evidence to answer this question, whereas others do not). Experimental Setup We evaluate the ability of machine learning methods to identify relevant evidence for questions in the privacy domain. We establish baselines for the subtask of deciding on the answerability (§SECREF33) of a question, as well as the overall task of identifying evidence for questions from policies (§SECREF37). We describe aspects of the question that can render it unanswerable within the privacy domain (§SECREF41). Experimental Setup ::: Answerability Identification Baselines We define answerability identification as a binary classification task, evaluating model ability to predict if a question can be answered, given a question in isolation. This can serve as a prior for downstream question-answering. We describe three baselines on the answerability task, and find they considerably improve performance over a majority-class baseline. SVM: We define 3 sets of features to characterize each question. 
The first is a simple bag-of-words set of features over the question (SVM-BOW), the second is bag-of-words features of the question as well as length of the question in words (SVM-BOW + LEN), and lastly we extract bag-of-words features, length of the question in words as well as part-of-speech tags for the question (SVM-BOW + LEN + POS). This results in vectors of 200, 201 and 228 dimensions respectively, which are provided to an SVM with a linear kernel. CNN: We utilize a CNN neural encoder for answerability prediction. We use GloVe word embeddings BIBREF50, and a filter size of 5 with 64 filters to encode questions. BERT: BERT BIBREF51 is a bidirectional transformer-based language-model BIBREF52. We fine-tune BERT-base on our binary answerability identification task with a learning rate of 2e-5 for 3 epochs, with a maximum sequence length of 128. Experimental Setup ::: Privacy Question Answering Our goal is to identify evidence within a privacy policy for questions asked by a user. This is framed as an answer sentence selection task, where models identify a set of evidence sentences from all candidate sentences in each policy. Experimental Setup ::: Privacy Question Answering ::: Evaluation Metric Our evaluation metric for answer-sentence selection is sentence-level F1, implemented similar to BIBREF30, BIBREF16. Precision and recall are implemented by measuring the overlap between predicted sentences and sets of gold-reference sentences. We report the average of the maximum F1 from each n$-$1 subset, in relation to the heldout reference. Experimental Setup ::: Privacy Question Answering ::: Baselines We describe baselines on this task, including a human performance baseline. No-Answer Baseline (NA) : Most of the questions we receive are difficult to answer in a legally-sound way on the basis of information present in the privacy policy. We establish a simple baseline to quantify the effect of identifying every question as unanswerable. Word Count Baseline : To quantify the effect of using simple lexical matching to answer the questions, we retrieve the top candidate policy sentences for each question using a word count baseline BIBREF53, which counts the number of question words that also appear in a sentence. We include the top 2, 3 and 5 candidates as baselines. BERT: We implement two BERT-based baselines BIBREF51 for evidence identification. First, we train BERT on each query-policy sentence pair as a binary classification task to identify if the sentence is evidence for the question or not (Bert). We also experiment with a two-stage classifier, where we separately train the model on questions only to predict answerability. At inference time, if the answerable classifier predicts the question is answerable, the evidence identification classifier produces a set of candidate sentences (Bert + Unanswerable). Human Performance: We pick each reference answer provided by an annotator, and compute the F1 with respect to the remaining references, as described in section 4.2.1. Each reference answer is treated as the prediction, and the remaining n-1 answers are treated as the gold reference. The average of the maximum F1 across all reference answers is computed as the human baseline. Results and Discussion The results of the answerability baselines are presented in Table TABREF31, and on answer sentence selection in Table TABREF32. We observe that bert exhibits the best performance on a binary answerability identification task. 
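Two of the simpler components described above, the word-count retrieval baseline and the sentence-level F1 metric, can be sketched as follows (the discussion of the results continues below). Treating each reference answer as a set of evidence sentences and keeping the maximum F1 over references is one plausible reading of the multi-reference protocol, not a verbatim reimplementation.

```python
def word_count_score(question, sentence):
    # Number of question words that also appear in the candidate sentence.
    q_words = set(question.lower().split())
    return sum(1 for w in sentence.lower().split() if w in q_words)

def top_k_candidates(question, policy_sentences, k=3):
    # Word count baseline: retrieve the k highest-overlap policy sentences.
    return sorted(policy_sentences,
                  key=lambda s: word_count_score(question, s),
                  reverse=True)[:k]

def sentence_f1(predicted, gold):
    """predicted, gold: sets of evidence sentence indices."""
    if not predicted and not gold:
        return 1.0          # both abstain: treat as a perfect match
    overlap = len(predicted & gold)
    if overlap == 0:
        return 0.0
    precision = overlap / len(predicted)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

def best_reference_f1(predicted, references):
    # Score against each expert reference and keep the best match.
    return max(sentence_f1(predicted, ref) for ref in references)
```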
However, most baselines considerably exceed the performance of a majority-class baseline. This suggests considerable information in the question, indicating it's possible answerability within this domain. Table.TABREF32 describes the performance of our baselines on the answer sentence selection task. The No-answer (NA) baseline performs at 28 F1, providing a lower bound on performance at this task. We observe that our best-performing baseline, Bert + Unanswerable achieves an F1 of 39.8. This suggest that bert is capable of making some progress towards answering questions in this difficult domain, while still leaving considerable headroom for improvement to reach human performance. Bert + Unanswerable performance suggests that incorporating information about answerability can help in this difficult domain. We examine this challenging phenomena of unanswerability further in Section . Results and Discussion ::: Error Analysis Disagreements are analyzed based on the OPP-115 categories of each question (Table.TABREF34). We compare our best performing BERT variant against the NA model and human performance. We observe significant room for improvement across all categories of questions but especially for first party, third party and data retention categories. We analyze the performance of our strongest BERT variant, to identify classes of errors and directions for future improvement (Table.8). We observe that a majority of answerability mistakes made by the BERT model are questions which are in fact answerable, but are identified as unanswerable by BERT. We observe that BERT makes 124 such mistakes on the test set. We collect expert judgments on relevance, subjectivity , silence and information about how likely the question is to be answered from the privacy policy from our experts. We find that most of these mistakes are relevant questions. However many of them were identified as subjective by the annotators, and at least one annotator marked 19 of these questions as having no answer within the privacy policy. However, only 6 of these questions were unexpected or do not usually have an answer in privacy policies. These findings suggest that a more nuanced understanding of answerability might help improve model performance in his challenging domain. Results and Discussion ::: What makes Questions Unanswerable? We further ask legal experts to identify potential causes of unanswerability of questions. This analysis has considerable implications. While past work BIBREF17 has treated unanswerable questions as homogeneous, a question answering system might wish to have different treatments for different categories of `unanswerable' questions. The following factors were identified to play a role in unanswerability: Incomprehensibility: If a question is incomprehensible to the extent that its meaning is not intelligible. Relevance: Is this question in the scope of what could be answered by reading the privacy policy. Ill-formedness: Is this question ambiguous or vague. An ambiguous statement will typically contain expressions that can refer to multiple potential explanations, whereas a vague statement carries a concept with an unclear or soft definition. Silence: Other policies answer this type of question but this one does not. Atypicality: The question is of a nature such that it is unlikely for any policy policy to have an answer to the question. Our experts attempt to identify the different `unanswerable' factors for all 573 such questions in the corpus. 
4.18% of the questions were identified as being incomprehensible (for example, `any difficulties to occupy the privacy assistant'). Amongst the comprehendable questions, 50% were identified as likely to have an answer within the privacy policy, 33.1% were identified as being privacy-related questions but not within the scope of a privacy policy (e.g., 'has Viber had any privacy breaches in the past?') and 16.9% of questions were identified as completely out-of-scope (e.g., `'will the app consume much space?'). In the questions identified as relevant, 32% were ill-formed questions that were phrased by the user in a manner considered vague or ambiguous. Of the questions that were both relevant as well as `well-formed', 95.7% of the questions were not answered by the policy in question but it was reasonable to expect that a privacy policy would contain an answer. The remaining 4.3% were described as reasonable questions, but of a nature generally not discussed in privacy policies. This suggests that the answerability of questions over privacy policies is a complex issue, and future systems should consider each of these factors when serving user's information seeking intent. We examine a large-scale dataset of “natural” unanswerable questions BIBREF54 based on real user search engine queries to identify if similar unanswerability factors exist. It is important to note that these questions have previously been filtered, according to a criteria for bad questions defined as “(questions that are) ambiguous, incomprehensible, dependent on clear false presuppositions, opinion-seeking, or not clearly a request for factual information.” Annotators made the decision based on the content of the question without viewing the equivalent Wikipedia page. We randomly sample 100 questions from the development set which were identified as unanswerable, and find that 20% of the questions are not questions (e.g., “all I want for christmas is you mariah carey tour”). 12% of questions are unlikely to ever contain an answer on Wikipedia, corresponding closely to our atypicality category. 3% of questions are unlikely to have an answer anywhere (e.g., `what guides Santa home after he has delivered presents?'). 7% of questions are incomplete or open-ended (e.g., `the south west wind blows across nigeria between'). 3% of questions have an unresolvable coreference (e.g., `how do i get to Warsaw Missouri from here'). 4% of questions are vague, and a further 7% have unknown sources of error. 2% still contain false presuppositions (e.g., `what is the only fruit that does not have seeds?') and the remaining 42% do not have an answer within the document. This reinforces our belief that though they have been understudied in past work, any question answering system interacting with real users should expect to receive such unanticipated and unanswerable questions. Conclusion We present PrivacyQA, the first significant corpus of privacy policy questions and more than 3500 expert annotations of relevant answers. The goal of this work is to promote question-answering research in the specialized privacy domain, where it can have large real-world impact. Strong neural baselines on PrivacyQA achieve a performance of only 39.8 F1 on this corpus, indicating considerable room for future research. Further, we shed light on several important considerations that affect the answerability of questions. 
We hope this contribution leads to multidisciplinary efforts to precisely understand user intent and reconcile it with information in policy documents, from both the privacy and NLP communities. Acknowledgements This research was supported in part by grants from the National Science Foundation Secure and Trustworthy Computing program (CNS-1330596, CNS-1330214, CNS-15-13957, CNS-1801316, CNS-1914486, CNS-1914444) and a DARPA Brandeis grant on Personalized Privacy Assistants (FA8750-15-2-0277). The US Government is authorized to reproduce and distribute reprints for Governmental purposes not withstanding any copyright notation. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the NSF, DARPA, or the US Government. The authors would like to extend their gratitude to Elias Wright, Gian Mascioli, Kiara Pillay, Harrison Kay, Eliel Talo, Alexander Fagella and N. Cameron Russell for providing their valuable expertise and insight to this effort. The authors are also grateful to Eduard Hovy, Lorrie Cranor, Florian Schaub, Joel Reidenberg, Aditya Potukuchi and Igor Shalyminov for helpful discussions related to this work, and to the three anonymous reviewers of this draft for their constructive feedback. Finally, the authors would like to thank all crowdworkers who consented to participate in this study.
Were other baselines tested to compare with the neural baseline?
SVM, No-Answer Baseline (NA), Word Count Baseline, Human Performance
Introduction Data imbalance is a common issue in a variety of NLP tasks such as tagging and machine reading comprehension. Table TABREF3 gives concrete examples: for the Named Entity Recognition (NER) task BIBREF2, BIBREF3, most tokens are backgrounds with tagging class $O$. Specifically, the number of tokens tagging class $O$ is 5 times as many as those with entity labels for the CoNLL03 dataset and 8 times for the OntoNotes5.0 dataset; Data-imbalanced issue is more severe for MRC tasks BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8 with the value of negative-positive ratio being 50-200. Data imbalance results in the following two issues: (1) the training-test discrepancy: Without balancing the labels, the learning process tends to converge to a point that strongly biases towards class with the majority label. This actually creates a discrepancy between training and test: at training time, each training instance contributes equally to the objective function while at test time, F1 score concerns more about positive examples; (2) the overwhelming effect of easy-negative examples. As pointed out by meng2019dsreg, significantly large number of negative examples also means that the number of easy-negative example is large. The huge number of easy examples tends to overwhelm the training, making the model not sufficiently learned to distinguish between positive examples and hard-negative examples. The cross-entropy objective (CE for short) or maximum likelihood (MLE) objective, which is widely adopted as the training objective for data-imbalanced NLP tasks BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, handles neither of the issues. To handle the first issue, we propose to replace CE or MLE with losses based on the Sørensen–Dice coefficient BIBREF0 or Tversky index BIBREF1. The Sørensen–Dice coefficient, dice loss for short, is the harmonic mean of precision and recall. It attaches equal importance to false positives (FPs) and false negatives (FNs) and is thus more immune to data-imbalanced datasets. Tversky index extends dice loss by using a weight that trades precision and recall, which can be thought as the approximation of the $F_{\beta }$ score, and thus comes with more flexibility. Therefore, We use dice loss or Tversky index to replace CE loss to address the first issue. Only using dice loss or Tversky index is not enough since they are unable to address the dominating influence of easy-negative examples. This is intrinsically because dice loss is actually a hard version of the F1 score. Taking the binary classification task as an example, at test time, an example will be classified as negative as long as its probability is smaller than 0.5, but training will push the value to 0 as much as possible. This gap isn't a big issue for balanced datasets, but is extremely detrimental if a big proportion of training examples are easy-negative ones: easy-negative examples can easily dominate training since their probabilities can be pushed to 0 fairly easily. Meanwhile, the model can hardly distinguish between hard-negative examples and positive ones. Inspired by the idea of focal loss BIBREF16 in computer vision, we propose a dynamic weight adjusting strategy, which associates each training example with a weight in proportion to $(1-p)$, and this weight dynamically changes as training proceeds. 
This strategy helps to deemphasize confident examples during training as their $p$ approaches the value of 1, makes the model attentive to hard-negative examples, and thus alleviates the dominating effect of easy-negative examples. Combing both strategies, we observe significant performance boosts on a wide range of data imbalanced NLP tasks. Notably, we are able to achieve SOTA results on CTB5 (97.92, +1.86), CTB6 (96.57, +1.80) and UD1.4 (96.98, +2.19) for the POS task; SOTA results on CoNLL03 (93.33, +0.29), OntoNotes5.0 (92.07, +0.96)), MSRA 96.72(+0.97) and OntoNotes4.0 (84.47,+2.36) for the NER task; along with competitive results on the tasks of machine reading comprehension and paraphrase identification. The rest of this paper is organized as follows: related work is presented in Section 2. We describe different training objectives in Section 3. Experimental results are presented in Section 4. We perform ablation studies in Section 5, followed by a brief conclusion in Section 6. Related Work ::: Data Resample The idea of weighting training examples has a long history. Importance sampling BIBREF17 assigns weights to different samples and changes the data distribution. Boosting algorithms such as AdaBoost BIBREF18 select harder examples to train subsequent classifiers. Similarly, hard example mining BIBREF19 downsamples the majority class and exploits the most difficult examples. Oversampling BIBREF20, BIBREF21 is used to balance the data distribution. Another line of data resampling is to dynamically control the weights of examples as training proceeds. For example, focal loss BIBREF16 used a soft weighting scheme that emphasizes harder examples during training. In self-paced learning BIBREF22, example weights are obtained through optimizing the weighted training loss which encourages learning easier examples first. At each training step, self-paced learning algorithm optimizes model parameters and example weights jointly. Other works BIBREF23, BIBREF24 adjusted the weights of different training examples based on training loss. Besides, recent work BIBREF25, BIBREF26 proposed to learn a separate network to predict sample weights. Related Work ::: Data Imbalance Issue in Object Detection The background-object label imbalance issue is severe and thus well studied in the field of object detection BIBREF27, BIBREF28, BIBREF29, BIBREF30, BIBREF31. The idea of hard negative mining (HNM) BIBREF30 has gained much attention recently. shrivastava2016ohem proposed the online hard example mining (OHEM) algorithm in an iterative manner that makes training progressively more difficult, and pushes the model to learn better. ssd2016liu sorted all of the negative samples based on the confidence loss and picking the training examples with the negative-positive ratio at 3:1. pang2019rcnn proposed a novel method called IoU-balanced sampling and aploss2019chen designed a ranking model to replace the conventional classification task with a average-precision loss to alleviate the class imbalance issue. The efforts made on object detection have greatly inspired us to solve the data imbalance issue in NLP. Losses ::: Notation For illustration purposes, we use the binary classification task to demonstrate how different losses work. The mechanism can be easily extended to multi-class classification. Let $\lbrace x_i\rbrace $ denote a set of instances. 
Each $x_i$ is associated with a golden label vector $y_i = [y_{i0},y_{i1} ]$, where $y_{i1}\in \lbrace 0,1\rbrace $ and $y_{i0}\in \lbrace 0,1\rbrace $ respectively denote the positive and negative classes, and thus $y_i$ can be either $[0,1]$ or $[0,1]$. Let $p_i = [p_{i0},p_{i1} ]$ denote the probability vector, and $p_{i1}$ and $p_{i0}$ respectively denote the probability that a model assigns the positive and negative label to $x_i$. Losses ::: Cross Entropy Loss The vanilla cross entropy (CE) loss is given by: As can be seen from Eq.DISPLAY_FORM8, each $x_i$ contributes equally to the final objective. Two strategies are normally used to address the the case where we wish that not all $x_i$ are treated equal: associating different classes with different weighting factor $\alpha $ or resampling the datasets. For the former, Eq.DISPLAY_FORM8 is adjusted as follows: where $\alpha _i\in [0,1]$ may be set by the inverse class frequency or treated as a hyperparameter to set by cross validation. In this work, we use $\lg (\frac{n-n_t}{n_t}+K)$ to calculate the coefficient $\alpha $, where $n_t$ is the number of samples with class $t$ and $n$ is the total number of samples in the training set. $K$ is a hyperparameter to tune. The data resampling strategy constructs a new dataset by sampling training examples from the original dataset based on human-designed criteria, e.g., extract equal training samples from each class. Both strategies are equivalent to changing the data distribution and thus are of the same nature. Empirically, these two methods are not widely used due to the trickiness of selecting $\alpha $ especially for multi-class classification tasks and that inappropriate selection can easily bias towards rare classes BIBREF32. Losses ::: Dice coefficient and Tversky index Sørensen–Dice coefficient BIBREF0, BIBREF33, dice coefficient (DSC) for short, is a F1-oriented statistic used to gauge the similarity of two sets. Given two sets $A$ and $B$, the dice coefficient between them is given as follows: In our case, $A$ is the set that contains of all positive examples predicted by a specific model, and $B$ is the set of all golden positive examples in the dataset. When applied to boolean data with the definition of true positive (TP), false positive (FP), and false negative (FN), it can be then written as follows: For an individual example $x_i$, its corresponding DSC loss is given as follows: As can be seen, for a negative example with $y_{i1}=0$, it does not contribute to the objective. For smoothing purposes, it is common to add a $\gamma $ factor to both the nominator and the denominator, making the form to be as follows: As can be seen, negative examples, with $y_{i1}$ being 0 and DSC being $\frac{\gamma }{ p_{i1}+\gamma }$, also contribute to the training. Additionally, milletari2016v proposed to change the denominator to the square form for faster convergence, which leads to the following dice loss (DL): Another version of DL is to directly compute set-level dice coefficient instead of the sum of individual dice coefficient. We choose the latter due to ease of optimization. Tversky index (TI), which can be thought as the approximation of the $F_{\beta }$ score, extends dice coefficient to a more general case. Given two sets $A$ and $B$, tversky index is computed as follows: Tversky index offers the flexibility in controlling the tradeoff between false-negatives and false-positives. It degenerates to DSC if $\alpha =\beta =0.5$. 
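A hedged PyTorch-style sketch of the dice-family objectives defined above, together with the self-adjusting variant previewed in the introduction and detailed in the next subsection. Here p1 and y1 are tensors holding $p_{i1}$ and $y_{i1}$; the smoothing constant, the mean reduction, and writing each loss as "one minus the coefficient" are illustrative choices.

```python
def dsc_loss(p1, y1, gamma=1.0):
    """Smoothed DSC loss: 1 - (2*p*y + gamma) / (p + y + gamma)."""
    dsc = (2 * p1 * y1 + gamma) / (p1 + y1 + gamma)
    return (1 - dsc).mean()

def dice_loss(p1, y1, gamma=1.0):
    """Dice loss (DL) with the squared denominator for faster convergence."""
    dl = (2 * p1 * y1 + gamma) / (p1 ** 2 + y1 ** 2 + gamma)
    return (1 - dl).mean()

def tversky_loss(p1, y1, alpha=0.5, beta=0.5, gamma=1.0):
    """Tversky loss; degenerates to the DSC loss when alpha = beta = 0.5."""
    tp = p1 * y1                      # soft true positives
    fp = p1 * (1 - y1)                # soft false positives
    fn = (1 - p1) * y1                # soft false negatives
    ti = (tp + gamma) / (tp + alpha * fp + beta * fn + gamma)
    return (1 - ti).mean()

def self_adjusting_dsc_loss(p1, y1, gamma=1.0):
    """DSC loss with the (1 - p) factor that down-weights easy examples."""
    scaled = (1 - p1) * p1
    dsc = (2 * scaled * y1 + gamma) / (scaled + y1 + gamma)
    return (1 - dsc).mean()
```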
The Tversky loss (TL) for the training set $\lbrace x_i,y_i\rbrace $ is thus as follows: Losses ::: Self-adusting Dice Loss Consider a simple case where the dataset consists of only one example $x_i$, which is classified as positive as long as $p_{i1}$ is larger than 0.5. The computation of $F1$ score is actually as follows: Comparing Eq.DISPLAY_FORM14 with Eq.DISPLAY_FORM22, we can see that Eq.DISPLAY_FORM14 is actually a soft form of $F1$, using a continuous $p$ rather than the binary $\mathbb {I}( p_{i1}>0.5)$. This gap isn't a big issue for balanced datasets, but is extremely detrimental if a big proportion of training examples are easy-negative ones: easy-negative examples can easily dominate training since their probabilities can be pushed to 0 fairly easily. Meanwhile, the model can hardly distinguish between hard-negative examples and positive ones, which has a huge negative effect on the final F1 performance. To address this issue, we propose to multiply the soft probability $p$ with a decaying factor $(1-p)$, changing Eq.DISPLAY_FORM22 to the following form: One can think $(1-p_{i1})$ as a weight associated with each example, which changes as training proceeds. The intuition of changing $p_{i1}$ to $(1-p_{i1}) p_{i1}$ is to push down the weight of easy examples. For easy examples whose probability are approaching 0 or 1, $(1-p_{i1}) p_{i1}$ makes the model attach significantly less focus to them. Figure FIGREF23 gives gives an explanation from the perspective in derivative: the derivative of $\frac{(1-p)p}{1+(1-p)p}$ with respect to $p$ approaches 0 immediately after $p$ approaches 0, which means the model attends less to examples once they are correctly classified. A close look at Eq.DISPLAY_FORM14 reveals that it actually mimics the idea of focal loss (FL for short) BIBREF16 for object detection in vision. Focal loss was proposed for one-stage object detector to handle foreground-background tradeoff encountered during training. It down-weights the loss assigned to well-classified examples by adding a $(1-p)^{\beta }$ factor, leading the final loss to be $(1-p)^{\beta }\log p$. In Table TABREF18, we show the losses used in our experiments, which is described in the next section. Experiments We evaluate the proposed method on four NLP tasks: part-of-speech tagging, named entity recognition, machine reading comprehension and paraphrase identification. Baselines in our experiments are optimized by using the standard cross-entropy training objective. Experiments ::: Part-of-Speech Tagging Part-of-speech tagging (POS) is the task of assigning a label (e.g., noun, verb, adjective) to each word in a given text. In this paper, we choose BERT as the backbone and conduct experiments on three Chinese POS datasets. We report the span-level micro-averaged precision, recall and F1 for evaluation. Hyperparameters are tuned on the corresponding development set of each dataset. Experiments ::: Part-of-Speech Tagging ::: Datasets We conduct experiments on the widely used Chinese Treebank 5.0, 6.0 as well as UD1.4. CTB5 is a Chinese dataset for tagging and parsing, which contains 507,222 words, 824,983 characters and 18,782 sentences extracted from newswire sources. CTB6 is an extension of CTB5, containing 781,351 words, 1,285,149 characters and 28,295 sentences. UD is the abbreviation of Universal Dependencies, which is a framework for consistent annotation of grammar (parts of speech, morphological features, and syntactic dependencies) across different human languages. 
In this work, we use UD1.4 for Chinese POS tagging. Experiments ::: Part-of-Speech Tagging ::: Baselines We use the following baselines: Joint-POS: shao2017character jointly learns Chinese word segmentation and POS. Lattice-LSTM: lattice2018zhang constructs a word-character lattice. Bert-Tagger: devlin2018bert treats part-of-speech as a tagging task. Experiments ::: Part-of-Speech Tagging ::: Results Table presents the experimental results on the POS task. As can be seen, the proposed DSC loss outperforms the best baseline results by a large margin, i.e., outperforming BERT-tagger by +1.86 in terms of F1 score on CTB5, +1.80 on CTB6 and +2.19 on UD1.4. As far as we are concerned, we are achieving SOTA performances on the three datasets. Weighted cross entropy and focal loss only gain a little performance improvement on CTB5 and CTB6, and the dice loss obtains huge gain on CTB5 but not on CTB6, which indicates the three losses are not consistently robust in resolving the data imbalance issue. The proposed DSC loss performs robustly on all the three datasets. Experiments ::: Named Entity Recognition Named entity recognition (NER) refers to the task of detecting the span and semantic category of entities from a chunk of text. Our implementation uses the current state-of-the-art BERT-MRC model proposed by xiaoya2019ner as a backbone. For English datasets, we use BERT$_\text{Large}$ English checkpoints, while for Chinese we use the official Chinese checkpoints. We report span-level micro-averaged precision, recall and F1-score. Hyperparameters are tuned on the development set of each dataset. Experiments ::: Named Entity Recognition ::: Datasets For the NER task, we consider both Chinese datasets, i.e., OntoNotes4.0 BIBREF34 and MSRA BIBREF35, and English datasets, i.e., CoNLL2003 BIBREF36 and OntoNotes5.0 BIBREF37. CoNLL2003 is an English dataset with 4 entity types: Location, Organization, Person and Miscellaneous. We followed data processing protocols in BIBREF14. English OntoNotes5.0 consists of texts from a wide variety of sources and contains 18 entity types. We use the standard train/dev/test split of CoNLL2012 shared task. Chinese MSRA performs as a Chinese benchmark dataset containing 3 entity types. Data in MSRA is collected from news domain. Since the development set is not provided in the original MSRA dataset, we randomly split the training set into training and development splits by 9:1. We use the official test set for evaluation. Chinese OntoNotes4.0 is a Chinese dataset and consists of texts from news domain, which has 18 entity types. In this paper, we take the same data split as wu2019glyce did. Experiments ::: Named Entity Recognition ::: Baselines We use the following baselines: ELMo: a tagging model from peters2018deep. Lattice-LSTM: lattice2018zhang constructs a word-character lattice, only used in Chinese datasets. CVT: from kevin2018cross, which uses Cross-View Training(CVT) to improve the representations of a Bi-LSTM encoder. Bert-Tagger: devlin2018bert treats NER as a tagging task. Glyce-BERT: wu2019glyce combines glyph information with BERT pretraining. BERT-MRC: The current SOTA model for both Chinese and English NER datasets proposed by xiaoya2019ner, which formulate NER as machine reading comprehension task. Experiments ::: Named Entity Recognition ::: Results Table shows experimental results on NER datasets. For English datasets including CoNLL2003 and OntoNotes5.0, our proposed method outperforms BERT-MRCBIBREF38 by +0.29 and +0.96 respectively. 
We observe huge performance boosts on Chinese datasets, achieving F1 improvements by +0.97 and +2.36 on MSRA and OntoNotes4.0, respectively. As far as we are concerned, we are setting new SOTA performances on all of the four NER datasets. Experiments ::: Machine Reading Comprehension Machine reading comprehension (MRC) BIBREF39, BIBREF40, BIBREF41, BIBREF40, BIBREF42, BIBREF15 has become a central task in natural language understanding. MRC in the SQuAD-style is to predict the answer span in the passage given a question and the passage. In this paper, we choose the SQuAD-style MRC task and report Extract Match (EM) in addition to F1 score on validation set. All hyperparameters are tuned on the development set of each dataset. Experiments ::: Machine Reading Comprehension ::: Datasets The following five datasets are used for MRC task: SQuAD v1.1, SQuAD v2.0 BIBREF4, BIBREF6 and Quoref BIBREF8. SQuAD v1.1 and SQuAD v2.0 are the most widely used QA benchmarks. SQuAD1.1 is a collection of 100K crowdsourced question-answer pairs, and SQuAD2.0 extends SQuAD1.1 allowing no short answer exists in the provided passage. Quoref is a QA dataset which tests the coreferential reasoning capability of reading comprehension systems, containing 24K questions over 4.7K paragraphs from Wikipedia. Experiments ::: Machine Reading Comprehension ::: Baselines We use the following baselines: QANet: qanet2018 builds a model based on convolutions and self-attention. Convolution to model local interactions and self-attention to model global interactions. BERT: devlin2018bert treats NER as a tagging task. XLNet: xlnet2019 proposes a generalized autoregressive pretraining method that enables learning bidirectional contexts. Experiments ::: Machine Reading Comprehension ::: Results Table shows the experimental results for MRC tasks. With either BERT or XLNet, our proposed DSC loss obtains significant performance boost on both EM and F1. For SQuADv1.1, our proposed method outperforms XLNet by +1.25 in terms of F1 score and +0.84 in terms of EM and achieves 87.65 on EM and 89.51 on F1 for SQuAD v2.0. Moreover, on QuoRef, the proposed method surpasses XLNet results by +1.46 on EM and +1.41 on F1. Another observation is that, XLNet outperforms BERT by a huge margin, and the proposed DSC loss can obtain further performance improvement by an average score above 1.0 in terms of both EM and F1, which indicates the DSC loss is complementary to the model structures. Experiments ::: Paraphrase Identification Paraphrases are textual expressions that have the same semantic meaning using different surface words. Paraphrase identification (PI) is the task of identifying whether two sentences have the same meaning or not. We use BERT BIBREF11 and XLNet BIBREF43 as backbones and report F1 score for comparison. Hyperparameters are tuned on the development set of each dataset. Experiments ::: Paraphrase Identification ::: Datasets We conduct experiments on two widely used datasets for PI task: MRPC BIBREF44 and QQP. MRPC is a corpus of sentence pairs automatically extracted from online news sources, with human annotations of whether the sentence pairs are semantically equivalent. The MRPC dataset has imbalanced classes (68% positive, 32% for negative). QQP is a collection of question pairs from the community question-answering website Quora. The class distribution in QQP is also unbalanced (37% positive, 63% negative). Experiments ::: Paraphrase Identification ::: Results Table shows the results for PI task. 
We find that replacing the training objective with DSC introduces performance boost for both BERT and XLNet. Using DSC loss improves the F1 score by +0.58 for MRPC and +0.73 for QQP. Ablation Studies ::: The Effect of Dice Loss on Accuracy-oriented Tasks We argue that the most commonly used cross-entropy objective is actually accuracy-oriented, whereas the proposed dice loss (DL) performs as a hard version of F1-score. To explore the effect of the dice loss on accuracy-oriented tasks such as text classification, we conduct experiments on the Stanford Sentiment Treebank sentiment classification datasets including SST-2 and SST-5. We fine-tune BERT$_\text{Large}$ with different training objectives. Experiment results for SST are shown in . For SST-5, BERT with CE achieves 55.57 in terms of accuracy, with DL and DSC losses slightly degrade the accuracy performance and achieve 54.63 and 55.19, respectively. For SST-2, BERT with CE achieves 94.9 in terms of accuracy. The same as SST-5, we observe a slight performance drop with DL and DSC, which means that the dice loss actually works well for F1 but not for accuracy. Ablation Studies ::: The Effect of Hyperparameters in Tversky index As mentioned in Section SECREF10, Tversky index (TI) offers the flexibility in controlling the tradeoff between false-negatives and false-positives. In this subsection, we explore the effect of hyperparameters (i.e., $\alpha $ and $\beta $) in TI to test how they manipulate the tradeoff. We conduct experiments on the Chinese OntoNotes4.0 NER dataset and English QuoRef MRC dataset to examine the influence of tradeoff between precision and recall. Experiment results are shown in Table . The highest F1 for Chinese OntoNotes4.0 is 84.67 when $\alpha $ is set to 0.6 while for QuoRef, the highest F1 is 68.44 when $\alpha $ is set to 0.4. In addition, we can observe that the performance varies a lot as $\alpha $ changes in distinct datasets, which shows that the hyperparameters $\alpha ,\beta $ play an important role in the proposed method. Conclusion In this paper, we alleviate the severe data imbalance issue in NLP tasks. We propose to use dice loss in replacement of the standard cross-entropy loss, which performs as a soft version of F1 score. Using dice loss can help narrow the gap between training objectives and evaluation metrics. Empirically, we show that the proposed training objective leads to significant performance boost for part-of-speech, named entity recognition, machine reading comprehension and paraphrase identification tasks.
What are method improvements of F1 for paraphrase identification?
Using DSC loss improves the F1 score by +0.58 for MRPC and +0.73 for QQP
3,566
qasper
4k
Introduction Headline generation is the process of creating a headline-style sentence given an input article. The research community has been regarding the task of headline generation as a summarization task BIBREF1, ignoring the fundamental differences between headlines and summaries. While summaries aim to contain most of the important information from the articles, headlines do not necessarily need to. Instead, a good headline needs to capture people's attention and serve as an irresistible invitation for users to read through the article. For example, the headline “$2 Billion Worth of Free Media for Trump”, which gives only an intriguing hint, is considered better than the summarization style headline “Measuring Trump’s Media Dominance” , as the former gets almost three times the readers as the latter. Generating headlines with many clicks is especially important in this digital age, because many of the revenues of journalism come from online advertisements and getting more user clicks means being more competitive in the market. However, most existing websites naively generate sensational headlines using only keywords or templates. Instead, this paper aims to learn a model that generates sensational headlines based on an input article without labeled data. To generate sensational headlines, there are two main challenges. Firstly, there is a lack of sensationalism scorer to measure how sensational a headline is. Some researchers have tried to manually label headlines as clickbait or non-clickbait BIBREF2, BIBREF3. However, these human-annotated datasets are usually small and expensive to collect. To capture a large variety of sensationalization patterns, we need a cheap and easy way to collect a large number of sensational headlines. Thus, we propose a distant supervision strategy to collect a sensationalism dataset. We regard headlines receiving lots of comments as sensational samples and the headlines generated by a summarization model as non-sensational samples. Experimental results show that by distinguishing these two types of headlines, we can partially teach the model a sense of being sensational. Secondly, after training a sensationalism scorer on our sensationalism dataset, a natural way to generate sensational headlines is to maximize the sensationalism score using reinforcement learning (RL). However, the following shows an example of a RL model maximizing the sensationalism score by generating a very unnatural sentence, while its sensationalism scorer gave a very high score of 0.99996: UTF8gbsn 十个可穿戴产品的设计原则这消息消息可惜说明 Ten design principles for wearable devices, this message message pity introduction. This happens because the sensationalism scorer can make mistakes and RL can generate unnatural phrases which fools our sensationalism scorer. Thus, how to effectively leverage RL with noisy rewards remains an open problem. To deal with the noisy reward, we introduce Auto-tuned Reinforcement Learning (ARL). Our model automatically tunes the ratio between MLE and RL based on how sensational the training headline is. In this way, we effectively take advantage of RL with a noisy reward to generate headlines that are both sensational and fluent. The major contributions of this paper are as follows: 1) To the best of our knowledge, we propose the first-ever model that tackles the sensational headline generation task with reinforcement learning techniques. 
2) Without human-annotated data, we propose a distant supervision strategy to train a sensationalism scorer as a reward function.3) We propose a novel loss function, Auto-tuned Reinforcement Learning, to give dynamic weights to balance between MLE and RL. Our code will be released . Sensationalism Scorer To evaluate the sensationalism intensity score $\alpha _{\text{sen}}$ of a headline, we collect a sensationalism dataset and then train a sensationalism scorer. For the sensationalism dataset collection, we choose headlines with many comments from popular online websites as positive samples. For the negative samples, we propose to use the generated headlines from a sentence summarization model. Intuitively, the summarization model, which is trained to preserve the semantic meaning, will lose the sensationalization ability and thus the generated negative samples will be less sensational than the original one, similar to the obfuscation of style after back-translation BIBREF4. For example, an original headline like UTF8gbsn“一趟挣10万?铁总增开申通、顺丰专列" (One trip to earn 100 thousand? China Railway opens new Shentong and Shunfeng special lines) will become UTF8gbsn“中铁总将增开京广两列快递专列" (China Railway opens two special lines for express) from the baseline model, which loses the sensational phrases of UTF8gbsn“一趟挣10万?" (One trip to earn 100 thousand?) . We then train the sensationalism scorer by classifying sensational and non-sensational headlines using a one-layer CNN with a binary cross entropy loss $L_{\text{sen}}$. Firstly, 1-D convolution is used to extract word features from the input embeddings of a headline. This is followed by a ReLU activation layer and a max-pooling layer along the time dimension. All features from different channels are concatenated together and projected to the sensationalism score by adding another fully connected layer with sigmoid activation. Binary cross entropy is used to compute the loss $L_{\text{sen}}$. Sensationalism Scorer ::: Training Details and Dataset For the CNN model, we choose filter sizes of 1, 3, and 5 respectively. Adam is used to optimize $L_{sen}$ with a learning rate of 0.0001. We set the embedding size as 300 and initialize it from qiu2018revisiting trained on the Weibo corpus with word and character features. We fix the embeddings during training. For dataset collection, we utilize the headlines collected in qin2018automatic, lin2019learning from Tencent News, one of the most popular Chinese news websites, as the positive samples. We follow the same data split as the original paper. As some of the links are not available any more, we get 170,754 training samples and 4,511 validation samples. For the negative training samples collection, we randomly select generated headlines from a pointer generator BIBREF0 model trained on LCSTS dataset BIBREF5 and create a balanced training corpus which includes 351,508 training samples and 9,022 validation samples. To evaluate our trained classifier, we construct a test set by randomly sampling 100 headlines from the test split of LCSTS dataset and the labels are obtained by 11 human annotators. Annotations show that 52% headlines are labeled as positive and 48% headlines as negative by majority voting (The detail on the annotation can be found in Section SECREF26). Sensationalism Scorer ::: Results and Discussion Our classifier achieves 0.65 accuracy and 0.65 averaged F1 score on the test set while a random classifier would only achieve 0.50 accuracy and 0.50 averaged F1 score. 
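The scorer architecture described above (1-D convolutions over fixed word embeddings, ReLU, max-pooling over time, and a sigmoid output trained with binary cross entropy) can be sketched as follows. This is our own minimal PyTorch illustration, not the released code; the number of filters and the padding are assumptions, while the filter sizes (1, 3, 5) and the 300-dimensional embeddings follow the description above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SensationalismScorer(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 300,
                 num_filters: int = 128, filter_sizes=(1, 3, 5)):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.embedding.weight.requires_grad = False  # embeddings are fixed during training
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, num_filters, k, padding=k // 2) for k in filter_sizes])
        self.out = nn.Linear(num_filters * len(filter_sizes), 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embedding(token_ids).transpose(1, 2)             # (batch, emb_dim, time)
        feats = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return torch.sigmoid(self.out(torch.cat(feats, dim=1))).squeeze(-1)

# Training minimizes binary cross entropy between the predicted score and the
# sensational / non-sensational label, e.g.:
# loss = F.binary_cross_entropy(model(token_ids), labels)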
This confirms that the predicted sensationalism score can partially capture the sensationalism of headlines. On the other hand, a more natural choice is to take headlines with few comments as negative examples. Thus, we train another baseline classifier on a crawled balanced sensationalism corpus of 84k headlines where the positive headlines have at least 28 comments and the negative headlines have less than 5 comments. However, the results on the test set show that the baseline classifier gets 60% accuracy, which is worse than the proposed classifier (which achieves 65%). The reason could be that the balanced sensationalism corpus are sampled from different distributions from the test set and it is hard for the trained model to generalize. Therefore, we choose the proposed one as our sensationalism scorer. Therefore, our next challenge is to show that how to leverage this noisy sensationalism reward to generate sensational headlines. Sensational Headline Generation Our sensational headline generation model takes an article as input and output a sensational headline. The model consists of a Pointer-Gen headline generator and is trained by ARL. The diagram of ARL can be found in Figure FIGREF6. We denote the input article as $x=\lbrace x_1,x_2,x_3,\cdots ,x_M\rbrace $, and the corresponding headline as $y^*=\lbrace y_1^*,y_2^*,y_3^*,\cdots ,y_T^*\rbrace $, where $M$ is the number of tokens in an article and $T$ is the number of tokens in a headline. Sensational Headline Generation ::: Pointer-Gen Headline Generator We choose Pointer Generator (Pointer-Gen) BIBREF0, a widely used summarization model, as our headline generator for its ability to copy words from the input article. It takes a news article as input and generates a headline. Firstly, the tokens of each article, $\lbrace x_1,x_2,x_3,\cdots ,x_M\rbrace $, are fed into the encoder one-by-one and the encoder generates a sequence of hidden states $h_i$. For each decoding step $t$, the decoder receives the embedding for each token of a headline $y_t$ as input and updates its hidden states $s_t$. An attention mechanism following luong2015effective is used: where $v$, $W_h$, $W_s$, and $b_{attn}$ are the trainable parameters and $h_t^*$ is the context vector. $s_t$ and $h_t^*$ are then combined to give a probability distribution over the vocabulary through two linear layers: where $V$, $b$, $V^{^{\prime }}$, and $b^{^{\prime }}$ are trainable parameters. We use a pointer generator network to enable our model to copy rare/unknown words from the input article, giving the following final word probability: where $x^t$ is the embedding of the input word of the decoder, $w_{h^*}^T$, $w_s^T$, $w_x^T$, and $b_{ptr}$ are trainable parameters, and $\sigma $ is the sigmoid function. Sensational Headline Generation ::: Training Methods We first briefly introduce MLE and RL objective functions, and a naive way to mix these two by a hyper-parameter $\lambda $. Then we point out the challenge of training with noisy reward, and propose ARL to address this issue. Sensational Headline Generation ::: Training Methods ::: MLE and RL A headline generation model can be trained with MLE, RL or a combination of MLE and RL. MLE training is to minimize the negative log likelihood of the training headlines. We feed $y^*$ into the decoder word by word and maximize the likelihood of $y^*$. The loss function for MLE becomes For RL training, we choose the REINFORCE algorithm BIBREF6. 
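Before turning to the RL training details, the final word distribution of the copy mechanism described above can be sketched as follows. This is a simplified illustration under our own naming (it ignores the extended vocabulary needed for true out-of-vocabulary words): p_vocab is the decoder's vocabulary distribution, attention holds the attention weights over source positions, src_ids the source token ids, and p_gen the generation probability.

import torch

def pointer_generator_distribution(p_vocab: torch.Tensor,    # (batch, vocab_size)
                                   attention: torch.Tensor,  # (batch, src_len)
                                   src_ids: torch.Tensor,    # (batch, src_len)
                                   p_gen: torch.Tensor       # (batch, 1)
                                   ) -> torch.Tensor:
    # P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum of attention over positions where w occurs.
    copy_dist = torch.zeros_like(p_vocab)
    copy_dist.scatter_add_(1, src_ids, attention)  # accumulate attention mass per source token id
    return p_gen * p_vocab + (1 - p_gen) * copy_dist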
In the training phase, after encoding an article, a headline $y^s = \lbrace y_1^s, y_2^s, y_3^s, \cdots , y_T^s\rbrace $ is obtained by sampling from the generator's distribution $P(w)$, and then a reward of sensationalism or ROUGE (RG) is calculated. We use the baseline reward $\hat{R_t}$ to reduce the variance of the reward, similar to ranzato2015sequence. To elaborate, a linear model is deployed to estimate the baseline reward $\hat{R_t}$ based on the $t$-th state $o_t$ for each timestep $t$. The parameters of the linear model are trained by minimizing the mean square loss between $R$ and $\hat{R_t}$: where $W_r$ and $b_r$ are trainable parameters. To maximize the expected reward, our loss function for RL becomes: A naive way to mix these two objective functions using a hyper-parameter $\lambda $ has been successfully incorporated in the summarization task BIBREF7. It includes the MLE training as a language model to mitigate the readability and quality issues in RL. The mixed loss function is shown as follows: where $*$ is the reward type. Usually $\lambda $ is large, and paulus2017deep used 0.9984. Sensational Headline Generation ::: Training Methods ::: Auto-tuned Reinforcement Learning Applying the naive mixed training method with the sensationalism score as the reward is not trivial in our task. The main reason is that our sensationalism reward is notably noisier and more fragile than the ROUGE-L reward or the abstractive reward used in the summarization task BIBREF7, BIBREF8. A higher ROUGE-L F1 reward in summarization statistically indicates a higher overlap between the generated and true summaries, but our sensationalism reward is a learned score which can easily be fooled by unnatural samples. To effectively train the model with RL under a noisy sensationalism reward, our idea is to balance RL with MLE. However, we argue that the weighted ratio between MLE and RL should be sample-dependent, instead of being fixed for all training samples as in paulus2017deep, kryscinski2018improving. The reason is that RL and MLE have inconsistent optimization objectives. When the training headline is non-sensational, MLE training will encourage our model to imitate the training headline (thus generating non-sensational headlines), which counteracts the effect of RL training to generate sensational headlines. The sensationalism score is, therefore, used to give dynamic weights to MLE and RL. Our ARL loss function becomes: If $\alpha _{\text{sen}}(y^*)$ is high, meaning the training headline is sensational, our loss function encourages our model to imitate the sample more through MLE training. If $\alpha _{\text{sen}}(y^*)$ is low, our loss function relies on RL training to improve sensationalism. Note that the weight $\alpha _{\text{sen}}(y^*)$ is different from our sensationalism reward $\alpha _{\text{sen}}(y^s)$, and we call the loss function Auto-tuned Reinforcement Learning because the ratio between MLE and RL is automatically "tuned" for different samples. Sensational Headline Generation ::: Dataset We use LCSTS BIBREF5 as our dataset to train the summarization model. The dataset is collected from the Chinese microblogging website Sina Weibo. It contains over 2 million Chinese short texts with corresponding headlines given by the author of each text. The dataset is split into 2,400,591 samples for training, 10,666 samples for validation and 725 samples for testing. We tokenize each sentence with Jieba and keep a vocabulary of 50,000 words.
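The ARL objective itself is stated as an equation in the original paper and is elided in the text above; a natural reading of the description, which we sketch here as an assumption rather than the authors' exact formula, is a per-sample convex combination of the MLE and RL losses weighted by the sensationalism score of the gold headline.

def arl_loss(loss_mle: float, loss_rl: float, alpha_sen_gold: float) -> float:
    # Auto-tuned RL: a sensational gold headline (high alpha) leans on MLE imitation,
    # a non-sensational one (low alpha) leans on the RL sensationalism reward.
    return alpha_sen_gold * loss_mle + (1 - alpha_sen_gold) * loss_rl

def naive_mixed_loss(loss_mle: float, loss_rl: float, lam: float = 0.9984) -> float:
    # The naive mixture for contrast: a single fixed lambda shared by all samples.
    return lam * loss_rl + (1 - lam) * loss_mle

The contrast with the fixed-lambda mixture makes the design choice explicit: under ARL the MLE/RL trade-off is decided per training sample rather than once for the whole corpus.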
Sensational Headline Generation ::: Baselines and Our Models We experiment and compare with the following models. Pointer-Gen is the baseline model trained by optimizing $L_\text{MLE}$ in Equation DISPLAY_FORM13. Pointer-Gen+Pos is the baseline model by training Pointer-Gen only on positive examples whose sensationalism score is larger than 0.5 Pointer-Gen+Same-FT is the model which fine-tunes Pointer-Gen on the training samples whose sensationalism score is larger than 0.1 Pointer-Gen+Pos-FT is the model which fine-tunes Pointer-Gen on the training samples whose sensationalism score is larger than 0.5 Pointer-Gen+RL-ROUGE is the baseline model trained by optimizing $L_\text{RL-ROUGE}$ in Equation DISPLAY_FORM17, with ROUGE-L BIBREF9 as the reward. Pointer-Gen+RL-SEN is the baseline model trained by optimizing $L_\text{RL-SEN}$ in Equation DISPLAY_FORM17, with $\alpha _\text{sen}$ as the reward. Pointer-Gen+ARL-SEN is our model trained by optimizing $L_\text{ARL-SEN}$ in Equation DISPLAY_FORM19, with $\alpha _\text{sen}$ as the reward. Test set is the headlines from the test set. Note that we didn't compare to Pointer-Gen+ARL-ROUGE as it is actually Pointer-GEN. Recall that $\alpha _{\text{sen}}(y^*)$ in Equation DISPLAY_FORM19 measures how good (based on reward function) is $y^*$. Then the loss function for Pointer-Gen+ARL-ROUGE will be We also tried text style transfer baseline BIBREF10, but the generated headlines were very poor (many unknown words and irrelevant). Sensational Headline Generation ::: Training Details MLE training: An Adam optimizer is used with the learning rate of 0.0001 to optimize $L_{\text{MLE}}$. The batch size is set as 128 and a one-layer, bi-directional Long Short-Term Memory (bi-LSTM) model with 512 hidden sizes and a 350 embedding size is utilized. Gradients with the l2 norm larger than 2.0 are clipped. We stop training when the ROUGE-L f-score stops increasing. Hybrid training: An Adam optimizer with a learning rate of 0.0001 is used to optimize $L_{\text{RL-*}}$ (Equation DISPLAY_FORM17) and $L_\text{{ARL-SEN}}$ (Equation DISPLAY_FORM19). When training Pointer-Gen+RL-ROUGE, the best $\lambda $ is chosen based on the ROUGE-L score on the validation set. In our experiment, $\lambda $ is set as 0.95. An Adam optimizer with a learning rate of 0.001 is used to optimize $L_b$. When training Pointer-Gen+ARL-SEN, we don't use the full LCSTS dataset, but only headlines with a sensationalism score larger than 0.1 as we observe that Pointer-Gen+ARL-SEN will generate a few unnatural phrases when using full dataset. We believe the reason is the high ratio of RL during training. Figure FIGREF23 shows that the probability density near 0 is very high, meaning that in each batch, many of the samples will have a very low sensationalism score. On expectation, each sample will receive 0.239 MLE training and 0.761 RL training. This leads to RL dominanting the loss. Thus, we propose to filter samples with a minimum sensationalism score with 0.1 and it works very well. For Pointer-Gen+RL-SEN, we also set the minimum sensationalism score as 0.1, and $\lambda $ is set as 0.5 to remove unnatural phrases, making a fair comparison to Pointer-Gen+ARL-SEN. We stop training Pointer-Gen+Same-FT, Pointer-Gen+Pos-FT, Pointer-Gen+RL-SEN and Pointer-Gen+ARL-SEN, when $\alpha _\text{sen}$ stops increasing on the validation set. Beam-search with a beam size of 5 is adopted for decoding in all models. 
Sensational Headline Generation ::: Evaluation Metrics We briefly describe the evaluation metrics below. ROUGE: ROUGE is a commonly used evaluation metric for summarization. It measures the N-gram overlap between generated and training headlines. We use it to evaluate the relevance of generated headlines. The widely used pyrouge toolkit is used to calculate ROUGE-1 (RG-1), ROUGE-2 (RG-2), and ROUGE-L (RG-L). Human evaluation: We randomly sample 50 articles from the test set and send the generated headlines from all models and corresponding headlines in the test set to human annotators. We evaluate the sensationalism and fluency of the headlines by setting up two independent human annotation tasks. We ask 10 annotators to label each headline for each task. For the sensationalism annotation, each annotator is asked one question, “Is the headline sensational?”, and he/she has to choose either `yes' or `no'. The annotators were not told which system the headline is from. The process of distributing samples and recruiting annotators is managed by Crowdflower. After annotation, we define the sensationalism score as the proportion of annotations on all generated headlines from one model labeled as `yes'. For the fluency annotation, we repeat the same procedure as for the sensationalism annotation, except that we ask each annotator the question “Is the headline fluent?” We define the fluency score as the proportion of annotations on all headlines from one specific model labeled as `yes'. We put human annotation instructions in the supplemental material. UTF8gbsn Results We first compare all four models, Pointer-Gen, Pointer-Gen-RL+ROUGE, Pointer-Gen-RL-SEN, and Pointer-Gen-ARL-SEN, to existing models with ROUGE in Table TABREF25 to establish that our model produces relevant headlines and we leave the sensationalism for human evaluation. Note that we only compare our models to commonly used strong summarization baselines, to validate that our implementation achieves comparable performance to existing work. In our implementation, Pointer-Gen achieves a 34.51 RG-1 score, 22.21 RG-2 score, and 31.68 RG-L score, which is similar to the results of gu2016incorporating. Pointer-Gen+ARL-SEN, although optimized for the sensationalism reward, achieves similar performance to our Pointer-Gen baseline, which means that Pointer-Gen+ARL-SEN still keeps its summarization ability. An example of headlines generated from different models in Table TABREF29 shows that Pointer-Gen and Pointer-Gen+RL-ROUGE learns to summarize the main point of the article: “The Nikon D600 camera is reported to have black spots when taking photos”. Pointer-Gen+RL-SEN makes the headline more sensational by blaming Nikon for attributing the damage to the smog. Pointer-Gen+ARL-SEN generates the most sensational headline by exaggerating the result “Getting a serious trouble!” to maximize user's attention. We then compare different models using the sensationalism score in Table TABREF30. The Pointer-Gen baseline model achieves a 42.6% sensationalism score, which is the minimum that a typical summarization model achieves. By filtering out low-sensational headlines, Pointer-Gen+Same-FT and Pointer-Gen+Pos-FT achieves higher sensationalism scores, which implies the effectiveness of our sensationalism scorer. Our Pointer-Gen+ARL-SEN model achieves the best performance of 60.8%. This is an absolute improvement of 18.2% over the Pointer-Gen baseline. 
The Chi-square test on the results confirms that Pointer-Gen+ARL-SEN is statistically significantly more sensational than all the other baseline models, with the largest p-value below 0.01. Also, we find that the test set headlines achieve a 57.8% sensationalism score, much higher than the Pointer-Gen baseline, which also supports our intuition that generated headlines will be less sensational than the original ones. On the other hand, we found that Pointer-Gen+Pos is much worse than the other baselines. The reason is that training on sensational samples alone discards around 80% of the training set, data that is also helpful for maintaining relevance and a good language model. This shows the necessity of using RL. In addition, both Pointer-Gen+RL-SEN and Pointer-Gen+ARL-SEN, which use the sensationalism score as the reward, obtain statistically better results than Pointer-Gen+RL-ROUGE and Pointer-Gen, with a p-value less than 0.05 by a Chi-square test. This result shows the effectiveness of RL in generating more sensational headlines. The reason is that even though our noisy classifier could also learn to classify domains, the generator during RL training is not allowed to increase the reward by shifting domains, but is encouraged to generate more sensational headlines, due to the consistency constraint between the domains of the headline and the article. Furthermore, Pointer-Gen+ARL-SEN achieves better performance than Pointer-Gen+RL-SEN, which confirms the superiority of the ARL loss function. We also visualize in Figure FIGREF31 a comparison between Pointer-Gen+ARL-SEN and Pointer-Gen+RL-SEN according to how sensational the test set headlines are. The blue bars denote the smaller of the two models' scores. For example, if the blue bar is 0.6, the worse of Pointer-Gen+RL-SEN and Pointer-Gen+ARL-SEN achieves 0.6, and the orange/black color further indicates the better model and its score. We find that Pointer-Gen+ARL-SEN outperforms Pointer-Gen+RL-SEN in most cases. The improvement is higher when the test set headlines are not sensational (the sensationalism score is less than 0.5), which may be attributed to the higher ratio of RL training on non-sensational headlines. Apart from the sensationalism evaluation, we measure the fluency of the headlines generated by the different models. Fluency scores in Table TABREF30 show that Pointer-Gen+RL-SEN and Pointer-Gen+ARL-SEN achieve fluency comparable to Pointer-Gen and Pointer-Gen+RL-ROUGE. Test set headlines achieve the best performance among all models, but the difference is not statistically significant. We also observe that fine-tuning on sensational headlines hurts performance in both sensationalism and fluency. After manually checking the outputs, we observe that our model is able to generate sensational headlines using diverse sensationalization strategies. These strategies include, but are not limited to, creating a curiosity gap, asking questions, highlighting numbers, being emotional and emphasizing the user. Examples can be found in Table TABREF32. Related Work Our work is related to summarization tasks. An encoder-decoder model was first applied to two sentence-level abstractive summarization tasks on the DUC-2004 and Gigaword datasets BIBREF12. This model was later extended by selective encoding BIBREF13, a coarse-to-fine approach BIBREF14, minimum risk training BIBREF1, and topic-aware models BIBREF15.
As long summaries were recognized as important, the CNN/Daily Mail dataset was used in nallapati2016abstractive. Graph-based attention BIBREF16, pointer-generator with coverage loss BIBREF0 are further developed to improve the generated summaries. celikyilmaz2018deep proposed deep communicating agents for representing a long document for abstractive summarization. In addition, many papers BIBREF17, BIBREF18, BIBREF19 use extractive methods to directly select sentences from articles. However, none of these work considered the sensationalism of generated outputs. RL is also gaining popularity as it can directly optimize non-differentiable metrics BIBREF20, BIBREF21, BIBREF22. paulus2017deep proposed an intra-decoder model and combined RL and MLE to deal with summaries with bad qualities. RL has also been explored with generative adversarial networks (GANs) BIBREF23. liu2017generative applied GANs on summarization task and achieved better performance. niu2018polite tackles the problem of polite generation with politeness reward. Our work is different in that we propose a novel function to balance RL and MLE. Our task is also related to text style transfer. Implicit methods BIBREF10, BIBREF24, BIBREF4 transfer the styles by separating sentence representations into content and style, for example using back-translationBIBREF4. However, these methods cannot guarantee the content consistency between the original sentence and transferred output BIBREF25. Explicit methods BIBREF26, BIBREF25 transfer the style by directly identifying style related keywords and modifying them. However, sensationalism is not always restricted to keywords, but the full sentence. By leveraging small human labeled English dataset, clickbait detection has been well investigated BIBREF2, BIBREF27, BIBREF3. However, these human labeled dataset are not available for other languages, such as Chinese. Modeling sensationalism is also related to modeling emotion. Emotion has been well investigated in both word levelBIBREF28, BIBREF29 and sentence levelBIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34. It has also been considered an important factor in engaging interactive systemsBIBREF35, BIBREF36, BIBREF37. Although we observe that sensational headlines contain emotion, it is still not clear which emotion and how emotions will influence the sensationalism. Conclusion and Future Work In this paper, we propose a model that generates sensational headlines without labeled data using Reinforcement Learning. Firstly, we propose a distant supervision strategy to train the sensationalism scorer. As a result, we achieve 65% accuracy between the predicted sensationalism score and human evaluation. To effectively leverage this noisy sensationalism score as the reward for RL, we propose a novel loss function, ARL, to automatically balance RL with MLE. Human evaluation confirms the effectiveness of both our sensationalism scorer and ARL to generate more sensational headlines. Future work can be improving the sensationalism scorer and investigating the applications of dynamic balancing methods between RL and MLE in textGANBIBREF23. Our work also raises the ethical questions about generating sensational headlines, which can be further explored. Acknowledgments Thanks to ITS/319/16FP of Innovation Technology Commission, HKUST 16248016 of Hong Kong Research Grants Council for funding. In addition, we thank Zhaojiang Lin for helpful discussion and Yan Xu, Zihan Liu for the data collection.
Which baselines are used for evaluation?
Pointer-Gen, Pointer-Gen+Pos, Pointer-Gen+Same-FT, Pointer-Gen+Pos-FT, Pointer-Gen+RL-ROUGE, Pointer-Gen+RL-SEN
4,085
qasper
4k
Introduction Data imbalance is a common issue in a variety of NLP tasks such as tagging and machine reading comprehension. Table TABREF3 gives concrete examples: for the Named Entity Recognition (NER) task BIBREF2, BIBREF3, most tokens are backgrounds with tagging class $O$. Specifically, the number of tokens tagging class $O$ is 5 times as many as those with entity labels for the CoNLL03 dataset and 8 times for the OntoNotes5.0 dataset; Data-imbalanced issue is more severe for MRC tasks BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8 with the value of negative-positive ratio being 50-200. Data imbalance results in the following two issues: (1) the training-test discrepancy: Without balancing the labels, the learning process tends to converge to a point that strongly biases towards class with the majority label. This actually creates a discrepancy between training and test: at training time, each training instance contributes equally to the objective function while at test time, F1 score concerns more about positive examples; (2) the overwhelming effect of easy-negative examples. As pointed out by meng2019dsreg, significantly large number of negative examples also means that the number of easy-negative example is large. The huge number of easy examples tends to overwhelm the training, making the model not sufficiently learned to distinguish between positive examples and hard-negative examples. The cross-entropy objective (CE for short) or maximum likelihood (MLE) objective, which is widely adopted as the training objective for data-imbalanced NLP tasks BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, handles neither of the issues. To handle the first issue, we propose to replace CE or MLE with losses based on the Sørensen–Dice coefficient BIBREF0 or Tversky index BIBREF1. The Sørensen–Dice coefficient, dice loss for short, is the harmonic mean of precision and recall. It attaches equal importance to false positives (FPs) and false negatives (FNs) and is thus more immune to data-imbalanced datasets. Tversky index extends dice loss by using a weight that trades precision and recall, which can be thought as the approximation of the $F_{\beta }$ score, and thus comes with more flexibility. Therefore, We use dice loss or Tversky index to replace CE loss to address the first issue. Only using dice loss or Tversky index is not enough since they are unable to address the dominating influence of easy-negative examples. This is intrinsically because dice loss is actually a hard version of the F1 score. Taking the binary classification task as an example, at test time, an example will be classified as negative as long as its probability is smaller than 0.5, but training will push the value to 0 as much as possible. This gap isn't a big issue for balanced datasets, but is extremely detrimental if a big proportion of training examples are easy-negative ones: easy-negative examples can easily dominate training since their probabilities can be pushed to 0 fairly easily. Meanwhile, the model can hardly distinguish between hard-negative examples and positive ones. Inspired by the idea of focal loss BIBREF16 in computer vision, we propose a dynamic weight adjusting strategy, which associates each training example with a weight in proportion to $(1-p)$, and this weight dynamically changes as training proceeds. 
This strategy helps to deemphasize confident examples during training as their $p$ approaches the value of 1, makes the model attentive to hard-negative examples, and thus alleviates the dominating effect of easy-negative examples. Combing both strategies, we observe significant performance boosts on a wide range of data imbalanced NLP tasks. Notably, we are able to achieve SOTA results on CTB5 (97.92, +1.86), CTB6 (96.57, +1.80) and UD1.4 (96.98, +2.19) for the POS task; SOTA results on CoNLL03 (93.33, +0.29), OntoNotes5.0 (92.07, +0.96)), MSRA 96.72(+0.97) and OntoNotes4.0 (84.47,+2.36) for the NER task; along with competitive results on the tasks of machine reading comprehension and paraphrase identification. The rest of this paper is organized as follows: related work is presented in Section 2. We describe different training objectives in Section 3. Experimental results are presented in Section 4. We perform ablation studies in Section 5, followed by a brief conclusion in Section 6. Related Work ::: Data Resample The idea of weighting training examples has a long history. Importance sampling BIBREF17 assigns weights to different samples and changes the data distribution. Boosting algorithms such as AdaBoost BIBREF18 select harder examples to train subsequent classifiers. Similarly, hard example mining BIBREF19 downsamples the majority class and exploits the most difficult examples. Oversampling BIBREF20, BIBREF21 is used to balance the data distribution. Another line of data resampling is to dynamically control the weights of examples as training proceeds. For example, focal loss BIBREF16 used a soft weighting scheme that emphasizes harder examples during training. In self-paced learning BIBREF22, example weights are obtained through optimizing the weighted training loss which encourages learning easier examples first. At each training step, self-paced learning algorithm optimizes model parameters and example weights jointly. Other works BIBREF23, BIBREF24 adjusted the weights of different training examples based on training loss. Besides, recent work BIBREF25, BIBREF26 proposed to learn a separate network to predict sample weights. Related Work ::: Data Imbalance Issue in Object Detection The background-object label imbalance issue is severe and thus well studied in the field of object detection BIBREF27, BIBREF28, BIBREF29, BIBREF30, BIBREF31. The idea of hard negative mining (HNM) BIBREF30 has gained much attention recently. shrivastava2016ohem proposed the online hard example mining (OHEM) algorithm in an iterative manner that makes training progressively more difficult, and pushes the model to learn better. ssd2016liu sorted all of the negative samples based on the confidence loss and picking the training examples with the negative-positive ratio at 3:1. pang2019rcnn proposed a novel method called IoU-balanced sampling and aploss2019chen designed a ranking model to replace the conventional classification task with a average-precision loss to alleviate the class imbalance issue. The efforts made on object detection have greatly inspired us to solve the data imbalance issue in NLP. Losses ::: Notation For illustration purposes, we use the binary classification task to demonstrate how different losses work. The mechanism can be easily extended to multi-class classification. Let $\lbrace x_i\rbrace $ denote a set of instances. 
Each $x_i$ is associated with a golden label vector $y_i = [y_{i0},y_{i1}]$, where $y_{i1}\in \lbrace 0,1\rbrace $ and $y_{i0}\in \lbrace 0,1\rbrace $ respectively denote the positive and negative classes, and thus $y_i$ can be either $[0,1]$ or $[1,0]$. Let $p_i = [p_{i0},p_{i1}]$ denote the probability vector, where $p_{i1}$ and $p_{i0}$ respectively denote the probability that a model assigns the positive and negative label to $x_i$. Losses ::: Cross Entropy Loss The vanilla cross entropy (CE) loss is given by: As can be seen from Eq.DISPLAY_FORM8, each $x_i$ contributes equally to the final objective. Two strategies are normally used to address the case where we do not wish all $x_i$ to be treated equally: associating different classes with different weighting factors $\alpha $, or resampling the dataset. For the former, Eq.DISPLAY_FORM8 is adjusted as follows: where $\alpha _i\in [0,1]$ may be set by the inverse class frequency or treated as a hyperparameter to be set by cross-validation. In this work, we use $\lg (\frac{n-n_t}{n_t}+K)$ to calculate the coefficient $\alpha $, where $n_t$ is the number of samples with class $t$ and $n$ is the total number of samples in the training set. $K$ is a hyperparameter to tune. The data resampling strategy constructs a new dataset by sampling training examples from the original dataset based on human-designed criteria, e.g., extracting an equal number of training samples from each class. Both strategies are equivalent to changing the data distribution and thus are of the same nature. Empirically, these two methods are not widely used, due to the trickiness of selecting $\alpha $ (especially for multi-class classification tasks) and because an inappropriate selection can easily bias the model towards rare classes BIBREF32. Losses ::: Dice coefficient and Tversky index The Sørensen–Dice coefficient BIBREF0, BIBREF33, dice coefficient (DSC) for short, is an F1-oriented statistic used to gauge the similarity of two sets. Given two sets $A$ and $B$, the dice coefficient between them is given as follows: In our case, $A$ is the set of all positive examples predicted by a specific model, and $B$ is the set of all golden positive examples in the dataset. When applied to boolean data with the definitions of true positives (TP), false positives (FP), and false negatives (FN), it can then be written as follows: For an individual example $x_i$, its corresponding DSC loss is given as follows: As can be seen, a negative example with $y_{i1}=0$ does not contribute to the objective. For smoothing purposes, it is common to add a $\gamma $ factor to both the numerator and the denominator, giving the following form: Now negative examples, with $y_{i1}$ being 0 and DSC being $\frac{\gamma }{ p_{i1}+\gamma }$, also contribute to the training. Additionally, milletari2016v proposed to change the denominator to the square form for faster convergence, which leads to the following dice loss (DL): Another version of DL is to directly compute the set-level dice coefficient instead of the sum of individual dice coefficients. We choose the latter (the sum of per-example dice coefficients) due to its ease of optimization. The Tversky index (TI), which can be thought of as an approximation of the $F_{\beta }$ score, extends the dice coefficient to a more general case. Given two sets $A$ and $B$, the Tversky index is computed as follows: The Tversky index offers flexibility in controlling the trade-off between false negatives and false positives. It degenerates to DSC if $\alpha =\beta =0.5$.
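As a small illustration of the class weighting described earlier in this section, the coefficient $\lg(\frac{n-n_t}{n_t}+K)$ can be computed directly from label counts. The sketch below is our own; in particular, lg is taken as log base 10, which is an assumption, and the labels in the example are hypothetical.

import math
from collections import Counter

def class_weights(labels, K: float = 1.0):
    # alpha_t = lg((n - n_t) / n_t + K), where n_t is the count of class t and n the total count.
    counts = Counter(labels)
    n = len(labels)
    return {t: math.log10((n - n_t) / n_t + K) for t, n_t in counts.items()}

# Example: in a heavily imbalanced tagging corpus the rare class receives the larger weight.
weights = class_weights(["O"] * 90 + ["ENT"] * 10, K=1.0)
# weights["ENT"] (= 1.0) > weights["O"] (~ 0.05)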
The Tversky loss (TL) for the training set $\lbrace x_i,y_i\rbrace $ is thus as follows: Losses ::: Self-adjusting Dice Loss Consider a simple case where the dataset consists of only one example $x_i$, which is classified as positive as long as $p_{i1}$ is larger than 0.5. The computation of the $F1$ score is actually as follows: Comparing Eq.DISPLAY_FORM14 with Eq.DISPLAY_FORM22, we can see that Eq.DISPLAY_FORM14 is actually a soft form of $F1$, using a continuous $p$ rather than the binary $\mathbb {I}( p_{i1}>0.5)$. This gap is not a big issue for balanced datasets, but is extremely detrimental if a big proportion of training examples are easy-negative ones: easy-negative examples can easily dominate training since their probabilities can be pushed to 0 fairly easily. Meanwhile, the model can hardly distinguish between hard-negative examples and positive ones, which has a huge negative effect on the final F1 performance. To address this issue, we propose to multiply the soft probability $p$ with a decaying factor $(1-p)$, changing Eq.DISPLAY_FORM22 to the following form: One can think of $(1-p_{i1})$ as a weight associated with each example, which changes as training proceeds. The intuition of changing $p_{i1}$ to $(1-p_{i1}) p_{i1}$ is to push down the weight of easy examples. For easy examples whose probabilities are approaching 0 or 1, $(1-p_{i1}) p_{i1}$ makes the model attach significantly less focus to them. Figure FIGREF23 gives an explanation from the perspective of the derivative: the derivative of $\frac{(1-p)p}{1+(1-p)p}$ with respect to $p$ approaches 0 immediately after $p$ approaches 0, which means the model attends less to examples once they are correctly classified. A close look at Eq.DISPLAY_FORM14 reveals that it actually mimics the idea of focal loss (FL for short) BIBREF16 for object detection in vision. Focal loss was proposed for one-stage object detectors to handle the foreground-background imbalance encountered during training. It down-weights the loss assigned to well-classified examples by adding a $(1-p)^{\beta }$ factor, leading to the final loss $-(1-p)^{\beta }\log p$. In Table TABREF18, we show the losses used in our experiments, which are described in the next section. Experiments We evaluate the proposed method on four NLP tasks: part-of-speech tagging, named entity recognition, machine reading comprehension and paraphrase identification. Baselines in our experiments are optimized by using the standard cross-entropy training objective. Experiments ::: Part-of-Speech Tagging Part-of-speech tagging (POS) is the task of assigning a label (e.g., noun, verb, adjective) to each word in a given text. In this paper, we choose BERT as the backbone and conduct experiments on three Chinese POS datasets. We report the span-level micro-averaged precision, recall and F1 for evaluation. Hyperparameters are tuned on the corresponding development set of each dataset. Experiments ::: Part-of-Speech Tagging ::: Datasets We conduct experiments on the widely used Chinese Treebank 5.0, 6.0 as well as UD1.4. CTB5 is a Chinese dataset for tagging and parsing, which contains 507,222 words, 824,983 characters and 18,782 sentences extracted from newswire sources. CTB6 is an extension of CTB5, containing 781,351 words, 1,285,149 characters and 28,295 sentences. UD is the abbreviation of Universal Dependencies, a framework for consistent annotation of grammar (parts of speech, morphological features, and syntactic dependencies) across different human languages.
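One variant mentioned above but not spelled out is the set-level dice coefficient, computed once over all examples instead of summing per-example coefficients. A minimal sketch, under our own naming and with the same smoothing factor gamma, is:

import torch

def set_level_dice(p1: torch.Tensor, y1: torch.Tensor, gamma: float = 1.0) -> torch.Tensor:
    # Aggregate soft counts over the whole batch, then form a single dice coefficient.
    tp = (p1 * y1).sum()
    fp = (p1 * (1 - y1)).sum()
    fn = ((1 - p1) * y1).sum()
    return (2 * tp + gamma) / (2 * tp + fp + fn + gamma)

As noted above, the paper instead optimizes the sum of per-example coefficients, which the authors found easier to optimize.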
In this work, we use UD1.4 for Chinese POS tagging. Experiments ::: Part-of-Speech Tagging ::: Baselines We use the following baselines: Joint-POS: shao2017character jointly learns Chinese word segmentation and POS. Lattice-LSTM: lattice2018zhang constructs a word-character lattice. Bert-Tagger: devlin2018bert treats part-of-speech as a tagging task. Experiments ::: Part-of-Speech Tagging ::: Results Table presents the experimental results on the POS task. As can be seen, the proposed DSC loss outperforms the best baseline results by a large margin, i.e., outperforming BERT-tagger by +1.86 in terms of F1 score on CTB5, +1.80 on CTB6 and +2.19 on UD1.4. As far as we are concerned, we are achieving SOTA performances on the three datasets. Weighted cross entropy and focal loss only gain a little performance improvement on CTB5 and CTB6, and the dice loss obtains huge gain on CTB5 but not on CTB6, which indicates the three losses are not consistently robust in resolving the data imbalance issue. The proposed DSC loss performs robustly on all the three datasets. Experiments ::: Named Entity Recognition Named entity recognition (NER) refers to the task of detecting the span and semantic category of entities from a chunk of text. Our implementation uses the current state-of-the-art BERT-MRC model proposed by xiaoya2019ner as a backbone. For English datasets, we use BERT$_\text{Large}$ English checkpoints, while for Chinese we use the official Chinese checkpoints. We report span-level micro-averaged precision, recall and F1-score. Hyperparameters are tuned on the development set of each dataset. Experiments ::: Named Entity Recognition ::: Datasets For the NER task, we consider both Chinese datasets, i.e., OntoNotes4.0 BIBREF34 and MSRA BIBREF35, and English datasets, i.e., CoNLL2003 BIBREF36 and OntoNotes5.0 BIBREF37. CoNLL2003 is an English dataset with 4 entity types: Location, Organization, Person and Miscellaneous. We followed data processing protocols in BIBREF14. English OntoNotes5.0 consists of texts from a wide variety of sources and contains 18 entity types. We use the standard train/dev/test split of CoNLL2012 shared task. Chinese MSRA performs as a Chinese benchmark dataset containing 3 entity types. Data in MSRA is collected from news domain. Since the development set is not provided in the original MSRA dataset, we randomly split the training set into training and development splits by 9:1. We use the official test set for evaluation. Chinese OntoNotes4.0 is a Chinese dataset and consists of texts from news domain, which has 18 entity types. In this paper, we take the same data split as wu2019glyce did. Experiments ::: Named Entity Recognition ::: Baselines We use the following baselines: ELMo: a tagging model from peters2018deep. Lattice-LSTM: lattice2018zhang constructs a word-character lattice, only used in Chinese datasets. CVT: from kevin2018cross, which uses Cross-View Training(CVT) to improve the representations of a Bi-LSTM encoder. Bert-Tagger: devlin2018bert treats NER as a tagging task. Glyce-BERT: wu2019glyce combines glyph information with BERT pretraining. BERT-MRC: The current SOTA model for both Chinese and English NER datasets proposed by xiaoya2019ner, which formulate NER as machine reading comprehension task. Experiments ::: Named Entity Recognition ::: Results Table shows experimental results on NER datasets. For English datasets including CoNLL2003 and OntoNotes5.0, our proposed method outperforms BERT-MRCBIBREF38 by +0.29 and +0.96 respectively. 
We observe huge performance boosts on Chinese datasets, achieving F1 improvements by +0.97 and +2.36 on MSRA and OntoNotes4.0, respectively. As far as we are concerned, we are setting new SOTA performances on all of the four NER datasets. Experiments ::: Machine Reading Comprehension Machine reading comprehension (MRC) BIBREF39, BIBREF40, BIBREF41, BIBREF40, BIBREF42, BIBREF15 has become a central task in natural language understanding. MRC in the SQuAD-style is to predict the answer span in the passage given a question and the passage. In this paper, we choose the SQuAD-style MRC task and report Extract Match (EM) in addition to F1 score on validation set. All hyperparameters are tuned on the development set of each dataset. Experiments ::: Machine Reading Comprehension ::: Datasets The following five datasets are used for MRC task: SQuAD v1.1, SQuAD v2.0 BIBREF4, BIBREF6 and Quoref BIBREF8. SQuAD v1.1 and SQuAD v2.0 are the most widely used QA benchmarks. SQuAD1.1 is a collection of 100K crowdsourced question-answer pairs, and SQuAD2.0 extends SQuAD1.1 allowing no short answer exists in the provided passage. Quoref is a QA dataset which tests the coreferential reasoning capability of reading comprehension systems, containing 24K questions over 4.7K paragraphs from Wikipedia. Experiments ::: Machine Reading Comprehension ::: Baselines We use the following baselines: QANet: qanet2018 builds a model based on convolutions and self-attention. Convolution to model local interactions and self-attention to model global interactions. BERT: devlin2018bert treats NER as a tagging task. XLNet: xlnet2019 proposes a generalized autoregressive pretraining method that enables learning bidirectional contexts. Experiments ::: Machine Reading Comprehension ::: Results Table shows the experimental results for MRC tasks. With either BERT or XLNet, our proposed DSC loss obtains significant performance boost on both EM and F1. For SQuADv1.1, our proposed method outperforms XLNet by +1.25 in terms of F1 score and +0.84 in terms of EM and achieves 87.65 on EM and 89.51 on F1 for SQuAD v2.0. Moreover, on QuoRef, the proposed method surpasses XLNet results by +1.46 on EM and +1.41 on F1. Another observation is that, XLNet outperforms BERT by a huge margin, and the proposed DSC loss can obtain further performance improvement by an average score above 1.0 in terms of both EM and F1, which indicates the DSC loss is complementary to the model structures. Experiments ::: Paraphrase Identification Paraphrases are textual expressions that have the same semantic meaning using different surface words. Paraphrase identification (PI) is the task of identifying whether two sentences have the same meaning or not. We use BERT BIBREF11 and XLNet BIBREF43 as backbones and report F1 score for comparison. Hyperparameters are tuned on the development set of each dataset. Experiments ::: Paraphrase Identification ::: Datasets We conduct experiments on two widely used datasets for PI task: MRPC BIBREF44 and QQP. MRPC is a corpus of sentence pairs automatically extracted from online news sources, with human annotations of whether the sentence pairs are semantically equivalent. The MRPC dataset has imbalanced classes (68% positive, 32% for negative). QQP is a collection of question pairs from the community question-answering website Quora. The class distribution in QQP is also unbalanced (37% positive, 63% negative). Experiments ::: Paraphrase Identification ::: Results Table shows the results for PI task. 
We find that replacing the training objective with DSC introduces a performance boost for both BERT and XLNet. Using the DSC loss improves the F1 score by +0.58 for MRPC and +0.73 for QQP. Ablation Studies ::: The Effect of Dice Loss on Accuracy-oriented Tasks We argue that the most commonly used cross-entropy objective is actually accuracy-oriented, whereas the proposed dice loss (DL) performs as a soft version of the F1 score. To explore the effect of the dice loss on accuracy-oriented tasks such as text classification, we conduct experiments on the Stanford Sentiment Treebank sentiment classification datasets, including SST-2 and SST-5. We fine-tune BERT$_\text{Large}$ with different training objectives. Experiment results for SST are shown in Table . For SST-5, BERT with CE achieves 55.57 in terms of accuracy, while the DL and DSC losses slightly degrade accuracy, achieving 54.63 and 55.19, respectively. For SST-2, BERT with CE achieves 94.9 in terms of accuracy. As with SST-5, we observe a slight performance drop with DL and DSC, which suggests that the dice loss works well for F1 but not for accuracy. Ablation Studies ::: The Effect of Hyperparameters in Tversky index As mentioned in Section SECREF10, the Tversky index (TI) offers flexibility in controlling the tradeoff between false negatives and false positives. In this subsection, we explore the effect of the hyperparameters (i.e., $\alpha $ and $\beta $) in TI to test how they control this tradeoff. We conduct experiments on the Chinese OntoNotes4.0 NER dataset and the English QuoRef MRC dataset to examine the influence of the tradeoff between precision and recall. Experiment results are shown in Table . The highest F1 for Chinese OntoNotes4.0 is 84.67 when $\alpha $ is set to 0.6, while for QuoRef the highest F1 is 68.44 when $\alpha $ is set to 0.4. In addition, we observe that the performance varies considerably as $\alpha $ changes across datasets, which shows that the hyperparameters $\alpha ,\beta $ play an important role in the proposed method. Conclusion In this paper, we alleviate the severe data imbalance issue in NLP tasks. We propose to use the dice loss in place of the standard cross-entropy loss; it performs as a soft version of the F1 score. Using the dice loss helps narrow the gap between the training objective and the evaluation metric. Empirically, we show that the proposed training objective leads to significant performance boosts for part-of-speech tagging, named entity recognition, machine reading comprehension and paraphrase identification tasks.
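To make the Tversky-index tradeoff discussed in the ablation above concrete, here is a minimal sketch of a Tversky-style loss for binary per-token classification. This is our own illustration: the function and variable names are ours, the smoothing constant is an assumption, and the self-adjusting DSC variant used in the experiments adds a further weighting term not shown here.

```python
import torch

def tversky_loss(probs, targets, alpha=0.5, beta=0.5, eps=1e-8):
    """Soft Tversky loss for binary decisions.

    probs:   predicted probabilities of the positive class, shape (N,)
    targets: gold labels in {0, 1}, shape (N,)
    alpha and beta trade off false positives against false negatives;
    alpha = beta = 0.5 recovers the soft dice coefficient.
    """
    tp = (probs * targets).sum()          # soft true positives
    fp = (probs * (1 - targets)).sum()    # soft false positives
    fn = ((1 - probs) * targets).sum()    # soft false negatives
    tversky_index = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky_index

# Toy usage with random logits standing in for a token classifier's output.
logits = torch.randn(16, requires_grad=True)
labels = torch.randint(0, 2, (16,)).float()
loss = tversky_loss(torch.sigmoid(logits), labels, alpha=0.6, beta=0.4)
loss.backward()
```

Setting $\alpha $ above 0.5 penalizes false positives more heavily, mirroring the sweep of $\alpha $ on OntoNotes4.0 and QuoRef in the ablation.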
How are weights dynamically adjusted?
One can think of $(1-p_{i1})$ as a weight associated with each example, which changes as training proceeds. The intuition behind changing $p_{i1}$ to $(1-p_{i1}) p_{i1}$ is to push down the weight of easy examples. For easy examples, whose probabilities approach 0 or 1, $(1-p_{i1}) p_{i1}$ makes the model attach significantly less focus to them. Figure FIGREF23 gives an explanation from the perspective of the derivative: the derivative of $\frac{(1-p)p}{1+(1-p)p}$ with respect to $p$ approaches 0 immediately after $p$ approaches 0, which means the model attends less to examples once they are correctly classified.
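As a small numeric illustration of this self-adjusting behaviour (our own sketch, not code from the original work), the snippet below compares the plain term $p$ with the weighted term $(1-p)p$:

```python
# Compare the unweighted term p with the self-adjusting term (1 - p) * p.
# As p moves towards 0 or 1 (easy, confidently classified examples), the
# weighted term shrinks, so such examples contribute less to the objective.
for p in [0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 0.99]:
    print(f"p = {p:4.2f}   (1 - p) * p = {(1 - p) * p:.4f}")
```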
Introduction Semantic Role Labeling (SRL) has emerged as an important task in Natural Language Processing (NLP) due to its applicability in information extraction, question answering, and other NLP tasks. SRL is the problem of finding the predicate-argument structure in a sentence, as illustrated below: INLINEFORM0 Here, the predicate WRITE has two arguments: `Mike' as A0 or the writer, and `a book' as A1 or the thing written. The labels A0 and A1 correspond to the PropBank annotations BIBREF0 . As the need for SRL arises in different domains and languages, the existing manually annotated corpora become insufficient for building supervised systems. This has motivated work on unsupervised SRL BIBREF1 , BIBREF2 , BIBREF3 . Previous work has indicated that unsupervised systems could benefit from the word alignment information in parallel text in two or more languages BIBREF4 , BIBREF5 , BIBREF6 . For example, consider the German translation of sentence INLINEFORM0 : INLINEFORM0 If sentences INLINEFORM0 and INLINEFORM1 have the word alignments: Mike-Mike, written-geschrieben, and book-Buch, the system might be able to predict A1 for Buch, even if there is insufficient information in the monolingual German data to learn this assignment. Thus, in languages where resources are sparse or of low quality, or where the distributions are not informative, SRL systems could be made more accurate by using parallel data with resource-rich or more amenable languages. In this paper, we propose a joint Bayesian model for unsupervised semantic role induction in multiple languages. The model consists of individual Bayesian models for each language BIBREF3 , and crosslingual latent variables to incorporate soft role agreement between aligned constituents. This latent variable approach has been demonstrated to increase performance in a multilingual unsupervised part-of-speech tagging model based on HMMs BIBREF4 . We investigate the application of this approach to unsupervised SRL, presenting the performance improvements obtained in different settings involving labeled and unlabeled data, and analyzing the annotation effort required to obtain similar gains using labeled data. We begin by briefly describing the unsupervised SRL pipeline and the monolingual semantic role induction model we use, and then describe our multilingual model. Unsupervised SRL Pipeline As established in previous work BIBREF7 , BIBREF8 , we use a standard unsupervised SRL setup, consisting of the following steps: The task we model, unsupervised semantic role induction, is step 4 of this pipeline. Monolingual Model We use the Bayesian model of garg2012unsupervised as our base monolingual model. The semantic roles are predicate-specific. To model role ordering and repetition preferences, the role inventory for each predicate is divided into Primary and Secondary roles as follows: For example, the complete role sequence in a frame could be: INLINEFORM0 INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 , INLINEFORM7 , INLINEFORM8 INLINEFORM9 . The ordering is defined as the sequence of PRs, INLINEFORM10 INLINEFORM11 , INLINEFORM12 , INLINEFORM13 , INLINEFORM14 , INLINEFORM15 INLINEFORM16 . Each pair of consecutive PRs in an ordering is called an interval. Thus, INLINEFORM17 is an interval that contains two SRs, INLINEFORM18 and INLINEFORM19 . An interval could also be empty; for instance, INLINEFORM20 contains no SRs. When we evaluate, these roles get mapped to gold roles.
For instance, the PR INLINEFORM21 could get mapped to a core role like INLINEFORM22 , INLINEFORM23 , etc. or to a modifier role like INLINEFORM24 , INLINEFORM25 , etc. garg2012unsupervised reported that, in practice, PRs mostly get mapped to core roles and SRs to modifier roles, which conforms to the linguistic motivations for this distinction. Figure FIGREF16 illustrates two copies of the monolingual model, on either side of the crosslingual latent variables. The generative process is as follows: All the multinomial and binomial distributions have symmetric Dirichlet and beta priors respectively. Figure FIGREF7 gives the probability equations for the monolingual model. This formulation models the global role ordering and repetition preferences using PRs, and limited context for SRs using intervals. Ordering and repetition information was found to be helpful in supervised SRL as well BIBREF9 , BIBREF8 , BIBREF10 . More details, including the motivations behind this model, are in BIBREF3 . Multilingual Model The multilingual model uses word alignments between sentences in a parallel corpus to exploit role correspondences across languages. We make copies of the monolingual model for each language and add additional crosslingual latent variables (CLVs) to couple the monolingual models, capturing crosslingual semantic role patterns. Concretely, when training on parallel sentences, whenever the head words of the arguments are aligned, we add a CLV as a parent of the two corresponding role variables. Figure FIGREF16 illustrates this model. The generative process, as explained below, remains the same as the monolingual model for the most part, with the exception of aligned roles which are now generated by both the monolingual process as well as the CLV. Every predicate-tuple has its own inventory of CLVs specific to that tuple. Each CLV INLINEFORM0 is a multi-valued variable where each value defines a distribution over role labels for each language (denoted by INLINEFORM1 above). These distributions over labels are trained to be peaky, so that each value INLINEFORM2 for a CLV represents a correlation between the labels that INLINEFORM3 predicts in the two languages. For example, a value INLINEFORM4 for the CLV INLINEFORM5 might give high probabilities to INLINEFORM6 and INLINEFORM7 in language 1, and to INLINEFORM8 in language 2. If INLINEFORM9 is the only value for INLINEFORM10 that gives high probability to INLINEFORM11 in language 1, and the monolingual model in language 1 decides to assign INLINEFORM12 to the role for INLINEFORM13 , then INLINEFORM14 will predict INLINEFORM15 in language 2, with high probability. We generate the CLVs via a Chinese Restaurant Process BIBREF11 , a non-parametric Bayesian model, which allows us to induce the number of CLVs for every predicate-tuple from the data. We continue to train on the non-parallel sentences using the respective monolingual models. The multilingual model is deficient, since the aligned roles are being generated twice. Ideally, we would like to add the CLV as additional conditioning variables in the monolingual models. The new joint probability can be written as equation UID11 (Figure FIGREF7 ), which can be further decomposed following the decomposition of the monolingual model in Figure FIGREF7 . However, having this additional conditioning variable breaks the Dirichlet-multinomial conjugacy, which makes it intractable to marginalize out the parameters during inference. 
Hence, we use an approximation where we treat each of the aligned roles as being generated twice, once by the monolingual model and once by the corresponding CLV (equation ). This is the first work to incorporate the coupling of aligned arguments directly in a Bayesian SRL model. This makes it easier to see how to extend this model in a principled way to incorporate additional sources of information. First, the model scales gracefully to more than two languages. If there are a total of INLINEFORM0 languages, and there is an aligned argument in INLINEFORM1 of them, the multilingual latent variable is connected to only those INLINEFORM2 aligned arguments. Second, having one joint Bayesian model allows us to use the same model in various semi-supervised learning settings, just by fixing the annotated variables during training. Section SECREF29 evaluates a setting where we have some labeled data in one language (called source), while no labeled data in the second language (called target). Note that this is different from a classic annotation projection setting (e.g. BIBREF12 ), where the role labels are mapped from source constituents to aligned target constituents. Inference and Training The inference problem consists of predicting the role labels and CLVs (the hidden variables) given the predicate, its voice, and syntactic features of all the identified arguments (the visible variables). We use a collapsed Gibbs-sampling based approach to generate samples for the hidden variables (model parameters are integrated out). The sample counts and the priors are then used to calculate the MAP estimate of the model parameters. For the monolingual model, the role at a given position is sampled as: DISPLAYFORM0 where the subscript INLINEFORM0 refers to all the variables except at position INLINEFORM1 , INLINEFORM2 refers to the variables in all the training instances except the current one, and INLINEFORM3 refers to all the model parameters. The above integral has a closed form solution due to Dirichlet-multinomial conjugacy. For sampling roles in the multilingual model, we also need to consider the probabilities of roles being generated by the CLVs: DISPLAYFORM0 For sampling CLVs, we need to consider three factors: two corresponding to probabilities of generating the aligned roles, and the third one corresponding to selecting the CLV according to CRP. DISPLAYFORM0 where the aligned roles INLINEFORM0 and INLINEFORM1 are connected to INLINEFORM2 , and INLINEFORM3 refers to all the variables except INLINEFORM4 , INLINEFORM5 , and INLINEFORM6 . We use the trained parameters to parse the monolingual data using the monolingual model. The crosslingual parameters are ignored even if they were used during training. Thus, the information coming from the CLVs acts as a regularizer for the monolingual models. Evaluation Following the setting of titovcrosslingual, we evaluate only on the arguments that were correctly identified, as the incorrectly identified arguments do not have any gold semantic labels. Evaluation is done using the metric proposed by lang2011unsupervised, which has 3 components: (i) Purity (PU) measures how well an induced cluster corresponds to a single gold role, (ii) Collocation (CO) measures how well a gold role corresponds to a single induced cluster, and (iii) F1 is the harmonic mean of PU and CO. 
For each predicate, let INLINEFORM0 denote the total number of argument instances, INLINEFORM1 the instances in the induced cluster INLINEFORM2 , and INLINEFORM3 the instances having label INLINEFORM4 in gold annotations. INLINEFORM5 , INLINEFORM6 , and INLINEFORM7 . The score for each predicate is weighted by the number of its argument instances, and a weighted average is computed over all the predicates. Baseline We use the same baseline as lang2011unsupervised, which has been shown to be difficult to outperform. This baseline assigns a semantic role to a constituent based on its syntactic function, i.e., the dependency relation to its head. If there is a total of INLINEFORM0 clusters, the INLINEFORM1 most frequent syntactic functions get a cluster each, and the rest are assigned to the INLINEFORM2 th cluster. Closest Previous Work This work is closely related to the cross-lingual unsupervised SRL work of titovcrosslingual. Their model has separate monolingual models for each language and an extra penalty term which tries to maximize INLINEFORM0 and INLINEFORM1 , i.e., for all the aligned arguments with role label INLINEFORM2 in language 1, it tries to find a role label INLINEFORM3 in language 2 such that the given proportion is maximized, and vice versa. However, there is no efficient way to optimize the objective with this penalty term, and the authors used an inference method similar to annotation projection. Further, the method does not scale naturally to more than two languages. Their algorithm first does monolingual inference in one language ignoring the penalty and then does the inference in the second language taking into account the penalty term. In contrast, our model adds the latent variables as a part of the model itself, and not as an external penalty, which enables us to use standard Bayesian learning methods such as sampling. The monolingual model we use BIBREF3 also has two main advantages over titovcrosslingual. First, the former incorporates a global role ordering probability that is missing in the latter. Secondly, the latter defines argument-keys as a tuple of four syntactic features, and all the arguments having the same argument-keys are assigned the same role. This kind of hard clustering is avoided in the former model, where two constituents having the same set of features might get assigned different roles if they appear in different contexts. Data Following titovcrosslingual, we run our experiments on the English (EN) and German (DE) sections of the CoNLL 2009 corpus BIBREF13 , and the EN-DE section of the Europarl corpus BIBREF14 . We get about 40k EN and 36k DE sentences from the CoNLL 2009 training set, and about 1.5M parallel EN-DE sentences from Europarl. For appropriate comparison, we keep the same setting as in BIBREF6 for automatic parses and argument identification, which we briefly describe here. The EN sentences are parsed syntactically using MaltParser BIBREF15 and DE using the LTH parser BIBREF16 . All the non-auxiliary verbs are selected as predicates. In CoNLL data, this gives us about 3k EN and 500 DE predicates. The total number of predicate instances is 3.4M in EN (89k CoNLL + 3.3M Europarl) and 2.62M in DE (17k CoNLL + 2.6M Europarl). The arguments for EN are identified using the heuristics proposed by lang2011unsupervised. However, we get an F1 score of 85.1% for argument identification on CoNLL 2009 EN data as opposed to 80.7% reported by titovcrosslingual.
This could be due to implementation differences, which unfortunately makes our EN results incomparable. For DE, the arguments are identified using the LTH system BIBREF16 , which gives an F1 score of 86.5% on the CoNLL 2009 DE data. The word alignments for the EN-DE parallel Europarl corpus are computed using GIZA++ BIBREF17 . For high precision, only the intersecting alignments in the two directions are kept. We define two semantic arguments as aligned if their head-words are aligned. In total we get 9.3M arguments for EN (240k CoNLL + 9.1M Europarl) and 4.43M for DE (32k CoNLL + 4.4M Europarl). Out of these, 0.76M arguments are aligned. Main Results Since the CoNLL annotations have 21 semantic roles in total, we use 21 roles in our model as well as the baseline. Following garg2012unsupervised, we set the number of PRs to 2 (excluding INLINEFORM0 , INLINEFORM1 and INLINEFORM2 ), and SRs to 21-2=19. Table TABREF27 shows the results. In the first setting (Line 1), we train and test the monolingual model on the CoNLL data. We observe significant improvements in F1 score over the Baseline (Line 0) in both languages. Using the CoNLL 2009 dataset alone, titovcrosslingual report an F1 score of 80.9% (PU=86.8%, CO=75.7%) for German. Thus, our monolingual model outperforms their monolingual model in German. For English, they report an F1 score of 83.6% (PU=87.5%, CO=80.1%), but note that our English results are not directly comparable to theirs due to differences in argument identification, as discussed in section SECREF25 . As their argument identification score is lower, perhaps their system is discarding “difficult” arguments, which leads to a higher clustering score. In the second setting (Line 2), we use the additional monolingual Europarl (EP) data for training. We get equivalent results in English and a significant improvement in German compared to our previous setting (Line 1). The German dataset in CoNLL is quite small and benefits from the additional EP training data. In contrast, the English model is already quite good due to a relatively large CoNLL dataset and syntactic parsers with good accuracy. Unfortunately, titovcrosslingual do not report results with this setting. The third setting (Line 3) gives the results of our multilingual model, which adds the word alignments in the EP data. Comparing with Line 2, we get non-significant improvements in both languages. titovcrosslingual obtain an F1 score of 82.7% (PU=85.0%, CO=80.6%) for German, and 83.7% (PU=86.8%, CO=80.7%) for English. Thus, for German, our multilingual Bayesian model is able to capture the cross-lingual patterns at least as well as the external penalty term in BIBREF6 . Unfortunately, we cannot compare the English results due to differences in argument identification. We also compared monolingual and bilingual training data using a setting that emulates the standard supervised setup of separate training and test data sets. We train only on the EP dataset and test on the CoNLL dataset. Lines 4 and 5 of Table TABREF27 give the results. The multilingual model obtains small improvements in both languages, which confirms the results from the standard unsupervised setup, comparing Lines 2 and 3. These results indicate that little information can be learned about semantic roles from this parallel data setup. One possible explanation for this result is that the setup itself is inadequate. Given the definition of aligned arguments, only 8% of English arguments and 17% of German arguments are aligned.
This, together with our experiments, suggests that improving the alignment model is a necessary step towards making effective use of parallel data in multilingual SRI, for example by joint modeling with SRI. We leave this exploration to future work. Multilingual Training with Labeled Data for One Language Another motivation for jointly modeling SRL in multiple languages is the transfer of information from a resource-rich language to a resource-poor language. We evaluated our model in a very general annotation transfer scenario, where we have a small labeled dataset for one language (source), and a large parallel unlabeled dataset for the source and another (target) language. We investigate whether this setting improves the parameter estimates for the target language. To this end, we clamp the role annotations of the source language in the CoNLL dataset using a predefined mapping, and do not sample them during training. This data gives us good parameters for the source language, which are used to sample the roles of the source language in the unlabeled Europarl data. The CLVs aim to capture this improvement and thereby improve sampling and parameter estimates for the target language. Table TABREF28 shows the results of this experiment. We obtain small improvements in the target languages. As in the unsupervised setting, the small percentage of aligned roles probably limits the impact of the cross-lingual information. Labeled Data in Monolingual Model We explored the improvement in the monolingual model in a semi-supervised setting. To this end, we randomly selected INLINEFORM0 of the sentences in the CoNLL dataset as “supervised sentences” and the rest INLINEFORM1 were kept unsupervised. Next, we clamped the role labels of the supervised sentences using the predefined mapping from Section SECREF29 . Sampling was done on the unsupervised sentences as usual. We then measured the clustering performance using the trained parameters. To better assess the contribution of partial supervision, we constructed a “supervised baseline” as follows. For predicates seen in the supervised sentences, a MAP estimate of the parameters was calculated using the predefined mapping. For the unseen predicates, the standard baseline was used. Figures FIGREF33 and FIGREF33 show the performance variation with INLINEFORM0 . We make the following observations: In both languages, at around INLINEFORM0 , the supervised baseline starts outperforming the semi-supervised model, which suggests that manually labeling about 10% of the sentences is a good enough alternative to our training procedure. Note that 10% amounts to about 3.6k sentences in German and 4k in English. We noticed that the proportion of seen predicates increases dramatically as we increase the proportion of supervised sentences. At 10% supervised sentences, the model has already seen 63% of predicates in German and 44% in English. This explains to some extent why labeling only 10% of the sentences is enough. For German, it takes about 3.5% or 1260 supervised sentences to have the same performance increase as 1.5M unlabeled sentences (Line 1 to Line 2 in Table TABREF27 ). Adding about 180 more supervised sentences also covers the benefit obtained by alignments in the multilingual model (Line 2 to Line 3 in Table TABREF27 ). There is no noticeable performance difference in English. We also evaluated the performance variation on a completely unseen CoNLL test set.
Since the test set is very small compared to the training set, the clustering evaluation is not as reliable. Nonetheless, we broadly obtained the same pattern. Related Work As discussed in section SECREF24 , our work is closely related to the crosslingual unsupervised SRL work of titovcrosslingual. The idea of using superlingual latent variables to capture cross-lingual information was proposed for POS tagging by naseem2009multilingual, which we use here for SRL. In a semi-supervised setting, pado2009cross used a graph-based approach to transfer semantic role annotations from English to German. furstenau2009graph used a graph alignment method to measure the semantic and syntactic similarity between dependency tree arguments of known and unknown verbs. For monolingual unsupervised SRL, swier2004unsupervised presented the first work on a domain-general corpus, the British National Corpus, using 54 verbs taken from VerbNet. garg2012unsupervised proposed a Bayesian model for this problem that we use here. titov2012bayesian also proposed a closely related Bayesian model. grenager2006unsupervised proposed a generative model, but their parameter space consisted of all possible linkings of syntactic constituents and semantic roles, which made unsupervised learning difficult, and a separate language-specific rule-based method had to be used to constrain this space. Other proposed models include an iterative split-merge algorithm BIBREF18 and a graph-partitioning based approach BIBREF1 . marquez2008semantic provide a good overview of supervised SRL systems. Conclusions We propose a Bayesian model of semantic role induction (SRI) that uses crosslingual latent variables to capture role alignments in parallel corpora. The crosslingual latent variables capture correlations between roles in different languages, and regularize the parameter estimates of the monolingual models. Because this is a joint Bayesian model of multilingual SRI, we can apply the same model to a variety of training scenarios just by changing the inference procedure appropriately. We evaluate monolingual SRI with a large unlabeled dataset, bilingual SRI with a parallel corpus, bilingual SRI with annotations available for the source language, and monolingual SRI with a small labeled dataset. Increasing the amount of monolingual unlabeled data significantly improves SRI in German but not in English. Adding word alignments in parallel sentences results in small, non-significant improvements, even if there is some labeled data available in the source language. This difficulty in showing the usefulness of parallel corpora for SRI may be due to the current assumptions about role alignments, which mean that only a small percentage of roles are aligned. Further analysis reveals that annotating small amounts of data can easily exceed the performance gains obtained by adding a large unlabeled dataset as well as adding parallel corpora. Future work includes training on different language pairs, on more than two languages, and with more inclusive models of role alignment. Acknowledgments This work was funded by the Swiss NSF grant 200021_125137 and EC FP7 grant PARLANCE.
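For reference, the purity/collocation/F1 clustering metric used in the evaluation above can be sketched as follows. This is our own minimal implementation of the standard definitions from lang2011unsupervised for the argument instances of a single predicate; the per-predicate weighting by argument count described in the Evaluation section is omitted.

```python
from collections import Counter

def purity_collocation_f1(induced, gold):
    """induced[i] and gold[i] are the cluster id and gold role of argument instance i."""
    assert len(induced) == len(gold) and len(gold) > 0
    n = len(gold)
    by_cluster, by_gold = {}, {}
    for c, g in zip(induced, gold):
        by_cluster.setdefault(c, Counter())[g] += 1   # gold roles inside each cluster
        by_gold.setdefault(g, Counter())[c] += 1      # clusters inside each gold role
    # Purity: credit each induced cluster with its most frequent gold role.
    pu = sum(cnt.most_common(1)[0][1] for cnt in by_cluster.values()) / n
    # Collocation: credit each gold role with its most frequent cluster.
    co = sum(cnt.most_common(1)[0][1] for cnt in by_gold.values()) / n
    f1 = 2 * pu * co / (pu + co) if pu + co else 0.0
    return pu, co, f1

# Toy example for one predicate with five argument instances.
print(purity_collocation_f1([0, 0, 1, 1, 1], ["A0", "A0", "A1", "A1", "A0"]))
```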
What does an individual model consist of?
Bayesian model of garg2012unsupervised as our base monolingual model
Introduction Medical search engines are an essential component for many online medical applications, such as online diagnosis systems and medical document databases. A typical online diagnosis system, for instance, relies on a medical search engine. The search engine takes as input a user query that describes some symptoms and then outputs clinical concept entries that provide relevant information to assist in diagnosing the problem. One challenge medical search engines face is the segmentation of individual clinical entities. When a user query consists of multiple clinical entities, a search engine would often fail to recognize them as separate entities. For example, the user query “fever joint pain weight loss headache” contains four separate clinical entities: “fever”, “joint pain”, “weight loss”, and “headache”. But when the search engine does not recognize them as separate entities and proceeds to retrieve results for each word in the query, it may find "pain" in body locations other than "joint pain", or it may miss "headache" altogether, for example. Some search engines allow the users to enter a single clinical concept by selecting from an auto-completion pick list. But this could also result in retrieving inaccurate or partial results and lead to a poor user experience. We want to improve the medical search engine so that it can accurately retrieve all the relevant clinical concepts mentioned in a user query, where relevant clinical concepts are defined with respect to the terminologies the search engine uses. The problem of extracting clinical concept mentions from a user query can be seen as a variant of the Concept Extraction (CE) task in the frequently-cited NLP challenges in healthcare, such as 2010 i2b2/VA BIBREF0 and 2013 ShARe/CLEF Task 1 BIBREF1. Both CE tasks in 2010 i2b2/VA and 2013 ShARe/CLEF Task 1 ask the participants to design an algorithm to tag a set of predefined entities of interest in clinical notes. These entity tagging tasks are also known as clinical Named Entity Recognition (NER). For example, the CE task in 2010 i2b2/VA defines three types of entities: “problem”, “treatment”, and “test”. The CE task in 2013 ShARe/CLEF defines various types of disorders, such as “injury or poisoning”, "disease or syndrome”, etc. In addition to tagging, the CE task in 2013 ShARe/CLEF has an encoding component which requires selecting one and only one Concept Unique Identifier (CUI) from Systematized Nomenclature Of Medicine Clinical Terms (SNOMED-CT) for each disorder entity tagged. Our problem, similar to the CE task in 2013 ShARe/CLEF, also contains two sub-problems: tagging mentions of entities of interest (entity tagging), and selecting appropriate terms from a glossary to match the mentions (term matching). However, several major differences exist. First, compared to clinical notes, the user queries are much shorter, less technical, and often less coherent. Second, instead of encoding, we are dealing with term matching, where we rank a few of the best terms that match an entity instead of selecting only one. This is because the users who type the queries may not have a clear idea about what they are looking for, or could be laymen who know little terminology; it may therefore be more helpful to provide a set of likely results and let the users choose. Third, the types of entities are different. Each medical search engine may have its own types of entities to tag.
There is also one minor difference in the tagging scheme between our problem and the CE task in 2013 ShARe/CLEF: we limit our scope to entities of consecutive words and do not handle disjoint entities. We use only Beginning, Inside, Outside (BIO) tags. Given the differences listed above, we need to customize a framework consisting of an entity tagging and term matching component for our CE problem. Related Work An effective model that has been commonly used for the NER problem is a Bi-directional LSTM with a Conditional Random Field (CRF) on the top layer (BiLSTM-CRF), which is described in the next section. Combining LSTM’s power of representing relations between words and CRF’s capability of accounting for tag sequence constraints, Huang et al. BIBREF2 proposed the BiLSTM-CRF model and used handcrafted word features as the input to the model. Lample et al. BIBREF3 used a combination of character-level and word-level word embeddings as the input to BiLSTM-CRF. Since then, similar models with variations in the types of word embeddings have been used extensively for clinical CE tasks and produced state-of-the-art results BIBREF4, BIBREF5, BIBREF6, BIBREF7. Word embeddings have become the cornerstone of neural models in NLP since the famous Word2vec BIBREF8 model demonstrated its power in word analogy tasks. One well-known example is that after training Word2vec on a large amount of news data, we can get word relations such as $vector(^{\prime }king^{\prime }) - vector(^{\prime }queen^{\prime }) + vector(^{\prime }woman^{\prime }) \approx vector(^{\prime }man^{\prime })$. More sophisticated word embedding techniques have emerged since Word2vec. It has been shown empirically that better quality in word embeddings leads to better performance in many downstream NLP tasks, including entity tagging BIBREF9, BIBREF10. Recently, contextualized word embeddings generated by deep learning models, such as ELMo BIBREF11, BERT BIBREF12, and Flair BIBREF13, have been shown to be more effective in various NLP tasks. In our project, we make use of a fine-tuned ELMo model and a fine-tuned Flair model in the medical domain. We experiment with the word embeddings from the two fine-tuned models separately as the input to the BiLSTM-CRF model and compare the results. Tang et al. BIBREF14 provided a straightforward algorithm for term matching. The algorithm starts with finding candidate terms that contain ALL the entity words, with term frequency-inverse document frequency (tf-idf) weighting. Then the candidates are ranked based on the pairwise cosine distance between the word embeddings of the candidates and the entity. Framework We adopt the tagging-encoding pipeline framework from the CE task in 2013 ShARe/CLEF. We first tag the clinical entities in the user query and then select relevant terms from a glossary in dermatology to match the entities. Framework ::: Entity Tagging We use the BiLSTM-CRF model proposed by Huang et al. BIBREF2. An illustration of the architecture is shown in Figure FIGREF6 . Given a sequence (or sentence) of n tokens, $r = (w_1, w_2,..., w_n)$, we use a fine-tuned ELMo model to generate contextual word embeddings for all the tokens in the sentence, where a token refers to a word or punctuation. We denote the ELMo embedding, $x$, for a token $w$ in the sentence $r$ by $x = ELMo(w|r)$. The notation and the procedure described here can be adopted for Flair embeddings or other embeddings.
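To make the notation $x = ELMo(w|r)$ concrete, the snippet below shows how contextual token embeddings can be obtained with the flair package used in this work. It is a sketch: the model identifier passed to ELMoEmbeddings is illustrative and may not match the fine-tuned PubMed checkpoint actually used.

```python
from flair.data import Sentence
from flair.embeddings import ELMoEmbeddings

# Load an ELMo model; "pubmed" is an assumed identifier for a biomedical checkpoint.
embeddings = ELMoEmbeddings("pubmed")

query = Sentence("fever joint pain weight loss headache")
embeddings.embed(query)  # attaches a contextual vector to every token in the sentence

for token in query:
    # token.embedding corresponds to x = ELMo(w | r): it depends on the whole sentence r.
    print(token.text, token.embedding.shape)
```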
Now, given a sequence of tokens in ELMo embeddings, $X =(x_1, x_2, ..., x_n)$, the BiLSTM layer generates a matrix of scores, $P(\theta )$, of size $n \times k$, where $k$ is the number of tag types, and $\theta $ denotes the parameters of the BiLSTM. To simplify notation, we will omit the $\theta $ and write $P$. Then, $P_{i,j}$ denotes the score of the token, $x_i$, being assigned to the $j$th tag. Since certain constraints exist on the transitions between tags (an "O" tag should not be followed by an "I" tag, for example), a transition matrix, $A$, of dimension $(k+2)\times (k+2)$, is initialized to model these constraints. The learnable parameters, $A_{i,j}$, represent the probability that the $j$th tag follows the $i$th tag in a sequence. For example, if we index the tags by: 1:“B”, 2:“I”, and 3:“O”, then $A_{1,3}$ would be the probability that an “O” tag follows a “B” tag. A beginning transition and an end transition are inserted in $A$ and hence $A$ is of dimension $(k+2)\times (k+2)$. Given a sequence of tags, $Y=(y_1,y_2,...,y_n)$, where each $y_i$, $1\le i \le n$, corresponds to an index of the tags, the score of the sequence is then given by $$s(X, Y) = \sum _{i=0}^{n} A_{y_i, y_{i+1}} + \sum _{i=1}^{n} P_{i, y_i}$$ where $y_0$ and $y_{n+1}$ denote the beginning and end transitions. The probability of the sequence of tags is then calculated by a softmax, $$P(Y|X) = \cfrac{\exp {s(X, Y)}}{\sum _{\tilde{Y} \in \lbrace Y_x\rbrace }\exp {s(X, \tilde{Y})}}$$ where $\lbrace Y_x\rbrace $ denotes the set of all possible tag sequences. During training, the objective function is to maximize $\log (P(Y|X))$ by adjusting $A$ and $P$. Framework ::: Term Matching The term matching algorithm of Tang et al. BIBREF14 is adopted with some major modifications. First, to identify candidate terms, we use a much looser string search algorithm, where we stem the entity words with the snowball stemmer and then find candidate terms which contain ANY non-stopword words in the entity. Stemming is mainly used to reduce a word to its stem. For example, “legs” becomes “leg”, and “jammed” becomes “jam”. Thus, stemming can provide more tolerance when finding candidates. Similarly, finding candidates using the condition ANY (instead of ALL) also increases the tolerance. However, if the tagged entity contains stopwords such as “in”, “on”, etc., the pool of candidates will naturally grow very large and increase the computation cost of the later steps, because even completely irrelevant terms may contain stopwords. Therefore, we match based on non-stopwords only. To illustrate the points above, suppose a query is tagged with the entity “severe burns on legs”, and one relevant term is “leg burn”. After stemming, “burns” and “legs” become “burn” and “leg”, respectively, allowing "leg burn" to be considered as a candidate. Although the word “severe” is not in the term “leg burn”, the term is still considered a candidate because we selected using ANY. The stopword “on” is ignored when finding candidate terms so that not every term that contains the word “on” is added to the candidate pool. When a candidate term, $C$, is found in this manner for the tagged entity, $E$, we calculate the semantic similarity score, $s$, between $C$ and $E$ in two steps. In the first step, we calculate the maximum similarity score for each word in $C$ as shown in Figure FIGREF10. Given a word $C_i$ in the candidate term ($1 \le i \le m$, where $m$ is the number of words in the candidate term) and a word $E_j$ in the tagged entity, their similarity score, $s_{ij}$ (shown as an element of the boxed matrix in Figure FIGREF10), is given by $$s_{ij} = d\big (ELMo(C_i|C), ELMo(E_j|E)\big )$$ (DISPLAY_FORM13) where $ELMo(C_i|C)$ and $ELMo(E_j|E)$ are the ELMo embeddings for the words $C_i$ and $E_j$, respectively.
The ELMo embeddings have the same dimension for all words when using the same fine-tuned ELMo model. Thus, we can use a distance function (e.g., the cosine distance), denoted $d(\cdot )$ in equation DISPLAY_FORM13, to compute the semantic similarity between words. In step 2, we calculate the candidate-entity relevance score (similarity) using the formula in equation DISPLAY_FORM14, where $s_c$ is a score threshold, and $\mathbb {I} \lbrace max(\vec{S_i}) > s_c\rbrace $ is an indicator function that equals 1 if $max(\vec{S_i}) > s_c$ and 0 otherwise. In equation DISPLAY_FORM14 we define a metric that measures “information coverage” of the candidate terms with respect to a tagged entity. If the constituent words of a candidate term are relevant to the constituent words in the tagged entity, then the candidate term offers more information coverage. Intuitively, the more relevant words present in the candidate term, the more relevant the candidate is to the tagged entity. The purpose of the cutoff, $s_c$, is to screen out the $(C_i,E_j)$ word pairs that are dissimilar, so that they do not contribute to information coverage. One can adjust the strictness of the entity-terminology matching by adjusting $s_c$. The higher we set $s_c$, the fewer candidate terms will be selected for a tagged entity. A normalization factor, $\frac{1}{m}$, is added to give preference to more concise candidate terms given the same amount of information coverage. We need to create an extra stopword list to include words such as “configuration” and “color”, and exclude these words from the word count for a candidate term. This is because the terms associated with the description of color or configuration usually have the word “color” or “configuration” in them. On the other hand, a user query normally does not contain such words. For example, a tagged entity in a user query could be “round yellow patches”, for which the relevant terminologies include “round configuration” and “yellow color”. Since we applied a normalization factor, $\frac{1}{m}$, to the relevance score, the words “color” and “configuration” would lower the relevance score because they do not have a counterpart in the tagged entity. Therefore, we need to exclude them from the word count. Once the process is complete, we calculate $s(C,E)$ for all candidate terms and then apply a threshold on all $s(C,E)$ to ignore candidate terms with low information coverage. Finally, we rank the terms by their $s(C,E)$ and return the ranked list as the results. Experiments ::: Data Despite the greater similarity between our task and the 2013 ShARe/CLEF Task 1, we use the clinical notes from the CE task in 2010 i2b2/VA on account of 1) the data from 2010 i2b2/VA being easier to access and parse, 2) 2013 ShARe/CLEF containing disjoint entities and hence requiring more complicated tagging schemes. The synthesized user queries are generated using the aforementioned dermatology glossary. Tagged sentences are extracted from the clinical notes; sentences with no clinical entity present are ignored. In total, 22,489 tagged sentences are extracted from the clinical notes. We will refer to these tagged sentences interchangeably as the i2b2 data. The sentences are shuffled and split into train/dev/test sets with a ratio of 7:2:1. The synthesized user queries are composed by randomly selecting several clinical terms from the dermatology glossary and then combining them in no particular order. When combining the clinical terms, we attach the BIO tags to their constituent words.
The synthesized user queries (13,697 in total) are then split into train/dev/test sets with the same ratio. Next, each set in the i2b2 data and the corresponding set in the synthesized query data are combined to form a hybrid train/dev/test set, respectively. This way we ensure that in each hybrid train/dev/test set, the ratio between the i2b2 data and the synthesized query data is the same. The reason for combining the two datasets is their drastic structural difference (see Figure FIGREF16 for an example). Previously, when trained on the i2b2 data only, the BiLSTM-CRF model was not able to segment clinical entities at the correct boundary. It would fail to recognize the user query in Figure FIGREF16(a) as four separate entities. On the other hand, if the model was trained solely on the synthesized user queries, we could imagine that it would fail miserably on any queries that resemble the sentence in Figure FIGREF16(b), because the model would have never seen an “O” tag in the training data. Therefore, it is necessary to use the hybrid training data containing both the i2b2 data and the synthesized user queries. To make the hybrid training data, we need to unify the tags. Recall that in Section SECREF1 we point out that the tags are different for the different tasks and datasets. Since we use custom tags for the dermatology glossary in our problem, we would need to convert the tags used in 2010 i2b2/VA. But this would be infeasible, as it would require experts to do it manually. An alternative is to avoid distinguishing the tag types and label all tags under the generic BIO tags. Experiments ::: Setup To show the effects of using the hybrid training data, we trained two models with the same architecture and hyperparameters. One model was trained on the hybrid data and will be referred to as the hybrid NER model. The other model was trained on clinical notes only and will be referred to as the i2b2 NER model. We evaluated the performance of the NER models by micro-F1 score on the test set of both the synthesized queries and the i2b2 data. We used the BiLSTM-CRF implementation provided by the flair package BIBREF16. We set the hidden size to 256 in the LSTM and left everything else at the default values of the SequenceTagger model in flair. For word embeddings, we used the ELMo embeddings fine-tuned on PubMed articles and flair embeddings BIBREF13 trained on $5\%$ of PubMed abstracts, respectively. We trained models for 10 epochs and experimented with different learning rates, mini-batch sizes, and dropout rates. We ran hyperparameter optimization tests to find the best combination. $s_c$ is set to 0.6 in our experiments. Experiments ::: Hyperparameter Tuning We defined the following hyperparameter search space: embeddings: [“ELMo on pubmed”, “stacked flair on pubmed”], hidden_size: [128, 256], learning_rate: [0.05, 0.1], mini_batch_size: [32, 64, 128]. The hyperparameter optimization was performed using Hyperopt. Three evaluations were run for each combination of hyperparameters. Each ran for 10 epochs. Then the results were averaged to give the performance for that particular combination of hyperparameters. Experiments ::: Results From the hyperparameter tuning we found that the best combination was embeddings: “ELMo on pubmed”, hidden_size: 256, learning_rate: 0.05, mini_batch_size: 32.
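For concreteness, the search space listed above can be expressed with Hyperopt roughly as follows. This is our own sketch: the objective below is a stand-in that, in practice, would train a SequenceTagger for 10 epochs with the given configuration and return 1 minus the development F1, and the number of evaluations is illustrative.

```python
from hyperopt import fmin, hp, tpe

space = {
    "embeddings": hp.choice("embeddings", ["ELMo on pubmed", "stacked flair on pubmed"]),
    "hidden_size": hp.choice("hidden_size", [128, 256]),
    "learning_rate": hp.choice("learning_rate", [0.05, 0.1]),
    "mini_batch_size": hp.choice("mini_batch_size", [32, 64, 128]),
}

def objective(params):
    # Stand-in score so the sketch runs; replace with: train a tagger using
    # `params`, evaluate on the dev set, and return 1.0 - dev_F1.
    return 0.1 * params["learning_rate"] + 0.001 * params["mini_batch_size"]

best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=24)
print(best)  # indices of the chosen values for each hp.choice parameter
```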
With the above hyperparameter setting, the hybrid NER model achieved an F1 score of $0.995$ on synthesized queries and $0.948$ on clinical notes, while the i2b2 NER model achieved an F1 score of $0.441$ on synthesized queries and $0.927$ on clinical notes (see Table TABREF23). Since there was no ground truth available for the retrieved terms, we randomly picked a few samples to assess the framework's performance. Some example outputs of our complete framework on real user queries are shown in Figure FIGREF24. For example, from the figure we see that the query "child fever double vision dizzy" was correctly tagged with four entities: "child", "fever", "double vision", and "dizzy". A list of terms from our glossary was matched to each entity. In a real-world application, the lists of terms would be presented to the user as the retrieval results for their queries. Discussion In most real user queries we sampled, the entities were tagged at the correct boundary and the tagging was complete (such as the ones shown in Figure FIGREF24). The tagging was questionable for only a few user queries. For example, the query “Erythematous blanching round, oval patches on torso, extremities” was tagged as “Erythematous blanching” and “oval patches on torso”. The entity “extremities” was missing. The segmentation was also incorrect. A more appropriate tagging would be “Erythematous blanching round, oval patches”, “torso”, and “extremities”. The tagging could be further improved by synthesizing more realistic user queries. Recall that the synthesized user queries were created by randomly combining terminologies from the dermatology glossary, which, while providing data that helped the model learn entity segmentation, did not reflect the co-occurrence information in real user queries. For example, there could be two clinical entities that often co-occur or never co-occur in a user query. But since the synthesized user queries we used combined terms randomly, the co-occurrence information was thus missing. The final retrieval results of our framework were not evaluated quantitatively in terms of recall and precision, due to the lack of ground truth. When ground truth becomes available, we will be able to evaluate our framework more thoroughly. Recently, BioBERT BIBREF17, a BERT model fine-tuned in the medical domain, has attracted some attention in medical NLP. We could experiment with BioBERT embeddings in the future. We could also include query expansion techniques for term matching. When finding candidate terms for an entity, our first step was still based on string matching. Given that there might be multiple entities that could be matched to the same term, it could be hard to include all these entities in the glossary and hard to match terms to these entities. Conclusion In this project, we tackle the problem of extracting clinical concepts from user queries on medical search engines. By training a BiLSTM-CRF model on hybrid data consisting of synthesized user queries and sentences from clinical notes, we adopt a CE framework for clinical user queries with minimal effort spent on annotating user queries. We find that the hybrid data enables the NER model to perform better on both tagging the user queries and the clinical note sentences. Furthermore, our framework is built on an easy-to-use deep learning NLP Python library, which lends it more prospective value to various online medical applications that employ medical search engines.
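As a concrete illustration of the candidate-entity relevance score described in the Term Matching section, the following is our own sketch with illustrative names. It assumes pre-computed word vectors, uses cosine similarity for $d(\cdot )$, and assumes that each covered candidate word contributes its best similarity; the exact formula in equation DISPLAY_FORM14 may differ in such details.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def relevance(candidate_vecs, entity_vecs, s_c=0.6):
    """Candidate-entity relevance score s(C, E).

    candidate_vecs: word vectors for the (non-excluded) words of a candidate term
    entity_vecs:    word vectors for the words of a tagged entity
    Each candidate word takes its best similarity to any entity word; words whose
    best match falls below the cutoff s_c contribute nothing, and the 1/m factor
    favours concise candidates for the same amount of information coverage.
    """
    m = len(candidate_vecs)
    total = 0.0
    for c_vec in candidate_vecs:
        best = max(cosine(c_vec, e_vec) for e_vec in entity_vecs)
        if best > s_c:
            total += best
    return total / m

# Toy example with random vectors standing in for contextual embeddings.
rng = np.random.default_rng(0)
entity = [rng.normal(size=8) for _ in range(3)]     # e.g. "severe", "burns", "legs"
candidate = [rng.normal(size=8) for _ in range(2)]  # e.g. "leg", "burn"
print(relevance(candidate, entity))
```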
Acknowledgment This paper results from a technical report of a project the authors have worked on with visualDx, a healthcare informatics company that provides a web-based clinical decision support system. The authors would like to thank visualDx for providing them the opportunity to work on such an exciting project. In particular, the authors would like to thank Roy Robinson, the Vice President of Technology and Medical Informatics at visualDx, for providing the synthesized user queries, as well as preliminary feedback on the performance of our framework.
Where did they obtain the annotated clinical notes from?
clinical notes from the CE task in 2010 i2b2/VA
Introduction Text summarization generates summaries from input documents while keeping salient information. It is an important task and can be applied to several real-world applications. Many methods have been proposed to solve the text summarization problem BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . There are two main text summarization techniques: extractive and abstractive. Extractive summarization generates a summary by selecting salient sentences or phrases from the source text, while abstractive methods paraphrase and restructure sentences to compose the summary. We focus on abstractive summarization in this work as it is more flexible and thus can generate more diverse summaries. Recently, many abstractive approaches have been introduced based on the neural sequence-to-sequence framework BIBREF4 , BIBREF0 , BIBREF3 , BIBREF5 . Based on the sequence-to-sequence model with copy mechanism BIBREF6 , BIBREF0 incorporates a coverage vector to track and control attention scores on the source text. BIBREF4 introduce intra-temporal attention processes in the encoder and decoder to address the repetition and incoherence problems. There are two issues in previous abstractive methods: 1) these methods use a left-context-only decoder, and thus do not have complete context when predicting each word; 2) they do not utilize pre-trained contextualized language models on the decoder side, so it is more difficult for the decoder to learn summary representations, context interactions and language modeling together. Recently, BERT has been successfully used in various natural language processing tasks, such as textual entailment, named entity recognition and machine reading comprehension. In this paper, we present a novel natural language generation model based on pre-trained language models (we use BERT in this work). As far as we know, this is the first work to extend BERT to the sequence generation task. To address the above issues of previous abstractive methods, in our model, we design a two-stage decoding process to make good use of BERT's context modeling ability. In the first stage, we generate the summary using a left-context-only decoder. In the second stage, we mask each word of the summary and predict the refined word one by one using a refine decoder. To further improve the naturalness of the generated sequence, we incorporate a reinforcement learning objective into the refine decoder. The main contributions of this work are: 1. We propose a natural language generation model based on BERT, making good use of the pre-trained language model in the encoder and decoder process, and the model can be trained end-to-end without handcrafted features. 2. We design a two-stage decoding process. In this architecture, our model can generate each word of the summary considering context from both sides. 3. We conduct experiments on the benchmark datasets CNN/Daily Mail and New York Times. Our model achieves a 33.33 average of ROUGE-1, ROUGE-2 and ROUGE-L on the CNN/Daily Mail, which is state-of-the-art. On the New York Times dataset, our model achieves about a 5.6% relative improvement on ROUGE-1. Text Summarization In this paper, we focus on single-document multi-sentence summarization and propose a supervised abstractive model based on the neural attentive sequence-to-sequence framework which consists of two parts: a neural network for the encoder and another network for the decoder.
The encoder encodes the input sequence into an intermediate representation and the decoder predicts one word at each time step given the input sequence representation vector and the previously decoded output. The goal of the model is to maximize the probability of generating the correct target sequences. In the encoding and generation process, the attention mechanism is used to concentrate on the most important positions of the text. The learning objective of most sequence-to-sequence models is to minimize the negative log likelihood of the generated sequence as the following equation shows, where $y^*_t$ is the t-th ground-truth summary token. $$Loss = - \sum _{t=1}^N \log P(y_t^*|y_{<t}^*, X)$$ (Eq. 3) However, with this objective, traditional sequence generation models consider only one-directional context in the decoding process, which could cause performance degradation, since the complete context of a token contains both preceding and following tokens; feeding only previously decoded words to the decoder may thus lead the model to generate unnatural sequences. For example, attentive sequence-to-sequence models often generate sequences with repeated phrases, which harm naturalness. Some previous works mitigate this problem by improving the attention calculation process, but in this paper we show that feeding bi-directional context instead of left-only context can better alleviate this problem. Text summarization models are usually classified into abstractive and extractive ones. Recently, extractive models like DeepChannel BIBREF8 , rnn-ext+RL BIBREF9 and NeuSUM BIBREF2 achieve higher performances using well-designed structures. For example, DeepChannel proposes a salience estimation network and iteratively extracts salient sentences. BIBREF16 train a sentence compression model to teach another latent variable extractive model. Also, several recent works focus on improving abstractive methods. BIBREF3 design a content selector to over-determine phrases in a source document that should be part of the summary. BIBREF11 introduce an inconsistency loss to force words in less attended sentences (as determined by an extractive model) to have lower generation probabilities. BIBREF5 extend the seq2seq model with an information selection network to generate more informative summaries. Bi-Directional Pre-Trained Context Encoders Recently, context encoders such as ELMo, GPT, and BERT have been widely used in many NLP tasks. These models are pre-trained on a huge unlabeled corpus and can generate better contextualized token embeddings; thus, approaches built on top of them can achieve better performance. Since our method is based on BERT, we illustrate the process briefly here. BERT consists of several layers. In each layer there is first a multi-head self-attention sub-layer and then a linear affine sub-layer with a residual connection. In each self-attention sub-layer the attention weights $e_{ij}$ are first calculated as Eq. ( 5 ) shows, in which $d_e$ is the output dimension, and $W^Q, W^K, W^V$ are parameter matrices. $$&a_{ij} = \cfrac{(h_iW^Q)(h_jW^K)^T}{\sqrt{d_e}} \\ &e_{ij} = \cfrac{\exp {a_{ij}}}{\sum _{k=1}^N\exp {a_{ik}}} $$ (Eq. 5) Then the output is calculated as Eq. ( 6 ) shows, which is the weighted sum of the transformed representations $h_j W^V$, added to the input $h_i$ via the residual connection. The outputs of the last layer are the context encoding of the input sequence. $$o_i = h_i + \sum _{j=1}^{N} e_{ij}(h_j W^V) $$ (Eq. 6)
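The following NumPy sketch (our own, single-head, with illustrative shapes) spells out the computation in Eqs. (5)-(6): scaled dot-product scores, a softmax over positions, and a residual connection.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_sublayer(H, W_Q, W_K, W_V):
    """One single-head self-attention sub-layer with a residual connection.

    H: (n, d_e) matrix whose rows are the token representations h_1 .. h_n.
    """
    d_e = H.shape[1]
    A = (H @ W_Q) @ (H @ W_K).T / np.sqrt(d_e)  # a_ij, Eq. (5)
    E = softmax(A, axis=-1)                     # e_ij: softmax of a_ij over j
    return H + E @ (H @ W_V)                    # o_i = h_i + sum_j e_ij (h_j W^V), Eq. (6)

n, d_e = 4, 8
rng = np.random.default_rng(0)
H = rng.normal(size=(n, d_e))
W_Q, W_K, W_V = (rng.normal(size=(d_e, d_e)) for _ in range(3))
print(self_attention_sublayer(H, W_Q, W_K, W_V).shape)  # (4, 8)
```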
Despite their wide usage and huge success, there is also a mismatch problem between these pre-trained context encoders and sequence-to-sequence models. The issue is that pre-trained context encoders like GPT or BERT model token-level representations by conditioning on context from both directions. During pre-training, they are fed with complete sequences. However, with a left-context-only decoder, these pre-trained language models will suffer from incomplete and inconsistent context and thus cannot generate good enough context-aware word representations, especially during the inference process. Model In this section, we describe the structure of our model, which learns to generate an abstractive multi-sentence summary from a given source document. Based on the sequence-to-sequence framework built on top of BERT, we first design a word-level refine decoder to tackle the two problems described in the above section. We also introduce a discrete objective for the refine decoders to reduce the exposure bias problem. The overall structure of our model is illustrated in Figure 1 . Problem Formulation We denote the input document as $X = \lbrace x_1, \ldots , x_m\rbrace $ where $x_i \in \mathcal {X}$ represents one source token. The corresponding summary is denoted as $Y = \lbrace y_1, \ldots , y_L\rbrace $ , where $L$ represents the summary length. Given the input document $X$ , we first predict the summary draft with a left-context-only decoder, and then, using the generated summary draft, we can condition on context from both sides and refine the content of the summary. The draft guides and constrains the refining of the summary. Summary Draft Generation The summary draft is based on the sequence-to-sequence model. On the encoder side the input document $X$ is encoded into representation vectors $H = \lbrace h_1, \ldots , h_m\rbrace $ , and then fed to the decoder to generate the summary draft $A = \lbrace a_1, \ldots , a_{|a|}\rbrace $ . We simply use BERT as the encoder. It first maps the input sequence to word embeddings and then computes document embeddings as the encoder's output, denoted by the following equation. $$H = BERT(x_1, \ldots , x_m)$$ (Eq. 10) In the draft decoder, we first introduce BERT's word embedding matrix to map the previous summary draft outputs $\lbrace y_1, \ldots , y_{t-1}\rbrace $ into embedding vectors $\lbrace q_1, \ldots , q_{t-1}\rbrace $ at the t-th time step. Note that as the input sequence of the decoder is not complete, we do not use the BERT network to predict the context vectors here. Then we introduce an $N$ -layer Transformer decoder to learn the conditional probability $P(A|H)$ . The Transformer's encoder-decoder multi-head attention helps the decoder learn soft alignments between the summary and the source document. At the t-th time step, the draft decoder predicts the output probability conditioned on previous outputs and encoder hidden representations as Eq. ( 13 ) shows, in which $q_{<t} = \lbrace q_1, \ldots , q_{t-1}\rbrace $ . Each generated sequence will be truncated at the first occurrence of the special token '[PAD]'. $$&P^{vocab}_t(w) = f_{dec}(q_{<t}, H) \\ &L_{dec} = \sum _{i=1}^{|a|} -\log P(a_i = y_i^*|a_{< i}, H) $$ (Eq. 13) As Eq. () shows, the decoder's learning objective is to minimize the negative log-likelihood of the conditional probability, in which $y_i^*$ is the i-th ground-truth word of the summary.
However, a decoder with this structure alone is not sufficient: if we use the BERT network in this decoder, then during training and inference incomplete context (part of a sentence) is fed into the BERT module, and although we can fine-tune BERT's parameters, the input distribution is quite different from that of the pre-training process, which harms the quality of the generated context representations. If we just use the embedding matrix here, it is harder for a decoder with fresh parameters to learn to model representations as well as vocabulary probabilities from a corpus that is relatively small compared to BERT's huge pre-training corpus. In short, the decoder cannot utilize BERT's ability to generate high-quality context vectors, which also harms performance. This issue exists when using any other contextualized word representations, so we design a refine process to mitigate it in our approach, described in the next sub-section.

As some summary tokens are out-of-vocabulary words that occur in the input document, we incorporate a copy mechanism BIBREF6 into the Transformer decoder, which we describe briefly here. At decoder time step $t$ , we first calculate the attention probability distribution over the source document $X$ using a bi-linear dot product of the last-layer Transformer decoder output $o_t$ and the encoder output $h_j$ , as Eq. (15) shows. $$u_t^j = o_t W_c h_j, \qquad \alpha _t^j = \cfrac{\exp {u_t^j}}{\sum _{k=1}^N\exp {u_t^k}} $$ (Eq. 15) We then calculate the copying gate $g_t\in [0, 1]$ , which makes a soft choice between selecting from the source and generating from the vocabulary, where $W_c, W_g, b_g$ are parameters: $$g_t = \mathrm{sigmoid}(W_g \cdot [o_t, h] + b_g) $$ (Eq. 16) Using $g_t$ we compute the weighted sum of the copy probability and the generation probability to obtain the final predicted probability over the extended vocabulary $\mathcal {V} + \mathcal {X}$ , where $\mathcal {X}$ is the set of out-of-vocabulary words from the source document. The final probability is calculated as follows: $$P_t(w) = (1-g_t)P_t^{vocab}(w) + g_t\sum _{i:w_i=w} \alpha _t^i$$ (Eq. 17)
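For a single decoding step, the copy mechanism in Eqs. (15)-(17) can be sketched as follows. The tensor shapes, the use of the attended source context as the `h` inside the gate of Eq. (16), and the parameter layouts are assumptions made for illustration.

```python
import torch

def copy_distribution(o_t, H, P_vocab, src_ids, extended_vocab_size, W_c, W_g, b_g):
    """o_t: (d,) last-layer decoder output; H: (src_len, d) encoder outputs;
    P_vocab: (|V|,) generation distribution; src_ids: (src_len,) extended-vocab
    ids of the source tokens; W_c: (d, d); W_g: (2d,); b_g: scalar."""
    u = (o_t @ W_c) @ H.T                        # u_t^j, bi-linear scores
    alpha = torch.softmax(u, dim=-1)             # copy attention over the source
    context = alpha @ H                          # attended source representation
    g = torch.sigmoid(torch.cat([o_t, context]) @ W_g + b_g)   # copy gate g_t

    P = torch.zeros(extended_vocab_size)
    P[: P_vocab.size(0)] = (1 - g) * P_vocab     # generation share
    P.scatter_add_(0, src_ids, g * alpha)        # copy share: sum over i with w_i = w
    return P
```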
It concentrates on one word at a time, based on the source document as well as the other words of the summary. We design the word-level refine decoder because this process is similar to the cloze task in BERT's pre-training process, so by exploiting the contextual language model the decoder can generate more fluent and natural sequences. The parameters are shared between the draft decoder and the refine decoder, as we find that with separate parameters the model's performance degrades considerably. The reason may be that we use teacher forcing during training, so the word-level refine decoder learns to predict words given all the other ground-truth words of the summary. This objective is similar to the language model's pre-training objective and is probably not enough for the decoder to learn to generate refined summaries. Hence, in our model all decoders share the same parameters.

Researchers usually use ROUGE as the evaluation metric for summarization; during sequence-to-sequence training, however, the objective is to maximize the log-likelihood of the generated sequences. This mismatch harms the model's performance, so we add a discrete objective to the model and optimize it with the policy gradient method. For example, the discrete objective for the summary draft process is given by Eq. (21), where $a^s$ is a draft summary sampled from the predicted distribution and $R(a^s)$ is its reward score compared with the ground-truth summary; we use ROUGE-L in our experiments. To balance optimizing the discrete objective against generating readable sequences, we mix the discrete objective with the maximum-likelihood objective. As Eq. (21) also shows, minimizing $\hat{L}_{dec}$ is the final objective for the draft process; note that here $L_{dec}$ is $-\log P(a|x)$ . In the refine process we introduce similar objectives. $$L^{rl}_{dec} = R(a^s)\cdot [-\log (P(a^s|x))], \qquad \hat{L}_{dec} = \gamma \, L^{rl}_{dec} + (1 - \gamma )\, L_{dec} $$ (Eq. 21)

Learning and Inference
During model training, the objective of our model is the sum of the two processes, jointly trained using the "teacher-forcing" algorithm. During training we feed the ground-truth summary to each decoder and minimize the objective. $$L_{model} = \hat{L}_{dec} + \hat{L}_{refine}$$ (Eq. 23) At test time, at each time step we choose the predicted word by $\hat{y} = \mathrm{argmax}_{y^{\prime }} P(y^{\prime }|x)$ , use beam search to generate the draft summaries, and use greedy search to generate the refined summaries.

Settings
In this work, all of our models are built on $BERT_{BASE}$ ; a larger pre-trained model with better performance ( $BERT_{LARGE}$ ) has been published, but it costs too much time and GPU memory. We use WordPiece embeddings with a 30,000-token vocabulary, the same as BERT. We set the number of Transformer decoder layers to 12 (8 on NYT50), the number of attention heads to 12 (8 on NYT50), and the fully-connected sub-layer hidden size to 3072. We train the model using an Adam optimizer with a learning rate of $3e-4$ , $\beta _1=0.9$ , $\beta _2=0.999$ and $\epsilon =10^{-9}$ , and use a dynamic learning rate during the training process. For regularization, we use dropout BIBREF13 and label smoothing BIBREF14 in our models, setting the dropout rate to 0.15 and the label smoothing value to 0.1. We set the RL objective factor $\gamma $ to 0.99.
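Combining Eq. (21) with the value of $\gamma$ just given, the mixed draft objective amounts to a few lines; how the reward $R(a^s)$ (ROUGE-L against the reference) is computed is left abstract here, since the text does not commit to a particular ROUGE implementation.

```python
def mixed_draft_objective(nll_sampled, reward_sampled, nll_teacher_forced, gamma=0.99):
    """Eq. (21): L^rl_dec = R(a^s) * [-log P(a^s | x)] for a draft sampled from
    the model's own distribution, mixed with the maximum-likelihood loss L_dec."""
    l_rl = reward_sampled * nll_sampled
    return gamma * l_rl + (1.0 - gamma) * nll_teacher_forced
```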
During training, we set the batch size to 36 and train for 4 epochs (8 epochs for NYT50 since it has many fewer training samples); after training, the best model is selected from the last 10 models based on development set performance. Due to the GPU memory limit, we use gradient accumulation, with an accumulation step of 12 and 3 samples fed at each step. We use a beam size of 4 and a length penalty of 1.0 to generate the summary sequences. We filter repeated tri-grams in the beam-search process by setting a word's probability to zero if it would create a tri-gram that already exists in the partial summary. This is an effective way to avoid phrase repetition, since the two datasets seldom contain repeated tri-grams within one summary. We also post-process the generated sequences with two simple rules: when there are multiple summary sentences with exactly the same content, we keep the first one and remove the others; we also remove sentences with fewer than 3 words from the result.

To evaluate the performance of our model, we conduct experiments on the CNN/Daily Mail dataset, a large collection of news articles modified for summarization. Following BIBREF0 we choose the non-anonymized version of the dataset, which consists of more than 280,000 training samples and 11,490 test samples. We also conduct experiments on the New York Times (NYT) dataset, which likewise consists of many news articles; the original dataset can be applied for. In our experiments, we follow the dataset splits and other pre-processing settings of BIBREF15 . We first filter out all samples without a full article text or abstract and then remove all samples with summaries shorter than 50 words. We then choose the test set based on the date of publication (all examples published after January 1, 2007). The final dataset contains 22,000 training samples and 3,452 test samples and is called NYT50, since all summaries are longer than 50 words. We tokenize all sequences of the two datasets using the WordPiece tokenizer. After tokenization, the average article and summary lengths of CNN/Daily Mail are 691 and 51, and those of NYT50 are 1152 and 75. We truncate articles to 512 tokens and summaries to 100 tokens in our experiments (the maximum summary length is set to 150 on NYT50, as its average gold summary is longer). On the CNN/Daily Mail dataset, we report the full-length F-1 scores of the ROUGE-1, ROUGE-2 and ROUGE-L metrics, calculated using the PyRouge package with the Porter stemmer option. On NYT50, following BIBREF4 , we evaluate the limited-length ROUGE recall score (limiting the generated summary length to the ground-truth length). We split NYT50 summaries into sentences by semicolons to calculate the ROUGE scores.

Results and Analysis
Table 1 shows the results on the CNN/Daily Mail dataset, where we compare the performance of many recent approaches with our model. We classify them into two groups based on whether they are extractive or abstractive models. As the last line of the table shows, the ROUGE-1 and ROUGE-2 scores of our full model are comparable with DCA, and it outperforms DCA on ROUGE-L. Also, compared to the extractive models NeuSUM and MASK- $LM^{global}$ , we achieve slightly higher ROUGE-1. Apart from these four scores, our model outperforms these models on all the other scores, and since we have a 95% confidence interval of at most $\pm $ 0.20, these improvements are statistically significant. As the last four lines of Table 1 show, we conduct an ablation study on our model variants to analyze the importance of each component.
We use three ablation models in the experiments. One-Stage: a sequence-to-sequence model with a copy mechanism based on BERT; Two-Stage: the One-Stage model with the word-level refine decoder added; Two-Stage + RL: the full model, with the refine process combined with the RL objective. First, comparing the Two-Stage + RL model with the Two-Stage ablation, we observe that the full model outperforms it by 0.30 on average ROUGE, suggesting that the reinforcement objective helps the model effectively. Then we analyze the effect of the refine process by removing the word-level refine decoder from the Two-Stage model, and observe that without it the average ROUGE score drops by 1.69. The ablation study shows that each module is necessary for our full model, and the improvements are statistically significant on all metrics.

To evaluate the impact of summary length on model performance, we compare the average ROUGE score improvements of our model across different lengths of ground-truth summaries. As the upper sub-figure of Figure 2 shows, compared to Pointer-Generator with Coverage, the improvements of our model on the length interval 40-80 (about 70% of the test set) are higher than on shorter samples, confirming that with better context representations our model achieves higher performance on longer documents. As the lower sub-figure of Figure 2 shows, compared to the extractive baseline Lead-3 BIBREF0 , the advantage of our model falls when the gold summary length is greater than 80. This is probably because we truncate long documents and gold summaries and therefore cannot use the full information; it could also be because the training data in these intervals is too scarce to train an abstractive model, so a simple extractive method does not fall too far behind.

Additional Results on NYT50
Table 2 shows experiments on the NYT50 corpus. Since the short-summary samples are filtered out, NYT50 has longer summaries on average than CNN/Daily Mail, so the model needs to capture long-term dependencies in the sequences to generate good summaries. The first two lines of Table 2 show the results of the two baselines introduced by BIBREF15 : these baselines select the first n sentences, or the first k words, from the original document. We also compare the performance of our model with two recent models: a 2.39 ROUGE-1 improvement over the ML+RL with intra-attn approach (the previous state of the art) carries over to this dataset, which is a large margin, and on ROUGE-2 our model also gets an improvement of 0.51. The experiment shows that our approach can outperform competitive methods on different data distributions.

Pre-trained language models
Pre-trained word vectors BIBREF17 , BIBREF18 , BIBREF19 have been widely used in many NLP tasks. More recently, pre-trained language models (ELMo, GPT and BERT) have also achieved great success on several NLP problems such as textual entailment, semantic similarity, reading comprehension, and question answering BIBREF20 , BIBREF21 , BIBREF22 . Some recent works also focus on leveraging pre-trained language models in summarization. BIBREF23 pre-train a language model and use it as the sentiment analyser when generating reviews of goods. BIBREF24 train a language model on gold summaries and then use it on the decoder side to incorporate prior knowledge. In this work, we use BERT (a language model pre-trained on large-scale unlabeled data) on the encoder and decoder of a seq2seq model, and by designing a two-stage decoding structure we build a competitive model for abstractive text summarization.
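As a concrete illustration of the encoder side (Eq. 10 above), the document representation H can be obtained from an off-the-shelf BERT-base checkpoint. The Hugging Face `transformers` API and the `bert-base-uncased` checkpoint are our assumptions; the paper does not name a specific implementation. In the full model these vectors are attended over by both the draft and refine decoders.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")

def encode_document(text, max_len=512):
    """Eq. (10): H = BERT(x_1, ..., x_m), one context vector per WordPiece
    token, truncated to the 512-token limit used in the experiments."""
    inputs = tokenizer(text, truncation=True, max_length=max_len, return_tensors="pt")
    with torch.no_grad():
        outputs = encoder(**inputs)
    return outputs.last_hidden_state      # shape (1, m, 768) for BERT-base
```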
Conclusion and Future Work
In this work, we propose a two-stage model based on the sequence-to-sequence paradigm. Our model utilizes BERT on both the encoder and decoder sides, and introduces a reinforcement objective in the learning process. We evaluate our model on two benchmark datasets, CNN/Daily Mail and New York Times; the experimental results show that, compared to previous systems, our approach effectively improves performance. Although our experiments are conducted on the summarization task, our model could be used in most natural language generation tasks, such as machine translation, question generation and paraphrasing. The refine decoder and mixed objective can also be applied to other sequence generation tasks, and we will investigate them in future work.
Why is masking words in the decoder helpful?
Because this process is similar to the cloze task in BERT's pre-training process; by using the ability of the contextual language model, the decoder can generate more fluent and natural sequences.
Introduction
Ancient Chinese is the written language of ancient China. It is a treasure of Chinese culture which brings together the wisdom and ideas of the Chinese nation and chronicles the ancient cultural heritage of China. Learning ancient Chinese not only helps people understand and inherit the wisdom of the ancients, but also helps people absorb and develop Chinese culture. However, it is difficult for modern people to read ancient Chinese. Firstly, compared with modern Chinese, ancient Chinese is more concise and shorter, and its grammatical order is quite different from that of modern Chinese. Secondly, most modern Chinese words are disyllabic, while most ancient Chinese words are monosyllabic. Thirdly, polysemy is widespread in ancient Chinese. In addition, manual translation has a high cost. Therefore, it is meaningful and useful to study automatic translation from ancient Chinese to modern Chinese. Through ancient-modern Chinese translation, the wisdom, talent and accumulated experience of the predecessors can be passed on to more people.

Neural machine translation (NMT) BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 has achieved remarkable performance on many bilingual translation tasks. It is an end-to-end learning approach for machine translation, with the potential to show great advantages over statistical machine translation (SMT) systems. However, the NMT approach has not been widely applied to the ancient-modern Chinese translation task. One of the main reasons is the limited high-quality parallel data resource. The most popular method of acquiring translation examples is bilingual text alignment BIBREF5 . This kind of method can be classified into two types: lexical-based and statistical-based. The lexical-based approaches BIBREF6 , BIBREF7 focus on lexical information, utilizing bilingual dictionaries BIBREF8 , BIBREF9 or lexical features. Meanwhile, the statistical-based approaches BIBREF10 , BIBREF11 rely on statistical information, such as the sentence length ratio between the two languages and alignment mode probability. However, these methods are designed for other bilingual language pairs that are written in different characters (e.g. English-French, Chinese-Japanese). Ancient-modern Chinese has some characteristics that are quite different from other language pairs. For example, ancient and modern Chinese are both written in Chinese characters, but ancient Chinese is highly concise and its syntactic structure differs from that of modern Chinese. The traditional methods do not take these characteristics into account.

In this paper, we propose an effective ancient-modern Chinese text alignment method at the clause level based on the characteristics of these two languages. The proposed method combines both lexical and statistical information and achieves a 94.2 F1-score on the Test set. Recently, a simple longest-common-subsequence-based approach for ancient-modern Chinese sentence alignment was proposed in BIBREF12 . Our experiments show that our proposed alignment approach performs much better than their method. We apply the proposed method to create a large translation parallel corpus which contains INLINEFORM0 1.24M bilingual sentence pairs. To the best of our knowledge, this is the first large high-quality ancient-modern Chinese dataset. Furthermore, we test SMT models and various NMT models on the created dataset and provide a strong baseline for this task.
Overview There are four steps to build the ancient-modern Chinese translation dataset: (i) The parallel corpus crawling and cleaning. (ii) The paragraph alignment. (iii) The clause alignment based on aligned paragraphs. (iv) Augmenting data by merging aligned adjacent clauses. The most critical step is the third step. Clause Alignment In the clause alignment step, we combine both statistical-based and lexical-based information to measure the score for each possible clause alignment between ancient and modern Chinese strings. The dynamic programming is employed to further find overall optimal alignment paragraph by paragraph. According to the characteristics of the ancient and modern Chinese languages, we consider the following factors to measure the alignment score INLINEFORM0 between a bilingual clause pair: Lexical Matching. The lexical matching score is used to calculate the matching coverage of the ancient clause INLINEFORM0 . It contains two parts: exact matching and dictionary matching. An ancient Chinese character usually corresponds to one or more modern Chinese words. In the first part, we carry out Chinese Word segmentation to the modern Chinese clause INLINEFORM1 . Then we match the ancient characters and modern words in the order from left to right. In further matching, the words that have been matched will be deleted from the original clauses. However, some ancient characters do not appear in its corresponding modern Chinese words. An ancient Chinese dictionary is employed to address this issue. We preprocess the ancient Chinese dictionary and remove the stop words. In this dictionary matching step, we retrieve the dictionary definition of each unmatched ancient character and use it to match the remaining modern Chinese words. To reduce the impact of universal word matching, we use Inverse Document Frequency (IDF) to weight the matching words. The lexical matching score is calculated as: DISPLAYFORM0 The above equation is used to calculate the matching coverage of the ancient clause INLINEFORM0 . The first term of equation ( EQREF8 ) represents exact matching score. INLINEFORM1 denotes the length of INLINEFORM2 , INLINEFORM3 denotes each ancient character in INLINEFORM4 , and the indicator function INLINEFORM5 indicates whether the character INLINEFORM6 can match the words in the clause INLINEFORM7 . The second term is dictionary matching score. Here INLINEFORM8 and INLINEFORM9 represent the remaining unmatched strings of INLINEFORM10 and INLINEFORM11 , respectively. INLINEFORM12 denotes the INLINEFORM13 -th character in the dictionary definition of the INLINEFORM14 and its IDF score is denoted as INLINEFORM15 . The INLINEFORM16 is a predefined parameter which is used to normalize the IDF score. We tuned the value of this parameter on the Dev set. Statistical Information. Similar to BIBREF11 and BIBREF6 , the statistical information contains alignment mode and length information. There are many alignment modes between ancient and modern Chinese languages. If one ancient Chinese clause aligns two adjacent modern Chinese clauses, we call this alignment as 1-2 alignment mode. We show some examples of different alignment modes in Figure FIGREF9 . In this paper, we only consider 1-0, 0-1, 1-1, 1-2, 2-1 and 2-2 alignment modes which account for INLINEFORM0 of the Dev set. We estimate the probability Pr INLINEFORM1 n-m INLINEFORM2 of each alignment mode n-m on the Dev set. To utilize length information, we make an investigation on length correlation between these two languages. 
Based on the assumption of BIBREF11 that each character in one language gives rise to a random number of characters in the other language and those random variables INLINEFORM3 are independent and identically distributed with a normal distribution, we estimate the mean INLINEFORM4 and standard deviation INLINEFORM5 from the paragraph aligned parallel corpus. Given a clause pair INLINEFORM6 , the statistical information score can be calculated by: DISPLAYFORM0 where INLINEFORM0 denotes the normal distribution probability density function. Edit Distance. Because ancient and modern Chinese are both written in Chinese characters, we also consider using the edit distance. It is a way of quantifying the dissimilarity between two strings by counting the minimum number of operations (insertion, deletion, and substitution) required to transform one string into the other. Here we define the edit distance score as: DISPLAYFORM0 Dynamic Programming. The overall alignment score for each possible clause alignment is as follows: DISPLAYFORM0 Here INLINEFORM0 and INLINEFORM1 are pre-defined interpolation factors. We use dynamic programming to find the overall optimal alignment paragraph by paragraph. Let INLINEFORM2 be total alignment scores of aligning the first to INLINEFORM3 -th ancient Chinese clauses with the first to to INLINEFORM4 -th modern Chinese clauses, and the recurrence then can be described as follows: DISPLAYFORM0 Where INLINEFORM0 denotes concatenate clause INLINEFORM1 to clause INLINEFORM2 . As we discussed above, here we only consider 1-0, 0-1, 1-1, 1-2, 2-1 and 2-2 alignment modes. Ancient-Modern Chinese Dataset Data Collection. To build the large ancient-modern Chinese dataset, we collected 1.7K bilingual ancient-modern Chinese articles from the internet. More specifically, a large part of the ancient Chinese data we used come from ancient Chinese history records in several dynasties (about 1000BC-200BC) and articles written by celebrities of that era. They used plain and accurate words to express what happened at that time, and thus ensure the generality of the translated materials. Paragraph Alignment. To further ensure the quality of the new dataset, the work of paragraph alignment is manually completed. After data cleaning and manual paragraph alignment, we obtained 35K aligned bilingual paragraphs. Clause Alignment. We applied our clause alignment algorithm on the 35K aligned bilingual paragraphs and obtained 517K aligned bilingual clauses. The reason we use clause alignment algorithm instead of sentence alignment is because we can construct more aligned sentences more flexibly and conveniently. To be specific, we can get multiple additional sentence level bilingual pairs by “data augmentation”. Data Augmentation. We augmented the data in the following way: Given an aligned clause pair, we merged its adjacent clause pairs as a new sample pair. For example, suppose we have three adjacent clause level bilingual pairs: ( INLINEFORM0 , INLINEFORM1 ), ( INLINEFORM2 , INLINEFORM3 ), and ( INLINEFORM4 , INLINEFORM5 ). We can get some additional sentence level bilingual pairs, such as: ( INLINEFORM6 , INLINEFORM7 ) and ( INLINEFORM8 , INLINEFORM9 ). Here INLINEFORM10 , INLINEFORM11 , and INLINEFORM12 are adjacent clauses in the original paragraph, and INLINEFORM13 denotes concatenate clause INLINEFORM14 to clause INLINEFORM15 . 
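A small sketch of this merging step follows; since the exact notation is lost to the INLINEFORM placeholders above, the function below is an illustration rather than the authors' code. Windows of up to four adjacent clause pairs are used here, anticipating the filtering described next, and the original single-clause pairs are kept as well.

```python
def augment(clause_pairs, max_window=4):
    """clause_pairs: list of (ancient_clause, modern_clause) tuples from one
    aligned paragraph. Adjacent pairs are concatenated into longer
    sentence-level pairs; window = 1 reproduces the unaugmented data."""
    out = []
    n = len(clause_pairs)
    for i in range(n):
        for w in range(1, max_window + 1):
            if i + w > n:
                break
            anc = "".join(a for a, _ in clause_pairs[i:i + w])
            mod = "".join(m for _, m in clause_pairs[i:i + w])
            out.append((anc, mod))
    return out
```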
The advantage of using this data augmentation method is that compared with only using ( INLINEFORM16 , INLINEFORM17 ) as the training data, we can also use ( INLINEFORM18 , INLINEFORM19 ) and ( INLINEFORM20 , INLINEFORM21 ) as the training data, which can provide richer supervision information for the model and make the model learn the align information between the source language and the target language better. After the data augmentation, we filtered the sentences which are longer than 50 or contain more than four clause pairs. Dataset Creation. Finally, we split the dataset into three sets: training (Train), development (Dev) and testing (Test). Note that the unaugmented dataset contains 517K aligned bilingual clause pairs from 35K aligned bilingual paragraphs. To keep all the sentences in different sets come from different articles, we split the 35K aligned bilingual paragraphs into Train, Dev and Test sets following these ratios respectively: 80%, 10%, 10%. Before data augmentation, the unaugmented Train set contains INLINEFORM0 aligned bilingual clause pairs from 28K aligned bilingual paragraphs. Then we augmented the Train, Dev and Test sets respectively. Note that the augmented Train, Dev and Test sets also contain the unaugmented data. The statistical information of the three data sets is shown in Table TABREF17 . We show some examples of data in Figure FIGREF14 . RNN-based NMT model We first briefly introduce the RNN based Neural Machine Translation (RNN-based NMT) model. The RNN-based NMT with attention mechanism BIBREF0 has achieved remarkable performance on many translation tasks. It consists of encoder and decoder part. We firstly introduce the encoder part. The input word sequence of source language are individually mapped into a INLINEFORM0 -dimensional vector space INLINEFORM1 . Then a bi-directional RNN BIBREF15 with GRU BIBREF16 or LSTM BIBREF17 cell converts these vectors into a sequences of hidden states INLINEFORM2 . For the decoder part, another RNN is used to generate target sequence INLINEFORM0 . The attention mechanism BIBREF0 , BIBREF18 is employed to allow the decoder to refer back to the hidden state sequence and focus on a particular segment. The INLINEFORM1 -th hidden state INLINEFORM2 of decoder part is calculated as: DISPLAYFORM0 Here g INLINEFORM0 is a linear combination of attended context vector c INLINEFORM1 and INLINEFORM2 is the word embedding of (i-1)-th target word: DISPLAYFORM0 The attended context vector c INLINEFORM0 is computed as a weighted sum of the hidden states of the encoder: DISPLAYFORM0 The probability distribution vector of the next word INLINEFORM0 is generated according to the following: DISPLAYFORM0 We take this model as the basic RNN-based NMT model in the following experiments. Transformer-NMT Recently, the Transformer model BIBREF4 has made remarkable progress in machine translation. This model contains a multi-head self-attention encoder and a multi-head self-attention decoder. As proposed by BIBREF4 , an attention function maps a query and a set of key-value pairs to an output, where the queries INLINEFORM0 , keys INLINEFORM1 , and values INLINEFORM2 are all vectors. The input consists of queries and keys of dimension INLINEFORM3 , and values of dimension INLINEFORM4 . The attention function is given by: DISPLAYFORM0 Multi-head attention mechanism projects queries, keys and values to INLINEFORM0 different representation subspaces and calculates corresponding attention. 
The attention function outputs are concatenated and projected again before giving the final output. Multi-head attention allows the model to attend to multiple features at different positions. The encoder is composed of a stack of INLINEFORM0 identical layers. Each layer has two sub-layers: multi-head self-attention mechanism and position-wise fully connected feed-forward network. Similarly, the decoder is also composed of a stack of INLINEFORM1 identical layers. In addition to the two sub-layers in each encoder layer, the decoder contains a third sub-layer which performs multi-head attention over the output of the encoder stack (see more details in BIBREF4 ). Experiments Our experiments revolve around the following questions: Q1: As we consider three factors for clause alignment, do all these factors help? How does our method compare with previous methods? Q2: How does the NMT and SMT models perform on this new dataset we build? Clause Alignment Results (Q1) In order to evaluate our clause alignment algorithm, we manually aligned bilingual clauses from 37 bilingual ancient-modern Chinese articles, and finally got 4K aligned bilingual clauses as the Test set and 2K clauses as the Dev set. Metrics. We used F1-score and precision score as the evaluation metrics. Suppose that we get INLINEFORM0 bilingual clause pairs after running the algorithm on the Test set, and there are INLINEFORM1 bilingual clause pairs of these INLINEFORM2 pairs are in the ground truth of the Test set, the precision score is defined as INLINEFORM3 (the algorithm gives INLINEFORM4 outputs, INLINEFORM5 of which are correct). And suppose that the ground truth of the Test set contains INLINEFORM6 bilingual clause pairs, the recall score is INLINEFORM7 (there are INLINEFORM8 ground truth samples, INLINEFORM9 of which are output by the algorithm), then the F1-score is INLINEFORM10 . Baselines. Since the related work BIBREF10 , BIBREF11 can be seen as the ablation cases of our method (only statistical score INLINEFORM0 with dynamic programming), we compared the full proposed method with its variants on the Test set for ablation study. In addition, we also compared our method with the longest common subsequence (LCS) based approach proposed by BIBREF12 . To the best of our knowledge, BIBREF12 is the latest related work which are designed for Ancient-Modern Chinese alignment. Hyper-parameters. For the proposed method, we estimated INLINEFORM0 and INLINEFORM1 on all aligned paragraphs. The probability Pr INLINEFORM2 n-m INLINEFORM3 of each alignment mode n-m was estimated on the Dev set. For the hyper-parameters INLINEFORM4 , INLINEFORM5 and INLINEFORM6 , the grid search was applied to tune them on the Dev set. In order to show the effect of hyper-parameters INLINEFORM7 , INLINEFORM8 , and INLINEFORM9 , we reported the results of various hyper-parameters on the Dev set in Table TABREF26 . Based on the results of grid search on the Dev set, we set INLINEFORM10 , INLINEFORM11 , and INLINEFORM12 in the following experiment. The Jieba Chinese text segmentation is employed for modern Chinese word segmentation. Results. The results on the Test set are shown in Table TABREF28 , the abbreviation w/o means removing a particular part from the setting. From the results, we can see that the lexical matching score is the most important among these three factors, and statistical information score is more important than edit distance score. Moreover, the dictionary term in lexical matching score significantly improves the performance. 
From these results, we obtain the best setting, which involves all three factors, and we used this setting for dataset creation. Furthermore, the proposed method performs much better than LCS BIBREF12 .

Translation Results (Q2)
In this experiment, we analyzed and compared the performance of the SMT and various NMT models on our built dataset. To verify the effectiveness of our data augmentation method, we trained the NMT and SMT models on both the unaugmented dataset (0.46M training pairs) and the augmented dataset, and tested all the models on the same augmented Test set. The models to be tested and their configurations are as follows:

SMT. The state-of-the-art Moses toolkit BIBREF19 was used to train the SMT model. We used KenLM BIBREF20 to train a 5-gram language model, and the GIZA++ toolkit to align the data.

RNN-based NMT. The basic RNN-based NMT model is based on BIBREF0 , introduced above. Both the encoder and decoder used a 2-layer RNN with 1024 LSTM cells, and the encoder is a bi-directional RNN. The batch size, the threshold of element-wise gradient clipping and the initial learning rate of the Adam optimizer BIBREF21 were set to 128, 5.0 and 0.001. When training the model on the augmented dataset, we used a 4-layer RNN. Several techniques were investigated to train the model, including layer normalization BIBREF22 , RNN dropout BIBREF23 , and learning rate decay BIBREF1 . The hyper-parameters were chosen empirically and tuned on the Dev set. Furthermore, we tested the basic NMT model with several techniques, such as target language reversal BIBREF24 (reversing the order of the words in all target sentences, but not source sentences), residual connections BIBREF25 and pre-trained word2vec embeddings BIBREF26 . For word embedding pre-training, we collected an external ancient corpus which contains INLINEFORM0 134M tokens.

Transformer-NMT. We also trained the Transformer model BIBREF4 , a strong NMT baseline, on both the augmented and unaugmented parallel corpora. The training configuration of the Transformer model is shown in Table TABREF32 . The hyper-parameters are set based on the settings in the paper BIBREF4 and the sizes of our training sets.

For the evaluation, we used as metric the average of 1- to 4-gram BLEU multiplied by a brevity penalty BIBREF27 , as computed by multi-bleu.perl in Moses. The results are reported in Table TABREF34 . For RNN-based NMT, we can see that target language reversal, residual connections, and word2vec can further improve the performance of the basic RNN-based NMT model. However, we find that the word2vec and reversal tricks bring no obvious improvement when the RNN-based NMT and Transformer models are trained on the augmented parallel corpus. SMT performs better than the NMT models when they are trained on the unaugmented dataset. Nevertheless, when trained on the augmented dataset, both the RNN-based and Transformer-based NMT models outperform the SMT model. In addition, as in other translation tasks BIBREF4 , the Transformer also performs better than RNN-based NMT. Because the Test set contains both augmented and unaugmented data, it is not surprising that the RNN-based and Transformer-based NMT models trained on unaugmented data perform poorly. In order to further verify the effect of data augmentation, we report the test results of the models on only the unaugmented test data (48K test pairs) in Table TABREF35 . From the results, it can be seen that the data augmentation still improves the models.
Analysis The generated samples of various models are shown in Figure FIGREF36 . Besides BLEU scores, we analyze these examples from a human perspective and draw some conclusions. At the same time, we design different metrics and evaluate on the whole Test set to support our conclusions as follows: On the one hand, we further compare the translation results from the perspective of people. We find that although the original meaning can be basically translated by SMT, its translation results are less smooth when compared with the other two NMT models (RNN-based NMT and Transformer). For example, the translations of SMT are usually lack of auxiliary words, conjunctions and function words, which is not consistent with human translation habits. To further confirm this conclusion, the average length of the translation results of the three models are measured (RNN-based NMT:17.12, SMT:15.50, Transformer:16.78, Reference:16.47). We can see that the average length of the SMT outputs is shortest, and the length gaps between the SMT outputs and the references are largest. Meanwhile, the average length of the sentences translated by Transformer is closest to the average length of references. These results indirectly verify our point of view, and show that the NMT models perform better than SMT in this task. On the other hand, there still exists some problems to be solved. We observe that translating proper nouns and personal pronouns (such as names, place names and ancient-specific appellations) is very difficult for all of these models. For instance, the ancient Chinese appellation `Zhen' should be translated into `Wo' in modern Chinese. Unfortunately, we calculate the accurate rate of some special words (such as `Zhen',`Chen' and `Gua'), and find that this rate is very low (the accurate rate of translating `Zhen' are: RNN-based NMT:0.14, SMT:0.16, Transformer:0.05). We will focus on this issue in the future. Conclusion and Future Work We propose an effective ancient-modern Chinese clause alignment method which achieves 94.2 F1-score on Test set. Based on it, we build a large scale parallel corpus which contains INLINEFORM0 1.24M bilingual sentence pairs. To our best knowledge, this is the first large high-quality ancient-modern Chinese dataset. In addition, we test the performance of the SMT and various NMT models on our built dataset and provide a strong NMT baseline for this task which achieves 27.16 BLEU score (4-gram). We further analyze the performance of the SMT and various NMT models and summarize some specific problems that machine translation models will encounter when translating ancient Chinese. For the future work, firstly, we are going to expand the dataset using the proposed method continually. Secondly, we will focus on solving the problem of proper noun translation and improve the translation system according to the features of ancient Chinese translation. Finally, we plan to introduce some techniques of statistical translation into neural machine translation to improve the performance. This work is supported by National Natural Science Fund for Distinguished Young Scholar (Grant No. 61625204) and partially supported by the State Key Program of National Science Foundation of China (Grant Nos. 61836006 and 61432014).
Where does the ancient Chinese dataset come from?
ancient Chinese history records in several dynasties (about 1000BC-200BC) and articles written by celebrities of that era
Introduction Sarcasm is an intensive, indirect and complex construct that is often intended to express contempt or ridicule . Sarcasm, in speech, is multi-modal, involving tone, body-language and gestures along with linguistic artifacts used in speech. Sarcasm in text, on the other hand, is more restrictive when it comes to such non-linguistic modalities. This makes recognizing textual sarcasm more challenging for both humans and machines. Sarcasm detection plays an indispensable role in applications like online review summarizers, dialog systems, recommendation systems and sentiment analyzers. This makes automatic detection of sarcasm an important problem. However, it has been quite difficult to solve such a problem with traditional NLP tools and techniques. This is apparent from the results reported by the survey from DBLP:journals/corr/JoshiBC16. The following discussion brings more insights into this. Consider a scenario where an online reviewer gives a negative opinion about a movie through sarcasm: “This is the kind of movie you see because the theater has air conditioning”. It is difficult for an automatic sentiment analyzer to assign a rating to the movie and, in the absence of any other information, such a system may not be able to comprehend that prioritizing the air-conditioning facilities of the theater over the movie experience indicates a negative sentiment towards the movie. This gives an intuition to why, for sarcasm detection, it is necessary to go beyond textual analysis. We aim to address this problem by exploiting the psycholinguistic side of sarcasm detection, using cognitive features extracted with the help of eye-tracking. A motivation to consider cognitive features comes from analyzing human eye-movement trajectories that supports the conjecture: Reading sarcastic texts induces distinctive eye movement patterns, compared to literal texts. The cognitive features, derived from human eye movement patterns observed during reading, include two primary feature types: The cognitive features, along with textual features used in best available sarcasm detectors, are used to train binary classifiers against given sarcasm labels. Our experiments show significant improvement in classification accuracy over the state of the art, by performing such augmentation. Related Work Sarcasm, in general, has been the focus of research for quite some time. In one of the pioneering works jorgensen1984test explained how sarcasm arises when a figurative meaning is used opposite to the literal meaning of the utterance. In the word of clark1984pretense, sarcasm processing involves canceling the indirectly negated message and replacing it with the implicated one. giora1995irony, on the other hand, define sarcasm as a mode of indirect negation that requires processing of both negated and implicated messages. ivanko2003context define sarcasm as a six tuple entity consisting of a speaker, a listener, Context, Utterance, Literal Proposition and Intended Proposition and study the cognitive aspects of sarcasm processing. Computational linguists have previously addressed this problem using rule based and statistical techniques, that make use of : (a) Unigrams and Pragmatic features BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 (b) Stylistic patterns BIBREF4 and patterns related to situational disparity BIBREF5 and (c) Hastag interpretations BIBREF6 , BIBREF7 . 
Most previous work on sarcasm detection uses distant-supervision-based techniques (e.g., leveraging hashtags) and stylistic/pragmatic features (emoticons, laughter expressions such as "lol", etc.). However, detecting sarcasm in linguistically well-formed structures, in the absence of explicit cues or information (like emoticons), proves hard using such linguistic/stylistic features alone. With the advent of sophisticated eye-trackers and electro/magneto-encephalographic (EEG/MEG) devices, it has become possible to delve deep into the cognitive underpinnings of sarcasm understanding. Filik2014, using a series of eye-tracking and EEG experiments, try to show that for unfamiliar ironies, the literal interpretation is computed first. They also show that a mismatch with context leads to a re-interpretation of the statement as being ironic. Camblin2007103 show that in multi-sentence passages, discourse congruence has robust effects on eye movements. This also implies that disrupted processing occurs for discourse-incongruent words, even though they are perfectly congruous at the sentence level. In our previous work BIBREF8 , we augment cognitive features, derived from eye-movement patterns of readers, with textual features to detect whether a human reader has realized the presence of sarcasm in text or not. The recent advancements in the literature discussed above motivate us to explore gaze-based cognition for sarcasm detection. As far as we know, our work is the first of its kind.

Eye-tracking Database for Sarcasm Analysis
Sarcasm often emanates from incongruity BIBREF9 , which forces the brain to reanalyze it BIBREF10 . This, in turn, affects the way the eyes move through the text. Hence, distinctive eye-movement patterns may be observed for the successful processing of sarcasm in text, in contrast to literal texts. This hypothesis forms the crux of our method for sarcasm detection, and we validate it using our previously released, freely available sarcasm dataset BIBREF8 enriched with gaze information.

Document Description
The database consists of 1,000 short texts, each having 10-40 words. Out of these, 350 are sarcastic and are collected as follows: (a) 103 sentences are from two popular sarcastic quote websites; (b) 76 sarcastic short movie reviews are manually extracted from the Amazon Movie Corpus BIBREF11 by two linguists; and (c) 171 tweets are downloaded using the hashtag #sarcasm from Twitter. The 650 non-sarcastic texts are either downloaded from Twitter or extracted from the Amazon Movie Review corpus. The sentences do not contain words/phrases that are highly topic- or culture-specific. The tweets were normalized to make them linguistically well formed, to avoid difficulty in interpreting social media lingo. Every sentence in our dataset carries a positive or negative opinion about specific "aspects". For example, the sentence "The movie is extremely well cast" has positive sentiment about the aspect "cast". The annotators were seven graduate students with science and engineering backgrounds and good English proficiency. They were given a set of instructions beforehand and were advised to seek clarifications before proceeding. The instructions mention the nature of the task, the annotation input method, and the necessity of minimizing head movement during the experiment.

Task Description
The task assigned to annotators was to read sentences one at a time and label them with binary labels indicating the polarity (i.e., positive/negative).
Note that the participants were not instructed to annotate whether a sentence is sarcastic or not, in order to rule out the Priming Effect (i.e., if sarcasm is expected beforehand, processing incongruity becomes relatively easier BIBREF12 ). The setup ensures its "ecological validity" in two ways: (1) readers are not given any clue that they have to treat sarcasm with special attention, which is achieved by setting the task to polarity annotation (instead of sarcasm detection); (2) sarcastic sentences are mixed with non-sarcastic text, which gives no prior knowledge about whether the forthcoming text will be sarcastic or not. The eye-tracking experiment is conducted following the standard norms in eye-movement research BIBREF13 . One sentence is displayed to the reader at a time, along with the "aspect" with respect to which the annotation has to be provided. While the participant reads, an SR-Research Eyelink-1000 eye-tracker (monocular remote mode, sampling rate 500Hz) records several eye-movement parameters such as fixations (a long stay of the gaze), saccades (quick jumps of the gaze between two positions of rest) and pupil size. The accuracy of polarity annotation varies between 72%-91% for sarcastic texts and 75%-91% for non-sarcastic texts, showing the inherent difficulty of sentiment annotation when sarcasm is present in the text under consideration. Annotation errors may be attributed to: (a) lack of patience/attention while reading, (b) issues related to text comprehension, and (c) confusion/indecisiveness caused by lack of context. For our analysis, we do not discard the incorrect annotations present in the database. Since our system eventually aims to involve online readers for sarcasm detection, it will be hard to segregate readers who misinterpret the text. We make the rational assumption that, for a particular text, most readers from a fairly large population will be able to identify sarcasm. Under this assumption, the eye-movement parameters, averaged across all readers in our setting, may not be significantly distorted by a few readers who fail to identify sarcasm. This assumption is applicable to both the regular and the multi-instance-based classifiers explained in section SECREF6 .

Analysis of Eye-movement Data
We observe distinct behavior during sarcasm reading by analyzing the "fixation duration on the text" (also referred to as "dwell time" in the literature) and the "scanpaths" of the readers.

Variation in the Average Fixation Duration per Word
Since sarcasm in text can be expected to induce cognitive load, it is reasonable to believe that it requires more processing time BIBREF14 . Hence, fixation duration normalized over total word count should usually be higher for a sarcastic text than for a non-sarcastic one. We observe this for all participants in our dataset, with the average fixation duration per word for sarcastic texts being at least 1.5 times that of non-sarcastic texts. To test the statistical significance, we conduct a two-tailed t-test (assuming unequal variance) to compare the average fixation duration per word for sarcastic and non-sarcastic texts. The hypothesized mean difference is set to 0 and the error tolerance limit ( INLINEFORM0 ) is set to 0.05. The t-test analysis, presented in Table TABREF11 , shows that for all participants, a statistically significant difference exists between the average fixation duration per word for sarcasm (higher average fixation duration) and non-sarcasm (lower average fixation duration).
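A minimal per-participant version of this comparison is sketched below, assuming the per-text average fixation durations are available as two arrays; SciPy's Welch variant corresponds to the unequal-variance setting above.

```python
from scipy import stats

def fixation_duration_ttest(sarcastic_afd, non_sarcastic_afd, alpha=0.05):
    """Two-tailed t-test with unequal variances (Welch's t-test) comparing one
    participant's average fixation duration per word on sarcastic vs.
    non-sarcastic texts; returns the statistic, p-value and a significance flag."""
    t, p = stats.ttest_ind(sarcastic_afd, non_sarcastic_afd, equal_var=False)
    return t, p, p < alpha
```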
This affirms that the presence of sarcasm affects the duration of fixation on words. It is important to note that longer fixations may also be caused by other linguistic subtleties (such as difficult words, ambiguity and syntactically complex structures) causing delay in comprehension, or occulomotor control problems forcing readers to spend time adjusting eye-muscles. So, an elevated average fixation duration per word may not sufficiently indicate the presence of sarcasm. But we would also like to share that, for our dataset, when we considered readability (Flesch readability ease-score BIBREF15 ), number of words in a sentence and average character per word along with the sarcasm label as the predictors of average fixation duration following a linear mixed effect model BIBREF16 , sarcasm label turned out to be the most significant predictor with a maximum slope. This indicates that average fixation duration per word has a strong connection with the text being sarcastic, at least in our dataset. We now analyze scanpaths to gain more insights into the sarcasm comprehension process. Analysis of Scanpaths Scanpaths are line-graphs that contain fixations as nodes and saccades as edges; the radii of the nodes represent the fixation duration. A scanpath corresponds to a participant's eye-movement pattern while reading a particular sentence. Figure FIGREF14 presents scanpaths of three participants for the sarcastic sentence S1 and the non-sarcastic sentence S2. The x-axis of the graph represents the sequence of words a reader reads, and the y-axis represents a temporal sequence in milliseconds. Consider a sarcastic text containing incongruous phrases A and B. Our qualitative scanpath-analysis reveals that scanpaths with respect to sarcasm processing have two typical characteristics. Often, a long regression - a saccade that goes to a previously visited segment - is observed when a reader starts reading B after skimming through A. In a few cases, the fixation duration on A and B are significantly higher than the average fixation duration per word. In sentence S1, we see long and multiple regressions from the two incongruous phrases “misconception” and “cherish”, and a few instances where phrases “always cherish” and “original misconception” are fixated longer than usual. Such eye-movement behaviors are not seen for S2. Though sarcasm induces distinctive scanpaths like the ones depicted in Figure FIGREF14 in the observed examples, presence of such patterns is not sufficient to guarantee sarcasm; such patterns may also possibly arise from literal texts. We believe that a combination of linguistic features, readability of text and features derived from scanpaths would help discriminative machine learning models learn sarcasm better. Features for Sarcasm Detection We describe the features used for sarcasm detection in Table . The features enlisted under lexical,implicit incongruity and explicit incongruity are borrowed from various literature (predominantly from joshi2015harnessing). These features are essential to separate sarcasm from other forms semantic incongruity in text (for example ambiguity arising from semantic ambiguity or from metaphors). Two additional textual features viz. readability and word count of the text are also taken under consideration. These features are used to reduce the effect of text hardness and text length on the eye-movement patterns. 
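For reference, the readability feature mentioned above is the standard Flesch reading-ease formula; only the syllable count is non-trivial to obtain and is left as an input here.

```python
def flesch_reading_ease(n_words, n_sentences, n_syllables):
    """Flesch reading-ease score; higher values indicate easier text."""
    return 206.835 - 1.015 * (n_words / n_sentences) - 84.6 * (n_syllables / n_words)

# e.g. flesch_reading_ease(n_words=12, n_sentences=1, n_syllables=16)  ->  ~81.9
```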
Simple Gaze Based Features Readers' eye-movement behavior, characterized by fixations, forward saccades, skips and regressions, can be directly quantified by simple statistical aggregation (i.e., either computing features for individual participants and then averaging or performing a multi-instance based learning as explained in section SECREF6 ). Since these eye-movement attributes relate to the cognitive process in reading BIBREF17 , we consider these as features in our model. Some of these features have been reported by sarcasmunderstandability for modeling sarcasm understandability of readers. However, as far as we know, these features are being introduced in NLP tasks like textual sarcasm detection for the first time. The values of these features are believed to increase with the increase in the degree of surprisal caused by incongruity in text (except skip count, which will decrease). Complex Gaze Based Features For these features, we rely on a graph structure, namely “saliency graphs", derived from eye-gaze information and word sequences in the text. For each reader and each sentence, we construct a “saliency graph”, representing the reader's attention characteristics. A saliency graph for a sentence INLINEFORM0 for a reader INLINEFORM1 , represented as INLINEFORM2 , is a graph with vertices ( INLINEFORM3 ) and edges ( INLINEFORM4 ) where each vertex INLINEFORM5 corresponds to a word in INLINEFORM6 (may not be unique) and there exists an edge INLINEFORM7 between vertices INLINEFORM8 and INLINEFORM9 if R performs at least one saccade between the words corresponding to INLINEFORM10 and INLINEFORM11 . Figure FIGREF15 shows an example of a saliency graph.A saliency graph may be weighted, but not necessarily connected, for a given text (as there may be words in the given text with no fixation on them). The “complex” gaze features derived from saliency graphs are also motivated by the theory of incongruity. For instance, Edge Density of a saliency graph increases with the number of distinct saccades, which could arise from the complexity caused by presence of sarcasm. Similarly, the highest weighted degree of a graph is expected to be higher, if the node corresponds to a phrase, incongruous to some other phrase in the text. The Sarcasm Classifier We interpret sarcasm detection as a binary classification problem. The training data constitutes 994 examples created using our eye-movement database for sarcasm detection. To check the effectiveness of our feature set, we observe the performance of multiple classification techniques on our dataset through a stratified 10-fold cross validation. We also compare the classification accuracy of our system and the best available systems proposed by riloff2013sarcasm and joshi2015harnessing on our dataset. Using Weka BIBREF18 and LibSVM BIBREF19 APIs, we implement the following classifiers: Results Table TABREF17 shows the classification results considering various feature combinations for different classifiers and other systems. These are: Unigram (with principal components of unigram feature vectors), Sarcasm (the feature-set reported by joshi2015harnessing subsuming unigram features and features from other reported systems) Gaze (the simple and complex cognitive features we introduce, along with readability and word count features), and Gaze+Sarcasm (the complete set of features). For all regular classifiers, the gaze features are averaged across participants and augmented with linguistic and sarcasm related features. 
For the MILR classifier, the gaze features derived from each participant are augmented with linguistic features and thus, a multi instance “bag” of features is formed for each sentence in the training data. This multi-instance dataset is given to an MILR classifier, which follows the standard multi instance assumption to derive class-labels for each bag. For all the classifiers, our feature combination outperforms the baselines (considering only unigram features) as well as BIBREF3 , with the MILR classifier getting an F-score improvement of 3.7% and Kappa difference of 0.08. We also achieve an improvement of 2% over the baseline, using SVM classifier, when we employ our feature set. We also observe that the gaze features alone, also capture the differences between sarcasm and non-sarcasm classes with a high-precision but a low recall. To see if the improvement obtained is statistically significant over the state-of-the art system with textual sarcasm features alone, we perform McNemar test. The output of the SVM classifier using only linguistic features used for sarcasm detection by joshi2015harnessing and the output of the MILR classifier with the complete set of features are compared, setting threshold INLINEFORM0 . There was a significant difference in the classifier's accuracy with p(two-tailed) = 0.02 with an odds-ratio of 1.43, showing that the classification accuracy improvement is unlikely to be observed by chance in 95% confidence interval. Considering Reading Time as a Cognitive Feature along with Sarcasm Features One may argue that, considering simple measures of reading effort like “reading time” as cognitive feature instead of the expensive eye-tracking features for sarcasm detection may be a cost-effective solution. To examine this, we repeated our experiments with “reading time” considered as the only cognitive feature, augmented with the textual features. The F-scores of all the classifiers turn out to be close to that of the classifiers considering sarcasm feature alone and the difference in the improvement is not statistically significant ( INLINEFORM0 ). One the other hand, F-scores with gaze features are superior to the F-scores when reading time is considered as a cognitive feature. How Effective are the Cognitive Features We examine the effectiveness of cognitive features on the classification accuracy by varying the input training data size. To examine this, we create a stratified (keeping the class ratio constant) random train-test split of 80%:20%. We train our classifier with 100%, 90%, 80% and 70% of the training data with our whole feature set, and the feature combination from joshi2015harnessing. The goodness of our system is demonstrated by improvements in F-score and Kappa statistics, shown in Figure FIGREF22 . We further analyze the importance of features by ranking the features based on (a) Chi squared test, and (b) Information Gain test, using Weka's attribute selection module. Figure FIGREF23 shows the top 20 ranked features produced by both the tests. For both the cases, we observe 16 out of top 20 features to be gaze features. Further, in each of the cases, Average Fixation Duration per Word and Largest Regression Position are seen to be the two most significant features. Example Cases Table TABREF21 shows a few example cases from the experiment with stratified 80%-20% train-test split. Example sentence 1 is sarcastic, and requires extra-linguistic knowledge (about poor living conditions at Manchester). 
Hence, a sarcasm detector relying only on textual features is unable to detect the underlying incongruity, whereas our system predicts the label successfully, possibly helped by the gaze features. Similarly, for sentence 2, the false impression of incongruity (due to phrases like “Helped me” and “Can't stop”) misleads the system that uses only linguistic features; our system performs well in this case too. Sentence 3 presents a false-negative case where it was hard even for humans to perceive the sarcasm, which is why our gaze features (and subsequently the complete feature set) lead to an erroneous prediction. In sentence 4, gaze features alone falsely indicate the presence of incongruity, whereas the system predicts correctly when gaze and linguistic features are taken together. From these examples, it can be inferred that gaze features on their own would not suffice, since they cannot rule out other forms of incongruity that do not result in sarcasm. Error Analysis Errors committed by our system arise from multiple factors, ranging from limitations of the eye-tracker hardware to errors committed by linguistic tools and resources. Also, aggregating various eye-tracking parameters to extract the cognitive features may have caused information loss in the regular classification setting. Conclusion In the current work, we created a novel framework to detect sarcasm that derives insights from human cognition as it manifests in eye-movement patterns. We hypothesized that distinctive eye-movement patterns associated with reading sarcastic text enable improved detection of sarcasm. We augmented traditional linguistic features with cognitive features obtained from readers' eye-movement data, in the form of simple gaze-based features and complex features derived from a graph structure. This extended feature set improved the success rate of the sarcasm detector by 3.7% over the best available system. To our knowledge, this is the first proposal to use cognitive features in an NLP system of this kind. Our general approach may be useful in other NLP sub-areas like sentiment and emotion analysis, text summarization, and question answering, where textual clues alone do not prove sufficient. In future work, we propose to explore deeper graph and gaze features, and to develop models that learn complex gaze-feature representations accounting for the power of individual eye-movement patterns along with aggregated patterns of eye movements. Acknowledgments We thank the members of CFILT Lab, especially Jaya Jha and Meghna Singh, and the students of IIT Bombay for their help and support.

Introduction Chinese word segmentation (CWS) is a task for Chinese natural language process to delimit word boundary. CWS is a basic and essential task for Chinese which is written without explicit word delimiters and different from alphabetical languages like English. BIBREF0 treats Chinese word segmentation (CWS) as a sequence labeling task with character position tags, which is followed by BIBREF1, BIBREF2, BIBREF3. Traditional CWS models depend on the design of features heavily which effects the performance of model. To minimize the effort in feature engineering, some CWS models BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11 are developed following neural network architecture for sequence labeling tasks BIBREF12. Neural CWS models perform strong ability of feature representation, employing unigram and bigram character embedding as input and approach good performance. The CWS task is often modeled as one graph model based on a scoring model that means it is composed of two parts, one part is an encoder which is used to generate the representation of characters from the input sequence, the other part is a decoder which performs segmentation according to the encoder scoring. Table TABREF1 summarizes typical CWS models according to their decoding ways for both traditional and neural models. Markov models such as BIBREF13 and BIBREF4 depend on the maximum entropy model or maximum entropy Markov model both with a Viterbi decoder. Besides, conditional random field (CRF) or Semi-CRF for sequence labeling has been used for both traditional and neural models though with different representations BIBREF2, BIBREF15, BIBREF10, BIBREF17, BIBREF18. Generally speaking, the major difference between traditional and neural network models is about the way to represent input sentences. Recent works about neural CWS which focus on benchmark dataset, namely SIGHAN Bakeoff BIBREF21, may be put into the following three categories roughly. Encoder. Practice in various natural language processing tasks has been shown that effective representation is essential to the performance improvement. Thus for better CWS, it is crucial to encode the input character, word or sentence into effective representation. Table TABREF2 summarizes regular feature sets for typical CWS models including ours as well. The building blocks that encoders use include recurrent neural network (RNN) and convolutional neural network (CNN), and long-term memory network (LSTM). Graph model. As CWS is a kind of structure learning task, the graph model determines which type of decoder should be adopted for segmentation, also it may limit the capability of defining feature, as shown in Table 2, not all graph models can support the word features. Thus recent work focused on finding more general or flexible graph model to make model learn the representation of segmentation more effective as BIBREF9, BIBREF11. External data and pre-trained embedding. Whereas both encoder and graph model are about exploring a way to get better performance only by improving the model strength itself. Using external resource such as pre-trained embeddings or language representation is an alternative for the same purpose BIBREF22, BIBREF23. SIGHAN Bakeoff defines two types of evaluation settings, closed test limits all the data for learning should not be beyond the given training set, while open test does not take this limitation BIBREF21. 
In this work, we will focus on the closed test setting by finding a better model design for further CWS performance improvement. Shown in Table TABREF1, different decoders have particular decoding algorithms to match the respective CWS models. Markov models and CRF-based models often use Viterbi decoders with polynomial time complexity. In general graph model, search space may be too large for model to search. Thus it forces graph models to use an approximate beam search strategy. Beam search algorithm has a kind low-order polynomial time complexity. Especially, when beam width $b$=1, the beam search algorithm will reduce to greedy algorithm with a better time complexity $O(Mn)$ against the general beam search time complexity $O(Mnb^2)$, where $n$ is the number of units in one sentences, $M$ is a constant representing the model complexity. Greedy decoding algorithm can bring the fastest speed of decoding while it is not easy to guarantee the precision of decoding when the encoder is not strong enough. In this paper, we focus on more effective encoder design which is capable of offering fast and accurate Chinese word segmentation with only unigram feature and greedy decoding. Our proposed encoder will only consist of attention mechanisms as building blocks but nothing else. Motivated by the Transformer BIBREF24 and its strength of capturing long-range dependencies of input sentences, we use a self-attention network to generate the representation of input which makes the model encode sentences at once without feeding input iteratively. Considering the weakness of the Transformer to model relative and absolute position information directly BIBREF25 and the importance of localness information, position information and directional information for CWS, we further improve the architecture of standard multi-head self-attention of the Transformer with a directional Gaussian mask and get a variant called Gaussian-masked directional multi-head attention. Based on the newly improved attention mechanism, we expand the encoder of the Transformer to capture different directional information. With our powerful encoder, our model uses only simple unigram features to generate representation of sentences. For decoder which directly performs the segmentation, we use the bi-affinal attention scorer, which has been used in dependency parsing BIBREF26 and semantic role labeling BIBREF27, to implement greedy decoding on finding the boundaries of words. In our proposed model, greedy decoding ensures a fast segmentation while powerful encoder design ensures a good enough segmentation performance even working with greedy decoder together. Our model will be strictly evaluated on benchmark datasets from SIGHAN Bakeoff shared task on CWS in terms of closed test setting, and the experimental results show that our proposed model achieves new state-of-the-art. The technical contributions of this paper can be summarized as follows. We propose a CWS model with only attention structure. The encoder and decoder are both based on attention structure. With a powerful enough encoder, we for the first time show that unigram (character) featues can help yield strong performance instead of diverse $n$-gram (character and word) features in most of previous work. To capture the representation of localness information and directional information, we propose a variant of directional multi-head self-attention to further enhance the state-of-the-art Transformer encoder. 
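To make the greedy-decoding claim concrete before the model description: with a boundary score for every gap between adjacent characters, decoding reduces to an independent decision per gap, which is why it runs in time linear in sentence length. The following is a hedged sketch; the score source, threshold, and names are illustrative only and not taken from the paper.

```python
# Hypothetical sketch of greedy gap decoding for CWS: each gap between adjacent
# characters is labelled a word boundary independently of the others.
import numpy as np

def greedy_segment(chars, gap_boundary_scores):
    """chars: list of characters; gap_boundary_scores: array of shape
    (len(chars) - 1,) holding P(gap is a word boundary) from some scorer."""
    words, current = [], [chars[0]]
    for ch, p_boundary in zip(chars[1:], gap_boundary_scores):
        if p_boundary > 0.5:          # greedy, independent decision per gap
            words.append("".join(current))
            current = [ch]
        else:
            current.append(ch)
    words.append("".join(current))
    return words

# Example with made-up scores for a 5-character sentence (4 gaps)
print(greedy_segment(list("他来到北京"), np.array([0.9, 0.1, 0.8, 0.2])))
# -> ['他', '来到', '北京']
```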
Models The CWS task is often modelled as one graph model based on an encoder-based scoring model. The model for CWS task is composed of an encoder to represent the input and a decoder based on the encoder to perform actual segmentation. Figure FIGREF6 is the architecture of our model. The model feeds sentence into encoder. Embedding captures the vector $e=(e_1,...,e_n)$ of the input character sequences of $c=(c_1,...,c_n)$. The encoder maps vector sequences of $ {e}=(e_1,..,e_n)$ to two sequences of vector which are $ {v^b}=(v_1^b,...,v_n^b)$ and ${v^f}=(v_1^f,...v_n^f)$ as the representation of sentences. With $v^b$ and $v^f$, the bi-affinal scorer calculates the probability of each segmentation gaps and predicts the word boundaries of input. Similar as the Transformer, the encoder is an attention network with stacked self-attention and point-wise, fully connected layers while our encoder includes three independent directional encoders. Models ::: Encoder Stacks In the Transformer, the encoder is composed of a stack of N identical layers and each layer has one multi-head self-attention layer and one position-wise fully connected feed-forward layer. One residual connection is around two sub-layers and followed by layer normalization BIBREF24. This architecture provides the Transformer a good ability to generate representation of sentence. With the variant of multi-head self-attention, we design a Gaussian-masked directional encoder to capture representation of different directions to improve the ability of capturing the localness information and position information for the importance of adjacent characters. One unidirectional encoder can capture information of one particular direction. For CWS tasks, one gap of characters, which is from a word boundary, can divide one sequence into two parts, one part in front of the gap and one part in the rear of it. The forward encoder and backward encoder are used to capture information of two directions which correspond to two parts divided by the gap. One central encoder is paralleled with forward and backward encoders to capture the information of entire sentences. The central encoder is a special directional encoder for forward and backward information of sentences. The central encoder can fuse the information and enable the encoder to capture the global information. The encoder outputs one forward information and one backward information of each positions. The representation of sentence generated by center encoder will be added to these information directly: where $v^{b}=(v^b_1,...,v^b_n)$ is the backward information, $v^{f}=(v^f_1,...,v^f_n)$ is the forward information, $r^{b}=(r^b_1,...,r^b_n)$ is the output of backward encoder, $r^{c}=(r^c_1,...,r^c_n)$ is the output of center encoder and $r^{f}=(r^f_1,...,r^f_n)$ is the output of forward encoder. Models ::: Gaussian-Masked Directional Multi-Head Attention Similar as scaled dot-product attention BIBREF24, Gaussian-masked directional attention can be described as a function to map queries and key-value pairs to the representation of input. Here queries, keys and values are all vectors. 
Standard scaled dot-product attention is calculated by taking the dot product of the query $Q$ with all keys $K$, dividing each score by $\sqrt{d_k}$, where $d_k$ is the dimension of the keys, and applying a softmax function to generate the attention weights: Different from scaled dot-product attention, Gaussian-masked directional attention is designed to attend to the adjacent characters of each position, casting the localness relationship between characters as a fixed Gaussian weight in the attention. We assume that the Gaussian weight relies only on the distance between characters. First we introduce the Gaussian weight matrix $G$, which represents the localness relationship between every two characters: where $g_{ij}$ is the Gaussian weight between characters $i$ and $j$, $dis_{ij}$ is the distance between characters $i$ and $j$, $\Phi (x)$ is the cumulative distribution function of the Gaussian, and $\sigma $ is the standard deviation of the Gaussian function, a hyperparameter of our method. Equation (DISPLAY_FORM13) ensures that the Gaussian weight equals 1 when $dis_{ij}$ is 0. The larger the distance between characters is, the smaller the weight is, so a character affects its adjacent characters more strongly than distant ones. To combine the Gaussian weight with self-attention, we take the Hadamard product of the Gaussian weight matrix $G$ and the score matrix produced by $Q{K^{T}}$, where $AG$ denotes the Gaussian-masked attention. This ensures that the relationship between two characters far apart is weaker than that between adjacent characters. Scaled dot-product attention models the relationship between two characters without regard to their distance in the sequence. For the CWS task, the weight between adjacent characters should matter more, but it is hard for self-attention to achieve this effect explicitly because self-attention cannot access the order of the sentence directly. Gaussian-masked attention raises the weight between a character and its adjacent characters to a larger value, reflecting the influence of adjacency. For the forward and backward encoders, the self-attention sublayer uses a triangular matrix mask so that the self-attention focuses on different directions: where $pos_i$ is the position of character $c_i$. The triangular masks for the forward and backward encoders are, respectively: $$\left[ \begin{matrix} 1 & 0 & 0 & \cdots &0\\ 1 & 1 & 0 & \cdots &0\\ 1 & 1 & 1 & \cdots &0\\ \vdots &\vdots &\vdots &\ddots &\vdots \\ 1 & 1 & 1 & \cdots & 1\\ \end{matrix} \right] \qquad \left[ \begin{matrix} 1 & 1 & 1 & \cdots &1 \\ 0 & 1 & 1 & \cdots &1 \\ 0 & 0& 1 & \cdots &1 \\ \vdots &\vdots &\vdots &\ddots &\vdots \\ 0 & 0 & 0 & \cdots & 1\\ \end{matrix}\right]$$ Similar to BIBREF24, we use multi-head attention to capture information from different representation subspaces, as in Figure FIGREF16, and obtain Gaussian-masked directional multi-head attention. With the multi-head attention architecture, the representation of the input is captured by where $MH$ is the Gaussian-masked multi-head attention, ${W_i^q, W_i^k,W_i^v} \in \mathbb {R}^{d_k \times d_h}$ are the parameter matrices generating the heads, $d_k$ is the model dimension and $d_h$ is the dimension of one head. Models ::: Bi-affinal Attention Scorer Regarding word boundaries as gaps between adjacent words converts the character labeling task into a gap labeling task. Unlike character labeling, gap labeling requires information about two adjacent characters.
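Before detailing the bi-affine scorer, the following is a hedged numpy sketch of the Gaussian-masked directional attention just described. Since the exact CDF-based form of $g_{ij}$ is not reproduced here, the sketch uses the stand-in weight $\exp(-dis_{ij}^2 / 2\sigma^2)$, which shares the stated properties (equal to 1 at distance 0, decaying with distance); the ordering of masking and scaling is likewise an assumption made for illustration.

```python
# Hypothetical sketch of Gaussian-masked directional attention for one head.
import numpy as np

def gaussian_masked_attention(Q, K, V, sigma=2.0, direction=None):
    n, d_k = Q.shape
    scores = Q @ K.T / np.sqrt(d_k)                    # scaled dot-product scores
    dist = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    G = np.exp(-(dist ** 2) / (2 * sigma ** 2))        # localness weight, G[i, i] = 1
    scores = scores * G                                # Hadamard product with Gaussian mask
    if direction == "forward":                         # attend only to current/previous positions
        scores = np.where(np.tril(np.ones((n, n))) > 0, scores, -np.inf)
    elif direction == "backward":                      # attend only to current/following positions
        scores = np.where(np.triu(np.ones((n, n))) > 0, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over keys
    return weights @ V
```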
The relationship between adjacent characters can be represented as the type of gap. The characteristic of word boundaries makes bi-affine attention an appropriate scorer for CWS task. Bi-affinal attention scorer is the component that we use to label the gap. Bi-affinal attention is developed from bilinear attention which has been used in dependency parsing BIBREF26 and SRL BIBREF27. The distribution of labels in a labeling task is often uneven which makes the output layer often include a fixed bias term for the prior probability of different labels BIBREF27. Bi-affine attention uses bias terms to alleviate the burden of the fixed bias term and get the prior probability which makes it different from bilinear attention. The distribution of the gap is uneven that is similar as other labeling task which fits bi-affine. Bi-affinal attention scorer labels the target depending on information of independent unit and the joint information of two units. In bi-affinal attention, the score $s_{ij}$ of characters $c_i$ and $c_j$ $(i < j)$ is calculated by: where $v_i^f$ is the forward information of $c_i$ and $v_i^b$ is the backward information of $c_j$. In Equation (DISPLAY_FORM21), $W$, $U$ and $b$ are all parameters that can be updated in training. $W$ is a matrix with shape $(d_i \times N\times d_j)$ and $U$ is a $(N\times (d_i + d_j))$ matrix where $d_i$ is the dimension of vector $v_i^f$ and $N$ is the number of labels. In our model, the biaffine scorer uses the forward information of character in front of the gap and the backward information of the character behind the gap to distinguish the position of characters. Figure FIGREF22 is an example of labeling gap. The method of using biaffine scorer ensures that the boundaries of words can be determined by adjacent characters with different directional information. The score vector of the gap is formed by the probability of being a boundary of word. Further, the model generates all boundaries using activation function in a greedy decoding way. Experiments ::: Experimental Settings ::: Data We train and evaluate our model on datasets from SIGHAN Bakeoff 2005 BIBREF21 which has four datasets, PKU, MSR, AS and CITYU. Table TABREF23 shows the statistics of train data. We use F-score to evaluate CWS models. To train model with pre-trained embeddings in AS and CITYU, we use OpenCC to transfer data from traditional Chinese to simplified Chinese. Experiments ::: Experimental Settings ::: Pre-trained Embedding We only use unigram feature so we only trained character embeddings. Our pre-trained embedding are pre-trained on Chinese Wikipedia corpus by word2vec BIBREF29 toolkit. The corpus used for pre-trained embedding is all transferred to simplified Chinese and not segmented. On closed test, we use embeddings initialized randomly. Experiments ::: Experimental Settings ::: Hyperparameters For different datasets, we use two kinds of hyperparameters which are presented in Table TABREF24. We use hyperparameters in Table TABREF24 for small corpora (PKU and CITYU) and normal corpora (MSR and AS). We set the standard deviation of Gaussian function in Equation (DISPLAY_FORM13) to 2. Each training batch contains sentences with at most 4096 tokens. Experiments ::: Experimental Settings ::: Optimizer To train our model, we use the Adam BIBREF30 optimizer with $\beta _1=0.9$, $\beta _2=0.98$ and $\epsilon =10^{-9}$. 
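A hedged PyTorch-style sketch of this optimizer configuration, together with the warmup learning-rate schedule described next (the standard schedule of the Transformer paper), is given below; the model dimension and warmup-step values are placeholders rather than the paper's settings.

```python
# Hypothetical sketch: Adam with the stated hyperparameters plus a
# Transformer-style warmup schedule (lr rises linearly for `warmup_steps`
# steps, then decays with the inverse square root of the step number).
import torch

def make_optimizer_and_scheduler(model, d_model=256, warmup_steps=4000):
    optimizer = torch.optim.Adam(model.parameters(), lr=1.0,
                                 betas=(0.9, 0.98), eps=1e-9)

    def lrate(step):
        step = max(step, 1)
        return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lrate)
    return optimizer, scheduler
```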
The learning rate schedule is the same as in BIBREF24, $lrate = d^{-0.5} \cdot \min (step^{-0.5},\; step \cdot warmup\_step^{-1.5})$, where $d$ is the dimension of the embeddings, $step$ is the training step number, and $warmup\_step$ is the number of warmup steps. When the number of steps is smaller than the warmup step count, the learning rate increases linearly, and afterwards it decreases. Experiments ::: Hardware and Implementation We trained our models on a single CPU (Intel i7-5960X) with an nVidia 1080 Ti GPU. We implement our model in Python with PyTorch 1.0. Experiments ::: Results Tables TABREF25 and TABREF26 report the performance of recent models and ours under the closed test setting. Without the assistance of the unsupervised segmentation features used in BIBREF20, our model outperforms all the other models on MSR and AS except BIBREF18 and achieves comparable performance on PKU and CITYU. Note that all the other models in this comparison adopt various $n$-gram features, while only our model uses unigram ones. With the unsupervised segmentation features introduced by BIBREF20, our model obtains higher results. Specifically, the results on MSR and AS achieve a new state-of-the-art, and the results on CITYU and PKU approach the previous state-of-the-art. The unsupervised segmentation features are derived from the given training dataset, so using them does not violate the closed test rule of SIGHAN Bakeoff. Table TABREF36 compares our model and recent neural models under the open test setting, in which any external resources, especially pre-trained embeddings or language models, can be used. On MSR and AS, our model obtains comparable results, while our results on CITYU and PKU are not remarkable. However, it is well known that models are hard to compare under the open test setting, especially with pre-trained embeddings, since not all models use the same method and data for pre-training. Although pre-trained embeddings or language models can improve performance, the improvement itself may come from multiple sources; the fact that a pre-trained embedding boosts performance does not by itself prove that the model is better. Compared with other LSTM models, our model performs better on AS and MSR than on CITYU and PKU. Considering the scale of the different corpora, we believe that corpus size affects our model: the larger the corpus, the better the model performs, while on small corpora the model tends to overfit. Tables TABREF25 and TABREF26 also show the decoding time on the different datasets. Our model finishes segmentation with the least decoding time on all four datasets, thanks to the architecture of the model, which uses only attention mechanisms as building blocks. Related Work ::: Chinese Word Segmentation CWS is a task in Chinese natural language processing to delimit word boundaries. BIBREF0 for the first time formalize CWS as a sequence labeling task. BIBREF3 show that different character tag sets can have an essential impact on CWS. BIBREF2 use CRFs as a model for CWS, achieving a new state-of-the-art. Work on statistical CWS has built the basis for neural CWS. Neural word segmentation has been widely adopted to minimize the effort in feature engineering, which was important in statistical CWS. BIBREF4 introduce a neural model with sliding-window based sequence labeling. BIBREF6 propose a gated recursive neural network (GRNN) for CWS to incorporate complicated combinations of contextual character and n-gram features. BIBREF7 use LSTM to learn long-distance information.
BIBREF9 propose a neural framework that eliminates context windows and utilize complete segmentation history. BIBREF33 explore a joint model that performs segmentation, POS-Tagging and chunking simultaneously. BIBREF34 propose a feature-enriched neural model for joint CWS and part-of-speech tagging. BIBREF35 present a joint model to enhance the segmentation of Chinese microtext by performing CWS and informal word detection simultaneously. BIBREF17 propose a character-based convolutional neural model to capture $n$-gram features automatically and an effective approach to incorporate word embeddings. BIBREF11 improve the model in BIBREF9 and propose a greedy neural word segmenter with balanced word and character embedding inputs. BIBREF23 propose a novel neural network model to incorporate unlabeled and partially-labeled data. BIBREF36 propose two methods that extend the Bi-LSTM to perform incorporating dictionaries into neural networks for CWS. BIBREF37 propose Switch-LSTMs to segment words and provided a more flexible solution for multi-criteria CWS which is easy to transfer the learned knowledge to new criteria. Related Work ::: Transformer Transformer BIBREF24 is an attention-based neural machine translation model. The Transformer is one kind of self-attention networks (SANs) which is proposed in BIBREF38. Encoder of the Transformer consists of one self-attention layer and a position-wise feed-forward layer. Decoder of the Transformer contains one self-attention layer, one encoder-decoder attention layer and one position-wise feed-forward layer. The Transformer uses residual connections around the sublayers and then followed by a layer normalization layer. Scaled dot-product attention is the key component in the Transformer. The input of attention contains queries, keys, and values of input sequences. The attention is generated using queries and keys like Equation (DISPLAY_FORM11). Structure of scaled dot-product attention allows the self-attention layer generate the representation of sentences at once and contain the information of the sentence which is different from RNN that process characters of sentences one by one. Standard self-attention is similar as Gaussian-masked direction attention while it does not have directional mask and gaussian mask. BIBREF24 also propose multi-head attention which is better to generate representation of sentence by dividing queries, keys and values to different heads and get information from different subspaces. Conclusion In this paper, we propose an attention mechanism only based Chinese word segmentation model. Our model uses self-attention from the Transformer encoder to take sequence input and bi-affine attention scorer to predict the label of gaps. To improve the ability of capturing the localness and directional information of self-attention based encoder, we propose a variant of self-attention called Gaussian-masked directional multi-head attention to replace the standard self-attention. We also extend the Transformer encoder to capture directional features. Our model uses only unigram features instead of multiple $n$-gram features in previous work. Our model is evaluated on standard benchmark dataset, SIGHAN Bakeoff 2005, which shows not only our model performs segmentation faster than any previous models but also gives new higher or comparable segmentation performance against previous state-of-the-art models.
Introduction Explanations of happenings in one's life, causal explanations, are an important topic of study in social, psychological, economic, and behavioral sciences. For example, psychologists have analyzed people's causal explanatory style BIBREF0 and found strong negative relationships with depression, passivity, and hostility, as well as positive relationships with life satisfaction, quality of life, and length of life BIBREF1 , BIBREF2 , BIBREF0 . To help understand the significance of causal explanations, consider how they are applied to measuring optimism (and its converse, pessimism) BIBREF0 . For example, in “My parser failed because I always have bugs.”, the emphasized text span is considered a causal explanation which indicates pessimistic personality – a negative event where the author believes the cause is pervasive. However, in “My parser failed because I barely worked on the code.”, the explanation would be considered a signal of optimistic personality – a negative event for which the cause is believed to be short-lived. Language-based models which can detect causal explanations from everyday social media language can be used for more than automating optimism detection. Language-based assessments would enable other large-scale downstream tasks: tracking prevailing causal beliefs (e.g., about climate change or autism), better extracting process knowledge from non-fiction (e.g., gravity causes objects to move toward one another), or detecting attribution of blame or praise in product or service reviews (“I loved this restaurant because the fish was cooked to perfection”). In this paper, we introduce causal explanation analysis and its subtasks of detecting the presence of causality (causality prediction) and identifying explanatory phrases (causal explanation identification). There are many challenges to achieving these task. First, the ungrammatical texts in social media incur poor syntactic parsing results which drastically affect the performance of discourse relation parsing pipelines . Many causal relations are implicit and do not contain any discourse markers (e.g., `because'). Further, Explicit causal relations are also more difficult in social media due to the abundance of abbreviations and variations of discourse connectives (e.g., `cuz' and `bcuz'). Prevailing approaches for social media analyses, utilizing traditional linear models or bag of words models (e.g., SVM trained with n-gram, part-of-speech (POS) tags, or lexicon-based features) alone do not seem appropriate for this task since they simply cannot segment the text into meaningful discourse units or discourse arguments such as clauses or sentences rather than random consecutive token sequences or specific word tokens. Even when the discourse units are clear, parsers may still fail to accurately identify discourse relations since the content of social media is quite different than that of newswire which is typically used for discourse parsing. In order to overcome these difficulties of discourse relation parsing in social media, we simplify and minimize the use of syntactic parsing results and capture relations between discourse arguments, and investigate the use of a recursive neural network model (RNN). Recent work has shown that RNNs are effective for utilizing discourse structures for their downstream tasks BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , but they have yet to be directly used for discourse relation prediction in social media. 
We evaluated our model by comparing it to off-the-shelf end-to-end discourse relation parsers and traditional models. We found that the SVM and random forest classifiers work better than the LSTM classifier for the causality detection, while the LSTM classifier outperforms other models for identifying causal explanation. The contributions of this work include: (1) the proposal of models for both (a) causality prediction and (b) causal explanation identification, (2) the extensive evaluation of a variety of models from social media classification models and discourse relation parsers to RNN-based application models, demonstrating that feature-based models work best for causality prediction while RNNs are superior for the more difficult task of causal explanation identification, (3) performance analysis on architectural differences of the pipeline and the classifier structures, (4) exploration of the applications of causal explanation to downstream tasks, and (5) release of a novel, anonymized causality Facebook dataset along with our causality prediction and causal explanation identification models. Related Work Identifying causal explanations in documents can be viewed as discourse relation parsing. The Penn Discourse Treebank (PDTB) BIBREF7 has a `Cause' and `Pragmatic Cause' discourse type under a general `Contingency' class and Rhetorical Structure Theory (RST) BIBREF8 has a `Relations of Cause'. In most cases, the development of discourse parsers has taken place in-domain, where researchers have used the existing annotations of discourse arguments in newswire text (e.g. Wall Street Journal) from the discourse treebank and focused on exploring different features and optimizing various types of models for predicting relations BIBREF9 , BIBREF10 , BIBREF11 . In order to further develop automated systems, researchers have proposed end-to-end discourse relation parsers, building models which are trained and evaluated on the annotated PDTB and RST Discourse Treebank (RST DT). These corpora consist of documents from Wall Street Journal (WSJ) which are much more well-organized and grammatical than social media texts BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . Only a few works have attempted to parse discourse relations for out-of-domain problems such as text categorizations on social media texts; Ji and Bhatia used models which are pretrained with RST DT for building discourse structures from movie reviews, and Son adapted the PDTB discourse relation parsing approach for capturing counterfactual conditionals from tweets BIBREF4 , BIBREF3 , BIBREF16 . These works had substantial differences to what propose in this paper. First, Ji and Bhatia used a pretrained model (not fully optimal for some parts of the given task) in their pipeline; Ji's model performed worse than the baseline on the categorization of legislative bills, which is thought to be due to legislative discourse structures differing from those of the training set (WSJ corpus). Bhatia also used a pretrained model finding that utilizing discourse relation features did not boost accuracy BIBREF4 , BIBREF3 . Both Bhatia and Son used manual schemes which may limit the coverage of certain types of positive samples– Bhatia used a hand-crafted schema for weighting discourse structures for the neural network model and Son manually developed seven surface forms of counterfactual thinking for the rule-based system BIBREF4 , BIBREF16 . 
We use social-media-specific features from pretrained models which are directly trained on tweets and we avoid any hand-crafted rules except for those included in the existing discourse argument extraction techniques. The automated systems for discourse relation parsing involve multiple subtasks from segmenting the whole text into discourse arguments to classifying discourse relations between the arguments. Past research has found that different types of models and features yield varying performance for each subtask. Some have optimized models for discourse relation classification (i.e. given a document indicating if the relation existing) without discourse argument parsing using models such as Naive-Bayes or SVMs, achieve relatively stronger accuracies but a simpler task than that associated with discourse arguments BIBREF10 , BIBREF11 , BIBREF9 . Researchers who, instead, tried to build the end-to-end parsing pipelines considered a wider range of approaches including sequence models and RNNs BIBREF12 , BIBREF15 , BIBREF14 , BIBREF17 . Particularly, when they tried to utilize the discourse structures for out-domain applications, they used RNN-based models and found that those models are advantageous for their downstream tasks BIBREF4 , BIBREF3 . In our case, for identifying causal explanations from social media using discourse structure, we build an RNN-based model for its structural effectiveness in this task (see details in section UID13 ). However, we also note that simpler models such as SVMs and logistic regression obtained the state-of-the-art performances for text categorization tasks in social media BIBREF18 , BIBREF19 , so we build relatively simple models with different properties for each stage of the full pipeline of our parser. Methods We build our model based on PDTB-style discourse relation parsing since PDTB has a relatively simpler text segmentation method; for explicit discourse relations, it finds the presence of discourse connectives within a document and extracts discourse arguments which parametrize the connective while for implicit relations, it considers all adjacent sentences as candidate discourse arguments. Dataset We created our own causal explanation dataset by collecting 3,268 random Facebook status update messages. Three well-trained annotators manually labeled whether or not each message contains the causal explanation and obtained 1,598 causality messages with substantial agreement ( $\kappa =0.61$ ). We used the majority vote for our gold standard. Then, on each causality message, annotators identified which text spans are causal explanations. For each task, we used 80% of the dataset for training our model and 10% for tuning the hyperparameters of our models. Finally, we evaluated all of our models on the remaining 10% (Table 1 and Table 2 ). For causal explanation detection task, we extracted discourse arguments using our parser and selected discourse arguments which most cover the annotated causal explanation text span as our gold standard. Model We build two types of models. First, we develop feature-based models which utilize features of the successful models in social media analysis and causal relation discourse parsing. Then, we build a recursive neural network model which uses distributed representation of discourse arguments as this approach can even capture latent properties of causal relations which may exist between distant discourse arguments. 
We specifically selected bidirectional LSTM since the model with the discourse distributional structure built in this form outperformed the traditional models in similar NLP downstream tasks BIBREF3 . As the first step of our pipeline, we use Tweebo parser BIBREF20 to extract syntactic features from messages. Then, we demarcate sentences using punctuation (`,') tag and periods. Among those sentences, we find discourse connectives defined in PDTB annotation along with a Tweet POS tag for conjunction words which can also be a discourse marker. In order to decide whether these connectives are really discourse connectives (e.g., I went home, but he stayed) as opposed to simple connections of two words (I like apple and banana) we see if verb phrases exist before and after the connective by using dependency parsing results. Although discourse connective disambiguation is a complicated task which can be much improved by syntactic features BIBREF21 , we try to minimize effects of syntactic parsing and simplify it since it is highly error-prone in social media. Finally, according to visual inspection, emojis (`E' tag) are crucial for discourse relation in social media so we take them as separate discourse arguments (e.g.,in “My test result... :(” the sad feeling is caused by the test result, but it cannot be captured by plain word tokens). We trained a linear SVM, an rbf SVM, and a random forest with N-gram, charater N-gram, and tweet POS tags, sentiment tags, average word lengths and word counts from each message as they have a pivotal role in the models for many NLP downstream tasks in social media BIBREF19 , BIBREF18 . In addition to these features, we also extracted First-Last, First3 features and Word Pairs from every adjacent pair of discourse arguments since these features were most helpful for causal relation prediction BIBREF9 . First-Last, First3 features are first and last word and first three words of two discourse arguments of the relation, and Word Pairs are the cross product of words of those discourse arguments. These two features enable our model to capture interaction between two discourse arguments. BIBREF9 reported that these two features along with verbs, modality, context, and polarity (which can be captured by N-grams, sentiment tags and POS tags in our previous features) obtained the best performance for predicting Contingency class to which causality belongs. We load the GLOVE word embedding BIBREF22 trained in Twitter for each token of extracted discourse arguments from messages. For the distributional representation of discourse arguments, we run a Word-level LSTM on the words' embeddings within each discourse argument and concatenate last hidden state vectors of forward LSTM ( $\overrightarrow{h}$ ) and backward LSTM ( $\overleftarrow{h}$ ) which is suggested by BIBREF3 ( $DA = [\overrightarrow{h};\overleftarrow{h}]$ ). Then, we feed the sequence of the vector representation of discourse arguments to the Discourse-argument-level LSTM (DA-level LSTM) to make a final prediction with log softmax function. With this structure, the model can learn the representation of interaction of tokens inside each discourse argument, then capture discourse relations across all of the discourse arguments in each message (Figure 2 ). In order to prevent the overfitting, we added a dropout layer between the Word-level LSTM and the DA-level LSTM layer. We also explore subsets of the full RNN architecture, specifically with one of the two LSTM layers removed. 
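Before describing those ablated variants, the following is a hedged PyTorch sketch of the full two-level architecture just described: a word-level BiLSTM encodes each discourse argument (DA), the concatenated last hidden states of the two directions form the DA vector, and a DA-level BiLSTM runs over the sequence of DA vectors. The sketch shows the CEI-style head (one label per DA); dimensions, batch handling, and names are illustrative assumptions rather than the released implementation.

```python
# Hypothetical sketch of the hierarchical Word-level + DA-level BiLSTM model.
import torch
import torch.nn as nn

class HierarchicalDAClassifier(nn.Module):
    def __init__(self, emb_dim=100, hidden=100, n_classes=2, dropout=0.3):
        super().__init__()
        self.word_lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.dropout = nn.Dropout(dropout)     # between the two LSTM levels
        self.da_lstm = nn.LSTM(2 * hidden, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, das):
        # das: list of tensors, one per discourse argument, each (n_words, emb_dim)
        da_vecs = []
        for words in das:
            _, (h_n, _) = self.word_lstm(words.unsqueeze(0))
            da_vecs.append(torch.cat([h_n[0, 0], h_n[1, 0]], dim=-1))  # [fwd; bwd]
        da_seq = self.dropout(torch.stack(da_vecs).unsqueeze(0))       # (1, n_DAs, 2*hidden)
        da_states, _ = self.da_lstm(da_seq)
        return torch.log_softmax(self.out(da_states), dim=-1)          # one label per DA
```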
In the first model variant, we directly input all word embeddings of a whole message to a BiLSTM layer and make prediction (Word LSTM) without the help of the distributional vector representations of discourse arguments. In the second model variant, we take the average of all word embeddings of each discourse argument ( $DA_k=\frac{1}{N_k} \sum _{i=1}^{N_k}W_{i}$ ), and use them as inputs to a BiLSTM layer (DA AVG LSTM) as the average vector of embeddings were quite effective for representing the whole sequence BIBREF3 , BIBREF5 . As with the full architectures, for CP both of these variants ends with a many-to-one classification per message, while the CEI model ends with a sequence of classifications. Experiment We explored three types of models (RBF SVM, Linear SVM, and Random Forest Classifier) which have previously been shown empirically useful for the language analysis in social media. We filtered out low frequency Word Pairs features as they tend to be noisy and sparse BIBREF9 . Then, we conducted univariate feature selection to restrict all remaining features to those showing at least a small relationship with the outcome. Specifically, we keep all features passing a family-wise error rate of $\alpha = 60$ with the given outcome. After comparing the performance of the optimized version of each model, we also conducted a feature ablation test on the best model in order to see how much each feature contributes to the causality prediction. We used bidirectional LSTMs for causality classification and causal explanation identification since the discourse arguments for causal explanation can show up either before and after the effected events or results and we want our model to be optimized for both cases. However, there is a risk of overfitting due to the dataset which is relatively small for the high complexity of the model, so we added a dropout layer (p=0.3) between the Word-level LSTM and the DA-level LSTM. For tuning our model, we explore the dimensionality of word vector and LSTM hidden state vectors of discourse arguments of 25, 50, 100, and 200 as pretrained GLOVE vectors were trained in this setting. For optimization, we used Stochastic Gradient Descent (SGD) and Adam BIBREF23 with learning rates 0.01 and 0.001. We ignore missing word embeddings because our dataset is quite small for retraining new word embeddings. However, if embeddings are extracted as separate discourse arguments, we used the average of all vectors of all discourse arguments in that message. Average embeddings have performed well for representing text sequences in other tasks BIBREF5 . We first use state-of-the-art PDTB taggers for our baseline BIBREF13 , BIBREF12 for the evaluation of the causality prediction of our models ( BIBREF12 requires sentences extracted from the text as its input, so we used our parser to extract sentences from the message). Then, we compare how models work for each task and disassembled them to inspect how each part of the models can affect their final prediction performances. We conducted McNemar's test to determine whether the performance differences are statistically significant at $p < .05$ . Results We investigated various models for both causality detection and explanation identification. Based on their performances on the task, we analyzed the relationships between the types of models and the tasks, and scrutinized further for the best performing models. For performance analysis, we reported weighted F1 of classes. 
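As a concrete illustration of the feature-based causality-prediction setup described above, the following is a hedged scikit-learn sketch combining bag-of-n-grams features, univariate feature selection under a family-wise error criterion, and a linear SVM evaluated with weighted F1. The vectorizer settings and the selection threshold are illustrative stand-ins, not the paper's exact configuration.

```python
# Hypothetical sketch of the feature-based causality-prediction (CP) pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectFwe, chi2
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline
from sklearn.metrics import f1_score

cp_pipeline = Pipeline([
    ("ngrams", CountVectorizer(ngram_range=(1, 3), min_df=2)),  # word n-gram features
    ("select", SelectFwe(chi2, alpha=0.05)),                    # family-wise-error feature selection
    ("svm", LinearSVC(C=1.0)),
])

# messages: list of raw texts; labels: 1 if the message contains a causal relation
# cp_pipeline.fit(train_messages, train_labels)
# preds = cp_pipeline.predict(test_messages)
# print(f1_score(test_labels, preds, average="weighted"))
```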
Causality Prediction In order to classify whether a message contains causal relation, we compared off-the-shelf PDTB parsers, linear SVM, RBF SVM, Random forest and LSTM classifiers. The off-the-shelf parsers achieved the lowest accuracies ( BIBREF12 and BIBREF13 in Table 3 ). This result can be expected since 1) these models were trained with news articles and 2) they are trained for all possible discourse relations in addition to causal relations (e.g., contrast, condition, etc). Among our suggested models, SVM and random forest classifier performed better than LSTM and, in the general trend, the more complex the models were, the worse they performed. This suggests that the models with more direct and simpler learning methods with features might classify the causality messages better than the ones more optimized for capturing distributional information or non-linear relationships of features. Table 4 shows the results of a feature ablation test to see how each feature contributes to causality classification performance of the linear SVM classifier. POS tags caused the largest drop in F1. We suspect POS tags played a unique role because discourse connectives can have various surface forms (e.g., because, cuz, bcuz, etc) but still the same POS tag `P'. Also POS tags can capture the occurrences of modal verbs, a feature previously found to be very useful for detecting similar discourse relations BIBREF9 . N-gram features caused 0.022 F1 drop while sentiment tags did not affect the model when removed. Unlike the previous work where First-Last, First3 and Word pairs tended to gain a large F1 increase for multiclass discourse relation prediction, in our case, they did not affect the prediction performance compared to other feature types such as POS tags or N-grams. Causal Explanation Identification In this task, the model identifies causal explanations given the discourse arguments of the causality message. We explored over the same models as those we used for causality (sans the output layer), and found the almost opposite trend of performances (see Table 5 ). The Linear SVM obtained lowest F1 while the LSTM model made the best identification performance. As opposed to the simple binary classification of the causality messages, in order to detect causal explanation, it is more beneficial to consider the relation across discourse arguments of the whole message and implicit distributional representation due to the implicit causal relations between two distant arguments. Architectural Variants For causality prediction, we experimented with only word tokens in the whole message without help of Word-level LSTM layer (Word LSTM), and F1 dropped by 0.064 (CP in Table 6 ). Also, when we used the average of the sequence of word embeddings of each discourse argument as an input to the DA-level LSTM and it caused F1 drop of 0.073. This suggests that the information gained from both the interaction of words in and in between discourse arguments help when the model utilizes the distributional representation of the texts. For causal explanation identification, in order to test how the LSTM classifier works without its capability of capturing the relations between discourse arguments, we removed DA-level LSTM layer and ran the LSTM directly on the word embedding sequence for each discourse argument for classifying whether the argument is causal explanation, and the model had 0.061 F1 drop (Word LSTM in CEI in Table 6 ). 
Also, when we ran the DA-level LSTM on the average vectors of the word sequences of each discourse argument of a message, F1 decreased to 0.818. This follows a pattern similar to that observed with the other model types (i.e., SVMs and Random Forest classifiers): models with higher capacity for capturing the interaction of discourse arguments tend to identify causal explanations with higher accuracy. For the CEI task, we found that when the model ran on the sequence representation of discourse arguments (DA AVG LSTM), its performance was higher than on the plain sequence of word embeddings (Word LSTM). Finally, in both subtasks, when the models ran on both the Word level and the DA level (Full LSTM), they obtained the highest performance. Complete Pipeline Evaluations thus far zeroed in on each subtask of causal explanation analysis (i.e., CEI only focused on data already identified to contain causal explanations). Here, we seek to evaluate the complete pipeline of CP and CEI, starting from all of the test data (with or without causality) and evaluating the final accuracy of CEI predictions. This is intended to evaluate CEI performance in an applied setting, where one does not already know whether a document has a causal explanation. There are several approaches we could take to perform CEI starting from unannotated data. We could simply run CEI prediction by itself (CEI Only), or run CP first and then only run CEI on documents predicted as causal (CP + CEI). Further, the CEI model could be trained only on those documents annotated as causal (as was done in the previous experiments) or on all training documents, including many that are not causal. Table 7 shows results varying the pipeline and how CEI was trained. Though all setups performed decently ( $F1 > 0.81$ ), we see that the pipelined approach, first predicting causality (with the linear SVM) and then predicting causal explanations only for documents marked as causal (CP + CEI $_{causal}$ ), yielded the strongest results. This setup also used the CEI model trained only on documents annotated as causal. Besides performance, an added benefit of this two-step approach is that the CP step is less computationally intensive than the CEI step, and approximately 2/3 of documents never need the CEI step applied. We had an inevitable limitation on the size of our dataset, since there is no other causality dataset over social media and the annotation required an intensive iterative process. This might have limited the performance of more complex models, but considering the processing time and computational load, the combination of the linear model and the RNN-based model in our pipeline provides both high performance and efficiency for practical applications to downstream tasks. In other words, it is possible the linear model will not perform as well if the training size is increased substantially. However, a linear model could still be used for a first-pass, computationally efficient labeling, in order to shortlist social media posts for further labeling by an LSTM or a more complex model. Exploration Here, we explore the use of causal explanation analysis for downstream tasks. First we look at the relationship between the use of causal explanations and one's demographics: age and gender. Then, we consider their use in sentiment analysis for extracting the causes of polarity ratings. Research involving human subjects was approved by the University of Pennsylvania Institutional Review Board.
Conclusion We developed a pipeline for causal explanation analysis over social media text, including both causality prediction and causal explanation identification. We examined a variety of model types and RNN architectures for each part of the pipeline, finding an SVM best for causality prediction and a hierarchy of BiLSTMs best for causal explanation identification, suggesting the latter task relies more heavily on sequential information. In fact, we found that replacing either layer of the hierarchical LSTM architecture (the word level or the DA level) with an equivalent “bag of features” approach resulted in reduced accuracy. The results of our whole pipeline of causal explanation analysis were quite strong, achieving an $F1=0.868$ at identifying discourse arguments that are causal explanations. Finally, we demonstrated the use of our models in applications, finding associations between demographics and the rate of mentioning causal explanations, as well as showing differences in the top words predictive of negative ratings in Yelp reviews. Utilization of discourse structure in social media analysis has been a largely untapped area of exploration, perhaps due to its perceived difficulty. We hope the strong results of causal explanation identification here lead to the integration of more syntax and deeper semantics into social media analyses and ultimately enable new applications beyond the current state of the art. Acknowledgments This work was supported, in part, by a grant from the Templeton Religion Trust (ID #TRT0048). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. We also thank Laura Smith, Yiyi Chen, Greta Jawel and Vanessa Hernandez for their work in identifying causal explanations.
Introduction We possess a wealth of prior knowledge about many natural language processing tasks. For example, in text categorization, we know that words such as NBA, player, and basketball are strong indicators of the sports category BIBREF0 , and words like terrible, boring, and messing indicate a negative polarity while words like perfect, exciting, and moving suggest a positive polarity in sentiment classification. A key problem arising here is how to leverage such knowledge to guide the learning process, an interesting problem for both the NLP and machine learning communities. Previous studies addressing the problem fall into several lines. First, to leverage prior knowledge to label data BIBREF1 , BIBREF2 . Second, to encode prior knowledge with a prior on parameters, as commonly seen in many Bayesian approaches BIBREF3 , BIBREF4 . Third, to formalise prior knowledge with additional variables and dependencies BIBREF5 . Last, to use prior knowledge to control the distributions over latent output variables BIBREF6 , BIBREF7 , BIBREF8 , which makes the output variables easily interpretable. However, a crucial problem, which has rarely been addressed, is the bias in the prior knowledge that we supply to the learning model. Would the model be robust or sensitive to the prior knowledge? Or, which kind of knowledge is appropriate for the task? Let's see an example: we may be a baseball fan but unfamiliar with hockey, so we can provide a number of feature words for baseball but far fewer for hockey in a baseball-hockey classification task. Such prior knowledge may mislead the model with a heavy bias towards baseball. If the model cannot handle this situation appropriately, the performance may be undesirable. In this paper, we investigate this problem in the framework of Generalized Expectation Criteria BIBREF7 . The study aims to reveal the factors that reduce the model's sensitivity to the prior knowledge and therefore make the model more robust and practical. To this end, we introduce auxiliary regularization terms in which our prior knowledge is formalized as distributions over output variables. Recall the example just mentioned: though we do not have enough knowledge to provide features for the class hockey, it is easy for us to provide some neutral words, namely words that are not strong indicators of any class, like player here. As one of the factors revealed in this paper, supplying neutral feature words can boost the performance remarkably, making the model more robust. More attractively, we do not need manual annotation to label these neutral feature words in our proposed approach. More specifically, we explore three regularization terms to address the problem: (1) a regularization term associated with neutral features; (2) a maximum entropy regularization term on the class distribution; and (3) the KL divergence between the reference and predicted class distributions. For the first term, we simply use the most common features as neutral features and assume the neutral features are distributed uniformly over class labels. For the second and third, we assume we have some knowledge about the class distribution, which will be detailed later. To summarize, the main contributions of this work are as follows: The rest of the paper is structured as follows: In Section 2, we briefly describe the generalized expectation criteria and present the proposed regularization terms. In Section 3, we conduct extensive experiments to justify the proposed methods.
We survey related work in Section 4, and summarize our work in Section 5. Method We address the robustness problem on top of GE-FL BIBREF0 , a GE method which leverages labeled features as prior knowledge. A labeled feature is a strong indicator of a specific class and is manually provided to the classifier. For example, words like amazing, exciting can be labeled features for class positive in sentiment classification. Generalized Expectation Criteria Generalized expectation (GE) criteria BIBREF7 provides us a natural way to directly constrain the model in the preferred direction. For example, when we know the proportion of each class of the dataset in a classification task, we can guide the model to predict out a pre-specified class distribution. Formally, in a parameter estimation objective function, a GE term expresses preferences on the value of some constraint functions about the model's expectation. Given a constraint function $G({\rm x}, y)$ , a conditional model distribution $p_\theta (y|\rm x)$ , an empirical distribution $\tilde{p}({\rm x})$ over input samples and a score function $S$ , a GE term can be expressed as follows: $$S(E_{\tilde{p}({\rm x})}[E_{p_\theta (y|{\rm x})}[G({\rm x}, y)]])$$ (Eq. 4) Learning from Labeled Features Druck et al. ge-fl proposed GE-FL to learn from labeled features using generalized expectation criteria. When given a set of labeled features $K$ , the reference distribution over classes of these features is denoted by $\hat{p}(y| x_k), k \in K$ . GE-FL introduces the divergence between this reference distribution and the model predicted distribution $p_\theta (y | x_k)$ , as a term of the objective function: $$\mathcal {O} = \sum _{k \in K} KL(\hat{p}(y|x_k) || p_\theta (y | x_k)) + \sum _{y,i} \frac{\theta _{yi}^2}{2 \sigma ^2}$$ (Eq. 6) where $\theta _{yi}$ is the model parameter which indicates the importance of word $i$ to class $y$ . The predicted distribution $p_\theta (y | x_k)$ can be expressed as follows: $ p_\theta (y | x_k) = \frac{1}{C_k} \sum _{\rm x} p_\theta (y|{\rm x})I(x_k) $ in which $I(x_k)$ is 1 if feature $k$ occurs in instance ${\rm x}$ and 0 otherwise, $C_k = \sum _{\rm x} I(x_k)$ is the number of instances with a non-zero value of feature $k$ , and $p_\theta (y|{\rm x})$ takes a softmax form as follows: $ p_\theta (y|{\rm x}) = \frac{1}{Z(\rm x)}\exp (\sum _i \theta _{yi}x_i). $ To solve the optimization problem, L-BFGS can be used for parameter estimation. In the framework of GE, this term can be obtained by setting the constraint function $G({\rm x}, y) = \frac{1}{C_k} \vec{I} (y)I(x_k)$ , where $\vec{I}(y)$ is an indicator vector with 1 at the index corresponding to label $y$ and 0 elsewhere. Regularization Terms GE-FL reduces the heavy load of instance annotation and performs well when we provide prior knowledge with no bias. In our experiments, we observe that comparable numbers of labeled features for each class have to be supplied. But as mentioned before, it is often the case that we are not able to provide enough knowledge for some of the classes. For the baseball-hockey classification task, as shown before, GE-FL will predict most of the instances as baseball. In this section, we will show three terms to make the model more robust. Neutral features are features that are not informative indicator of any classes, for instance, word player to the baseball-hockey classification task. Such features are usually frequent words across all categories. 
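Before turning to how these neutral features enter the objective, the following is a hedged numpy sketch of the base GE-FL term in Equation (6): for each labeled feature $k$, the model's average predicted label distribution over instances containing $k$ is compared to the reference distribution via KL divergence (the Gaussian prior on $\theta$ in the full objective is omitted here). The data layout and names are assumptions made for illustration.

```python
# Hypothetical sketch of the GE-FL term: sum over labeled features k of
# KL(p_hat(y|x_k) || p_theta(y|x_k)), where p_theta(y|x_k) is the model's mean
# predicted distribution over instances in which feature k occurs.
import numpy as np

def ge_fl_term(X, p_model, labeled_features, reference):
    """X: (n_docs, n_feats) binary feature matrix; p_model: (n_docs, n_classes)
    model distributions p_theta(y|x); labeled_features: feature indices with
    prior knowledge; reference: dict feature index -> reference distribution."""
    total = 0.0
    for k in labeled_features:
        docs = np.nonzero(X[:, k])[0]
        if len(docs) == 0:
            continue
        p_k = p_model[docs].mean(axis=0)                                  # p_theta(y | x_k)
        p_hat = reference[k]
        total += np.sum(p_hat * np.log((p_hat + 1e-12) / (p_k + 1e-12)))  # KL divergence
    return total
```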
When we set the preference distribution of the neutral features to be uniformly distributed, these neutral features will prevent the model from biasing towards the class that has a dominant number of labeled features. Formally, given a set of neutral features $K^{^{\prime }}$ , the uniform distribution is $\hat{p}_u(y|x_k) = \frac{1}{|C|}, k \in K^{^{\prime }}$ , where $|C|$ is the number of classes. The objective function with the new term becomes $$\mathcal {O}_{NE} = \mathcal {O} + \sum _{k \in K^{^{\prime }}} KL(\hat{p}_u(y|x_k) || p_\theta (y | x_k)).$$ (Eq. 9) Note that we do not need manual annotation to provide neutral features. One simple way is to take the most common features as neutral features. Experimental results show that this strategy works successfully. Another way to prevent the model from drifting from the desired direction is to constrain the predicted class distribution on unlabeled data. When lacking knowledge about the class distribution of the data, one feasible way is to apply the maximum entropy principle, as below: $$\mathcal {O}_{ME} = \mathcal {O} + \lambda \sum _{y} p(y) \log p(y)$$ (Eq. 11) where $p(y)$ is the predicted class distribution, given by $ p(y) = \frac{1}{|X|} \sum _{\rm x} p_\theta (y | \rm x). $ To control the influence of this term on the overall objective function, we can tune $\lambda $ according to the difference in the number of labeled features of each class. In this paper, we simply set $\lambda $ to be proportional to the total number of labeled features, say $\lambda = \beta |K|$ . This maximum entropy term can be derived by setting the constraint function to $G({\rm x}, y) = \vec{I}(y)$ . Therefore, $E_{p_\theta (y|{\rm x})}[G({\rm x}, y)]$ is just the model distribution $p_\theta (y|{\rm x})$ and its expectation with the empirical distribution $\tilde{p}(\rm x)$ is simply the average over input samples, namely $p(y)$ . When $S$ takes the maximum entropy form, we can derive the objective function as above. Sometimes, we already have much knowledge about the corpus and can estimate the class distribution roughly without labeling instances. Therefore, we introduce the KL divergence between the predicted and reference class distributions into the objective function. Given the preference class distribution $\hat{p}(y)$ , we modify the objective function as follows: $$\mathcal {O}_{KL} = \mathcal {O} + \lambda KL(\hat{p}(y) || p(y))$$ (Eq. 13) Similarly, we set $\lambda = \beta |K|$ . This divergence term can be derived by setting the constraint function to $G({\rm x}, y) = \vec{I}(y)$ and setting the score function to $S(\hat{p}, p) = \sum _i \hat{p}_i \log \frac{\hat{p}_i}{p_i}$ , where $p$ and $\hat{p}$ are distributions. Note that this regularization term involves the reference class distribution, which will be discussed later. Experiments In this section, we first justify the approach when there exists unbalance in the number of labeled features or in class distribution. Then, to test the influence of $\lambda $ , we conduct some experiments with the method which incorporates the KL divergence of class distribution. Last, we evaluate our approaches on 9 commonly used text classification datasets. We set $\lambda = 5|K|$ by default in all experiments unless there is explicit declaration. The baseline we choose here is GE-FL BIBREF0 , a method based on generalized expectation criteria. Data Preparation We evaluate our methods on several commonly used datasets whose themes range from sentiment, web-page, and science to medical and healthcare.
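For concreteness, the three regularization terms defined above can be written as small extensions of the earlier GE-FL sketch. The following hypothetical NumPy code (again illustrative rather than the authors' implementation; it reuses `softmax`, `kl`, `predicted_feature_dist`, and `gefl_objective` from the previous sketch) shows how each term augments the objective.

```python
def predicted_class_dist(theta, X):
    """p(y): predicted class distribution averaged over all (unlabeled) instances."""
    return softmax(X @ theta.T).mean(axis=0)

def objective_ne(theta, X, labeled_refs, neutral_feats, n_classes, sigma=1.0):
    """O_NE: GE-FL plus a uniform-reference KL term for every neutral feature."""
    uniform = np.full(n_classes, 1.0 / n_classes)
    obj = gefl_objective(theta, X, labeled_refs, sigma)
    obj += sum(kl(uniform, predicted_feature_dist(theta, X, k)) for k in neutral_feats)
    return obj

def objective_me(theta, X, labeled_refs, beta, sigma=1.0):
    """O_ME: GE-FL plus a negative-entropy penalty on the predicted class distribution."""
    lam = beta * len(labeled_refs)                       # lambda = beta * |K|
    p_y = predicted_class_dist(theta, X)
    return (gefl_objective(theta, X, labeled_refs, sigma)
            + lam * float(np.sum(p_y * np.log(p_y + 1e-12))))

def objective_kl(theta, X, labeled_refs, p_ref, beta, sigma=1.0):
    """O_KL: GE-FL plus KL between a reference class distribution and the predicted one."""
    lam = beta * len(labeled_refs)
    return (gefl_objective(theta, X, labeled_refs, sigma)
            + lam * kl(p_ref, predicted_class_dist(theta, X)))
```

Note that minimizing $\lambda \sum _y p(y) \log p(y)$ is equivalent to maximizing the entropy of the predicted class distribution, which is exactly the maximum entropy principle stated above.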
We use bag-of-words features and remove stopwords in the preprocessing stage. Though we have labels for all documents, we do not use them during the learning process; instead, we use the labels of features. The movie dataset, in which the task is to classify the movie reviews as positive or negative, is used for testing the proposed approaches with unbalanced labeled features, unbalanced datasets or different $\lambda $ parameters. All unbalanced datasets are constructed based on the movie dataset by randomly removing documents of the positive class. For each experiment, we conduct 10-fold cross validation. As described in BIBREF0 , there are two ways to obtain labeled features. The first way is to use information gain. We first calculate the mutual information of all features according to the labels of the documents and select the top 20 as labeled features for each class as a feature pool. Note that using information gain requires the document labels, but this is only to simulate how a human provides prior knowledge to the model. The second way is to use LDA BIBREF9 to select features. We use the same selection process as BIBREF0 , where they first train an LDA model on the dataset, and then select the most probable features of each topic (sorted by $P(w_i|t_j)$ , the probability of word $w_i$ given topic $t_j$ ). Similar to BIBREF10 , BIBREF0 , we estimate the reference distribution of the labeled features using a heuristic strategy. If there are $|C|$ classes in total, and $n$ classes are associated with a feature $k$ , the probability that feature $k$ is related to any one of the $n$ classes is $\frac{0.9}{n}$ and to any other class is $\frac{0.1}{|C| - n}$ . Neutral features are the most frequent words after removing stop words, and their reference distributions are uniformly distributed. We use the top 10 frequent words as neutral features in all experiments. With Unbalanced Labeled Features In this section, we evaluate our approach when there is unbalanced knowledge on the categories to be classified. The labeled features are obtained through information gain. Two settings are chosen: (a) We randomly select $t \in [1, 20]$ features from the feature pool for one class, and only one feature for the other. The original balanced movie dataset is used (positive:negative=1:1). (b) Similar to (a), but the dataset is unbalanced, obtained by randomly removing 75% positive documents (positive:negative=1:4). As shown in Figure 1 , the maximum entropy principle shows improvement only in the balanced case. An obvious reason is that maximum entropy only favors uniform distributions. Incorporating neutral features performs similarly to maximum entropy since we assume that neutral words are uniformly distributed. Its accuracy decreases slowly when the number of labeled features becomes larger ( $t>4$ ) (Figure 1 (a)), suggesting that the model gradually biases to the class with more labeled features, just like GE-FL. Incorporating the KL divergence of class distribution performs much better than GE-FL on both balanced and unbalanced datasets. This shows that it is effective in controlling the unbalance in labeled features and in the dataset. With Balanced Labeled Features We also compare with the baseline when the labeled features are balanced. Similar to the experiment above, the labeled features are obtained by information gain.
Two settings are experimented with: (a) We randomly select $t \in [1, 20]$ features from the feature pool for each class, and conduct comparisons on the original balanced movie dataset (positive:negative=1:1). (b) Similar to (a), but the class distribution is unbalanced, obtained by randomly removing 75% positive documents (positive:negative=1:4). Results are shown in Figure 2 . When the dataset is balanced (Figure 2 (a)), there is little difference between GE-FL and our methods. The reason is that the proposed regularization terms provide no additional knowledge to the model and there is no bias in the labeled features. On the unbalanced dataset (Figure 2 (b)), incorporating KL divergence is much better than GE-FL since we provide additional knowledge (the true class distribution), but maximum entropy and neutral features are much worse because forcing the model to approach the uniform distribution misleads it. With Unbalanced Class Distributions Our methods are also evaluated on datasets with different unbalanced class distributions. We manually construct several movie datasets with class distributions of 1:2, 1:3, 1:4 by randomly removing 50%, 67%, 75% of the positive documents. The original balanced movie dataset is used as a control group. We test with both balanced and unbalanced labeled features. For the balanced case, we randomly select 10 features from the feature pool for each class, and for the unbalanced case, we select 10 features for one class, and 1 feature for the other. Results are shown in Figure 3 . Figure 3 (a) shows that when the dataset and the labeled features are both balanced, there is little difference between our methods and GE-FL (also see Figure 2 (a)). But when the class distribution becomes more unbalanced, the difference becomes more remarkable. The performance of neutral features and maximum entropy decreases significantly, but incorporating KL divergence improves remarkably. This suggests that if we have more accurate knowledge about the class distribution, KL divergence can guide the model in the right direction. Figure 3 (b) shows that when the labeled features are unbalanced, our methods significantly outperform GE-FL. Incorporating KL divergence is robust enough to control unbalance both in the dataset and in the labeled features, while the other three methods are not so competitive. The Influence of $\lambda$ We present the influence of $\lambda $ on the method that incorporates KL divergence in this section. Since we simply set $\lambda = \beta |K|$ , we just tune $\beta $ here. Note that when $\beta = 0$ , the newly introduced regularization term vanishes, and thus the model is actually GE-FL. Again, we test the method with different $\lambda $ in two settings: (a) We randomly select $t \in [1, 20]$ features from the feature pool for one class, and only one feature for the other class. The original balanced movie dataset is used (positive:negative=1:1). (b) Similar to (a), but the dataset is unbalanced, obtained by randomly removing 75% positive documents (positive:negative=1:4). Results are shown in Figure 4 . As expected, $\lambda $ reflects how strong the regularization is. The model tends to be closer to our preferences as $\lambda $ increases in both cases. Using LDA Selected Features We compare our methods with GE-FL on all 9 datasets in this section. Instead of using features obtained by information gain, we use LDA to select labeled features. Unlike information gain, LDA does not employ any instance labels to find labeled features.
In this setting, we can build classification models without any instance annotation, but just with labeled features. Table 1 shows that our three methods significantly outperform GE-FL. Incorporating neutral features performs better than GE-FL on 7 of the 9 datasets, maximum entropy is better on 8 datasets, and KL divergence is better on 7 datasets. LDA selects the most predictive features as labeled features without considering the balance among classes. GE-FL does not exert any control on such an issue, so its performance suffers severely. Our methods introduce auxiliary regularization terms to control such a bias problem and thus improve the model significantly. Related Work There has been much work on incorporating prior knowledge into learning, and two related lines are surveyed here. One is to use prior knowledge to label unlabeled instances and then apply a standard learning algorithm. The other is to constrain the model directly with prior knowledge. Liu et al. manually labeled features which are highly predictive of unsupervised clustering assignments and used them to label unlabeled data. Chang et al. proposed constraint driven learning. They first used constraints and the learned model to annotate unlabeled instances, and then updated the model with the newly labeled data. Daumé proposed a self training method in which several models are trained on the same dataset, and only unlabeled instances that satisfy the cross task knowledge constraints are used in the self training process. McCallum et al. BIBREF7 proposed generalized expectation (GE) criteria, which formalised the knowledge as constraint terms about the expectation of the model in the objective function. Graça et al. proposed the posterior regularization (PR) framework, which projects the model's posterior onto a set of distributions that satisfy the auxiliary constraints. Druck et al. BIBREF0 explored constraints of labeled features in the framework of GE by forcing the model's predicted feature distribution to approach the reference distribution. Andrzejewski et al. proposed a framework in which general domain knowledge can be easily incorporated into LDA. Altendorf et al. explored monotonicity constraints to improve accuracy while learning from sparse data. Chen et al. tried to learn comprehensible topic models by leveraging multi-domain knowledge. Mann and McCallum incorporated not only labeled features but also other knowledge, such as the class distribution, into the objective function of GE-FL. But they discussed it only from the semi-supervised perspective and did not investigate the robustness problem, unlike what we address in this paper. There are also some active learning methods trying to use prior knowledge. Raghavan et al. proposed to use feedback on instances and features in an interleaved manner, and demonstrated that feedback on features substantially boosts the model. Druck et al. proposed an active learning method which solicits labels on features rather than on instances and then used GE-FL to train the model. Conclusion and Discussions This paper investigates the problem of how to leverage prior knowledge robustly in learning models. We propose three regularization terms on top of generalized expectation criteria. As demonstrated by the experimental results, the performance can be considerably improved when these factors are taken into account.
Comparative results show that our proposed methods are more effective and work more robustly than the baselines. To the best of our knowledge, this is the first work to address the robustness problem of leveraging knowledge, and it may inspire other research. We now present more detailed discussions about the three regularization methods. Incorporating neutral features is the simplest way of regularization: it doesn't require any modification of GE-FL, only the identification of some common features. But as Figure 1 (a) shows, using only neutral features is not strong enough to handle extremely unbalanced labeled features. The maximum entropy regularization term shows a strong ability to control unbalance. This method doesn't need any extra knowledge, and is thus suitable when we know nothing about the corpus. But this method assumes that the categories are uniformly distributed, which may not be the case in practice, and it will show degraded performance if the assumption is violated (see Figure 1 (b), Figure 2 (b), Figure 3 (a)). The KL divergence performs much better on unbalanced corpora than the other methods. The reason is that KL divergence utilizes the reference class distribution and doesn't make any assumptions. This fact suggests that additional knowledge does benefit the model. However, the KL divergence term requires providing the true class distribution. Sometimes we may have exact knowledge about the true distribution, but sometimes we may not. Fortunately, the model is insensitive to the true distribution and therefore a rough estimate of the true distribution is sufficient. In our experiments, when the true class distribution is 1:2 and the reference class distribution is set to 1:1.5/1:2/1:2.5, the accuracy is 0.755/0.756/0.760 respectively. This makes it possible to obtain the distribution in practice with simple statistics computed on the corpus. Alternatively, we can set the distribution roughly with domain expertise.
Introduction Word embeddings are representations of words in numerical form, as vectors of typically several hundred dimensions. The vectors are used as an input to machine learning models; for complex language processing tasks these are typically deep neural networks. The embedding vectors are obtained from specialized learning tasks, based on neural networks, e.g., word2vec BIBREF0, GloVe BIBREF1, FastText BIBREF2, ELMo BIBREF3, and BERT BIBREF4. For training, the embedding algorithms use large monolingual corpora that encode important information about word meaning as distances between vectors. In order to enable downstream machine learning on text understanding tasks, the embeddings must preserve semantic relations between words, and this is true even across languages. Probably the best known word embeddings are produced by the word2vec method BIBREF5. The problem with word2vec embeddings is their failure to express polysemous words. During training of an embedding, all senses of a given word (e.g., paper as a material, as a newspaper, as a scientific work, and as an exam) contribute relevant information in proportion to their frequency in the training corpus. This causes the final vector to be placed somewhere in the weighted middle of all the word's meanings. Consequently, rare meanings of words are poorly expressed with word2vec and the resulting vectors do not offer good semantic representations. For example, none of the 50 closest vectors of the word paper is related to science. The idea of contextual embeddings is to generate a different vector for each context a word appears in, and the context is typically defined sentence-wise. To a large extent, this solves the problems with word polysemy, i.e. the context of a sentence is typically enough to disambiguate different meanings of a word for humans, and so it is for the learning algorithms. In this work, we describe high-quality models for contextual embeddings, called ELMo BIBREF3, precomputed for seven morphologically rich, less-resourced languages: Slovenian, Croatian, Finnish, Estonian, Latvian, Lithuanian, and Swedish. ELMo is one of the most successful approaches to contextual word embeddings. At the time of its creation, ELMo was shown to outperform previous word embeddings BIBREF3 like word2vec and GloVe on many NLP tasks, e.g., question answering, named entity extraction, sentiment analysis, textual entailment, semantic role labeling, and coreference resolution. This report is split into five further sections. In Section SECREF2, we describe the contextual embeddings ELMo. In Section SECREF3, we describe the datasets used, and in Section SECREF4 we describe preprocessing and training of the embeddings. We describe the methodology for the evaluation of the created vectors and the results in Section SECREF5. We present conclusions in Section SECREF6, where we also outline plans for further work. ELMo Typical word embedding models or representations, such as word2vec BIBREF0, GloVe BIBREF1, or FastText BIBREF2, are fast to train and have been pre-trained for a number of different languages. They do not capture the context, though, so each word is always given the same vector, regardless of its context or meaning. This is especially problematic for polysemous words. The ELMo (Embeddings from Language Models) embedding BIBREF3 is one of the state-of-the-art pretrained transfer learning models that remedies the problem and introduces a contextual component. The ELMo model's architecture consists of three neural network layers.
The output of the model after each layer gives one set of embeddings, altogether three sets. The first layer is a CNN layer, which operates on the character level. It is context independent, so each word always gets the same embedding, regardless of its context. It is followed by two biLM layers. A biLM layer consists of two concatenated LSTMs. In the first LSTM, we try to predict the following word, based on the given past words, where each word is represented by the embeddings from the CNN layer. In the second LSTM, we try to predict the preceding word, based on the given following words. It is equivalent to the first LSTM, just reading the text in reverse. In NLP tasks, any set of these embeddings may be used; however, a weighted average is usually taken. The weights of the average are learned during the training of the model for the specific task. Additionally, an entire ELMo model can be fine-tuned on a specific end task. Although ELMo is trained on the character level and is able to handle out-of-vocabulary words, a vocabulary file containing the most common tokens is used for efficiency during training and embedding generation. The original ELMo model was trained on a one-billion-word English corpus, with a given vocabulary file of about 800,000 words. Later, ELMo models for other languages were trained as well, but limited to larger languages with many resources, like German and Japanese. ELMo ::: ELMoForManyLangs Recently, the ELMoForManyLangs BIBREF6 project released pre-trained ELMo models for a number of different languages BIBREF7. These models, however, were trained on significantly smaller datasets. They used 20-million-word samples randomly drawn from the raw text released by the CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings BIBREF8, which is a combination of a Wikipedia dump and common crawl. The quality of these models is questionable. For example, we compared the Latvian model by ELMoForManyLangs with a model we trained on a complete (wikidump + common crawl) Latvian corpus, which has about 280 million tokens. The difference between the models on the word analogy task is shown in Figure FIGREF16 in Section SECREF5. As the results of the ELMoForManyLangs embeddings are significantly worse than those obtained using the full corpus, we conclude that these embeddings are not of sufficient quality. For that reason, we computed ELMo embeddings for seven languages on much larger corpora. As this effort requires access to a large amount of textual data and considerable computational resources, we made the precomputed models publicly available by depositing them to the CLARIN repository. Training Data We trained ELMo models for seven languages: Slovenian, Croatian, Finnish, Estonian, Latvian, Lithuanian and Swedish. To obtain high-quality embeddings, we used large monolingual corpora from various sources for each language. Some corpora are available online under permissive licences, others are available only for research purposes or have limited availability. The corpora used in the training datasets are a mix of news articles and general web crawl, which we preprocessed and deduplicated. Below we shortly describe the used corpora in alphabetical order of the involved languages. Their names and sizes are summarized in Table TABREF3. The Croatian dataset includes the hrWaC 2.1 corpus BIBREF9, Riznica BIBREF10, and articles of the Croatian branch of the Styria media house, made available to us through partnership in a joint project.
hrWaC was built by crawling the .hr internet domain in 2011 and 2014. Riznica is composed of Croatian fiction and non-fiction prose, poetry, drama, textbooks, manuals, etc. The Styria dataset consists of 570,219 news articles published on the Croatian 24sata news portal and niche portals related to 24sata. The Estonian dataset contains texts from two sources: the CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings BIBREF8, and news articles made available to us by Ekspress Meedia due to partnership in the project. The Ekspress Meedia dataset is composed of Estonian news articles published between 2009 and 2019. The CoNLL 2017 corpus is composed of Estonian Wikipedia and a web crawl. The Finnish dataset contains articles by the Finnish news agency STT, the Finnish part of the CoNLL 2017 dataset, and the Ylilauta downloadable version BIBREF11. STT news articles were published between 1992 and 2018. Ylilauta is a Finnish online discussion board; the corpus contains parts of the discussions from 2012 to 2014. The Latvian dataset consists only of the Latvian portion of the CoNLL 2017 corpus. The Lithuanian dataset is composed of Lithuanian Wikipedia articles from 2018, the DGT-UD corpus, and LtTenTen. DGT-UD is a parallel corpus of 23 official languages of the EU, composed of the JRC DGT translation memory of European law, automatically annotated with UD-Pipe 1.2. LtTenTen is a Lithuanian web corpus made up of texts collected from the internet in April 2014 BIBREF12. The Slovene dataset is formed from the Gigafida 2.0 corpus BIBREF13. It is a general language corpus composed of various sources, mostly newspapers, internet pages, and magazines, but also fiction and non-fiction prose, textbooks, etc. The Swedish dataset is composed of STT Swedish articles and the Swedish part of CoNLL 2017. The Finnish news agency STT publishes some of its articles in the Swedish language. They were made available to us through partnership in a joint project. The corpus contains those articles from 1992 to 2017. Preprocessing and Training Prior to training the ELMo models, we sentence- and word-tokenized all the datasets. The text was formatted in such a way that each sentence was in its own line with tokens separated by white spaces. The CoNLL 2017, DGT-UD and LtTenTen14 corpora were already pre-tokenized. We tokenized the others using the NLTK library and its tokenizers for each of the languages. There is no tokenizer for Croatian in the NLTK library, so we used the Slovene tokenizer instead. After tokenization, we deduplicated the datasets for each language separately, using the Onion (ONe Instance ONly) tool for text deduplication. We applied the tool on the paragraph level for corpora that did not have sentences shuffled and on the sentence level for the rest. We considered 9-grams with a duplicate content threshold of 0.9. For each language we prepared a vocabulary file, containing roughly one million most common tokens, i.e. tokens that appear at least $n$ times in the corpus, where $n$ is between 15 and 25, depending on the dataset size. We included the punctuation marks among the tokens. We trained each ELMo model using the default values used to train the original English ELMo (large) model. Evaluation We evaluated the produced ELMo models for all languages using two evaluation tasks: a word analogy task and a named entity recognition (NER) task. Below, we first shortly describe each task, followed by the evaluation results. Evaluation ::: Word Analogy Task The word analogy task was popularized by Mikolov et al.
The goal is to find a term $y$ for a given term $x$ so that the relationship between $x$ and $y$ best resembles the given relationship $a : b$. There are two main groups of categories: 5 semantic and 10 syntactic. To illustrate a semantic relationship, consider for example the word pair $a : b$ given as “Finland : Helsinki”. The task is to find the term $y$ corresponding to the relationship “Sweden : $y$”, with the expected answer being $y=$ Stockholm. In syntactic categories, the two words in a pair have a common stem (in some cases even the same lemma), with all the pairs in a given category having the same morphological relationship. For example, given the word pair “long : longer”, we have an adjective in its base form and the same adjective in a comparative form. The task is then to find the term $y$ corresponding to the relationship “dark : $y$”, with the expected answer being $y=$ darker, that is, the comparative form of the adjective dark. In the vector space, the analogy task is transformed into vector arithmetic and a search for nearest neighbours, i.e. we compute the distance between vectors d(vec(Finland), vec(Helsinki)) and search for the word $y$ which would give the closest result for the distance d(vec(Sweden), vec($y$)). In the analogy dataset the analogies are already pre-specified, so we are measuring how close the given pairs are. In the evaluation below, we use analogy datasets for all tested languages based on the English dataset by BIBREF14 . Due to the English-centered bias of this dataset, we used a modified dataset which was first written in the Slovene language and then translated into the other languages BIBREF15. As each instance of an analogy contains only four words, without any context, contextual models (such as ELMo) do not have enough context to generate sensible embeddings. We therefore used some additional text to form simple sentences using the four analogy words, while taking care that their noun case stays the same. For example, for the words "Rome", "Italy", "Paris" and "France" (forming the analogy Rome is to Italy as Paris is to $x$, where the correct answer is $x=$France), we formed the sentence "If the word Rome corresponds to the word Italy, then the word Paris corresponds to the word France". We generated embeddings for those four words in the constructed sentence, substituted the last word with each word in our vocabulary and generated the embeddings again. As is typical for the non-contextual analogy task, we measure the cosine distance ($d$) between the last word ($w_4$) and the combination of the first three words ($w_2-w_1+w_3$). We use the CSLS metric BIBREF16 to find the closest candidate word ($w_4$). If we find the correct word among the five closest words, we consider that entry as successfully identified. The proportion of correctly identified words forms a statistic called accuracy@5, which we report as the result. We first compare the existing Latvian ELMo embeddings from the ELMoForManyLangs project with our Latvian embeddings, followed by a detailed analysis of our ELMo embeddings. We trained the Latvian ELMo using only the CoNLL 2017 corpora. Since this is the only language where we trained the embedding model on exactly the same corpora as the ELMoForManyLangs models, we chose it for the comparison between our ELMo model and ELMoForManyLangs. In other languages, additional or other corpora were used, so a direct comparison would also reflect the quality of the corpora used for training. In Latvian, however, only the size of the training dataset is different.
ELMoForManyLangs uses only 20 million tokens and we use the whole corpus of 270 million tokens. The Latvian ELMo model from the ELMoForManyLangs project performs significantly worse than the EMBEDDIA Latvian ELMo model on all categories of the word analogy task (Figure FIGREF16). We also include the comparison with our Estonian ELMo embeddings in the same figure. This comparison shows that while the differences between our Latvian and Estonian embeddings can be significant for certain categories, the accuracy score of ELMoForManyLangs is always worse than either of our models. The comparison of the Estonian and Latvian models leads us to believe that a few hundred million tokens is a sufficiently large corpus to train ELMo models (at least for the word analogy task), but the 20-million-token corpora used in ELMoForManyLangs are too small. The results for all languages and all ELMo layers, averaged over semantic and syntactic categories, are shown in Table TABREF17. The embeddings after the first LSTM layer perform best in semantic categories. In syntactic categories, the non-contextual CNN layer performs best. Syntactic categories are less context dependent and much more morphology and syntax based, so it is not surprising that the non-contextual layer performs well. The second LSTM layer embeddings perform worst in syntactic categories, though they still outperform the CNN layer embeddings in semantic categories. Latvian ELMo performs worse than the other languages we trained, especially in semantic categories, presumably due to the smaller training data size. Surprisingly, the original English ELMo performs very poorly in syntactic categories and only outperforms Latvian in semantic categories. The low score can be partially explained by the English model scoring $0.00$ in one syntactic category, “opposite adjective”, which we have not been able to explain. Evaluation ::: Named Entity Recognition For the evaluation of ELMo models on a relevant downstream task, we used the named entity recognition (NER) task. NER is an information extraction task that seeks to locate and classify named entity mentions in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc. To allow comparison of results between languages, we used an adapted version of this task, which uses a reduced set of labels available in the NER datasets of all processed languages. The labels in the used NER datasets are simplified to a common label set of three labels (person - PER, location - LOC, organization - ORG). Each word in the NER dataset is labeled with one of the three mentioned labels or with the label 'O' (other, i.e. not a named entity) if it does not fit any of the other three labels. The number of words having each label is shown in Table TABREF19. To measure the performance of ELMo embeddings on the NER task we proceeded as follows. We embedded the text in the datasets sentence by sentence, producing three vectors (one from each ELMo layer) for each token in a sentence. We calculated the average of the three vectors and used it as the input of our recognition model. The input layer was followed by a single LSTM layer with 128 LSTM cells and a dropout layer, randomly dropping 10% of the neurons on both the output and the recurrent branch. The final layer of our model was a time-distributed softmax layer with 4 neurons. We used the ADAM optimiser BIBREF17 with learning rate 0.01 and $10^{-5}$ learning rate decay.
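A minimal Keras-style sketch of this recognition model is given below. It is illustrative only and not the authors' code: the ELMo vector dimensionality of 1024 is an assumption (that of the original large English model), and the 10% dropout is realised here through the LSTM layer's built-in dropout arguments; the loss function and training schedule follow in the text.

```python
from tensorflow.keras import layers, models

ELMO_DIM = 1024   # assumed dimensionality of one (averaged) ELMo vector
NUM_LABELS = 4    # PER, LOC, ORG and the 'O' label

# Input: one averaged ELMo vector per token (mean of the three layer outputs),
# for sentences of arbitrary length.
inputs = layers.Input(shape=(None, ELMO_DIM))
x = layers.LSTM(128, return_sequences=True,
                dropout=0.1, recurrent_dropout=0.1)(inputs)
outputs = layers.TimeDistributed(layers.Dense(NUM_LABELS, activation="softmax"))(x)

ner_model = models.Model(inputs, outputs)
ner_model.summary()
```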
We used categorical cross-entropy as the loss function and trained the model for 3 epochs. We present the results using the macro $F_1$ score, which is the average of the $F_1$-scores for each of the three NE classes (the class Other is excluded). Since the differences between the tested languages depend more on the properties of the NER datasets than on the quality of the embeddings, we cannot directly compare the ELMo models. For this reason, we take the non-contextual fastText embeddings as a baseline and predict named entities using them. The architecture of the model using fastText embeddings is the same as the one using ELMo embeddings, except that the input uses 300-dimensional fastText embedding vectors, and the model was trained for 5 epochs (instead of 3 as for ELMo). In both cases (ELMo and fastText) we trained and evaluated the model five times, because there is some random component involved in the initialization of the neural network model. By training and evaluating multiple times, we minimise this random component. The results are presented in Table TABREF21. We included the evaluation of the original English ELMo model in the same table. NER models have little difficulty distinguishing between types of named entities, but recognizing whether a word is a named entity or not is more difficult. For the languages with the smallest NER datasets, Croatian and Lithuanian, ELMo embeddings show the largest improvement over fastText embeddings. However, we can observe significant improvements with ELMo also on English and Finnish, which are among the largest datasets (English being by far the largest). Only on the Slovenian dataset did ELMo perform slightly worse than fastText; on all other EMBEDDIA languages, the ELMo embeddings improve the results. Conclusion We prepared precomputed ELMo contextual embeddings for seven languages: Croatian, Estonian, Finnish, Latvian, Lithuanian, Slovenian, and Swedish. We present the necessary background on embeddings and contextual embeddings, the details of training the embedding models, and their evaluation. We show that the size of the used training sets substantially affects the quality of the produced embeddings, and therefore the existing publicly available ELMo embeddings for the processed languages are inadequate. We trained new ELMo embeddings on larger training sets and analysed their properties on the analogy task and on the NER task. The results show that the newly produced contextual embeddings give substantially better results compared to the non-contextual fastText baseline. In future work, we plan to use the produced contextual embeddings on problems of the news media industry. The pretrained ELMo models will be deposited to the CLARIN repository by the time of the final version of this paper. Acknowledgments The work was partially supported by the Slovenian Research Agency (ARRS) core research programme P6-0411. This paper is supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No 825153, project EMBEDDIA (Cross-Lingual Embeddings for Less-Represented Languages in European News Media). The results of this publication reflect only the authors' view and the EU Commission is not responsible for any use that may be made of the information it contains.
Introduction Social media with abundant user-generated posts provide a rich platform for understanding events, opinions and preferences of groups and individuals. These insights are primarily hidden in unstructured forms of social media posts, such as in free-form text or images without tags. Named entity recognition (NER), the task of recognizing named entities from free-form text, is thus a critical step for building structural information, allowing for its use in personalized assistance, recommendations, advertisement, etc. While many previous approaches to NER BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 have shown success for well-formed text in recognizing named entities via word context resolution (e.g. LSTM with word embeddings) combined with character-level features (e.g. CharLSTM/CNN), several additional challenges remain for recognizing named entities from the extremely short and coarse text found in social media posts. For instance, short social media posts often do not provide enough textual context to resolve polysemous entities (e.g. “monopoly is da best ", where `monopoly' may refer to a board game (named entity) or a term in economics). In addition, noisy text includes a huge number of unknown tokens due to inconsistent lexical notations and frequent mentions of various newly trending entities (e.g. “xoxo Marshmelloooo ", where `Marshmelloooo' is a mis-spelling of a known entity `Marshmello', a music producer), making word-embedding-based neural NER models vulnerable. To address the challenges above for social media posts, we build upon the state-of-the-art neural architecture for NER with the following two novel approaches (Figure FIGREF1 ). First, we propose to leverage auxiliary modalities for additional context resolution of entities. For example, many popular social media platforms now provide ways to compose a post in multiple modalities - specifically image and text (e.g. Snapchat captions, Twitter posts with image URLs), from which we can obtain additional context for understanding posts. While “monopoly" in the previous example is ambiguous in its textual form, an accompanying snap image of a board game can help disambiguate among polysemous entities, thereby correctly recognizing it as a named entity. Second, we also propose a general modality attention module which chooses, per decoding step, the most informative modality among the available ones (in our case, word embeddings, character embeddings, or visual features) to extract context from. For example, the modality attention module lets the decoder attenuate the word-level signals for unknown word tokens (“Marshmellooooo" with trailing `o's) and amplify character-level features instead (capitalized first letter, lexical similarity to the known named entity token `Marshmello', etc.), thereby suppressing noisy information (the “UNK" token embedding) in decoding steps. Note that most of the previous literature in NER or other NLP tasks combines word and character-level information with naive concatenation, which is vulnerable to noisy social media posts. When an auxiliary image is available, the modality attention module determines whether to amplify this visual context in disambiguating polysemous entities, or to attenuate visual contexts when they are irrelevant to target named entities, selfies, etc. Note that the proposed modality attention module is distinct from how attention is used in other sequence-to-sequence literature (e.g. attending to a specific token within an input sequence).
Section SECREF2 provides the detailed literature review. Our contributions are three-fold: we propose (1) an LSTM-CNN hybrid multimodal NER network that takes as input both image and text for recognition of a named entity in text input. To the best of our knowledge, our approach is the first work to incorporate visual contexts for named entity recognition tasks. (2) We propose a general modality attention module that selectively chooses modalities to extract primary context from, maximizing information gain and suppressing irrelevant contexts from each modality (we treat words, characters, and images as separate modalities). (3) We show that the proposed approaches outperform the state-of-the-art NER models (both with and without using additional visual contexts) on our new MNER dataset SnapCaptions, a large collection of informal and extremely short social media posts paired with unique images. Related Work Neural models for NER have been recently proposed, producing state-of-the-art performance on standard NER tasks. For example, some of the end-to-end NER systems BIBREF4 , BIBREF2 , BIBREF3 , BIBREF0 , BIBREF1 use a recurrent neural network, usually with a CRF BIBREF5 , BIBREF6 for sequence labeling, accompanied by feature extractors for words and characters (CNN, LSTMs, etc.), and achieve state-of-the-art performance mostly without any use of gazetteer information. Note that most of these works aggregate textual contexts via concatenation of word embeddings and character embeddings. Recently, several works have addressed the NER task specifically on noisy short text segments such as Tweets, etc. BIBREF7 , BIBREF8 . They report performance gains from leveraging external sources of information such as lexical information (POS tags, etc.) and/or from several preprocessing steps (token substitution, etc.). Our model builds upon these state-of-the-art neural models for NER tasks, and improves the model in two critical ways: (1) incorporation of visual contexts to provide auxiliary information for short media posts, and (2) addition of the modality attention module, which better incorporates word embeddings and character embeddings, especially when there are many missing tokens in the given word embedding matrix. Note that we do not explore the use of gazetteer information or other auxiliary information (POS tags, etc.) BIBREF9 as it is not the focus of our study. Attention modules are widely applied in several deep learning tasks BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 . For example, they use an attention module to attend to a subset within a single input (a part/region of an image, a specific token in an input sequence of tokens, etc.) at each decoding step in an encoder-decoder framework for image captioning tasks, etc. BIBREF14 explore various attention mechanisms in NLP tasks, but do not incorporate visual components or investigate the impact of such models on noisy social media data. BIBREF15 propose to use attention for a subset of discrete source samples in transfer learning settings. Our modality attention differs from the previous approaches in that we attenuate or amplify each modality input as a whole among multiple available modalities, and that we use the attention mechanism essentially to map heterogeneous modalities into a single joint embedding space. Our approach also allows for re-use of the same model for predicting labels even when some of the modalities are missing in the input, as the other modalities would still preserve the same semantics in the embedding space.
Multimodal learning is studied in various domains and applications, aimed at building a joint model that extracts contextual information from multiple modalities (views) of parallel datasets. The task most relevant to our multimodal NER system is multimodal machine translation BIBREF16 , BIBREF17 , which aims at building a better machine translation system by taking as input a sentence in a source language as well as a corresponding image. Several standard sequence-to-sequence architectures are explored (e.g. a target-language LSTM decoder that takes as input an image first). Other previous literature includes the study of Canonical Correlation Analysis (CCA) BIBREF18 to learn feature correlations among multiple modalities, which is widely used in many applications. Other applications include image captioning BIBREF10 , audio-visual recognition BIBREF19 , visual question answering systems BIBREF20 , etc. To the best of our knowledge, our approach is the first work to incorporate visual contexts for named entity recognition tasks. Proposed Methods Figure FIGREF2 illustrates the proposed multimodal NER (MNER) model. First, we obtain word embeddings, character embeddings, and visual features (Section SECREF3 ). A Bi-LSTM-CRF model then takes as input a sequence of tokens, each of which comprises a word token, a character sequence, and an image, in their respective representations (Section SECREF4 ). At each decoding step, representations from each modality are combined via the modality attention module to produce an entity label for each token ( SECREF5 ). We formulate each component of the model in the following subsections. Notations: Let INLINEFORM0 be a sequence of input tokens with length INLINEFORM1 , with a corresponding label sequence INLINEFORM2 indicating named entities (e.g. in the standard BIO format). Each input token is composed of three modalities: INLINEFORM3 for word embedding, character embedding, and visual embedding representations, respectively. Features Similar to the state-of-the-art NER approaches BIBREF0 , BIBREF1 , BIBREF8 , BIBREF4 , BIBREF2 , BIBREF3 , we use both word embeddings and character embeddings. Word embeddings are obtained from an unsupervised learning model that learns co-occurrence statistics of words from a large external corpus, yielding word embeddings as distributional semantics BIBREF21 . Specifically, we use pre-trained embeddings from GloVe BIBREF22 . Character embeddings are obtained from a Bi-LSTM which takes as input a sequence of characters of each token, similarly to BIBREF0 . An alternative approach for obtaining character embeddings is using a convolutional neural network as in BIBREF1 , but we find that the Bi-LSTM representation of characters yields empirically better results in our experiments. Visual embeddings: To extract features from an image, we take the final hidden layer representation of a modified version of the convolutional network model called Inception (GoogLeNet) BIBREF23 , BIBREF24 trained on the ImageNet dataset BIBREF25 to classify multiple objects in the scene. Our implementation of the Inception model is 22 layers deep, training of which is made possible via “network in network" principles and several dimension reduction techniques to improve computing resource utilization. The final layer representation encodes discriminative information describing what objects are shown in an image, which provides auxiliary contexts for understanding textual tokens and entities in accompanying captions.
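As an illustration of the per-token feature extractors just described, the hypothetical PyTorch sketch below builds a character-level Bi-LSTM embedder and extracts a visual embedding from the final pooled layer of an ImageNet-pretrained CNN. It is not the authors' implementation: torchvision's GoogLeNet is used purely as a stand-in for their modified Inception network, and all names and dimensions are illustrative; word embeddings $x^{(w)}$ are simple lookups into a pre-trained GloVe matrix and are omitted here.

```python
import torch
import torch.nn as nn
from torchvision import models

class CharBiLSTM(nn.Module):
    """Character-level token embedding x^(c): a Bi-LSTM over the characters of a token,
    represented by the concatenated final hidden states of both directions."""
    def __init__(self, n_chars, char_dim=50, hidden_dim=100):
        super().__init__()
        self.embed = nn.Embedding(n_chars, char_dim)
        self.lstm = nn.LSTM(char_dim, hidden_dim, bidirectional=True, batch_first=True)

    def forward(self, char_ids):                      # char_ids: (batch, max_token_len)
        _, (h_n, _) = self.lstm(self.embed(char_ids))
        return torch.cat([h_n[0], h_n[1]], dim=-1)    # (batch, 2 * hidden_dim)

# Visual embedding x^(v): final hidden layer of an ImageNet-trained CNN,
# with the classification head removed (stand-in for the modified Inception model).
cnn = models.googlenet(weights="DEFAULT")
cnn.fc = nn.Identity()                                # keep the 1024-d pooled representation
cnn.eval()
with torch.no_grad():
    x_v = cnn(torch.randn(1, 3, 224, 224))            # (1, 1024)

x_c = CharBiLSTM(n_chars=100)(torch.randint(0, 100, (1, 12)))   # (1, 200)
```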
Incorporating this visual information into the traditional NER system is an open challenge, and multiple approaches can be considered. For instance, one may provide visual contexts only as an initial input to the decoder as in some encoder-decoder image captioning systems BIBREF26 . However, we empirically observe that an NER decoder which takes as input the visual embeddings at every decoding step (Section SECREF4 ), combined with the modality attention module (Section SECREF5 ), yields better results. Lastly, we add a transform layer for each feature INLINEFORM0 before it is fed to the NER entity LSTM. Bi-LSTM + CRF for Multimodal NER Our MNER model is built on a Bi-LSTM and CRF hybrid model. We use the following implementation for the entity Bi-LSTM: $$i_t = \sigma (W_{xi} h_{t-1} + W_{ci} c_{t-1}) \\ c_t = (1-i_t) \odot c_{t-1} + i_t \odot \tanh (W_{xc} x_t + W_{hc} h_{t-1}) \\ o_t = \sigma (W_{xo} x_t + W_{ho} h_{t-1} + W_{co} c_t) \\ h_t = {\rm LSTM}(x_t) = o_t \odot \tanh (c_t)$$ where $x_t$ is a weighted average of the three modalities $x_t^{(w)}, x_t^{(c)}, x_t^{(v)}$ via the modality attention module, which will be defined in Section SECREF5 . Bias terms for gates are omitted here for simplicity of notation. We then obtain bi-directional entity token representations INLINEFORM0 by concatenating its left and right context representations. To enforce structural correlations between labels in sequence decoding, INLINEFORM1 is then passed to a conditional random field (CRF) to produce a label for each token maximizing the following objective: $$y^* = \arg \max _y \; p(y|h; W_{\rm CRF}) \\ p(y|h; W_{\rm CRF}) = \frac{\prod _t \psi _t (y_{t-1}, y_t; h)}{\sum _{y^{\prime }} \prod _t \psi _t (y^{\prime }_{t-1}, y^{\prime }_t; h)}$$ where $\psi _t$ is a potential function and $W_{\rm CRF}$ is a set of parameters that defines the potential functions and weight vectors for label pairs ( $y_{t-1}, y_t$ ). Bias terms are omitted for brevity of formulation. The model can be trained via log-likelihood maximization for the training set: $$\mathcal {L}(W_{\rm CRF}) = \sum _i \log p(y^{(i)}|h^{(i)}; W)$$ Modality Attention The modality attention module learns a unified representation space for multiple available modalities (words, characters, images, etc.), and produces a single vector representation with aggregated knowledge among multiple modalities, based on their weighted importance. We motivate this module from the following observations. A majority of the previous literature combines word and character-level contexts by simply concatenating the word and character embeddings at each decoding step, e.g. INLINEFORM0 in Eq. SECREF4 . However, this naive concatenation of two modalities (word and characters) results in inaccurate decoding, specifically for unknown word token embeddings (an all-zero vector INLINEFORM1 or a random vector INLINEFORM2 is assigned for any unknown token INLINEFORM3 , thus INLINEFORM4 or INLINEFORM5 ). While this concatenation approach does not cause significant errors for well-formatted text, we observe that it induces performance degradation for our social media post datasets, which contain a significant number of missing tokens. Similarly, naive merging of textual and visual information ( INLINEFORM0 ) yields suboptimal results, as each modality is treated as equally informative, whereas in our datasets some of the images may contain contexts irrelevant to the textual modalities. Hence, ideally there needs to be a mechanism by which the model can effectively turn the switch on and off for each modality, adaptive to each sample.
To this end, we propose a general modality attention module, which adaptively attenuates or emphasizes each modality as a whole at each decoding step INLINEFORM0 , and produces a soft-attended context vector INLINEFORM1 as an input token for the entity LSTM: $$[a_t^{(w)}, a_t^{(c)}, a_t^{(v)}] = \sigma \big ( W_m \cdot [x_t^{(w)}; x_t^{(c)}; x_t^{(v)}] + b_m \big ) \\ \alpha _t^{(m)} = \frac{\exp (a_t^{(m)})}{\sum _{m^{\prime } \in \lbrace w,c,v\rbrace } \exp (a_t^{(m^{\prime })})} \quad \forall m \in \lbrace w,c,v\rbrace \\ x_t = \sum _{m \in \lbrace w,c,v\rbrace } \alpha _t^{(m)} x_t^{(m)}$$ where $[a_t^{(w)}, a_t^{(c)}, a_t^{(v)}]$ is an attention vector at each decoding step $t$ , and $x_t$ is the final context vector at $t$ that maximizes information gain for the token at step $t$ . Note that the optimization of the objective function (Eq. SECREF4 ) with modality attention (Eq. SECREF5 ) requires each modality to have the same dimension ( INLINEFORM5 ), and that the transformation via $W_m$ essentially enforces each modality to be mapped into the same unified subspace, in which the weighted average encodes discriminative features for recognition of named entities. When visual context is not provided with each token (as in the traditional NER task), we can define the modality attention for word and character embeddings only in a similar way: $$[a_t^{(w)}, a_t^{(c)}] = \sigma \big ( W_m \cdot [x_t^{(w)}; x_t^{(c)}] + b_m \big ) \\ \alpha _t^{(m)} = \frac{\exp (a_t^{(m)})}{\sum _{m^{\prime } \in \lbrace w,c\rbrace } \exp (a_t^{(m^{\prime })})} \quad \forall m \in \lbrace w,c\rbrace \\ x_t = \sum _{m \in \lbrace w,c\rbrace } \alpha _t^{(m)} x_t^{(m)}$$ Note that while we apply this modality attention module to the Bi-LSTM+CRF architecture (Section SECREF4 ) for its empirical superiority, the module itself is flexible and thus can work with other NER architectures or for other multimodal applications. SnapCaptions Dataset The SnapCaptions dataset is composed of 10K user-generated image (snap) and textual caption pairs where named entities in captions are manually labeled by expert human annotators (entity types: PER, LOC, ORG, MISC). These captions are collected exclusively from snaps submitted to public and crowd-sourced stories (aka Snapchat Live Stories or Our Stories). Examples of such public crowd-sourced stories are “New York Story” or “Thanksgiving Story”, which comprise snaps that are aggregated for various public events, venues, etc. All snaps were posted between 2016 and 2017, and do not contain raw images or other associated information (only textual captions and obfuscated visual descriptor features extracted from the pre-trained InceptionNet are available). We split the dataset into train (70%), validation (15%), and test sets (15%). The captions data have an average length of 30.7 characters (5.81 words) with a vocabulary size of 15,733, of which 6,612 are considered unknown tokens with respect to Stanford GloVe embeddings BIBREF22 . Named entities annotated in the SnapCaptions dataset include many new and emerging entities, and they are found in various surface forms (various nicknames, typos, etc.). To the best of our knowledge, SnapCaptions is the only dataset that contains natural image-caption pairs with expert-annotated named entities. Baselines Task: given a caption and a paired image (if used), the goal is to label every token in a caption in the BIO scheme (B: beginning, I: inside, O: outside) BIBREF27 . We report the performance of the following state-of-the-art NER models as baselines, as well as several configurations of our proposed approach to examine the contributions of each component (W: word, C: char, V: visual). Bi-LSTM/CRF (W only): only takes word token embeddings (Stanford GloVe) as input. The rest of the architecture is kept the same. Bi-LSTM/CRF + Bi-CharLSTM (C only): only takes a character sequence of each word token as input.
(No word embeddings.) Bi-LSTM/CRF + Bi-CharLSTM (W+C) BIBREF0 : takes as input both word embeddings and character embeddings extracted from a Bi-CharLSTM. The entity LSTM takes concatenated vectors of word and character embeddings as input tokens. Bi-LSTM/CRF + CharCNN (W+C) BIBREF1 : uses character embeddings extracted from a CNN instead. Bi-LSTM/CRF + CharCNN (W+C) + Multi-task BIBREF8 : trains the model to perform both recognition (into multiple entity types) and segmentation (binary) tasks. (proposed) Bi-LSTM/CRF + Bi-CharLSTM with modality attention (W+C): uses the modality attention to merge word and character embeddings. (proposed) Bi-LSTM/CRF + Bi-CharLSTM + Inception (W+C+V): takes as input visual contexts extracted from InceptionNet as well, concatenated with word and char vectors. (proposed) Bi-LSTM/CRF + Bi-CharLSTM + Inception with modality attention (W+C+V): uses the modality attention to merge word, character, and visual embeddings as input to the entity LSTM. Results: SnapCaptions Dataset Table TABREF6 shows the NER performance on the SnapCaptions dataset. We report both entity type recognition (PER, LOC, ORG, MISC) and named entity segmentation (named entity or not) results. Parameters: We tune the parameters of each model with the following search space (bold indicates the choice for our final model): character embeddings dimension: {25, 50, 100, 150, 200, 300}, word embeddings size: {25, 50, 100, 150, 200, 300}, LSTM hidden states: {25, 50, 100, 150, 200, 300}, and INLINEFORM0 dimension: {25, 50, 100, 150, 200, 300}. We optimize the parameters with Adagrad BIBREF28 with batch size 10, learning rate 0.02, epsilon INLINEFORM1 , and decay 0.0. Main Results: When visual context is available (W+C+V), we see that the model performance greatly improves over the textual models (W+C), showing that visual contexts are complementary to textual information in named entity recognition tasks. In addition, it can be seen that the modality attention module further improves the entity type recognition performance for (W+C+V). This result indicates that the modality attention is able to focus on the most effective modality (visual, words, or characters), adaptive to each sample, to maximize information gain. Note that our text-only model (W+C) with the modality attention module also significantly outperforms the state-of-the-art baselines BIBREF8 , BIBREF1 , BIBREF0 that use the same textual modalities (W+C), showing the effectiveness of the modality attention module for textual models as well. Error Analysis: Table TABREF17 shows example cases where incorporation of visual contexts affects the prediction of named entities. For example, the token `curry' in the caption “The curry's " is polysemous and may refer to either a type of food or the famous basketball player `Stephen Curry', and the surrounding textual contexts do not provide enough information to disambiguate it. On the other hand, visual contexts (visual tags: `parade', `urban area', ...) provide similarities to the token's distributional semantics from other training examples (snaps from the “NBA Championship Parade Story"), and thus the model successfully predicts the token as a named entity. Similarly, while the text-only model erroneously predicts `Apple' in the caption “Grandma w dat lit Apple Crisp" as an organization (Apple Inc.), the visual contexts (describing objects related to food) help disambiguate the token, making the model predict it correctly as a non-named entity (a fruit).
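These disambiguation effects are driven by the modality attention weights $\alpha _t^{(m)}$ defined in the Proposed Methods section. For reference, the sketch below gives a hypothetical PyTorch rendering of that module; the choice of a sigmoid for $\sigma$, the transform-layer dimensions, and all names are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class ModalityAttention(nn.Module):
    """Weights whole modalities (word / char / visual) per decoding step and
    returns their soft-attended mixture x_t."""
    def __init__(self, dims, common_dim):
        super().__init__()
        # transform layers: map each modality into a shared space of size common_dim
        self.transforms = nn.ModuleList([nn.Linear(d, common_dim) for d in dims])
        # W_m, b_m: one attention score per modality from the concatenated inputs
        self.score = nn.Linear(len(dims) * common_dim, len(dims))

    def forward(self, feats):                      # feats: list of (batch, dim_m) tensors
        xs = [t(f) for t, f in zip(self.transforms, feats)]      # each (batch, common_dim)
        a = torch.sigmoid(self.score(torch.cat(xs, dim=-1)))     # a_t, one score per modality
        alpha = torch.softmax(a, dim=-1)                          # modality weights alpha_t
        mixed = sum(alpha[:, m:m + 1] * xs[m] for m in range(len(xs)))
        return mixed, alpha

# usage: 300-d GloVe, 200-d char Bi-LSTM, 1024-d visual features -> one 200-d token vector
att = ModalityAttention(dims=[300, 200, 1024], common_dim=200)
x_t, alpha_t = att([torch.randn(2, 300), torch.randn(2, 200), torch.randn(2, 1024)])
```

Dropping the visual input simply amounts to passing two modalities instead of three, matching the text-only (W+C) variant of the module.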
Trending entities (musicians or DJs such as `CID', `Duke Dumont', `Marshmello', etc.) are also recognized correctly with strengthened contexts from visual information (describing concert scenes) despite the lack of surrounding textual contexts. The few cases where visual contexts harmed the performance mostly involve visual tags that are unrelated to a token or its surrounding textual contexts. Visualization of Modality Attention: Figure FIGREF19 visualizes the modality attention module at each decoding step (each column), where an amplified modality is represented with a darker color and an attenuated modality with a lighter color. For the image-aided model (W+C+V; upper row in Figure FIGREF19 ), we confirm that the modality attention successfully attenuates irrelevant signals (selfies, etc.) and amplifies relevant modality-based contexts in the prediction of a given token. In the example of “disney word essential = coffee" with visual tags selfie, phone, person, the modality attention successfully attenuates distracting visual signals and focuses on textual modalities, consequently making correct predictions. The named entities in the examples of “Beautiful night atop The Space Needle" and “Splash Mountain" are challenging to predict because they are composed of common nouns (space, needle, splash, mountain), and thus they often need additional contexts to be predicted correctly. In the training data, visual contexts make stronger indicators for these named entities (space needle, splash mountain), and the modality attention module successfully attends more to the stronger signals. For the text-only model (W+C), we observe that performance gains mostly come from the modality attention module better handling tokens unseen during training or unknown tokens from the pre-trained word embeddings matrix. For example, while WaRriOoOrs and Kooler Matic are missing tokens in the word embeddings matrix, the module successfully amplifies character-based contexts (capitalized first letters, similarity to known entities `Golden State Warriors') and suppresses word-based contexts (word embeddings for unknown tokens `WaRriOoOrs'), leading to correct predictions. This result is significant because it shows that the performance of the model, with an almost identical architecture, can still improve without having to scale the word embeddings matrix indefinitely. Figure FIGREF19 (b) shows the cases where the modality attention led to incorrect predictions. For example, the model predicts the missing tokens HUUUGE and Shampooer incorrectly as named entities by amplifying misleading character-based contexts (capitalized first letters) or visual contexts (concert scenes, whose associated contexts often include named entities in the training dataset). Sensitivity to Word Embeddings Vocabulary Size: In order to isolate the effectiveness of the modality attention module on textual models in handling missing tokens, we report the performance with varying word embeddings vocabulary sizes in Table TABREF20 . When we artificially increase the number of missing tokens by randomly removing words from the word embeddings matrix (original vocabulary size: 400K), we observe that while the overall performance degrades, the modality attention module is able to suppress the performance degradation. Note also that the performance gap generally gets bigger as we decrease the vocabulary size of the word embeddings matrix.
This result is significant in that the modality attention makes the model more robust to missing tokens without having to train an indefinitely large word embeddings matrix for arbitrarily noisy social media text datasets. Conclusions We proposed a new multimodal NER (MNER: image + text) task on short social media posts. We demonstrated for the first time an effective MNER system, in which visual information is combined with textual information to outperform traditional text-based NER baselines. Our work can be applied to a myriad of social media posts and other articles across multiple platforms that often include both text and accompanying images. In addition, we proposed the modality attention module, a new neural mechanism which learns the optimal integration of different modes of correlated information. In essence, the modality attention learns to attenuate irrelevant or uninformative modal information while amplifying the primary modality to extract better overall representations. We showed that the modality-attention-based model outperforms other state-of-the-art baselines when text was the only modality available, by better combining word- and character-level information.
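To make the fusion step concrete, below is a minimal sketch of the modality attention module described above: a learned map scores each modality, a softmax turns the scores into weights, and the fused vector is the weighted sum of the per-modality embeddings. The use of NumPy, the sigmoid activation, the variable names, and the toy dimensions are illustrative assumptions made for this note, not the authors' released implementation.

import numpy as np

def modality_attention(x_w, x_c, x_v, W_m, b_m):
    # x_w, x_c, x_v: word, character, and visual vectors, all of dimension d
    # W_m: (3, 3d) projection, b_m: (3,) bias -> one scalar score per modality
    concat = np.concatenate([x_w, x_c, x_v])             # shape (3d,)
    a = 1.0 / (1.0 + np.exp(-(W_m @ concat + b_m)))      # sigmoid scores, shape (3,)
    alpha = np.exp(a) / np.exp(a).sum()                  # softmax over the three modalities
    return alpha[0] * x_w + alpha[1] * x_c + alpha[2] * x_v  # fused vector, shape (d,)

# Toy usage with d = 4 and random parameters.
d = 4
rng = np.random.default_rng(0)
x_w, x_c, x_v = rng.normal(size=(3, d))
W_m, b_m = rng.normal(size=(3, 3 * d)), np.zeros(3)
print(modality_attention(x_w, x_c, x_v, W_m, b_m).shape)  # (4,)

The text-only variant is obtained by dropping x_v and using a (2, 2d) projection, mirroring the second set of equations above.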
Does their NER model learn NER from both text and images?
Yes
3,784
qasper
4k
A special tribute to Del Bigtree (pictured) and his team at ICAN for his stunning 88 page letter to the HHS regarding vaccine safety. As Del reported - in the latest edition of Highwire - the letter, in response to an earlier reply from the then acting Director of the National Vaccine Program Office, Melinda Wharton, took virtually a year to compile, and is a meticulous piece of research. Most sensationally, they researched through US government archives the HHS claim that at least some pediatric vaccines had been trialed against genuine placebo, and came to a negative conclusion. Not only that, they established that none of the vaccines those vaccines had been trialed against had ever been trialed against genuine placebo either. At the end of the line the toxic products were only being compared with other toxic products, rather than against saline. Leaving aside the sceptics, for any believer in the vaccine program as a necessary intervention in public health, this should be a devastating finding. Fundamentally, the research into the safety of any of the products before marketing was simply not there. The manufacturers apparently had no faith that their proto-products could withstand this scrutiny, and for the rest they just did not care: under the alleged imperative of protecting the population it seems anything went. So even before all the sham monitoring procedures and reviews which Del and his team dismantle in forensic detail, we are left with the proposition that none of the present products being given to US children – and frequently other children across most of the developed world – have any meaningful pre-marketing safety data at all. If you are a believer in the program you have been let down: if you wanted a program with any pretensions to safety - supposing such a thing to be possible - it looks like you would have to start from scratch. The manufacturers did this: the governments, the politicians and the regulators (internationally) let it happen. This damning document is published simultaneously with a demand in the UK from the Royal Society for Public Health (which I had never heard of) to shut down comment about vaccines on the web. It echoes calls from Seth Berkley of GAVI, Heidi Larson of the Vaccine Confidence Project and the European Parliament. The pamphlet airily dismisses concerns that vaccines have side effects or that you could possibly have too many. It is pure public relations, and if the RSPH claims to be "independent" it also admits that the publication was paid for by Merck, a detail which was reported by the British Medical Journal and the Guardian but, true to form, not by the BBC. We have, in truth, been building to this moment for two decades: as the evidence piles up that every single aspect of the program lacks integrity or is simply rotten to the core, all the perpetrators can do is call for the silencing of their critics, and maintain the products are safe because they say so. Please help give the ICAN letter the widest possible distribution, particularly to politicians. "The outcome of disease always depends both on the virulence of the pathogen and the health of the individual immune system." Nope. This makes no sense. Lots of people who seemed vibrant will get a very severe case of the same illness that a vulnerable baby overcomes in a day. And under the germ theory it doesn't matter how strong your immune system *was*. Once it's been overcome by the pathogen it is every bit as weak as anybody else's with that pathogen. What you say makes no sense.
There's no reason for me to reply to you again. "Why do you think that within a few years (not many) of the introduction of the vaccines for them, pertussis, measles, mumps, rubella, tetanus, diphtheria, Hib disease, and chickenpox (and others) almost entirely disappeared?" Why do you keep asking this question when I've already provided the answer hundreds of times? Why are you so desperate to believe the people who you already recognize are harming our children? Why would Walter Reed be any more trustworthy than Paul Offit or Senator Pan? Why would Jenner or Pasteur? And you went no way to explaining my arguments against germ theory. If we are attacked by billions of viruses every day then if even a tiny fraction of them are pathogenic then we couldn't possibly survive. And even if we could, we would already be immune rendering every vaccine pointless. Once we had survived our first few days on earth, then we could never get sick again. If that's wrong then we must conclude that precisely 0% of germs are pathogenic. Plus your comment about the immune system completely misunderstood my point. The immune system does not allow us to overcome our math problem. In fact, it makes it worse. You did provide one solitary example of a patient with what are presumably yellow fever symptoms but you didn't say whether they had been given any toxic medical treatments. And like I said before, the whole "incubation period" is more than a little suspicious. Clearly they never found what they thought they would and just rigged the results to tell them what they want to hear. Like every other germ theorist/vaccine promoter in history. Many kinds of bacteria are constantly evolving and changing, like flu viruses. Others are more stable over time, like the yellow fever virus. Those that change develop new ways of infiltrating the cells of the organism being attacked (from our point of view, from its unconscious point of view, it's just carrying out its need to replicate, which it can only do inside the cells of its host). The changes which allow it to better infiltrate are more successful and result in more viruses with those traits. Our immune system is designed to detect and destroy potentially dangerous invading pathogens. Many bacteria are usually harmless and absolutely necessary. The minority are dangerous, and most people's immune systems do a good job of analyzing them and killing them, often with no signs of disease. Others experience a clinical infection, and the immune system usually mounts a successful attack on them. The outcome of disease always depends both on the virulence of the pathogen and the health of the individual immune system. Vaccines are usually effective in giving immunity to the targeted diseases. They also have many dangers which everyone should be aware of, and vaccines should be avoided whenever possible. But in the case of the most dangerous diseases, everyone should learn about them and think about what he wants to do to protect himself and his children from them, considering all the factors involved. And no one can have 100% certainty that he has made the right decision, but that's life. But if you live in the Congo and many people around you are currently dying of yellow fever, then that means that you yourself are at risk of being bitten by a loaded mosquito and getting, often dying, of yellow fever. The yellow fever vaccine is very effective at preventing yellow fever. From there, each person must make a choice. 
At the end of this stage there is a remission of two or three days. About 80% of those with clinical disease recover at this point, with permanent immunity. The other 20% enter the toxic stage, with a return of the fever, black vomit (coffee-ground emesis), diarrhea, a slowing of the pulse (Faget's sign), jaundice, yellow eyes, yellow skin, and failure of the kidneys, liver, and heart. The patient gets a strange hiccup (like with Ebola, a related disease), falls into a coma, and dies. About half of those patients who enter the toxic stage dies, even now, even with the best of hospital care. The Faget's sign can also occur at the end of the first stage. You asked specifically about the symptoms of the Americans on Dr. Reed's team who got yellow fever in Cuba in 1900. I'll give the passage from The American Plague (162-5), which describes the course of Jesse Lazear's illness. "In his logbook, Lazear wrote an unusual entry on September 13. In all cases before those, page after page of records, Lazear had used the soldier's name and simply the date he was bitten, with no other attention to the mosquito. A one-line entry with a name and a date. On that day, however, in his elegant hand, Lazear did not write the soldier's name, but instead wrote 'Guinea Pig No. 1.' He went on to write that this guinea pig had been bitten by a mosquito that developed from an egg laid by a mosquito that developed from an egg laid by a mosquito that fed on a number of yellow fever cases: Suarez, Hernández, De Long, Ferández. It was a precise, detailed history that proved beyond doubt that the mosquito was loaded with the virus when it bit a healthy soldier...(If he had entered his name, then his death would have been considered medical suicide by the insurance company, and his wife and two children would not have gotten any payment.) For the next few days, Lazear's life continued much as it had over the last few months in Cuba. He fed and cared for the mosquitoes in the lab. ..Then he began to lose his appetite. He skipped a few meals in the mess hall. He didn't mention it to anyone, nor did he ask to see one of the yellow fever doctors; instead, he worked hard in the lab trying to ignore the oncoming headache. "On September 18, he complained of feeling 'out of sorts,' and stayed in his officer's quarters. His head pounded and L. decided to write a letter. ..(he wrote to his mother, and referred to his one-year old son Houston and the baby his wife Mabel was about to have: they were staying with his mother in the US). ..That night, L. started to feel chilled as the fever came on. He never went to sleep but worked at his desk all through the night, trying to get all the information about the mosquitoes organized. By morning, he showed all the signs of a severe attack of yellow fever. The camp doctors made the diagnosis, and L. agreed to go to the yellow fever ward. ..L. was carried by litter out of the two-room, white pine board house in which he had lived since he and Mabel first arrived in Cuba. ..(In the yellow fever ward, in a separate one-room building), Lena Warner (the immune nurse who had survived the yellow fever in 1878, when she was nine, and was found in her boarded-up house by a former slave who first thought she was dead, and carried her to safety) nursed J.L., recording his vitals. (I put up a link to his case record and vital signs last week. The surgeon general required that this record be made for every yellow fever patient.)... 
(On September 25,) Lena Warner braced L's arms with all of her weight, shouting for help. Still he bolted from the bed, darting around the small frame-wood room as wildly as a trapped insect beating against glass. Two soldiers ran into the ward, pinning L to his bed, tying restraints around his wrists and elbows. ..Warner sponged his body with iced whiskey and water. She recorded his temperature, which had held at 104 degrees for days, on the chart beside his bed. ..(Warner watched him sleep.) But the quiet did not last. L's body began to lurch, and black vomit rolled from his mouth; through the bar hanging above his hospital cot. He writhed in the bed, and his skin grew deep yellow. His 104 temperature slowly fell, leveling out 99 degrees, and JL died at 8:45 p.m. at the age of thirty-four." As is obvious, there are many problems with vaccines. But, that being said, most of them usually work for a period of time to prevent the targeted diseases. The basic science behind vaccines is correct. Why do you think that within a few years (not many) of the introduction of the vaccines for them, pertussis, measles, mumps, rubella, tetanus, diphtheria, Hib disease, and chickenpox (and others) almost entirely disappeared? In the case of the routine childhood diseases, this was a bad thing, but it is a true thing. Vaccines usually don't cause any obvious reactions. While they usually prevent the diseases, and that's why people continue to get them. With the increasing vaccination schedule, more and more are severely and permanently damaged, and it is immoral to mandate any vaccine for anyone for this reason. But it would also be immoral to prohibit vaccines for those who want them enough to take the risk. Your article said as though it had any probative value that 90% of those who get pertussis had been vaxxed. The old DPT vaccine was MUCH more effective at preventing pertussis, but it was so dangerous (again, not to most, but to many), that developed countries replaced it with the acellular version, DTaP. From the beginning about twenty years ago, it was clear that it was not very effective and that huge numbers of vaxxed people got pertussis anyway, including my daughter who got pertussis at eight month old after having gotten three DTaPs. The pertussis vaccine continues to be very dangerous, and I do not recommend that anyone get it. It used to be a killer disease, but evolved to become much milder, to the extent that the disease is very rarely dangerous (usually only to newborns under three months old), while the vaccine is very dangerous. And they're trying to see how they can go back to the old DPT. This does not show that vaccine science has collapsed, but rather that the vaccine they developed to replace the DPT turned out to be much less effective than they first thought, while continuing to be much more dangerous than they first thought. Your article extrapolated from that that modern medical science in general has collapsed, but that, again, is going too far. A older woman in Mexico City who is like my mother to me had a pacemaker inserted about two months ago to aid her failing heart, and it has restored her to optimism and energy, when she was despondent, weak, and close to death. I took my daughter to the dentist yesterday, who said she has three wisdom teeth coming in and that she said that the lower right one was sore. 
So, although I am cautious about X-rays, I made an appointment for a panoramic X-ray in a month to assess the wisdom teeth, and, if it seems appropriate, I'll take her to an oral surgeon to have one or more extracted under IV sedation, in his office, if possible (the dentist thought that it would be). And I am confident that there will be no serious problems, but this is thanks to technology and training in modern medicine that haven't been available for that long. I think that everyone should inform himself on all medical procedures before agreeing to anything, but I also think that he should have access to any medical procedure which is reasonable (and opinions can differ as to that). One problem is that you have not said how you think people should protect themselves against tetanus, bacterial meningitis, and yellow fever in the relevant cases, for example. These are diseases which healthy, well-nourished people used to die from very readily. If most people stopped vaxxing and the mortality from these diseases rose to something like pre-vaccine levels, do you think they should just accept dying from them? I put that in a separate paragraph because it is the crucial issue. balinaheuchter Air Traffic Control You Tube - Colin Campbell example of - How to "Fudge a Nudge" -"Deal" or "No Deal" "Not in a month of Sundays" "No exceptions/no compromise?" -make a trade off -do an exception- everyone get's a good deal /good outcome! Hans, you are right that we are looking at one of the biggest crimes in all history. When I read the story of that poor girl who was so healthy and is now confined to a wheelchair after getting her third Gardasil shot I could not believe that Merck could produce such a toxic vaccine and give it out to girls like it was something they absolutely had to have only to be mislead and made into cripples. Merck should be prosecuted for the damage they have done to so many girls who got the Gardasil vaccine and were physically debilitated for life. There is a place for the people who perpetrated this crime on young girls and women and it is called hell. They have destroyed people's lives and gotten away with it. My heart goes out to those who have suffered this damage for no damn good reason except to help make huge profits for Merck! Here is the reason that the germ theory is nonsense. 1) Everyday we are bombarded with billions of germs. Presumably at least some of them are of the kind that germ theorists believe are dangerous (otherwise we would have to conclude that none of them are dangerous). So how do we survive? 2) Let's just say that we ignore 1 and imagine that, by way of magic, none of the billions of viruses we get bombarded with are pathogenic but all those that are are tucked away somewhere. Ok. But presumably they reside in sick people right? So where are there lots of sick people? Doctor offices and hospitals! So everybody must be dying the moment they enter these places right? 3) I love this one because I have never seen anybody else ever raise it. Under the germ theory there are no negative feedbacks. This makes a stable biological system by definition impossible. The immune system is *not* a negative feedback it is the opposite. It actually reinforces our math problem because the immune system will weaken as the number of pathogens increase. There is no way of resolving this problem without a discontinuity. A Deus ex Machina as The Almighty Pill so beautifully put it. So the germ theory is quite literally, mathematically impossible. 
There is as much chance of it being true as 2+2 = 5. There are plenty of other massive problems with germ theory such as why did things like SARS and bird flu magically disappear? Why do we have the symptoms that we do? Is our body controlling the symptoms to help fight the germs and if so, why would suppressing the symptoms with antibiotics or Tamiflu be considered a good idea? If the virus is causing the symptoms then why would it cause these kinds of things?
What were the vaccines trialed against?
Other toxic products.
3,141
multifieldqa_en
4k
Paper Info
Title: Conflict Optimization for Binary CSP Applied to Minimum Partition into Plane Subgraphs and Graph Coloring
Publish Date: 25 Mar 2023
Author List: Loïc Crombez (LIMOS, Université Clermont Auvergne), Guilherme Da Fonseca (LIS, Aix-Marseille Université), Florian Fontan (Independent Researcher), Yan Gerard (LIMOS, Université Clermont Auvergne), Aldo Gonzalez-Lorenzo (LIS, Aix-Marseille Université), Pascal Lafourcade (LIMOS, Université Clermont Auvergne), Luc Libralesso (LIMOS, Université Clermont Auvergne), Benjamin Momège (Independent Researcher), Jack Spalding-Jamieson (David R. Cheriton School of Computer Science, University of Waterloo), Brandon Zhang (Independent Researcher), Da Zheng (Department of Computer Science, University of Illinois at Urbana-Champaign)
Figure captions:
Figure 1: A partition of the input graph of the CG:SHOP 2022 instance vispecn2518 into 57 plane graphs. It is the smallest instance of the challenge, with 2518 segments. On the top left, you see all 57 colors together. On the top right, you see a clique of size 57, hence the solution is optimal. Each of the 57 colors is then presented in small figures.
Figure 2: Number of colors over time for the instance vispecn13806 using different values of p. The algorithm uses σ = 0.15, easy vertices, q_max = 59022, but does not use BDFS nor any clique.
Figure 3: Number of colors over time with different values of q_max obtained on the instance vispecn13806. Parameters are σ = 0.15, p = 1.2, no clique knowledge, and no BDFS.
Figure 4: Number of colors over time with and without clique knowledge and BDFS obtained on the instance vispecn13806. Parameters are σ = 0.15, p = 1.2, and q_max = 1500000.
Figure 5: Number of colors over time for the instance vispecn13806 for different values of σ. In both figures the algorithm uses p = 1.2, easy vertices, q_max = 59022, but does not use BDFS nor any clique. For σ ≥ 0.25, no solution better than 248 colors is found.
Figure 6: Number of colors over time (in hours) for the instance vispecn13806.
Table captions: Several CG:SHOP 2022 results: we compare the size of the largest known clique to the smallest coloring found by each team on a selection of 14 CG:SHOP 2022 instances. Comparison with state-of-the-art graph coloring algorithms: the conflict optimizer underperforms except on the geometric graphs r* and dsjr*.
Acknowledgements (fragment): [...]-CE39-0007), SEVERITAS (ANR-20-CE39-0005) and by the French government IDEX-ISITE initiative 16-IDEX-0001 (CAP 20-25). The work of Luc Libralesso is supported by the French ANR PRC grant DECRYPT (ANR-18-CE39-0007).
abstract CG:SHOP is an annual geometric optimization challenge, and the 2022 edition proposed the problem of coloring a certain geometric graph defined by line segments. Surprisingly, the top three teams used the same technique, called conflict optimization. This technique was introduced in the 2021 edition of the challenge to solve a coordinated motion planning problem. In this paper, we present the technique in the more general framework of binary constraint satisfaction problems (binary CSP). Then, the top three teams describe their different implementations of the same underlying strategy. We evaluate the performance of those implementations on vertex coloring, not only for geometric graphs but also for other types of graphs. Introduction The CG:SHOP challenge (Computational Geometry: Solving Hard Optimization Problems) is an annual geometric optimization competition, whose first edition took place in 2019.
The 2022 edition proposed a problem called minimum partition into plane subgraphs. The input is a graph G embedded in the plane with edges drawn as straight line segments, and the goal is to partition the set of edges into a small number of plane graphs (Fig. ) . This goal can be formulated as a vertex coloring problem on a conflict graph defined as follows. Its vertices are the segments defining the edges of G, and its edges correspond to pairs of crossing segments (segments that intersect only at a common endpoint are not considered crossing). The three top-ranking teams (Lasa, Gitastrophe, and Shadoks) in the CG:SHOP 2022 challenge all used a common approach called conflict optimization, while the fourth team used a SAT-Boosted Tabu Search . Conflict optimization is a technique used by Shadoks to obtain the first place in the CG:SHOP 2021 challenge for low-makespan coordinated motion planning, and the main ideas of the technique lent themselves well to the 2022 challenge. Next, we describe the conflict optimizer as a metaheuristic to solve constraint satisfaction problems (CSP). We start by describing a CSP. A CSP is a triple of • variables X = (x_1, ..., x_n), • domains D = (D_1, ..., D_n), and • constraints R. Each variable x_i must be assigned a value in the corresponding domain D_i such that all constraints are satisfied. In general, the constraints may forbid arbitrary subsets of values. We restrict our attention to a particular type of constraints (binary CSP), which only involve pairs of assignments. A partial evaluation is an assignment of a subset of the variables, called evaluated, with the remaining variables called non-evaluated. All constraints involving a non-evaluated variable are satisfied by default. We only consider assignments and partial assignments that satisfy all constraints. The conflict optimizer iteratively modifies a partial evaluation with the goal of emptying the set S of non-evaluated variables, at which point it stops. At each step, a variable x_i is removed from S. If there exists a value x ∈ D_i that satisfies all constraints, then we assign the value x to the variable x_i. Otherwise, we proceed as follows. For each possible value x ∈ D_i, we consider the set K(i, x) of variables (other than x_i) that are part of constraints violated by the assignment x_i = x. We assign to x_i the value x that minimizes the sum of w(j) over the variables x_j ∈ K(i, x), where w(j) is a weight function to be described later. The variables x_j ∈ K(i, x) become non-evaluated and are added to S. The weight function should be such that w(j) increases each time x_j is added to S, in order to avoid loops that keep moving the same variables back and forth from S. Let q(j) be the number of times x_j became non-evaluated. A possible weight function is w(j) = q(j). More generally, we can have w(j) = q(j)^p for some exponent p (typically between 1 and 2). Of course, several details of the conflict optimizer are left open: for example, which element to choose from S, whether some random noise should be added to w, and the decision to restart the procedure from scratch after a certain time. The CSP, as is, does not apply to optimization problems. However, we can impose a maximum value k of the objective function in order to obtain a CSP. The conflict optimizer was introduced in a low makespan coordinated motion planning setting.
In that setting, the variables are the robots, the domains are their paths (of length at most k) and the constraints forbid collisions between two paths. In the graph coloring setting, the domains are the k colors of the vertices and the constraints forbid adjacent vertices from having the same color. The conflict optimizer can be adapted to non-binary CSP, but in that case multiple variables may be unassigned for a single violated constraint. The strategy has some resemblance to the similarly named min-conflicts algorithm , but notable differences are that a partial evaluation is kept instead of an invalid evaluation and the weight function that changes over time. While the conflict optimization strategy is simple, there are different ways to apply it to the graph coloring problem. The goal of the paper is to present how the top three teams applied it or complemented it with additional strategies. We compare the relative benefits of each variant on the instances given in the CG:SHOP 2022 challenge. We also compare them to baselines on some instances issued from graph coloring benchmarks. The paper is organized as follows. Section 2 presents the details of the conflict optimization strategy applied to graph coloring. In the three sections that follow, the three teams Lasa, Gitastrophe, and Shadoks present the different parameters and modified strategies that they used to make the algorithm more efficient for the CG:SHOP 2022 challenge. The last section is devoted to the experimental results. Literature Review The study of graph coloring goes back to the 4-color problem (1852) and it has been intensively studied since the 1970s (see for surveys). Many heuristics have been proposed , as well as exact algorithms . We briefly present two classes of algorithms: greedy algorithms and exact algorithms. Greedy algorithms. These algorithms are used to find good quality initial solutions in a short amount of time. The classic greedy heuristic considers the vertices in arbitrary order and colors each vertex with the smallest non-conflicting color. The two most famous modern greedy heuristics are DSATUR and Recursive Largest First (RLF ) . At each step (until all vertices are colored), DSATUR selects the vertex v that has the largest number of different colors in its neighbourhood. Ties are broken by selecting a vertex with maximum degree. The vertex v is colored with the smallest non-conflicting color. RLF searches for a large independent set I, assigns the vertices I the same color, removes I from G , and repeats until all vertices are colored. Exact algorithms. Some exact methods use a branch-and-bound strategy, for example extending the DSATUR heuristic by allowing it to backtrack . Another type of exact method (branch-and-cut-and-price) decomposes the vertex coloring problem into an iterative resolution of two sub-problems . The "master problem" maintains a small set of valid colors using a set-covering formulation. The "pricing problem" finds a new valid coloring that is promising by solving a maximum weight independent set problem. Exact algorithms are usually able to find the optimal coloring for graphs with a few hundred vertices. However, even the smallest CG:SHOP 2022 competition instances involve at least a few thousands vertices. Conflict Optimization for Graph Coloring Henceforth, we will only refer to the intersection conflict graph G induced by the instance. Vertices will refer to the vertices V (G ), and edges will refer to the edges E(G ). 
Our goal is to partition the vertices using a minimum set of k color classes C = {C 1 , . . . , C k }, where no two vertices in the same color class C i are incident to a common edge. Conflict Optimization TABUCOL inspired neighbourhood One classical approach for the vertex coloring involves allowing solutions with conflicting vertices (two adjacent vertices with the same color). It was introduced in 1987 and called TABUCOL. It starts with an initial solution, removes a color (usually the one with the least number of vertices), and assigns uncolored vertices with a new color among the remaining ones. This is likely to lead to some conflicts (i.e. two adjacent vertices sharing a same color). The local search scheme selects a conflicting vertex, and tries to swap its color, choosing the new coloring that minimises the number of conflicts. If it reaches a state with no conflict, it provides a solution with one color less than the initial solution. The process is repeated until the stopping criterion is met. While the original TABUCOL algorithm includes a "tabu-list" mechanism to avoid cycling, it is not always sufficient, and requires some hyper-parameter tuning in order to obtain a good performance on a large variety of instances. To overcome this issue, we use a neighbourhood, but replace the "tabu-list" by the conflict optimizer scheme presented above. PARTIALCOL inspired neighbourhood PARTIALCOL another local search algorithm solving the vertex coloring problem was introduced in 2008. This algorithm proposes a new local search scheme that allows partial coloring (thus allowing uncolored vertices). The goal is to minimize the number of uncolored vertices. Similarly to TABUCOL, PARTIALCOL starts with an initial solution, removes one color (unassigning its vertices), and performs local search iterations until no vertex is left uncolored. When coloring a vertex, the adjacent conflicting vertices are uncolored. Then, the algorithm repeats the process until all vertices are colored, or the stopping criterion is met. This neighbourhood was also introduced alongside a tabu-search procedure. The tabu-search scheme is also replaced by a conflict-optimization scheme. Note that this neighbourhood was predominantly used by the other teams. Finding Initial Solutions Lasa team used two approaches to find initial solutions: 1. DSATUR is the classical graph coloring algorithm presented in Section 1. 2. Orientation greedy is almost the only algorithm where the geometry of the segments is used. If segments are almost parallel, it is likely that they do not intersect (thus forming an independent set). This greedy algorithm first sorts the segments by orientation, ranging from − π 2 to π 2 . For each segment in this order, the algorithm tries to color it using the first available color. If no color has been found, a new color is created for coloring the considered segment. This algorithm is efficient, produces interesting initial solutions and takes into account the specificities of the competition. Solution Initialization The gitastrophe team uses the traditional greedy algorithm of Welsh and Powell to obtain initial solutions: order the vertices in decreasing order of degree, and assign each vertex the minimum-label color not used by its neighbors. 
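As an illustration of this initialization step, here is a minimal sketch of the Welsh-Powell greedy coloring just described: vertices are visited in decreasing order of degree and each one receives the smallest color not used by an already-colored neighbor. The adjacency-list representation and the function name are assumptions made for this sketch; it is not any team's actual initialization code.

def welsh_powell(adj):
    # adj: dict mapping each vertex to the set of its neighbors
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Toy usage: a triangle with a pendant vertex needs 3 colors.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(max(welsh_powell(adj).values()) + 1)  # 3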
During the challenge Gitastrophe attempted to use different orderings for the greedy algorithm, such as sorting by the slope of the line segment associated with each vertex (like the orientation greedy initialization presented in Section 3), and also tried numerous other strategies. Ultimately, after running the solution optimizer for approximately the same amount of time, all initializations resulted in an equal number of colors. Modifications to the Conflict Optimizer Taking inspiration from memetic algorithms, which alternate between an intensification and a diversification stage, the algorithm continually switched between a phase using the above conflict score and one minimizing only the number of conflicts. Thus, during the conflict-minimization phase, the random variables f(C_j) and w(u) are both fixed equal to 1, leading to a conflict score that simply counts the conflicting vertices. Each phase lasted for 10^5 iterations. Adding the conflict-minimization phase gave minor improvements on some of the challenge instances. Shadoks In this section, we describe the choices used by the Shadoks team for the options described in Section 2.1. The Shadoks generally chose to eliminate the color with the smallest number of elements. However, if the multistart option is toggled on, then a random color is used each time. The conflict set S is stored in a queue. The Shadoks tried other strategies, but found that the queue gives the best results. The weight function used is w(u) = 1 + q(u)^p, mostly with p = 1.2. The effect of the parameter p is shown in Fig. . Notice that in all figures, the number of colors shown is the average of ten executions of the code using different random seeds. The algorithm uses σ = 0.15, easy vertices, q_max = 59022, but does not use BDFS nor any clique. If q(u) is larger than a threshold q_max, the Shadoks set w(u) = ∞ so that the vertex u never reenters S. If at some point an uncolored vertex v is adjacent to some vertex u of infinite weight in every color class, then the conflict optimizer is restarted. When restarting, the initial coloring is shuffled by moving some vertices from their initial color class to a new one. Looking at Fig. , the value of q_max does not seem to have much influence as long as it is not too small. Throughout the challenge the Shadoks almost exclusively used q_max = 2000 · (75000/m)^2, where m is the number of vertices. This value roughly ensures a restart every few hours. The Shadoks use the function f as a Gaussian random variable of mean 1 and variance σ. A good default value is σ = 0.15. The effect of the variance is shown in Fig. . Notice that setting σ = 0 gives much worse results. Option (e) The goal of BDFS is to further optimize very good solutions that the conflict optimizer is not able to improve otherwise. Fig. shows the influence of BDFS. While the advantages of BDFS cannot be noticed in this figure, its use near the end of the challenge improved about 30 solutions. The bounded depth-first search (BDFS) algorithm tries to improve the dequeuing process. The goal is to prevent a vertex that is in conflict with some adjacent colored vertices from entering the conflict set. At the first level, the algorithm searches for a recoloring of some adjacent vertices which allows us to directly recolor the conflict vertex. If no solution is found, the algorithm
could recolor some vertices at larger distances from the conflict vertex. To do so, a local search is performed by trying to recolor vertices at a bounded distance from the conflict vertex in the current partial solution. The BDFS algorithm has two parameters: an adjacency bound a_max and a depth d. In order to recolor a vertex v, BDFS gets the set C of color classes having at most a_max neighbors of v. If some class in C has no neighbor of v, then v is assigned to that class. Otherwise, for each class C' ∈ C, BDFS tries to recolor the vertices of C' which are adjacent to v by recursively calling itself with depth d − 1. At depth d = 0 the algorithm stops trying to color the vertices. During the challenge the Shadoks used BDFS with parameters a_max = 3 and d = 3. The depth was increased to 5 (resp. 7) when the number of vertices in the queue was 2 (resp. 1). Degeneracy order Given a target number of colors k, we call easy vertices a set of vertices Y such that, if the remainder of the vertices of G are colored using k colors, then we are guaranteed to be able to color all vertices of G with k colors. This set is obtained using the degeneracy order Y. To obtain Y we iteratively remove from the graph a vertex v that has at most k − 1 neighbors, appending v to the end of Y. We repeat until no other vertex can be added to Y. Notice that, once we color the remainder of the graph with at least k colors, we can use a greedy coloring for Y in order from last to first without increasing the number of colors used. Removing the easy vertices reduces the total number of vertices, making the conflict optimizer more effective. The Shadoks always toggle this option on (the challenge instances contain from 0 to 23% easy vertices). Results We provide the results of the experiments performed with the code from the three teams on two classes of instances. First, we present the results on some selected CG:SHOP 2022 instances. These instances are intersection graphs of line segments. Second, we execute the code on graphs that are not intersection graphs, namely the classic DIMACS graphs, comparing the results of our conflict optimizer implementations to previous solutions. The source code for the three teams is available at: • Lasa: https://github.com/librallu/dogs-color • Gitastrophe: https://github.com/jacketsj/cgshop2022-gitastrophe • Shadoks: https://github.com/gfonsecabr/shadoks-CGSHOP2022 CG:SHOP 2022 Instances We selected 14 instances (out of 225) covering the different types of instances given in the CG:SHOP 2022 challenge. The results are presented in Table . For comparison, we executed the HEAD code on some instances using the default parameters. The table shows the smallest number of colors for which HEAD found a solution. We ran HEAD for 1 hour of repetitions for each target number of colors on a single CPU core (the HEAD solver takes the target number of colors as a parameter, and we increased this parameter one by one). At the end of the challenge, 8 colorings computed by Lasa, 11 colorings computed by Gitastrophe, and 23 colorings computed by Shadoks over 225 instances have been proved optimal (their number of colors is equal to the size of a clique). In order to compare the efficiency of the algorithms, we executed the different implementations on the CG:SHOP instance vispecn13806. The edge density of this graph is 19%, the largest clique that we found has 177 vertices, and the best coloring found during the challenge uses 218 colors.
Notice that vispecn13806 is the same instance used in other Shadoks experiments in Section 5. Notice also that HEAD algorithm provides 283 colors after one hour compared to less than 240 colors for the conflict optimizers. We ran the three implementations on three different servers and compared the results shown in Figure . For each implementation, the x coordinate is the running time in hours, while the y coordinate is the smallest number of colors found at that time. Results on DIMACS Graphs We tested the implementation of each team on the DIMACS instances to gauge the performance of the conflict optimizer on other classes of graphs. We compared our results to the best known bounds and to the state of the art coloring algorithms HEAD and QACOL . The time limit for Lasa's algorithms is 1 hour. CWLS is Lasa's conflict optimizer with the neighbourhood presented in TABUCOL , while PWLS is the optimizer with the neighbourhood presented in PARTIALCOL . Gitastrophe algorithm ran 10 minutes after which the number of colors no longer decreases. Shadoks algorithm ran for 1 hour without the BDFS option (results with BDFS are worse). Results are presented in Table . We only kept the difficult DIMACS instances. For the other instances, all the results match the best known bounds. The DIMACS instances had comparatively few edges (on the order of thousands or millions); the largest intersection graphs considered in the CG:SHOP challenge had over 1.5 billion edges. We notice that the conflict optimizer works extremely poorly on random graphs, but it is fast and appears to perform well on geometric graphs (r250.5, r1000.1c, r1000.5, dsjr500.1c and dsjr500.5), matching the best-known results . Interestingly, these geometric graphs are not intersection graphs as in the CG:SHOP challenge, but are generated based on a distance threshold. On the DIMACS graphs, Lasa implementation shows better performance than the other implementations.
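To make the shared strategy concrete, the following is a minimal sketch of a PARTIALCOL-style conflict optimizer for graph coloring as described in this paper: uncolored vertices wait in a queue, and recoloring a vertex picks the color class that minimizes the total weight w(u) = 1 + q(u)^p of the neighbors it would conflict with, which are then uncolored in turn. The data representation, parameter defaults, and the simple restart-free loop are assumptions made for this sketch; the multistart, q_max cutoff, Gaussian noise f, BDFS, and easy-vertex reduction used by the teams are deliberately omitted.

import random
from collections import deque

def conflict_optimizer(adj, k, p=1.2, max_steps=100_000, seed=0):
    # adj: dict mapping each vertex to the set of its neighbors; k: target number of colors.
    # Returns a proper k-coloring (dict vertex -> color) or None if max_steps is exhausted.
    rng = random.Random(seed)
    color = {v: rng.randrange(k) for v in adj}
    q = {v: 0 for v in adj}  # how many times each vertex has been uncolored
    S = deque(v for v in adj if any(color[u] == color[v] for u in adj[v]))
    for v in S:
        color.pop(v, None)   # drop all conflicting vertices: the rest is a proper partial coloring
    for _ in range(max_steps):
        if not S:
            return color
        v = S.popleft()
        best_c, best_cost, best_conflicts = None, float("inf"), []
        for c in range(k):
            conflicts = [u for u in adj[v] if color.get(u) == c]
            cost = sum(1 + q[u] ** p for u in conflicts)
            if cost < best_cost:
                best_c, best_cost, best_conflicts = c, cost, conflicts
        color[v] = best_c
        for u in best_conflicts:  # uncolor the conflicting neighbors and penalize them
            del color[u]
            q[u] += 1
            S.append(u)
    return None

# Toy usage: a 4-cycle with a chord needs 3 colors.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
print(conflict_optimizer(adj, k=3))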
What are the three teams that used conflict optimization in the challenge?
Lasa, Gitastrophe, and Shadoks.
3,791
multifieldqa_en
4k
\section{Introduction and main results} In this note we are interested in the existence versus non-existence of stable sub- and super-solutions of equations of the form \begin{equation} \label{eq1} -div( \omega_1(x) \nabla u ) = \omega_2(x) f(u) \qquad \mbox{in $ {\mathbb{R}}^N$,} \end{equation} where $f(u)$ is one of the following non-linearities: $e^u$, $ u^p$ where $ p>1$ and $ -u^{-p}$ where $ p>0$. We assume that $ \omega_1(x)$ and $ \omega_2(x)$, which we call \emph{weights}, are smooth positive functions (we allow $ \omega_2$ to be zero at say a point) and which satisfy various growth conditions at $ \infty$. Recall that we say that a solution $ u $ of $ -\Delta u = f(u)$ in $ {\mathbb{R}}^N$ is stable provided \[ \int f'(u) \psi^2 \le \int | \nabla \psi|^2, \qquad \forall \psi \in C_c^2,\] where $ C_c^2$ is the set of $ C^2$ functions defined on $ {\mathbb{R}}^N$ with compact support. Note that the stability of $u$ is just saying that the second variation at $u$ of the energy associated with the equation is non-negative. In our setting this becomes: We say a $C^2$ sub/super-solution $u$ of (\ref{eq1}) is \emph{stable} provided \begin{equation} \label{stable} \int \omega_2 f'(u) \psi^2 \le \int \omega_1 | \nabla \psi|^2 \qquad \forall \psi \in C_c^2. \end{equation} One should note that (\ref{eq1}) can be re-written as \begin{equation*} - \Delta u + \nabla \gamma(x) \cdot \nabla u ={ \omega_2}/{\omega_1}\ f(u) \qquad \text{ in $ \mathbb{R}^N$}, \end{equation*} where $\gamma = - \log( \omega_1)$ and on occasion we shall take this point of view. \begin{remark} \label{triv} Note that if $ \omega_1$ has enough integrability then it is immediate that if $u$ is a stable solution of (\ref{eq1}) we have $ \int \omega_2 f'(u) =0 $ (provided $f$ is increasing). To see this let $ 0 \le \psi \le 1$ be supported in a ball of radius $2R$ centered at the origin ($B_{2R}$) with $ \psi =1$ on $ B_R$ and such that $ | \nabla \psi | \le \frac{C}{R}$ where $ C>0$ is independent of $ R$. Putting this $ \psi$ into $ (\ref{stable})$ one obtains \[ \int_{B_R} \omega_2 f'(u) \le \frac{C}{R^2} \int_{R < |x| <2R} \omega_1,\] and so if the right hand side goes to zero as $ R \rightarrow \infty$ we have the desired result. \end{remark} The existence versus non-existence of stable solutions of $ -\Delta u = f(u)$ in $ {\mathbb{R}}^N$ or $ -\Delta u = g(x) f(u)$ in $ {\mathbb{R}}^N$ is now quite well understood, see \cite{dancer1, farina1, egg, zz, f2, f3, wei, ces, e1, e2}. We remark that some of these results are examining the case where $ \Delta $ is replaced with $ \Delta_p$ (the $p$-Laplacian) and also in many cases the authors are interested in finite Morse index solutions or solutions which are stable outside a compact set. Much of the interest in these Liouville type theorems stems from the fact that the non-existence of a stable solution is related to the existence of a priori estimates for stable solutions of a related equation on a bounded domain. In \cite{Ni} equations similar to $ -\Delta u = |x|^\alpha u^p$ where examined on the unit ball in $ {\mathbb{R}}^N$ with zero Dirichlet boundary conditions. There it was shown that for $ \alpha >0$ that one can obtain positive solutions for $ p $ supercritical with respect to Sobolev embedding and so one can view that the term $ |x|^\alpha$ is restoring some compactness. 
A similar feature happens for equations of the form \[ -\Delta u = |x|^\alpha f(u) \qquad \mbox{in $ {\mathbb{R}}^N$};\] the value of $ \alpha$ can vastly alter the existence versus non-existence of a stable solution, see \cite{e1, ces, e2, zz, egg}. We now come to our main results and for this we need to define a few quantities: \begin{eqnarray*} I_G&:=& R^{-4t-2} \int_{ R < |x|<2R} \frac{ \omega_1^{2t+1}}{\omega_2^{2t}}dx , \\ J_G&:=& R^{-2t-1} \int_{R < |x| <2R} \frac{| \nabla \omega_1|^{2t+1} }{\omega_2^{2t}} dx ,\\I_L&:=& R^\frac{-2(2t+p-1)}{p-1} \int_{R<|x|<2R }{ \left( \frac{w_1^{p+2t-1}}{w_2^{2t}} \right)^{\frac{1}{p-1} } } dx,\\ J_L&:= &R^{-\frac{p+2t-1}{p-1} } \int_{R<|x|<2R }{ \left( \frac{|\nabla w_1|^{p+2t-1}}{w_2^{2t}} \right)^{\frac{1}{p-1} } } dx,\\ I_M &:=& R^{-2\frac{p+2t+1}{p+1} } \int_{R<|x|<2R }{ \left( \frac{w_1^{p+2t+1}}{w_2^{2t}} \right)^{\frac{1}{p+1} } } \ dx, \\ J_M &:= & R^{-\frac{p+2t+1}{p+1} } \int_{R<|x|<2R }{ \left( \frac{|\nabla w_1|^{p+2t+1}}{w_2^{2t}} \right)^{\frac{1}{p+1} } } dx. \end{eqnarray*} The three equations we examine are \[ -div( \omega_1 \nabla u ) = \omega_2 e^u \qquad \mbox{ in $ {\mathbb{R}}^N$ } \quad (G), \] \[ -div( \omega_1 \nabla u ) = \omega_2 u^p \qquad \mbox{ in $ {\mathbb{R}}^N$ } \quad (L), \] \[ -div( \omega_1 \nabla u ) = - \omega_2 u^{-p} \qquad \mbox{ in $ {\mathbb{R}}^N$ } \quad (M),\] and where we restrict $(L)$ to the case $ p>1$ and $(M)$ to $ p>0$. By solution we always mean a $C^2$ solution. We now come to our main results in terms of abstract $ \omega_1 $ and $ \omega_2$. We remark that our approach to non-existence of stable solutions is the approach due to Farina, see \cite{f2,f3,farina1}. \begin{thm} \label{main_non_exist} \begin{enumerate} \item There is no stable sub-solution of $(G)$ if $ I_G, J_G \rightarrow 0$ as $ R \rightarrow \infty$ for some $0<t<2$. \item There is no positive stable sub-solution (super-solution) of $(L)$ if $ I_L,J_L \rightarrow 0$ as $ R \rightarrow \infty$ for some $p- \sqrt{p(p-1)} < t<p+\sqrt{p(p-1)} $ ($0<t<\frac{1}{2}$). \item There is no positive stable super-solution of (M) if $ I_M,J_M \rightarrow 0$ as $ R \rightarrow \infty$ for some $0<t<p+\sqrt{p(p+1)}$. \end{enumerate} \end{thm} If we assume that $ \omega_1$ has some monotonicity we can do better. We will assume that the monotonicity conditions is satisfied for big $x$ but really all ones needs is for it to be satisfied on a suitable sequence of annuli. \begin{thm} \label{mono} \begin{enumerate} \item There is no stable sub-solution of $(G)$ with $ \nabla \omega_1(x) \cdot x \le 0$ for big $x$ if $ I_G \rightarrow 0$ as $ R \rightarrow \infty$ for some $0<t<2$. \item There is no positive stable sub-solution of $(L)$ provided $ I_L \rightarrow 0$ as $ R \rightarrow \infty$ for either: \begin{itemize} \item some $ 1 \le t < p + \sqrt{p(p-1)}$ and $ \nabla \omega_1(x) \cdot x \le 0$ for big $x$, or \\ \item some $ p - \sqrt{p(p-1)} < t \le 1$ and $ \nabla \omega_1(x) \cdot x \ge 0$ for big $ x$. \end{itemize} There is no positive super-solution of $(L)$ provided $ I_L \rightarrow 0$ as $ R \rightarrow \infty$ for some $ 0 < t < \frac{1}{2}$ and $ \nabla \omega_1(x) \cdot x \le 0$ for big $x$. \item There is no positive stable super-solution of $(M)$ provided $ I_M \rightarrow 0$ as $ R \rightarrow \infty$ for some $0<t<p+\sqrt{p(p+1)}$. \end{enumerate} \end{thm} \begin{cor} \label{thing} Suppose $ \omega_1 \le C \omega_2$ for big $ x$, $ \omega_2 \in L^\infty$, $ \nabla \omega_1(x) \cdot x \le 0$ for big $ x$. 
\begin{enumerate} \item There is no stable sub-solution of $(G)$ if $ N \le 9$. \item There is no positive stable sub-solution of $(L)$ if $$N<2+\frac{4}{p-1} \left( p+\sqrt{p(p-1)} \right).$$ \item There is no positive stable super-solution of $(M)$ if $$N<2+\frac{4}{p+1} \left( p+\sqrt{p(p+1)} \right).$$ \end{enumerate} If one takes $ \omega_1=\omega_2=1$ in the above corollary, the results obtained for $(G)$ and $(L)$, and for some values of $p$ in $(M)$, are optimal, see \cite{f2,f3,zz}. We now drop all monotonicity conditions on $ \omega_1$. \begin{cor} \label{po} Suppose $ \omega_1 \le C \omega_2$ for big $x$, $ \omega_2 \in L^\infty$, $ | \nabla \omega_1| \le C \omega_2$ for big $x$. \begin{enumerate} \item There is no stable sub-solution of $(G)$ if $ N \le 4$. \item There is no positive stable sub-solution of $(L)$ if $$N<1+\frac{2}{p-1} \left( p+\sqrt{p(p-1)} \right).$$ \item There is no positive super-solution of $(M)$ if $$N<1+\frac{2}{p+1} \left( p+\sqrt{p(p+1)} \right).$$ \end{enumerate} \end{cor} Some of the conditions on $ \omega_i$ in Corollary \ref{po} seem somewhat artificial. If we shift over to the advection equation (and we take $ \omega_1=\omega_2$ for simplicity) \[ -\Delta u + \nabla \gamma \cdot \nabla u = f(u), \] the conditions on $ \gamma$ become: $ \gamma$ is bounded from below and has a bounded gradient. In what follows we examine the case where $ \omega_1(x) = (|x|^2 +1)^\frac{\alpha}{2}$ and $ \omega_2(x)= g(x) (|x|^2 +1)^\frac{\beta}{2}$, where $ g(x)$ is smooth, positive except possibly at a point, and satisfies $ \lim_{|x| \rightarrow \infty} g(x) = C \in (0,\infty)$. For this class of weights we can essentially obtain optimal results. \begin{thm} \label{alpha_beta} Take $ \omega_1 $ and $ \omega_2$ as above. \begin{enumerate} \item If $ N+ \alpha - 2 <0$ then there is no stable sub-solution for $(G)$, $(L)$ (here we require it to be positive) and in the case of $(M)$ there is no positive stable super-solution. This is the trivial case, see Remark \ref{triv}. \\ \textbf{Assumption:} For the remaining cases we assume that $ N + \alpha -2 > 0$. \item If $N+\alpha-2<4(\beta-\alpha+2)$ then there is no stable sub-solution for $ (G)$. \item If $N+\alpha-2<\frac{ 2(\beta-\alpha+2) }{p-1} \left( p+\sqrt{p(p-1)} \right)$ then there is no positive stable sub-solution of $(L)$. \item If $N+\alpha-2<\frac{2(\beta-\alpha+2) }{p+1} \left( p+\sqrt{p(p+1)} \right)$ then there is no positive stable super-solution of $(M)$. \item Furthermore, items 2, 3 and 4 are optimal in the sense that if $ N + \alpha -2 > 0$ and the remaining inequality fails (and, in addition, we do not have equality in that inequality), then we can find a suitable function $ g(x)$ which satisfies the above properties and a stable sub/super-solution $u$ for the appropriate equation. \end{enumerate} \end{thm} \begin{remark} Many of the above results can be extended to allow equality, either in $ N + \alpha - 2 \ge 0$ or in the other inequality, which depends on the equation under examination. We omit the details because one cannot prove the results in a unified way. \end{remark} In showing that an explicit solution is stable we will need the weighted Hardy inequality given in \cite{craig}. \begin{lemma} \label{Har} Suppose $ E>0$ is a smooth function.
Then one has \[ (\tau-\frac{1}{2})^2 \int E^{2\tau-2} | \nabla E|^2 \phi^2 + (\frac{1}{2}-\tau) \int (-\Delta E) E^{2\tau-1} \phi^2 \le \int E^{2\tau} | \nabla \phi|^2,\] for all $ \phi \in C_c^\infty({\mathbb{R}}^N)$ and $ \tau \in {\mathbb{R}}$. \end{lemma} By picking an appropriate function $E$ this gives, \begin{cor} \label{Hardy} For all $ \phi \in C_c^\infty$ and $ t , \alpha \in {\mathbb{R}}$. We have \begin{eqnarray*} \int (1+|x|^2)^\frac{\alpha}{2} |\nabla\phi|^2 &\ge& (t+\frac{\alpha}{2})^2 \int |x|^2 (1+|x|^2)^{-2+\frac{\alpha}{2}}\phi^2\\ &&+(t+\frac{\alpha}{2})\int (N-2(t+1) \frac{|x|^2}{1+|x|^2}) (1+|x|^2)^{-1+\frac{\alpha} {2}} \phi^2. \end{eqnarray*} \end{cor} \section{Proof of main results} \textbf{ Proof of Theorem \ref{main_non_exist}.} (1). Suppose $ u$ is a stable sub-solution of $(G)$ with $ I_G,J_G \rightarrow 0$ as $ R \rightarrow \infty$ and let $ 0 \le \phi \le 1$ denote a smooth compactly supported function. Put $ \psi:= e^{tu} \phi$ into (\ref{stable}), where $ 0 <t<2$, to arrive at \begin{eqnarray*} \int \omega_2 e^{(2t+1)u} \phi^2 &\le & t^2 \int \omega_1 e^{2tu} | \nabla u|^2 \phi^2 \\ && +\int \omega_1 e^{2tu}|\nabla \phi|^2 + 2 t \int \omega_1 e^{2tu} \phi \nabla u \cdot \nabla \phi. \end{eqnarray*} Now multiply $(G)$ by $ e^{2tu} \phi^2$ and integrate by parts to arrive at \[ 2t \int \omega_1 e^{2tu} | \nabla u|^2 \phi^2 \le \int \omega_2 e^{(2t+1) u} \phi^2 - 2 \int \omega_1 e^{2tu} \phi \nabla u \cdot \nabla \phi,\] and now if one equates like terms they arrive at \begin{eqnarray} \label{start} \frac{(2-t)}{2} \int \omega_2 e^{(2t+1) u} \phi^2 & \le & \int \omega_1 e^{2tu} \left( | \nabla \phi |^2 - \frac{ \Delta \phi}{2} \right) dx \nonumber \\ && - \frac{1}{2} \int e^{2tu} \phi \nabla \omega_1 \cdot \nabla \phi. \end{eqnarray} Now substitute $ \phi^m$ into this inequality for $ \phi$ where $ m $ is a big integer to obtain \begin{eqnarray} \label{start_1} \frac{(2-t)}{2} \int \omega_2 e^{(2t+1) u} \phi^{2m} & \le & C_m \int \omega_1 e^{2tu} \phi^{2m-2} \left( | \nabla \phi |^2 + \phi |\Delta \phi| \right) dx \nonumber \\ && - D_m \int e^{2tu} \phi^{2m-1} \nabla \omega_1 \cdot \nabla \phi \end{eqnarray} where $ C_m$ and $ D_m$ are positive constants just depending on $m$. We now estimate the terms on the right but we mention that when ones assume the appropriate monotonicity on $ \omega_1$ it is the last integral on the right which one is able to drop. \begin{eqnarray*} \int \omega_1 e^{2tu} \phi^{2m-2} | \nabla \phi|^2 & = & \int \omega_2^\frac{2t}{2t+1} e^{2tu} \phi^{2m-2} \frac{ \omega_1 }{\omega_2^\frac{2t}{2t+1}} | \nabla \phi|^2 \\ & \le & \left( \int \omega_2 e^{(2t+1) u} \phi^{(2m-2) \frac{(2t+1)}{2t}} dx \right)^\frac{2t}{2t+1}\\ &&\left( \int \frac{ \omega_1 ^{2t+1}}{\omega_2^{2t}} | \nabla \phi |^{2(2t+1)} \right)^\frac{1}{2t+1}. \end{eqnarray*} Now, for fixed $ 0 <t<2$ we can take $ m $ big enough so $ (2m-2) \frac{(2t+1)}{2t} \ge 2m $ and since $ 0 \le \phi \le 1$ this allows us to replace the power on $ \phi$ in the first term on the right with $2m$ and hence we obtain \begin{equation} \label{three} \int \omega_1 e^{2tu} \phi^{2m-2} | \nabla \phi|^2 \le \left( \int \omega_2 e^{(2t+1) u} \phi^{2m} dx \right)^\frac{2t}{2t+1} \left( \int \frac{ \omega_1 ^{2t+1}}{\omega_2^{2t}} | \nabla \phi |^{2(2t+1)} \right)^\frac{1}{2t+1}. 
\end{equation} We now take the test functions $ \phi$ to be such that $ 0 \le \phi \le 1$ with $ \phi $ supported in the ball $ B_{2R}$ with $ \phi = 1 $ on $ B_R$ and $ | \nabla \phi | \le \frac{C}{R}$ where $ C>0$ is independent of $ R$. With this choice of $ \phi$ we obtain \begin{equation} \label{four} \int \omega_1 e^{2tu} \phi^{2m-2} | \nabla \phi |^2 \le \left( \int \omega_2 e^{(2t+1)u} \phi^{2m} \right)^\frac{2t}{2t+1} I_G^\frac{1}{2t+1}. \end{equation} One similarly shows that \[ \int \omega_1 e^{2tu} \phi^{2m-1} | \Delta \phi| \le \left( \int \omega_2 e^{(2t+1)u} \phi^{2m} \right)^\frac{2t}{2t+1} I_G^\frac{1}{2t+1}.\] So, combining the results we obtain \begin{eqnarray} \label{last} \nonumber \frac{(2-t)}{2} \int \omega_2 e^{(2t+1) u} \phi^{2m} &\le& C_m \left( \int \omega_2 e^{(2t+1) u} \phi^{2m} dx \right)^\frac{2t}{2t+1} I_G^\frac{1}{2t+1}\\ &&- D_m \int e^{2tu} \phi^{2m-1} \nabla \omega_1 \cdot \nabla \phi. \end{eqnarray} We now estimate this last term. A similar argument using H\"{o}lder's inequality shows that \[ \int e^{2tu} \phi^{2m-1} | \nabla \omega_1| | \nabla \phi| \le \left( \int \omega_2 \phi^{2m} e^{(2t+1) u} dx \right)^\frac{2t}{2t+1} J_G^\frac{1}{2t+1}. \] Combining the results gives that \begin{equation} \label{last} (2-t) \left( \int \omega_2 e^{(2t+1) u} \phi^{2m} dx \right)^\frac{1}{2t+1} \le I_G^\frac{1}{2t+1} + J_G^\frac{1}{2t+1}, \end{equation} and now we send $ R \rightarrow \infty$ and use the fact that $ I_G, J_G \rightarrow 0$ as $ R \rightarrow \infty$ to see that \[ \int \omega_2 e^{(2t+1) u} =0, \] which is clearly a contradiction. Hence there is no stable sub-solution of $(G)$. (2). Suppose that $u >0$ is a stable sub-solution (super-solution) of $(L)$. Then a calculation similar to that in (1) shows that for $ p - \sqrt{p(p-1)} <t < p + \sqrt{p(p-1)}$, $( 0 <t<\frac{1}{2})$ one has \begin{eqnarray} \label{shit} (p -\frac{t^2}{2t-1} )\int \omega_2 u^{2t+p-1} \phi^{2m} & \le & D_m \int \omega_1 u^{2t} \phi^{2(m-1)} (|\nabla\phi|^2 +\phi |\Delta \phi |) \nonumber \\ && +C_m \frac{(1-t)}{2(2t-1)} \int u^{2t} \phi^{2m-1}\nabla \omega_1 \cdot \nabla \phi. \end{eqnarray} One now applies H\"{o}lder's argument as in (1), but the terms $ I_L$ and $J_L$ will appear on the right hand side of the resulting inequality. This shift from a sub-solution to a super-solution depending on whether $ t >\frac{1}{2}$ or $ t < \frac{1}{2}$ results from the sign change of $ 2t-1$ at $ t = \frac{1}{2}$. We leave the details for the reader. (3). This case is also similar to (1) and (2). \hfill $ \Box$ \textbf{Proof of Theorem \ref{mono}.} (1). Again we suppose there is a stable sub-solution $u$ of $(G)$. Our starting point is (\ref{start_1}) and we wish to be able to drop the term \[ - D_m \int e^{2tu} \phi^{2m-1} \nabla \omega_1 \cdot \nabla \phi, \] from (\ref{start_1}). We can choose $ \phi$ as in the proof of Theorem \ref{main_non_exist} but also such that $ \nabla \phi(x) = - C(x) x$ where $ C(x) \ge 0$. So if we assume that $ \nabla \omega_1 \cdot x \le 0$ for big $x$ then we see that this last term is non-positive and hence we can drop it. The proof is then as before, but now we only require that $ \lim_{R \rightarrow \infty} I_G=0$. (2). Suppose that $ u >0$ is a stable sub-solution of $(L)$ and so (\ref{shit}) holds for all $ p - \sqrt{p(p-1)} <t< p + \sqrt{p(p-1)}$. Now we wish to use monotonicity to drop the term from (\ref{shit}) involving $ \nabla \omega_1 \cdot \nabla \phi$.
$ \phi$ is chosen the same as in (1), but here one notes that the coefficient of this term changes sign at $ t=1$ and hence, by restricting $t$ to the appropriate side of $1$ (along with the above condition on $t$ and $\omega_1$), we can drop the last term depending on which monotonicity we have; hence to obtain a contradiction we only require that $ \lim_{R \rightarrow \infty} I_L =0$. The result for the non-existence of a stable super-solution is similar, but here one restricts $ 0 < t < \frac{1}{2}$. (3). The proof here is similar to (1) and (2) and we omit the details. \hfill $\Box$ \textbf{Proof of Corollary \ref{thing}.} We suppose that $ \omega_1 \le C \omega_2$ for big $ x$, $ \omega_2 \in L^\infty$, and $ \nabla \omega_1(x) \cdot x \le 0$ for big $ x$. \\ (1). Since $ \nabla \omega_1 \cdot x \le 0$ for big $x$ we can apply Theorem \ref{mono} to show the non-existence of a stable sub-solution of $(G)$. Note that with the above assumptions on $ \omega_i$ we have that \[ I_G \le \frac{C R^N}{R^{4t+2}}.\] For $ N \le 9$ we can take $ 0 <t<2$ but close enough to $2$ so that the right hand side goes to zero as $ R \rightarrow \infty$. Both (2) and (3) also follow directly from applying Theorem \ref{mono}. Note that one can say more about (2) by taking the multiple cases as listed in Theorem \ref{mono}, but we choose to leave this to the reader. \hfill $ \Box$ \textbf{Proof of Corollary \ref{po}.} Since we have no monotonicity conditions now, we will need both $I$ and $J$ to go to zero to show the non-existence of a stable solution. Again the results are obtained immediately by applying Theorem \ref{main_non_exist} and we prefer to omit the details. \hfill $\Box$ \textbf{Proof of Theorem \ref{alpha_beta}.} (1). If $ N + \alpha -2 <0$ then using Remark \ref{triv} one easily sees there is no stable sub-solution of $(G)$ and $(L)$ (positive for $(L)$) or a positive stable super-solution of $(M)$. So we now assume that $ N + \alpha -2 > 0$. Note that the monotonicity of $ \omega_1$ changes when $ \alpha $ changes sign, and hence one would think that we need to consider separate cases if we hope to utilize the monotonicity results. But a computation shows that in fact $ I$ and $J$ are just multiples of each other in all three cases, so it suffices to show, say, that $ \lim_{R \rightarrow \infty} I =0$. \\ (2). Note that for $ R >1$ one has \begin{eqnarray*} I_G & \le & \frac{C}{R^{4t+2}} \int_{R <|x| < 2R} |x|^{ \alpha (2t+1) - 2t \beta} \\ & \le & \frac{C}{R^{4t+2}} R^{N + \alpha (2t+1) - 2t \beta}, \end{eqnarray*} and so to show the non-existence we want to find some $ 0 <t<2$ such that $ 4t+2 > N + \alpha(2t+1) - 2 t \beta$, which is equivalent to $ 2t ( \beta - \alpha +2) > (N + \alpha -2)$. Now recall that we are assuming that $ 0 < N + \alpha -2 < 4 ( \beta - \alpha +2) $, and hence we have the desired result by taking $ t <2$ but sufficiently close to $2$. The proofs of the non-existence results for (3) and (4) are similar and we omit the details. \\ (5). We now assume that $N+\alpha-2>0$. In showing the existence of stable sub/super-solutions we need to consider $ \beta - \alpha + 2 <0$ and $ \beta - \alpha +2 >0$ separately. \begin{itemize} \item $(\beta - \alpha + 2 <0)$ Here we take $ u(x)=0$ in the case of $(G)$ and $ u=1$ in the case of $(L)$ and $(M)$. In addition we take $ g(x)=\E$. It is clear that in all cases $u$ is the appropriate sub or super-solution. The only thing one needs to check is the stability.
In all cases this reduces to showing that \[ \sigma \int (1+|x|^2)^{\frac{\alpha}{2} -1} \phi^2 \le \int (1+|x|^2)^{\frac{\alpha}{2}} | \nabla\phi |^2,\] for all $ \phi \in C_c^\infty$, where $ \sigma $ is some small positive constant; it is either $ \E$ or $ p \E$ depending on which equation we are examining. To show this we use the result from Corollary \ref{Hardy} and we drop a few positive terms to arrive at \begin{equation*} \int (1+|x|^2)^\frac{\alpha}{2} |\nabla\phi|^2\ge (t+\frac{\alpha}{2})\int \left (N-2(t+1) \frac{|x|^2}{1+|x|^2}\right) (1+|x|^2)^{-1+\frac{\alpha} {2}} \phi^2, \end{equation*} which holds for all $ \phi \in C_c^\infty$ and $ t,\alpha \in {\mathbb{R}}$. Now, since $N+\alpha-2>0$, we can choose $t$ such that $-\frac{\alpha}{2}<t<\frac{N-2}{2}$. So the integrand on the right hand side is positive, and since for small enough $\sigma$ we have \begin{equation*} \sigma \le (t+\frac{\alpha}{2})(N-2(t+1) \frac{|x|^2}{1+|x|^2}) \ \ \ \text {for all} \ \ x\in \mathbb{R}^N, \end{equation*} we get stability. \item ($\beta-\alpha+2>0$) In the case of $(G)$ we take $u(x)=-\frac{\beta-\alpha+2}{2} \ln(1+|x|^2)$ and $g(x):= (\beta-\alpha+2)(N+(\alpha-2)\frac{|x|^2}{1+|x|^2})$. By a computation one sees that $u$ is a sub-solution of $(G)$, and hence we now only need to show the stability, which amounts to showing that \begin{equation*} \int \frac{g(x)\psi^2}{(1+|x|^{2 })^{-\frac{\alpha}{2}+1}}\le \int\frac{|\nabla\psi|^2}{ (1+|x|^2)^{-\frac{\alpha}{2}} }, \end{equation*} for all $ \psi \in C_c^\infty$. To show this we use Corollary \ref{Hardy}. So we need to choose an appropriate $t$ in $-\frac{\alpha}{2}\le t\le\frac{N-2}{2}$ such that for all $x\in {\mathbb{R}}^N$ we have \begin{eqnarray*} (\beta-\alpha+2)\left( N+ (\alpha-2)\frac{|x|^2}{1+|x|^2}\right) &\le& (t+\frac{\alpha}{2})^2 \frac{ |x|^2 }{1+|x|^2}\\ &&+(t+\frac{\alpha}{2}) \left(N-2(t+1) \frac{|x|^2}{1+|x|^2}\right). \end{eqnarray*} A simple calculation shows that it suffices to have \begin{eqnarray*} (\beta-\alpha+2)&\le& (t+\frac{\alpha}{2}) \\ (\beta-\alpha+2) \left( N+ \alpha-2\right) & \le& (t+\frac{\alpha}{2}) \left(N-t-2+\frac{\alpha}{2} \right). \end{eqnarray*} If one takes $ t= \frac{N-2}{2}$ in the case where $ N \neq 2$, and $ t $ close to zero in the case $ N=2$, one easily sees that the above inequalities both hold, after considering all the constraints on $ \alpha,\beta$ and $N$. We now consider the case of $(L)$. Here one takes $g(x):=\frac {\beta-\alpha+2}{p-1}( N+ (\alpha-2-\frac{\beta-\alpha+2}{p-1}) \frac{|x|^2}{1+|x|^2})$ and $ u(x)=(1+|x|^2)^{ -\frac {\beta-\alpha+2}{2(p-1)} }$. Using essentially the same approach as in $(G)$ one shows that $u$ is a stable sub-solution of $(L)$ with this choice of $g$. \\ For the case of $(M)$ we take $u(x)=(1+|x|^2)^{ \frac {\beta-\alpha+2}{2(p+1)} }$ and $g(x):=\frac {\beta-\alpha+2}{p+1}( N+ (\alpha-2+\frac{\beta-\alpha+2}{p+1}) \frac{|x|^2}{1+|x|^2})$. \end{itemize} \hfill $ \Box$
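As a quick consistency check on the exponents in Theorem \ref{alpha_beta} (a routine specialization; the choice $ \alpha=\beta=0$ and $ g \equiv 1$ below is assumed only for this comparison), the conditions reduce to those of Corollary \ref{thing} in the unweighted case.
\begin{remark}
Take $ \alpha=\beta=0$ and $ g \equiv 1$, so that $ \omega_1=\omega_2=1$, and suppose $ N+\alpha-2=N-2>0$. Then part 2 of Theorem \ref{alpha_beta} reads
\[ N-2 < 4(\beta-\alpha+2)=8, \]
that is $ N<10$, i.e. $ N \le 9$, which is the condition for $(G)$ in Corollary \ref{thing}. Similarly part 3 reads
\[ N-2 < \frac{2 \cdot 2}{p-1} \left( p+\sqrt{p(p-1)} \right), \qquad \mbox{i.e.} \qquad N<2+\frac{4}{p-1} \left( p+\sqrt{p(p-1)} \right), \]
and part 4 reads $ N<2+\frac{4}{p+1} \left( p+\sqrt{p(p+1)} \right)$, matching the conditions for $(L)$ and $(M)$ in Corollary \ref{thing}.
\end{remark}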
What are the stability conditions for a solution of $-\Delta u = f(u)$?
$\int f'(u) \psi^2 \le \int | \nabla \psi|^2, \forall \psi \in C_c^2$.
3,743
multifieldqa_en
4k
A special tribute to Del Bigtree (pictured) and his team at ICAN for their stunning 88-page letter to the HHS regarding vaccine safety. As Del reported - in the latest edition of Highwire - the letter, in response to an earlier reply from the then acting Director of the National Vaccine Program Office, Melinda Wharton, took virtually a year to compile, and is a meticulous piece of research. Most sensationally they researched the HHS claim through US government archives that at least some pediatric vaccines had been trialed against genuine placebo, and came to a negative conclusion. Not only that, they established that none of the vaccines those vaccines had been trialed against had ever been trialed against genuine placebo either. At the end of the line the toxic products were only being compared with other toxic products, rather than against saline. Leaving aside the sceptics, for any believer in the vaccine program as a necessary intervention in public health, this should be a devastating finding. Fundamentally, the research into the safety of any of the products before marketing was simply not there. The manufacturers apparently had no faith that their proto-products could withstand this scrutiny, and for the rest they just did not care: under the alleged imperative of protecting the population it seems anything went. So even before all the sham monitoring procedures and reviews which Del and his team dismantle in forensic detail we are left with the proposition that none of the present products being given to US children – and frequently other children across most of the developed world – have any meaningful pre-marketing safety data at all. If you are a believer in the program you have been let down: if you wanted a program with any pretensions to safety - supposing such a thing to be possible - it looks like you would have to start from scratch. The manufacturers did this: the governments, the politicians and the regulators (internationally) let it happen. This damning document is published simultaneously with a demand in the UK from the Royal Society for Public Health (which I had never heard of) to shut down comment about vaccines on the web. It echoes calls from Seth Berkley of GAVI, Heidi Larson of the Vaccine Confidence Project and the European Parliament. The pamphlet airily dismisses concerns that vaccines have side effects or that you could possibly have too many. It is pure public relations, and while the RSPH claims to be "independent" it also admits that the publication was paid for by Merck, a detail which was reported by the British Medical Journal and the Guardian but, true to form, not by the BBC. We have, in truth, been building to this moment for two decades: as the evidence piles up that every single aspect of the program lacks integrity or is simply rotten to the core, all the perpetrators can do is call for the silencing of their critics, and maintain the products are safe because they say so. Please help give the ICAN letter the widest possible distribution, particularly to politicians. "The outcome of disease always depends both on the virulence of the pathogen and the health of the individual immune system." Nope. This makes no sense. Lots of people who seemed vibrant will get a very severe case of the same illness that a vulnerable baby overcomes in a day. And under the germ theory it doesn't matter how strong your immune system *was*. Once it's been overcome by the pathogen it is every bit as weak as anybody else's with that pathogen. What you say makes no sense.
There's no reason for me to reply to you again. "Why do you think that within a few years (not many) of the introduction of the vaccines for them, pertussis, measles, mumps, rubella, tetanus, diphtheria, Hib disease, and chickenpox (and others) almost entirely disappeared?" Why do you keep asking this question when I've already provided the answer hundreds of times? Why are you so desperate to believe the people who you already recognize are harming our children? Why would Walter Reed be any more trustworthy than Paul Offit or Senator Pan? Why would Jenner or Pasteur? And you went no way to explaining my arguments against germ theory. If we are attacked by billions of viruses every day then if even a tiny fraction of them are pathogenic then we couldn't possibly survive. And even if we could, we would already be immune rendering every vaccine pointless. Once we had survived our first few days on earth, then we could never get sick again. If that's wrong then we must conclude that precisely 0% of germs are pathogenic. Plus your comment about the immune system completely misunderstood my point. The immune system does not allow us to overcome our math problem. In fact, it makes it worse. You did provide one solitary example of a patient with what are presumably yellow fever symptoms but you didn't say whether they had been given any toxic medical treatments. And like I said before, the whole "incubation period" is more than a little suspicious. Clearly they never found what they thought they would and just rigged the results to tell them what they want to hear. Like every other germ theorist/vaccine promoter in history. Many kinds of bacteria are constantly evolving and changing, like flu viruses. Others are more stable over time, like the yellow fever virus. Those that change develop new ways of infiltrating the cells of the organism being attacked (from our point of view, from its unconscious point of view, it's just carrying out its need to replicate, which it can only do inside the cells of its host). The changes which allow it to better infiltrate are more successful and result in more viruses with those traits. Our immune system is designed to detect and destroy potentially dangerous invading pathogens. Many bacteria are usually harmless and absolutely necessary. The minority are dangerous, and most people's immune systems do a good job of analyzing them and killing them, often with no signs of disease. Others experience a clinical infection, and the immune system usually mounts a successful attack on them. The outcome of disease always depends both on the virulence of the pathogen and the health of the individual immune system. Vaccines are usually effective in giving immunity to the targeted diseases. They also have many dangers which everyone should be aware of, and vaccines should be avoided whenever possible. But in the case of the most dangerous diseases, everyone should learn about them and think about what he wants to do to protect himself and his children from them, considering all the factors involved. And no one can have 100% certainty that he has made the right decision, but that's life. But if you live in the Congo and many people around you are currently dying of yellow fever, then that means that you yourself are at risk of being bitten by a loaded mosquito and getting, often dying, of yellow fever. The yellow fever vaccine is very effective at preventing yellow fever. From there, each person must make a choice. 
At the end of this stage there is a remission of two or three days. About 80% of those with clinical disease recover at this point, with permanent immunity. The other 20% enter the toxic stage, with a return of the fever, black vomit (coffee-ground emesis), diarrhea, a slowing of the pulse (Faget's sign), jaundice, yellow eyes, yellow skin, and failure of the kidneys, liver, and heart. The patient gets a strange hiccup (like with Ebola, a related disease), falls into a coma, and dies. About half of those patients who enter the toxic stage dies, even now, even with the best of hospital care. The Faget's sign can also occur at the end of the first stage. You asked specifically about the symptoms of the Americans on Dr. Reed's team who got yellow fever in Cuba in 1900. I'll give the passage from The American Plague (162-5), which describes the course of Jesse Lazear's illness. "In his logbook, Lazear wrote an unusual entry on September 13. In all cases before those, page after page of records, Lazear had used the soldier's name and simply the date he was bitten, with no other attention to the mosquito. A one-line entry with a name and a date. On that day, however, in his elegant hand, Lazear did not write the soldier's name, but instead wrote 'Guinea Pig No. 1.' He went on to write that this guinea pig had been bitten by a mosquito that developed from an egg laid by a mosquito that developed from an egg laid by a mosquito that fed on a number of yellow fever cases: Suarez, Hernández, De Long, Ferández. It was a precise, detailed history that proved beyond doubt that the mosquito was loaded with the virus when it bit a healthy soldier...(If he had entered his name, then his death would have been considered medical suicide by the insurance company, and his wife and two children would not have gotten any payment.) For the next few days, Lazear's life continued much as it had over the last few months in Cuba. He fed and cared for the mosquitoes in the lab. ..Then he began to lose his appetite. He skipped a few meals in the mess hall. He didn't mention it to anyone, nor did he ask to see one of the yellow fever doctors; instead, he worked hard in the lab trying to ignore the oncoming headache. "On September 18, he complained of feeling 'out of sorts,' and stayed in his officer's quarters. His head pounded and L. decided to write a letter. ..(he wrote to his mother, and referred to his one-year old son Houston and the baby his wife Mabel was about to have: they were staying with his mother in the US). ..That night, L. started to feel chilled as the fever came on. He never went to sleep but worked at his desk all through the night, trying to get all the information about the mosquitoes organized. By morning, he showed all the signs of a severe attack of yellow fever. The camp doctors made the diagnosis, and L. agreed to go to the yellow fever ward. ..L. was carried by litter out of the two-room, white pine board house in which he had lived since he and Mabel first arrived in Cuba. ..(In the yellow fever ward, in a separate one-room building), Lena Warner (the immune nurse who had survived the yellow fever in 1878, when she was nine, and was found in her boarded-up house by a former slave who first thought she was dead, and carried her to safety) nursed J.L., recording his vitals. (I put up a link to his case record and vital signs last week. The surgeon general required that this record be made for every yellow fever patient.)... 
(On September 25,) Lena Warner braced L's arms with all of her weight, shouting for help. Still he bolted from the bed, darting around the small frame-wood room as wildly as a trapped insect beating against glass. Two soldiers ran into the ward, pinning L to his bed, tying restraints around his wrists and elbows. ..Warner sponged his body with iced whiskey and water. She recorded his temperature, which had held at 104 degrees for days, on the chart beside his bed. ..(Warner watched him sleep.) But the quiet did not last. L's body began to lurch, and black vomit rolled from his mouth; through the bar hanging above his hospital cot. He writhed in the bed, and his skin grew deep yellow. His 104 temperature slowly fell, leveling out 99 degrees, and JL died at 8:45 p.m. at the age of thirty-four." As is obvious, there are many problems with vaccines. But, that being said, most of them usually work for a period of time to prevent the targeted diseases. The basic science behind vaccines is correct. Why do you think that within a few years (not many) of the introduction of the vaccines for them, pertussis, measles, mumps, rubella, tetanus, diphtheria, Hib disease, and chickenpox (and others) almost entirely disappeared? In the case of the routine childhood diseases, this was a bad thing, but it is a true thing. Vaccines usually don't cause any obvious reactions, while they usually prevent the diseases, and that's why people continue to get them. With the increasing vaccination schedule, more and more are severely and permanently damaged, and it is immoral to mandate any vaccine for anyone for this reason. But it would also be immoral to prohibit vaccines for those who want them enough to take the risk. Your article said, as though it had any probative value, that 90% of those who get pertussis had been vaxxed. The old DPT vaccine was MUCH more effective at preventing pertussis, but it was so dangerous (again, not to most, but to many), that developed countries replaced it with the acellular version, DTaP. From the beginning about twenty years ago, it was clear that it was not very effective and that huge numbers of vaxxed people got pertussis anyway, including my daughter who got pertussis at eight months old after having gotten three DTaPs. The pertussis vaccine continues to be very dangerous, and I do not recommend that anyone get it. It used to be a killer disease, but evolved to become much milder, to the extent that the disease is very rarely dangerous (usually only to newborns under three months old), while the vaccine is very dangerous. And they're trying to see how they can go back to the old DPT. This does not show that vaccine science has collapsed, but rather that the vaccine they developed to replace the DPT turned out to be much less effective than they first thought, while continuing to be much more dangerous than they first thought. Your article extrapolated from that that modern medical science in general has collapsed, but that, again, is going too far. An older woman in Mexico City who is like my mother to me had a pacemaker inserted about two months ago to aid her failing heart, and it has restored her to optimism and energy, when she was despondent, weak, and close to death. I took my daughter to the dentist yesterday, who said she has three wisdom teeth coming in and that she said the lower right one was sore.
So, although I am cautious about X-rays, I made an appointment for a panoramic X-ray in a month to assess the wisdom teeth, and, if it seems appropriate, I'll take her to an oral surgeon to have one or more extracted under IV sedation, in his office, if possible (the dentist thought that it would be). And I am confident that there will be no serious problems, but this is thanks to technology and training in modern medicine that haven't been available for that long. I think that everyone should inform himself on all medical procedures before agreeing to anything, but I also think that he should have access to any medical procedure which is reasonable (and opinions can differ as to that). One problem is that you have not said how you think people should protect themselves against tetanus, bacterial meningitis, and yellow fever in the relevant cases, for example. These are diseases which healthy, well-nourished people used to die from very readily. If most people stopped vaxxing and the mortality from these diseases rose to something like pre-vaccine levels, do you think they should just accept dying from them? I put that in a separate paragraph because it is the crucial issue. balinaheuchter Air Traffic Control You Tube - Colin Campbell example of - How to "Fudge a Nudge" -"Deal" or "No Deal" "Not in a month of Sundays" "No exceptions/no compromise?" -make a trade off -do an exception- everyone gets a good deal/good outcome! Hans, you are right that we are looking at one of the biggest crimes in all history. When I read the story of that poor girl who was so healthy and is now confined to a wheelchair after getting her third Gardasil shot, I could not believe that Merck could produce such a toxic vaccine and give it out to girls like it was something they absolutely had to have, only to be misled and made into cripples. Merck should be prosecuted for the damage they have done to so many girls who got the Gardasil vaccine and were physically debilitated for life. There is a place for the people who perpetrated this crime on young girls and women and it is called hell. They have destroyed people's lives and gotten away with it. My heart goes out to those who have suffered this damage for no damn good reason except to help make huge profits for Merck! Here is the reason that the germ theory is nonsense. 1) Every day we are bombarded with billions of germs. Presumably at least some of them are of the kind that germ theorists believe are dangerous (otherwise we would have to conclude that none of them are dangerous). So how do we survive? 2) Let's just say that we ignore 1 and imagine that, by way of magic, none of the billions of viruses we get bombarded with are pathogenic but all those that are are tucked away somewhere. Ok. But presumably they reside in sick people, right? So where are there lots of sick people? Doctors' offices and hospitals! So everybody must be dying the moment they enter these places, right? 3) I love this one because I have never seen anybody else ever raise it. Under the germ theory there are no negative feedbacks. This makes a stable biological system by definition impossible. The immune system is *not* a negative feedback; it is the opposite. It actually reinforces our math problem because the immune system will weaken as the number of pathogens increases. There is no way of resolving this problem without a discontinuity. A Deus ex Machina, as The Almighty Pill so beautifully put it. So the germ theory is, quite literally, mathematically impossible.
There is as much chance of it being true as 2+2 = 5. There are plenty of other massive problems with germ theory such as why did things like SARS and bird flu magically disappear? Why do we have the symptoms that we do? Is our body controlling the symptoms to help fight the germs and if so, why would suppressing the symptoms with antibiotics or Tamiflu be considered a good idea? If the virus is causing the symptoms then why would it cause these kinds of things?
Who compiled the 88-page letter to the HHS regarding vaccine safety?
Del Bigtree and his team at ICAN.
3,150
multifieldqa_en
4k
Sir Simon William English (born 30 December 1961) is a New Zealand former National Party politician who served as the 39th prime minister of New Zealand from 2016 to 2017. He had previously served as the 17th deputy prime minister of New Zealand and minister of finance from 2008 to 2016 under John Key and the Fifth National Government. A farmer and public servant before entering politics, English was elected to the New Zealand Parliament in as the National Party's candidate in the Wallace electorate. He was elevated to Cabinet in 1996 and in 1999 was made minister of finance, although he served for less than a year due to his party's loss at the 1999 general election. In October 2001, English replaced Jenny Shipley as the leader of the National Party (and consequently as Leader of the Opposition). He led the party to its worst defeat at the 2002 general election, and as a consequence, in October 2003 he was replaced as leader by Don Brash. In November 2006, after Brash's resignation, English became deputy leader under John Key. After National's victory at the 2008 general election, he became deputy prime minister and was also made minister of finance for the second time. Under English's direction New Zealand's economy maintained steady growth during National's three terms of government. He became a list-only MP after stepping down as an electorate MP at the 2014 general election. John Key resigned as leader of the National Party and prime minister in December 2016. English won the resulting leadership election unopposed and was sworn in as prime minister on 12 December 2016. His tenure was only ten months, and included a three-month election campaign. In the 2017 general election, National won the largest number of seats but fell short of a majority. The parties holding the balance of power declined to support the existing government, and English was subsequently replaced as prime minister by Jacinda Ardern, leader of the Labour Party. English initially continued on as Leader of the Opposition, but resigned as leader of the National Party on 27 February 2018 and left parliament two weeks later. Early life English was born on 30 December 1961 at Lumsden Maternity Centre in Lumsden. He is the eleventh of twelve children of Mervyn English and Norah (née O'Brien) English. His parents purchased Rosedale, a mixed sheep and cropping farm in Dipton, Southland from Mervyn's uncle, Vincent English, a bachelor, in 1944. English was born in the maternity unit at Lumsden. English attended St Thomas's School in Winton, then boarded at St. Patrick's College in Upper Hutt, where he became head boy. He played in the first XV of the school's rugby team. English went on to study commerce at the University of Otago, where he was a resident at Selwyn College, and then completed an honours degree in English literature at Victoria University of Wellington. After finishing his studies, English returned to Dipton and farmed for a few years. From 1987 to 1989, he worked in Wellington as a policy analyst for the New Zealand Treasury, at a time when the free market policies favoured by Labour's finance minister Roger Douglas (known collectively as "Rogernomics") were being implemented. English joined the National Party in 1980, while at Victoria University. He served for a period as chairman of the Southland branch of the Young Nationals, and became a member of the Wallace electorate committee. After moving to Wellington, he served for periods on the Island Bay and Miramar electorate committees, respectively. 
Fourth National Government (1990–1999) At the 1990 general election, English stood as the National candidate in Wallace, replacing the retiring Derek Angus, and was elected with a large majority. He would hold this seat, renamed Clutha-Southland in 1996, until 2014. He and three other newly elected National MPs (Tony Ryall, Nick Smith, and Roger Sowry) were soon identified as rising stars in New Zealand politics, and at various points were dubbed the "brat pack", the "gang of four", and the "young Turks". In his first term in parliament, English chaired a select committee into social services. He was made a parliamentary under-secretary in 1993, serving under the Minister of Health. First period in cabinet (1996–1999) In early 1996, English was elevated to cabinet by Prime Minister Jim Bolger, becoming the Minister for Crown Health Enterprises and Associate Minister of Education (to Wyatt Creech). He was 34 at the time, becoming the cabinet's youngest member. After the 1996 general election, the National Party was forced into a coalition with New Zealand First to retain government. In the resulting cabinet reshuffle, English emerged as Minister of Health. However, as a condition of the coalition agreement, NZ First's Neil Kirton (a first-term MP) was made Associate Minister of Health, effectively becoming English's deputy. This arrangement was described in the press as a "shotgun marriage", and there were frequent differences of opinion between the two ministers. After their relationship became unworkable, Kirton was sacked from the role in August 1997, with the agreement of NZ First leader Winston Peters. As Minister of Health, English was responsible for continuing the reforms to the public health system that National had begun after the 1990 general election. The reforms were unpopular, and health was perceived as one of the government's weaknesses, with the health portfolio consequently being viewed as a challenge. English believed that the unpopularity of the reforms was in part due to a failure in messaging, and encouraged his National colleagues to avoid bureaucratic and money-focused language (such as references to "balance sheets" and "user charges") and instead talk about the improvements to services the government's reforms would bring. He also rejected the idea that public hospitals could be run as commercial enterprises, a view which some of his colleagues had previously promoted. By early 1997, as dissatisfaction with Bolger's leadership began to grow, English was being touted as a potential successor, along with Jenny Shipley and Doug Graham. His age (35) was viewed as the main impediment to a successful leadership run. National's leadership troubles were resolved in December 1997, when Bolger resigned and Shipley was elected to the leadership unopposed. English had been a supporter of Bolger as leader, but Shipley reappointed him Minister of Health in her new cabinet. English was promoted to Minister of Finance in a reshuffle in January 1999, a position which was at the time subordinate to the Treasurer, Bill Birch. After a few months, the pair switched positions as part of Birch's transition to retirement, with English assuming the senior portfolio. In early interviews, he emphasised his wish to be seen as a pragmatist rather than an ideologue, and said that the initiatives of some of his predecessors (Roger Douglas's "Rogernomics" and Ruth Richardson's "Ruthanasia") had focused on "fruitless, theoretical debates" when "people just want to see problems solved". 
Opposition (1999–2008) After the National Party lost the 1999 election to Helen Clark's Labour Party, English continued on in the shadow cabinet as National's spokesperson for finance. He was elected deputy leader of the party in February 2001, following the resignation of Wyatt Creech, with Gerry Brownlee being his unsuccessful opponent. Leader of the Opposition In October 2001, after months of speculation, Jenny Shipley resigned as leader of the National Party after being told she no longer had the support of the party caucus. English was elected as her replacement unopposed (with Roger Sowry as his deputy), and consequently became Leader of the Opposition. However, he did not openly organise against Shipley, and according to The Southland Times "there was almost an element of 'aw, shucks, I'll do it then' about Mr English's ascension". Aged 39 when he was elected, English became the second-youngest leader in the National Party's history, after Jim McLay (who was 38 when elected in 1984). He also became only the third Southlander to lead a major New Zealand political party, after Joseph Ward and Adam Hamilton. However, English failed to improve the party's performance. In the 2002 election, National suffered its worst electoral defeat ever, gaining barely more than twenty percent of the vote. English described it as "the worst day of my political life". Both party insiders and the general public were split as to how much to blame him for the loss, but most of the party believed that English would be able to rebuild National's support. By late 2003, however, National's performance in opinion polls remained poor. The party had briefly increased its popularity in the year following the election, but by October its support had fallen to levels only slightly better than what it achieved in the last ballot. English also appeared in a boxing match for a charity against entertainer Ted Clarke. This did not boost his polling or that of the National party either, with suggestions that it devalued his image as a serious politician. Don Brash, former governor of the Reserve Bank and a relative newcomer to politics, began to build up support to replace English. On 28 October, Brash gained sufficient backing in Caucus to defeat English in a leadership contest. Shadow cabinet roles and deputy leader On 2 November 2003, when Brash changed responsibilities for certain MPs, English became National's spokesman for education, ranked at fifth place in the party's parliamentary hierarchy. He remained in parliament after the 2005 election. In his new shadow education portfolio, English performed strongly, and remained a party favourite despite his election defeat as leader in 2002, eventually being returned to the finance portfolio in August 2004 as deputy spokesman (while still retaining responsibility for education). In November 2006, Brash resigned as leader. English was considered as a potential replacement leader (running against John Key) or deputy leader (against incumbent Gerry Brownlee) in the ensuing leadership election. However, a contest was avoided when the MPs agreed a Key/English ticket would run unopposed in a display of party unity. English took over the deputy leadership and the finance portfolio in the Key shadow cabinet. Fifth National Government (2008–2017) Deputy Prime Minister and Minister of Finance (2008–2016) At the 2008 election, English was re-elected by his electorate, winning by a margin of about 15,500 votes. 
He became Deputy Prime Minister of New Zealand and Minister of Finance in the fifth National Government, was sworn into office on 19 November 2008, and continued to serve in those roles until becoming Prime Minister on 12 December 2016. He was also made Minister of Infrastructure in National's first term of government and Minister responsible for Housing New Zealand Corporation and minister responsible for the New Zealand flag consideration process in its third. He was comfortably re-elected in Clutha-Southland in the 2011 election but opted to run as a party-list candidate in 2014. The pairing of John Key as leader of the National Party and English as his deputy has been compared to that of Bob Hawke and Paul Keating (in Australia) and Tony Blair and Gordon Brown (in the UK). English acceded to the role of Finance Minister in the continuing wake of the financial crisis. In response to New Zealand's rising debt, English made budget deficit-reduction his main priority. His first budget outlined three focuses in New Zealand's financial recovery: "improving the business environment and removing roadblocks to growth; investment in productive infrastructure; and improving the way government works". One of his first acts was creating the National Infrastructure Unit, charged with formulating a plan for infrastructure projects and investments. He commissioned a government-wide spending review, with the aim of reducing government expenditure—with the exceptions of a two-year stimulus package and long-term increases in infrastructure spending. In April 2011, the Opposition criticised English for suggesting that New Zealand businesses could use New Zealand's low wages to help it compete with Australia. The National Government campaigned for re-election in 2011 on its economic record. The Government boasted growth for five consecutive quarters up to mid-2010, totalling 1.6% of real GDP. Strong growth resulted in a surplus of $473 million for the 2015/16 financial year, projected to rise to $8.5 billion by 2020/21. In his 2016 Economic and Fiscal Update address, English stated that reducing debt and tackling the costs of the 2016 Kaikōura earthquake were higher priorities than reducing rates of tax. Allowances issue In 2009, the media, including TVNZ and TV3, revealed that English was receiving about NZ$900 a week as part of a living allowance for ministers, to live in his own NZ$1.2 million Wellington home. At the time, English also received $276,200 in his annual salary as Deputy Prime Minister. It was also revealed that other ministers with homes in the capital city were also claiming accommodation allowances. On 3 August 2009, Prime Minister John Key started a review of the housing allowances claimed by cabinet ministers. English subsequently paid back $12,000 and only claimed about $24,000 a year in living allowances. The Auditor-General's office said in September 2009 that they were making "preliminary enquiries" into parliamentary housing expenses in response to a letter of complaint from Progressive party leader Jim Anderton. Two days later English stated that he would no longer take up any housing allowance and had paid back all the allowance he had received since the November 2008 election. Prime Minister (2016–2017) John Key resigned on 12 December, and endorsed English as his successor in the resulting leadership election.
Following the drop-out of both Judith Collins and Jonathan Coleman from the leadership election, English was sworn in as the 39th Prime Minister of New Zealand on 12 December 2016. English appointed his first cabinet on 18 December. In a reshuffle, he appointed Steven Joyce to succeed him as Finance Minister, while most ministerial portfolios remained the same. In February 2017, English did not attend Waitangi Day commemorations at the historic treaty grounds, reportedly in response to the Ngāpuhi iwi's decision to stop the Prime Minister from speaking at the marae. Ngāpuhi have protested the Government's negotiation of the Trans Pacific Partnership Agreement (TPPA), which the iwi believe infringes upon Māori sovereignty, and thus does not adhere to the Treaty of Waitangi. English had been invited to attend in an official capacity; his non-attendance was criticised by a Ngāpuhi elder and Opposition leader Andrew Little. In his first overseas trip as Prime Minister, English travelled to Europe to discuss trade ties, including a prospective New Zealand–European Union free trade agreement. He first travelled to London on 13 January 2017 to meet British Prime Minister Theresa May. Discussing trade relations, English said the two nations were "natural partners" and would "continue to forge ties" after the UK's withdrawal from the EU. He also arranged to meet with London Mayor Sadiq Khan, Belgian Prime Minister Charles Michel and German Chancellor Angela Merkel. In a meeting with Merkel, English received crucial backing from Germany for a trade deal with the EU. On 16 January, English stated that his government would continue to promote TPPA, despite the United States' decision to withdraw from the agreement. He explained that Southeast Asian countries would now be treated as a priority in negotiations—he also asserted that the United States was ceding influence to China by its rejection of the trade pact. At a press conference at the Beehive on 1 February 2017, English announced that the 2017 general election would be held on 23 September. The Prime Minister later confirmed that his party would approach ACT, United Future and the Māori Party if confidence and supply agreements were required to form a government following the election. In his second cabinet reshuffle on 24 April, English appointed Gerry Brownlee as his new Foreign Affairs Minister; he also promoted Nikki Kaye to the portfolio of Education Minister, and moved Mark Mitchell into the cabinet to become Defence Minister. The reshuffle was perceived as an election preparation. On 13 February 2017, English welcomed Australian Prime Minister Malcolm Turnbull to Wellington. The two leaders reaffirmed their shared trade agenda, and discussed changes to the Australian citizenship pathway which will affect permanent residents originating from New Zealand. On 19 June, it was reported that Todd Barclay, who succeeded English as MP for Clutha-Southland, had clandestinely recorded one of his employee's conversations the previous year, and that John Key's leaders' budget was used to pay a confidential settlement after the employee resigned. English admitted that he had been aware of the illegal recording and the settlement, and thus implicated in the scandal. 
During the 2017 National campaign launch, English introduced a $379 million social investment package including digital learning academies for high school students, more resources for mathematics, and boosting support for teaching second languages in schools, and maintaining National Standards in the school curriculum. Prime Minister English also sought to defend National's financial management and economic track record and claimed that the opposition Labour Party would raise taxes. Early opinion polling had forecast a poor showing in the election for the Labour Party, but in early August 37-year-old Jacinda Ardern took over as Labour leader and seemingly energised younger voters. At the 2017 general election, National won the largest share of the party vote (44.4%) and the largest number of seats (56) in the House Representatives. However, National lacked enough seats to govern alone due to two of the party's support partners, the Māori Party and United Future, losing their parliamentary seats. In response, English stated that the party would be entering into talks to form a coalition with New Zealand First. Following talks with the two largest parties, New Zealand First entered a coalition arrangement with the Labour Party. English was succeeded as prime minister by Jacinda Ardern on 26 October. Opposition (2017–2018) Leader of the Opposition English was re-elected as National Party leader on 24 October 2017. At the time of his re-election, English announced his intention to stay on as leader until the next general election. On 13 February 2018, however, he stood down as National Party leader due to personal reasons, and instructed the party to put into motion the processes to elect a new leader. He also retired from Parliament. English's resignation followed weeks of speculation that he would step aside for a new leader. On 27 February, he was succeeded as party leader by Simon Bridges as the result of the leadership election held that day. Post-premiership In 2018, English joined the board of Australian conglomerate, Wesfarmers. English serves in Chairmanships of Mount Cook Alpine Salmon, Impact Lab Ltd and Manawanui Support Ltd. He is also a director of The Instillery, Centre for Independent Studies and The Todd Corporation Limited, and is a member of the Impact Advisory Group of Macquarie Infrastructure and Real Assets. Political and social views English is regarded as more socially conservative than his predecessor, John Key. He has stated his opposition to voluntary euthanasia and physician-assisted suicide, same-sex civil unions, and the decriminalisation of prostitution. As Prime Minister he opposed any "liberalisation" of abortion law. In 2004, English voted against a bill to establish civil unions for both same-sex and opposite-sex couples. In 2005, he voted for the Marriage (Gender Clarification) Amendment Bill, which would have amended the Marriage Act to define marriage as only between a man and a woman. English voted against the Marriage (Definition of Marriage) Amendment Bill, a bill that legalised same-sex marriage in New Zealand. However, in December 2016 he stated, "I'd probably vote differently now on the gay marriage issue. I don't think that gay marriage is a threat to anyone else's marriage". In 2009, English voted against the Misuse of Drugs (Medicinal Cannabis) Amendment Bill, a bill aimed at amending the Misuse of Drugs Act so that cannabis could be used for medical purposes. Personal life English met his future wife, Mary Scanlon, at university. 
She was studying medicine at the time, and became a general practitioner. Both her parents were immigrants, her father being Samoan and her mother Italian, born on the island of Stromboli. They have six children: a daughter and five sons. English is a practising Roman Catholic, but has stated that he considers his religious beliefs personal and thus separate from politics. In June 2002, English took part in TV3's Fight For Life, a celebrity boxing fundraiser to raise money for the Yellow Ribbon anti-youth-suicide campaign, influenced by the death of a teenage nephew in 1997. He lost a split decision to former university colleague Ted Clarke. Honours In the 2018 Queen's Birthday Honours, English was appointed a Knight Companion of the New Zealand Order of Merit, for services of over 27 years to the State. See also List of New Zealand governments Politics of New Zealand References External links Profile at National Party Profile on Parliament.nz Releases and speeches at Beehive.govt.nz 1961 births 21st-century New Zealand politicians Candidates in the 2017 New Zealand general election Deputy Prime Ministers of New Zealand Leaders of the Opposition (New Zealand) Living people Members of the Cabinet of New Zealand Members of the New Zealand House of Representatives New Zealand farmers New Zealand finance ministers New Zealand list MPs New Zealand MPs for South Island electorates New Zealand National Party MPs New Zealand National Party leaders New Zealand Roman Catholics New Zealand people of Irish descent People educated at St. Patrick's College, Silverstream People from Dipton, New Zealand People from Lumsden, New Zealand Prime Ministers of New Zealand University of Otago alumni Victoria University of Wellington alumni Knights Companion of the New Zealand Order of Merit New Zealand politicians awarded knighthoods
In which electorate was Simon English elected to the New Zealand Parliament?
The Wallace electorate.
3,597
multifieldqa_en
4k
Sir Simon William English (born 30 December 1961) is a New Zealand former National Party politician who served as the 39th prime minister of New Zealand from 2016 to 2017. He had previously served as the 17th deputy prime minister of New Zealand and minister of finance from 2008 to 2016 under John Key and the Fifth National Government. A farmer and public servant before entering politics, English was elected to the New Zealand Parliament in as the National Party's candidate in the Wallace electorate. He was elevated to Cabinet in 1996 and in 1999 was made minister of finance, although he served for less than a year due to his party's loss at the 1999 general election. In October 2001, English replaced Jenny Shipley as the leader of the National Party (and consequently as Leader of the Opposition). He led the party to its worst defeat at the 2002 general election, and as a consequence, in October 2003 he was replaced as leader by Don Brash. In November 2006, after Brash's resignation, English became deputy leader under John Key. After National's victory at the 2008 general election, he became deputy prime minister and was also made minister of finance for the second time. Under English's direction New Zealand's economy maintained steady growth during National's three terms of government. He became a list-only MP after stepping down as an electorate MP at the 2014 general election. John Key resigned as leader of the National Party and prime minister in December 2016. English won the resulting leadership election unopposed and was sworn in as prime minister on 12 December 2016. His tenure was only ten months, and included a three-month election campaign. In the 2017 general election, National won the largest number of seats but fell short of a majority. The parties holding the balance of power declined to support the existing government, and English was subsequently replaced as prime minister by Jacinda Ardern, leader of the Labour Party. English initially continued on as Leader of the Opposition, but resigned as leader of the National Party on 27 February 2018 and left parliament two weeks later. Early life English was born on 30 December 1961 at Lumsden Maternity Centre in Lumsden. He is the eleventh of twelve children of Mervyn English and Norah (née O'Brien) English. His parents purchased Rosedale, a mixed sheep and cropping farm in Dipton, Southland from Mervyn's uncle, Vincent English, a bachelor, in 1944. English was born in the maternity unit at Lumsden. English attended St Thomas's School in Winton, then boarded at St. Patrick's College in Upper Hutt, where he became head boy. He played in the first XV of the school's rugby team. English went on to study commerce at the University of Otago, where he was a resident at Selwyn College, and then completed an honours degree in English literature at Victoria University of Wellington. After finishing his studies, English returned to Dipton and farmed for a few years. From 1987 to 1989, he worked in Wellington as a policy analyst for the New Zealand Treasury, at a time when the free market policies favoured by Labour's finance minister Roger Douglas (known collectively as "Rogernomics") were being implemented. English joined the National Party in 1980, while at Victoria University. He served for a period as chairman of the Southland branch of the Young Nationals, and became a member of the Wallace electorate committee. After moving to Wellington, he served for periods on the Island Bay and Miramar electorate committees, respectively. 
Fourth National Government (1990–1999) At the 1990 general election, English stood as the National candidate in Wallace, replacing the retiring Derek Angus, and was elected with a large majority. He would hold this seat, renamed Clutha-Southland in 1996, until 2014. He and three other newly elected National MPs (Tony Ryall, Nick Smith, and Roger Sowry) were soon identified as rising stars in New Zealand politics, and at various points were dubbed the "brat pack", the "gang of four", and the "young Turks". In his first term in parliament, English chaired a select committee into social services. He was made a parliamentary under-secretary in 1993, serving under the Minister of Health. First period in cabinet (1996–1999) In early 1996, English was elevated to cabinet by Prime Minister Jim Bolger, becoming the Minister for Crown Health Enterprises and Associate Minister of Education (to Wyatt Creech). He was 34 at the time, becoming the cabinet's youngest member. After the 1996 general election, the National Party was forced into a coalition with New Zealand First to retain government. In the resulting cabinet reshuffle, English emerged as Minister of Health. However, as a condition of the coalition agreement, NZ First's Neil Kirton (a first-term MP) was made Associate Minister of Health, effectively becoming English's deputy. This arrangement was described in the press as a "shotgun marriage", and there were frequent differences of opinion between the two ministers. After their relationship became unworkable, Kirton was sacked from the role in August 1997, with the agreement of NZ First leader Winston Peters. As Minister of Health, English was responsible for continuing the reforms to the public health system that National had begun after the 1990 general election. The reforms were unpopular, and health was perceived as one of the government's weaknesses, with the health portfolio consequently being viewed as a challenge. English believed that the unpopularity of the reforms was in part due to a failure in messaging, and encouraged his National colleagues to avoid bureaucratic and money-focused language (such as references to "balance sheets" and "user charges") and instead talk about the improvements to services the government's reforms would bring. He also rejected the idea that public hospitals could be run as commercial enterprises, a view which some of his colleagues had previously promoted. By early 1997, as dissatisfaction with Bolger's leadership began to grow, English was being touted as a potential successor, along with Jenny Shipley and Doug Graham. His age (35) was viewed as the main impediment to a successful leadership run. National's leadership troubles were resolved in December 1997, when Bolger resigned and Shipley was elected to the leadership unopposed. English had been a supporter of Bolger as leader, but Shipley reappointed him Minister of Health in her new cabinet. English was promoted to Minister of Finance in a reshuffle in January 1999, a position which was at the time subordinate to the Treasurer, Bill Birch. After a few months, the pair switched positions as part of Birch's transition to retirement, with English assuming the senior portfolio. In early interviews, he emphasised his wish to be seen as a pragmatist rather than an ideologue, and said that the initiatives of some of his predecessors (Roger Douglas's "Rogernomics" and Ruth Richardson's "Ruthanasia") had focused on "fruitless, theoretical debates" when "people just want to see problems solved". 
Opposition (1999–2008) After the National Party lost the 1999 election to Helen Clark's Labour Party, English continued on in the shadow cabinet as National's spokesperson for finance. He was elected deputy leader of the party in February 2001, following the resignation of Wyatt Creech, with Gerry Brownlee being his unsuccessful opponent. Leader of the Opposition In October 2001, after months of speculation, Jenny Shipley resigned as leader of the National Party after being told she no longer had the support of the party caucus. English was elected as her replacement unopposed (with Roger Sowry as his deputy), and consequently became Leader of the Opposition. However, he did not openly organise against Shipley, and according to The Southland Times "there was almost an element of 'aw, shucks, I'll do it then' about Mr English's ascension". Aged 39 when he was elected, English became the second-youngest leader in the National Party's history, after Jim McLay (who was 38 when elected in 1984). He also became only the third Southlander to lead a major New Zealand political party, after Joseph Ward and Adam Hamilton. However, English failed to improve the party's performance. In the 2002 election, National suffered its worst electoral defeat ever, gaining barely more than twenty percent of the vote. English described it as "the worst day of my political life". Both party insiders and the general public were split as to how much to blame him for the loss, but most of the party believed that English would be able to rebuild National's support. By late 2003, however, National's performance in opinion polls remained poor. The party had briefly increased its popularity in the year following the election, but by October its support had fallen to levels only slightly better than what it achieved in the last ballot. English also appeared in a boxing match for a charity against entertainer Ted Clarke. This did not boost his polling or that of the National party either, with suggestions that it devalued his image as a serious politician. Don Brash, former governor of the Reserve Bank and a relative newcomer to politics, began to build up support to replace English. On 28 October, Brash gained sufficient backing in Caucus to defeat English in a leadership contest. Shadow cabinet roles and deputy leader On 2 November 2003, when Brash changed responsibilities for certain MPs, English became National's spokesman for education, ranked at fifth place in the party's parliamentary hierarchy. He remained in parliament after the 2005 election. In his new shadow education portfolio, English performed strongly, and remained a party favourite despite his election defeat as leader in 2002, eventually being returned to the finance portfolio in August 2004 as deputy spokesman (while still retaining responsibility for education). In November 2006, Brash resigned as leader. English was considered as a potential replacement leader (running against John Key) or deputy leader (against incumbent Gerry Brownlee) in the ensuing leadership election. However, a contest was avoided when the MPs agreed a Key/English ticket would run unopposed in a display of party unity. English took over the deputy leadership and the finance portfolio in the Key shadow cabinet. Fifth National Government (2008–2017) Deputy Prime Minister and Minister of Finance (2008–2016) At the 2008 election, English was re-elected by his electorate, winning by a margin of about 15,500 votes. 
He became Deputy Prime Minister of New Zealand and Minister of Finance in the fifth National Government, being sworn into office on 19 November 2008, and continued to serve in those roles until becoming Prime Minister on 12 December 2016. He was also made Minister of Infrastructure in National's first term of government and Minister responsible for Housing New Zealand Corporation and minister responsible for the New Zealand flag consideration process in its third. He was comfortably re-elected in Clutha-Southland in the 2011 election but opted to run as a party-list candidate in 2014. The pairing of John Key as leader of the National Party and English as his deputy has been compared to that of Bob Hawke and Paul Keating (in Australia) and Tony Blair and Gordon Brown (in the UK). English acceded to the role of Finance Minister in the continuing wake of the financial crisis. In response to New Zealand's rising debt, English made budget deficit-reduction his main priority. His first budget outlined three focuses in New Zealand's financial recovery: "improving the business environment and removing roadblocks to growth; investment in productive infrastructure; and improving the way government works". One of his first acts was creating the National Infrastructure Unit, charged with formulating a plan for infrastructure projects and investments. He commissioned a government-wide spending review, with the aim of reducing government expenditure, with the exceptions of a two-year stimulus package and long-term increases in infrastructure spending. In April 2011, the Opposition criticised English for suggesting that New Zealand businesses could use New Zealand's low wages to help it compete with Australia. The National Government campaigned for re-election in 2011 on its economic record. The Government boasted growth for five consecutive quarters up to mid-2010, totalling 1.6% of real GDP. Strong growth resulted in a surplus of $473 million for the 2015/16 financial year, projected to rise to $8.5 billion by 2020/21. In his 2016 Economic and Fiscal Update address, English stated that reducing debt and tackling the costs of the 2016 Kaikōura earthquake were higher priorities than reducing rates of tax. Allowances issue In 2009, the media, including TVNZ and TV3, revealed that English was receiving about NZ$900 a week as part of a living allowance for ministers, to live in his own NZ$1.2 million Wellington home. At the time, English also received $276,200 in his annual salary as Deputy Prime Minister. It was also revealed that other ministers with homes in the capital city were also claiming accommodation allowances. On 3 August 2009, Prime Minister John Key started a review of the housing allowances claimed by cabinet ministers. English subsequently paid back $12,000 and only claimed about $24,000 a year in living allowances. The Auditor-General's office said in September 2009 that they were making "preliminary enquiries" into parliamentary housing expenses in response to a letter of complaint from Progressive party leader Jim Anderton. Two days later English stated that he would no longer take up any housing allowance and had paid back all the allowance he had received since the November 2008 election. Prime Minister (2016–2017) John Key resigned on 12 December, and endorsed English as his successor in the resulting leadership election. 
Following the drop-out of both Judith Collins and Jonathan Coleman from the leadership election, English was sworn in as the 39th Prime Minister of New Zealand on 12 December 2016. English appointed his first cabinet on 18 December. In a reshuffle, he appointed Steven Joyce to succeed him as Finance Minister, while most ministerial portfolios remained the same. In February 2017, English did not attend Waitangi Day commemorations at the historic treaty grounds, reportedly in response to the Ngāpuhi iwi's decision to stop the Prime Minister from speaking at the marae. Ngāpuhi have protested the Government's negotiation of the Trans Pacific Partnership Agreement (TPPA), which the iwi believe infringes upon Māori sovereignty, and thus does not adhere to the Treaty of Waitangi. English had been invited to attend in an official capacity; his non-attendance was criticised by a Ngāpuhi elder and Opposition leader Andrew Little. In his first overseas trip as Prime Minister, English travelled to Europe to discuss trade ties, including a prospective New Zealand–European Union free trade agreement. He first travelled to London on 13 January 2017 to meet British Prime Minister Theresa May. Discussing trade relations, English said the two nations were "natural partners" and would "continue to forge ties" after the UK's withdrawal from the EU. He also arranged to meet with London Mayor Sadiq Khan, Belgian Prime Minister Charles Michel and German Chancellor Angela Merkel. In a meeting with Merkel, English received crucial backing from Germany for a trade deal with the EU. On 16 January, English stated that his government would continue to promote TPPA, despite the United States' decision to withdraw from the agreement. He explained that Southeast Asian countries would now be treated as a priority in negotiations—he also asserted that the United States was ceding influence to China by its rejection of the trade pact. At a press conference at the Beehive on 1 February 2017, English announced that the 2017 general election would be held on 23 September. The Prime Minister later confirmed that his party would approach ACT, United Future and the Māori Party if confidence and supply agreements were required to form a government following the election. In his second cabinet reshuffle on 24 April, English appointed Gerry Brownlee as his new Foreign Affairs Minister; he also promoted Nikki Kaye to the portfolio of Education Minister, and moved Mark Mitchell into the cabinet to become Defence Minister. The reshuffle was perceived as an election preparation. On 13 February 2017, English welcomed Australian Prime Minister Malcolm Turnbull to Wellington. The two leaders reaffirmed their shared trade agenda, and discussed changes to the Australian citizenship pathway which will affect permanent residents originating from New Zealand. On 19 June, it was reported that Todd Barclay, who succeeded English as MP for Clutha-Southland, had clandestinely recorded one of his employee's conversations the previous year, and that John Key's leaders' budget was used to pay a confidential settlement after the employee resigned. English admitted that he had been aware of the illegal recording and the settlement, and thus implicated in the scandal. 
During the 2017 National campaign launch, English introduced a $379 million social investment package including digital learning academies for high school students, more resources for mathematics, a boost in support for teaching second languages in schools, and the maintenance of National Standards in the school curriculum. Prime Minister English also sought to defend National's financial management and economic track record and claimed that the opposition Labour Party would raise taxes. Early opinion polling had forecast a poor showing in the election for the Labour Party, but in early August 37-year-old Jacinda Ardern took over as Labour leader and seemingly energised younger voters. At the 2017 general election, National won the largest share of the party vote (44.4%) and the largest number of seats (56) in the House of Representatives. However, National lacked enough seats to govern alone due to two of the party's support partners, the Māori Party and United Future, losing their parliamentary seats. In response, English stated that the party would be entering into talks to form a coalition with New Zealand First. Following talks with the two largest parties, New Zealand First entered a coalition arrangement with the Labour Party. English was succeeded as prime minister by Jacinda Ardern on 26 October. Opposition (2017–2018) Leader of the Opposition English was re-elected as National Party leader on 24 October 2017. At the time of his re-election, English announced his intention to stay on as leader until the next general election. On 13 February 2018, however, he stood down as National Party leader due to personal reasons, and instructed the party to put into motion the processes to elect a new leader. He also announced his retirement from Parliament. English's resignation followed weeks of speculation that he would step aside for a new leader. On 27 February, he was succeeded as party leader by Simon Bridges as the result of the leadership election held that day. Post-premiership In 2018, English joined the board of the Australian conglomerate Wesfarmers. English serves as chairman of Mount Cook Alpine Salmon, Impact Lab Ltd and Manawanui Support Ltd. He is also a director of The Instillery, Centre for Independent Studies and The Todd Corporation Limited, and is a member of the Impact Advisory Group of Macquarie Infrastructure and Real Assets. Political and social views English is regarded as more socially conservative than his predecessor, John Key. He has stated his opposition to voluntary euthanasia and physician-assisted suicide, same-sex civil unions, and the decriminalisation of prostitution. As Prime Minister he opposed any "liberalisation" of abortion law. In 2004, English voted against a bill to establish civil unions for both same-sex and opposite-sex couples. In 2005, he voted for the Marriage (Gender Clarification) Amendment Bill, which would have amended the Marriage Act to define marriage as only between a man and a woman. English voted against the Marriage (Definition of Marriage) Amendment Bill, a bill that legalised same-sex marriage in New Zealand. However, in December 2016 he stated, "I'd probably vote differently now on the gay marriage issue. I don't think that gay marriage is a threat to anyone else's marriage". In 2009, English voted against the Misuse of Drugs (Medicinal Cannabis) Amendment Bill, a bill aimed at amending the Misuse of Drugs Act so that cannabis could be used for medical purposes. Personal life English met his future wife, Mary Scanlon, at university. 
She was studying medicine at the time, and became a general practitioner. Both her parents were immigrants, her father being Samoan and her mother Italian, born on the island of Stromboli. They have six children: a daughter and five sons. English is a practising Roman Catholic, but has stated that he considers his religious beliefs personal and thus separate from politics. In June 2002, English took part in TV3's Fight For Life, a celebrity boxing fundraiser to raise money for the Yellow Ribbon anti-youth-suicide campaign; he was motivated to take part by the death of a teenage nephew in 1997. He lost a split decision to former university colleague Ted Clarke. Honours In the 2018 Queen's Birthday Honours, English was appointed a Knight Companion of the New Zealand Order of Merit, for his more than 27 years of service to the State.
When was the 2017 general election held?
23 September.
Paper Info Title: Interpretable reduced-order modeling with time-scale separation Publish Date: 7 March 2023 Author List: Sebastian Kaltenbach (from CSE-Lab, ETH Zurich, Harvard SEAS), Phaedon-Stelios Koutsourelakis (from CSE-Lab, ETH Zurich, Harvard SEAS), Petros Koumoutsakos (from CSE-Lab, ETH Zurich, Harvard SEAS) Figure FIG. 5. Comparison between the phase-space of the reference solution (left) and the phase-space of the predictions FIG. 7. Comparison between predictions and reference solutions for a new initial condition for t = 1.25, 3.75, 7.5, 12.5, 20, 30 (from left to right and top to bottom). We note that with longer prediction times the uncertainty bounds increase. Despite the chaotic nature of the KS equation, the predictive posterior mean is close to the reference solution for t ≤ 12.5 abstract Partial Differential Equations (PDEs) with high dimensionality are commonly encountered in computational physics and engineering. However, finding solutions for these PDEs can be computationally expensive, making model-order reduction crucial. We propose such a data-driven scheme that automates the identification of the time-scales involved and can produce stable predictions forward in time as well as under different initial conditions not included in the training data. To this end, we combine a non-linear autoencoder architecture with a time-continuous model for the latent dynamics in the complex space. It readily allows for the inclusion of sparse and irregularly sampled training data. The learned, latent dynamics are interpretable and reveal the different temporal scales involved. We show that this data-driven scheme can automatically learn the independent processes that decompose a system of linear ODEs along the eigenvectors of the system's matrix. Apart from this, we demonstrate the applicability of the proposed framework in a hidden Markov Model and the (discretized) Kuramoto-Sivashinsky (KS) equation. Additionally, we propose a probabilistic version, which captures predictive uncertainties and further improves upon the results of the deterministic framework. INTRODUCTION High-fidelity simulations of critical phenomena such as ocean dynamics and epidemics have become essential for decision-making. They are based on physically-motivated PDEs expressing system dynamics that span multiple spatiotemporal scales and which necessitate cumbersome computations. In recent years there has been increased attention to the development of data-driven models that can accelerate the solution of these PDEs as well as reveal salient, lower-dimensional features that control the long-term evolution. In most cases, data-driven reduced-order models are not interpretable. In particular, models based on neural networks, despite good predictive capabilities, offer a black-box description of the system dynamics. A possible remedy is applying symbolic regression to the learned neural-network representation, but this adds additional computational cost due to the two-step procedure. A number of frameworks such as SINDy allow learning interpretable dynamics, but they rely on the a-priori availability of lower-dimensional descriptors and of time derivatives, which can be very noisy for both simulation and experimental data. Other frameworks are tailored to specific problems such as molecular dynamics. 
Here, we present a framework that only needs the value of the observables, and not their derivatives, as training data and is capable of identifying interpretable latent dynamics. The deployment of interpretable latent dynamics ensures the conservation of important properties that are reflected in the reduced-order model. The present method is related to approaches based on the Koopman-operator extended Dynamic Mode Decomposition (eDMD) but uses continuous complex-valued latent space dynamics and only requires one scalar variable per latent dimension to describe the latent space dynamics. Therefore we do not have to enforce any parametrizations on the Koopman matrix. The time-continuous formulation moreover allows incorporating sparse and irregularly sampled training data and enables fast generation of predictions after the training phase. By using a complex-valued latent space we can also incorporate harmonic effects and reduce the number of latent variables needed. Linear and non-linear autoencoders are used to map the observed, high-dimensional time-series to the lower-dimensional, latent representation, and we identify simultaneously the autoencoder as well as the latent dynamics by optimizing a combined loss function. Hence the two tasks of dimensionality reduction and discovery of the reduced dynamics are unified, while other frameworks treat the two parts separately. Apart from using an architecture based on autoencoders to identify the latent space, projection-based methods could also be employed. We also propose a probabilistic version of our algorithm that makes use of probabilistic Slow Feature Analysis. This allows for a latent representation that, apart from being time-continuous, can quantify the predictive uncertainty and hierarchically decompose the dynamics into their pertinent scales while promoting the discovery of slow processes that control the system's evolution over long time horizons. The rest of the paper is structured as follows: We introduce the methodological framework as well as algorithmic details in section II. Particular focus is placed on the interpretability of the inferred lower-dimensional dynamics. In section III we present three numerical illustrations, i.e. a system of linear ODEs, a hidden Markov Model and the discretized KS-equation. We then present in section IV the probabilistic extension of the framework and apply it to the KS-equation. We conclude with a summary and a short discussion about possible next steps. We introduce the autoencoders deployed in this work, followed by the interpretable latent space dynamics, and discuss the training process. We consider data from high-dimensional time series x_n ∈ R^f with n = 1, ..., T. We remark that the intervals between the different states do not need to be uniformly spaced. Autoencoder A core assumption of the method is that each high-dimensional state x_n can be compressed to a lower-dimensional representation z_n ∈ C^c with c << f. We identify this lower-dimensional representation by an autoencoder consisting of a parameterized encoder and decoder. The encoder maps the high-dimensional representation to the latent space as: The latent space is complex-valued. The decoder reconstructs the high-dimensional representation based on the latent variables as: We denote the parameters of the encoder as well as the decoder by θ. As discussed later in Section II C, both sets of parameters are optimized simultaneously during training and therefore there is no need to differentiate between them. 
Interpretable Latent Space Dynamics We employ a propagator in the latent space to capture the reduced-order dynamics of the system. In contrast to other time-extended variational autoencoder frameworks, our representation uses complex-valued latent variables. In addition, the latent variables are treated independently. The latter feature enables interpretable latent dynamics as well as a model that is especially suitable for being trained in the Small Data regime due to the small number of required parameters. This is in contrast to temporal propagators such as LSTMs. For each dimension i of the latent variable z we use the following continuous ODE in the complex plane: By solving this ODE, we can define the operator: Here, λ is a vector containing all the individual λ's and ∆t_n indicates the time-step between the latent states. The symbol is used to indicate a component-wise multiplication. We remark that the latent variables and the parameter governing the temporal evolution are complex numbers and their role in describing the system dynamics is similar to that of an eigenvalue. The real part is associated with growth and decay whereas the imaginary part represents the periodic component. This approach has similarities with the Koopman-operator based methods and the extended dynamic mode decomposition. In contrast to the methods mentioned before, we use a continuous formulation in the latent space that allows us to incorporate scarce and irregularly sampled training data, and we directly rely on complex numbers in the latent space. Training and Predictions We optimize a loss function that combines both a reconstruction loss as well as a loss associated with the error of our learned propagator in the latent space. We note that we could directly incorporate mini-batch training by only taking the summation over a subset of the N available training data. For new predictions of unseen states, we use the encoder to generate a latent representation, which is then advanced in time by the learned propagator. At a designated time step we use the decoder to reconstruct the high-dimensional solution. We applied our algorithm to three systems. First, we show that the algorithm is capable of exactly reproducing the solution of a linear ODE and of identifying its eigenvalues. Afterwards we apply the framework to a high-dimensional process generated by complex latent dynamics, which are correctly identified. As a final test case, we apply the algorithm to the Kuramoto-Sivashinsky (KS) equation. Linear ODE We consider a two-dimensional ODE system for x = (y_1, y_2): Based on the obtained training data we run our algorithm using a linear encoder and decoder structure as well as two latent variables z. The loss function was optimized using the Adam algorithm. As we consider a linear ODE, we can analytically compute the eigenvalues involved and compare them with the parameters λ identified by our algorithm. We observe in the corresponding figure that the algorithm was able to recover the correct values, i.e. the eigenvalues 7 and 3 of the given linear ODE. The system does not have a periodic component and the two imaginary parts correctly go to zero, whereas the real parts converge to the reference values. 
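As a quick illustration of what the interpretable latent dynamics recover in this linear setting, the minimal sketch below (Python/NumPy) diagonalizes a 2x2 system matrix, propagates each mode independently with exp(λ∆t), and decodes back to the observed space. The matrix A is an illustrative choice consistent with the reported eigenvalues 7 and 3 and eigenvectors (1,1) and (1,-1); it is not necessarily the exact system used in the experiments above.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 2x2 system dx/dt = A x, consistent with the reported eigen-structure
# (eigenvalues 7 and 3, eigenvectors (1,1) and (1,-1)); an assumption for illustration.
A = np.array([[5.0, 2.0],
              [2.0, 5.0]])

lam, V = np.linalg.eig(A)        # latent rates lambda_i and a linear "decoder" V
V_inv = np.linalg.inv(V)         # linear "encoder": z = V^{-1} x

dt, n_steps = 0.01, 200
x0 = np.array([1.0, 0.5])        # arbitrary initial condition
z = V_inv @ x0                   # encode

for _ in range(n_steps):
    z = np.exp(lam * dt) * z     # independent, interpretable latent update exp(lambda * dt)
x_pred = (V @ z).real            # decode back to the observed space

x_ref = expm(A * dt * n_steps) @ x0   # reference solution via the matrix exponential
print("latent rates:", lam)           # eigenvalues of A, i.e. 7 and 3 (order may vary)
print("max abs error vs. expm:", np.abs(x_pred - x_ref).max())
```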
Moreover, for the linear mapping between our latent variables z and the training data, we also identify a matrix consisting of multiples of the eigenvectors (1,1) and (1,-1), and thus the correct solution. This example was chosen to show that the algorithm is able to quickly identify the exact solution of a linear ODE in terms of its linearly independent components. Hidden multiscale dynamics We consider eight-dimensional synthetic time series data produced by an underlying two-dimensional complex-valued process. In particular, the data points x are generated by first solving for the temporal evolution of the two complex-valued processes p_1 and p_2 and then mapping to the eight-dimensional space using a randomly sampled linear mapping W. One of the two processes used to generate the data is chosen to be much slower than the other one, and both processes have a periodic component; the second process, for instance, evolves according to dp_2/dt = (-0.9 + 1.5i) p_2. As training data we consider 40 time series with 150 data points each, obtained by simulating the described processes for a maximum of t = 15 s and then sampling from the obtained data points. Hence the training data consists of 40 time series, each comprising 150 observations of x at a uniform time-step ∆t = 0.0025. The autoencoder consists of one linear layer for both the encoder and the decoder. The model is trained for 5000 iterations using the Adam optimizer and a learning rate of 10^-3. The results for the convergence of the parameters λ_1 and λ_2 can be found in the corresponding figure. We note that the process which decays more slowly, and is thus more responsible for the long-term evolution of the system, has a higher convergence rate than the faster process. With the obtained parameters λ as well as the trained autoencoder, we compute predictions based on the last time step used for training, i.e. we apply the encoder to obtain our latent representation and then use the latent dynamics to advance the latent representation in time. Afterwards, we employ the decoder to reconstruct the full high-dimensional system. The results show very good agreement between predictions and reference data. This example shows that our model is successfully able to carry out dimensionality reduction and moreover indicates that the convergence rates of the latent processes can differ. The latter is relevant when training models since, for accurate predictions, all latent processes and their dynamics should be converged. Kuramoto-Sivashinsky Finally, we applied our algorithm to the KS equation and aim to identify a reduced-order model for the solution u(y, t). We employed periodic boundary conditions, µ = 1 and a domain size y ∈ [0, 22]. For this domain size, the KS equation exhibits a structurally stable chaotic attractor, as discussed in the literature. In the corresponding figure, the black line divides the area for which training data was given from the area without training data. The equation is discretized in space using a discretization step of 22/64, resulting in a state vector x of dimension 64 and a nonlinear system of coupled ODEs. This is solved using a stiff fourth-order solver. We employed a non-linear encoder and decoder with four fully-connected layers each and ReLU activation functions, as well as dropout layers between the fully-connected layers. We trained the model for 200,000 iterations using Adam and a learning rate of 5 × 10^-4, assuming a five-dimensional latent space. The obtained λ's are shown in the corresponding figure. 
Four latent variables have λ's close to zero and thus slow temporal dynamics that are responsible for the long-term evolution, whereas one latent variable decays quickly. Based on the obtained parameters, we make predictions for an unseen initial condition not contained in the training data. We are able to reconstruct the correct phase space based on our predictions despite only using a very limited amount of training data. The results for the phase space can be seen in the corresponding figure. Although the small-scale fluctuations in the temporal dynamics are not well captured, the model identifies the correct manifold, which shows good accuracy compared to the reference solution. All phase-spaces were obtained by using a finite-difference operator on the data or predictions. These results are in accordance with earlier work whose LSTM-based temporal dynamics model was also able to find the correct phase space but not to track the actual dynamics for long-term predictions. Our model is not able to account for noise in the temporal evolution, and thus dealing with chaotic, small-scale fluctuations is challenging. We believe that a probabilistic version of our algorithm could be advantageous here. This section contains a fully probabilistic formulation for the deterministic model discussed before. We replace the Autoencoder with a Variational Autoencoder and the ODE in the latent space with an SDE. The loss function which we optimize is the Evidence Lower Bound (ELBO). Model Structure We postulate the following relations for our probabilistic model, using an Ornstein-Uhlenbeck (OU) process for each dimension i of the latent space and a Wiener process W_t in the latent space. We again assume that the latent variables z_t are complex-valued and a priori independent. Complex variables were chosen as their evolution includes a harmonic component, which is observed in many physical systems. We assume an initial condition z_{0,i} ∼ CN(0, σ²_{0,i}). The parameters associated with the latent space dynamics of our model are thus {σ²_{0,i}, σ²_i, λ_i} for i = 1, ..., c, and will be denoted by θ together with all parameters responsible for the decoder mapping G (see next section). These parameters, along with the state variables z_t, have to be inferred from the data x_t. Based on probabilistic Slow Feature Analysis (SFA), we set σ²_i = -2ℜ(λ_i) and σ²_{0,i} = 1. As a consequence, a priori, the latent dynamics are stationary. A derivation and reasoning for this choice can be found in Appendix A. Hence the only independent parameters are the λ_i, the imaginary part of which can account for periodic effects in the latent dynamics. Variational Autoencoder We employ a variational autoencoder to account for a probabilistic mapping from the lower-dimensional representation z_n to the high-dimensional system x_n. In particular, we employ a probabilistic decoder. The encoder is used to infer the state variables z based on the given data and is thus defined in the inference and learning section. Inference and Learning Given the probabilistic relations above, our goal is to infer the latent variables z_{0:T} as well as all model parameters θ. We follow a hybrid Bayesian approach in which the posterior of the state variables is approximated using amortized Variational Inference and Maximum-A-Posteriori (MAP) point estimates for θ are computed. 
The application of Bayes' rule for each data sequence x_{0:T} leads to the following posterior: where p(θ) denotes the prior on the model parameters. In the context of variational inference, we use the following factorization of the approximate posterior, i.e. we infer only the mean µ and variance σ for each state variable based on the given data points. This conditional density used for inference is the encoder counterpart to the probabilistic decoder defined in the section before. It can be readily shown that the optimal parameter values are found by maximizing the Evidence Lower Bound (ELBO) F(q_φ(z_{0:T}), θ), which is derived in Appendix B. We compute Monte Carlo estimates of the gradient of the ELBO with respect to φ and θ with the help of the reparametrization trick and carry out stochastic optimization with the Adam algorithm. Results for the probabilistic extension We applied our probabilistic version to the KS-equation. We used the same settings as for the deterministic approach but considered up to 10 complex latent variables. The obtained λ's are shown in the corresponding figure. The probabilistic model allows us to quantify the uncertainty in predictions. Predictions for various time-steps and the respective uncertainty bounds are shown for an unseen initial condition. Due to the chaotic nature of the KS-equation and the small amount of training data, the underlying linear dynamics of our model are only able to capture the full dynamics for a limited time horizon. Fortunately, due to the probabilistic approach, the model is capable of capturing chaotic fluctuations with increasingly wide uncertainty bounds. We also computed the phase space representation for the KS-equation based on the predictions obtained by our model and compared it with the reference solution. The probabilistic model identifies the correct manifold with better accuracy than the deterministic model. As some of the small-scale fluctuations are accounted for as noise, the resulting manifold is more concentrated around the origin and the obtained values are slightly smaller than in the reference manifold, although the shapes are very similar.
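To make the a-priori latent dynamics of this probabilistic version concrete, the following minimal sketch simulates one complex Ornstein-Uhlenbeck latent dimension with Euler-Maruyama and checks that the empirical variance stays near its stationary value of one. It assumes the stationarity choice σ²_i = -2ℜ(λ_i), σ²_{0,i} = 1 discussed in the Model Structure section; the λ value is illustrative, not a fitted parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

lam = -0.2 + 1.0j              # illustrative latent rate (decay + oscillation), not a fitted value
sigma2 = -2.0 * lam.real       # stationarity choice sigma_i^2 = -2 Re(lambda_i) from the SFA prior
dt, n_steps, n_paths = 1e-3, 10_000, 1_000

# initial condition z_0 ~ CN(0, 1): real and imaginary parts each with variance 1/2
z = np.sqrt(0.5) * (rng.standard_normal(n_paths) + 1j * rng.standard_normal(n_paths))

for _ in range(n_steps):
    # complex Wiener increment with E|dW|^2 = dt (independent real/imaginary parts)
    dW = np.sqrt(dt / 2.0) * (rng.standard_normal(n_paths) + 1j * rng.standard_normal(n_paths))
    z = z + lam * z * dt + np.sqrt(sigma2) * dW   # Euler-Maruyama step of dz = lambda z dt + sigma dW

print("empirical E|z|^2:", np.mean(np.abs(z) ** 2))   # should remain close to the stationary value 1
```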
When was the paper published?
The paper was published on 7 March 2023.
Paper Info Title: A CUSUM Test with Observation-Adjusted Control Limits in Change Detection Publish Date: March 9, 2023 Author List: Fuquan Tang (from Department of Statistics, Shanghai Jiao Tong University), Dong Han (from Department of Statistics, Shanghai Jiao Tong University) Figure Simulation of E_{τ_i,v} and J_ACE for detecting two mean shifts v = 0.1 and v = 1. The parameters for T*_M are k1 = 1, k2 = 150, r_1 = 5.2 × 10^-5, r_2 = 1.1 × 10^-5, and the expectation and standard deviation in the two cases are 1717.06 with 13459.80 and 3918.33 with 16893.25, respectively. abstract In this paper, we not only propose a new optimal sequential test based on the sum of logarithmic likelihood ratios (SLR) but also present the CUSUM sequential test (control chart, stopping time) with observation-adjusted control limits (CUSUM-OAL) for quickly and adaptively monitoring the change in distribution of a sequence of observations. Two limiting relationships between the optimal test and a series of the CUSUM-OAL tests are established. Moreover, we give estimates of the in-control and the out-of-control average run lengths (ARLs) of the CUSUM-OAL test. The theoretical results are illustrated by numerical simulations in detecting mean shifts of the observation sequence. 
INTRODUCTION In order to quickly detect a change in distribution of observations sequence without exceeding a certain false alarm rate, a great variety of sequential tests have been proposed, developed and applied to various fields since proposed a control chart method, see, for example, , , One of popular used sequential tests is the following upper-sided CUSUM test which was proposed by . where c > 0 is a constant control limit, Z i = log[p v 1 (X i )/p v 0 (X i )], p v 0 (x) and p v 1 (x) are prechange and post-change probability density functions respectively for a sequence of mutually independent observations {X i , i ≥ 1}, that is, there is a unknown change-point τ ≥ 1 such that X 1 , ..., X τ −1 have the probability density function p v 0 , whereas, X τ , X τ +1 , ... have the probability density function p v 1 . By the renewal property of the CUSUM test T C we have , where E 1 (T C ) is the out-of-control average run length (ARL 1 ), P k and E k denote the probability and expectation respectively when the change from p v 0 to p v 1 occurs at the change-point τ = k for k ≥ 1. Though we know that the CUSUM test is optimal under Lorden's measure (see Moustakides 1986 and Ritov 1990), the out-of-control ARL 1 of the CUSUM test is not small, especially in detecting small mean shifts ( see Table in Section 4). In other words, the CUSUM test is insensitive in detecting small mean shifts. Then, how to increase the sensitivity of the CUSUM test ? Note that the control limit in the CUSUM test is a constant c which does not depend on the observation samples. Intuitively, if the control limit of the CUSUM test can become low as the samples mean of the observation sequence increases, then the alarm time of detecting the increasing mean shifts will be greatly shortened. Based on this idea, by selecting a decreasing function g(x) we may define the ( upper-sided ) CUSUM chart T C (cg) with the observation-adjusted control limits cg( Ẑn ) ( abbreviated to the CUSUM-OAL chart ) in the following where c > 0 is a constant and Ẑn = n i=1 Z i /n. In other words, the control limits cg( Ẑn ) of the CUSUM-OAL test can be adjusted adaptively according to the observation information { Ẑn }. Note that the control limits cg( Ẑn ) may be negative. In the special case, the CUSUM-OAL chart T C (cg) becomes into the conventional CUSUM chart T C (c) in (1) when g ≡ 1. Similarly, we can define a down-sided CUSUM-OAL test. In this paper, we consider only the upper-sided CUSUM-OAL test since the properties of the down-sided CUSUM-OAL test can be obtained by the similar method. The main purpose of the present paper is to show the good detection performance of the CUSUM-OAL test and to give the estimation of its the in-control and out-of-control ARLs. The paper is organized as follows. In Section 2, we first present an optimal SLR sequential test, then define two sequences of the CUSUM-OAL tests and prove that one of the two sequences of CUSUM-OAL tests converges to the optimal test, another sequences of CUSUM-OAL tests converges to a combination of the optimal test and the CUSUM test. The estimation of the in-control and out-of-control ARLs of the CUSUM-OAL tests and their comparison are given in Section 3. The detection performances of the three CUSUM-OAL tests and the conventional CUSUM test are illustrated in Section 4 by comparing their numerical out-ofcontrol ARLs. Section 5 provides some concluding remarks. Proofs of the theorems are given in the Appendix. 
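Before the formal development, a minimal Monte Carlo sketch may help make the CUSUM-OAL stopping time concrete for the normal mean-shift setting with reference value v_1 = 1, so that Z_i = X_i - 1/2. The control-limit function g and the constant c below are illustrative placeholders only; they are not the g_{u,r} or the calibrated control limits used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def cusum_oal_stop(v, c, g, max_n=100_000):
    """One run of the upper-sided CUSUM-OAL test for the N(0,1) -> N(v,1) setting
    with reference value v1 = 1, so Z_i = X_i - 1/2, and change-point tau = 1."""
    C = 0.0   # CUSUM statistic: max over k of the sum of the last k increments Z_i
    S = 0.0   # running sum of the Z_i, used for the sample mean Z_hat_n
    for n in range(1, max_n + 1):
        Z = rng.normal(v, 1.0) - 0.5
        C = Z + max(C, 0.0)
        S += Z
        if C >= c * g(S / n):          # observation-adjusted control limit c * g(Z_hat_n)
            return n
    return max_n

# Illustrative decreasing control-limit function and constant c (placeholders, not the
# g_{u,r} or calibrated c of Example 1).
u = 10.0
g = lambda x: max(1.0 - u * x, 0.0)

runs = [cusum_oal_stop(v=1.0, c=5.0, g=g) for _ in range(2_000)]
print("estimated out-of-control ARL (tau = 1, v = 1):", np.mean(runs))
```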
AN OPTIMAL SLR TEST, TWO CUSUM-OAL TESTS AND THEIR LIMITING RELATIONSHIPS Let P 0 and E 0 denote the probability and the expectation respectively with the probability density p v 0 when there is no change for all the time. It is known that It follows from Proposition 2.38 in and (5.8)-(5.9) in Chow et al, P.108) that the following sequence test of sum of logarithmic likelihood ratio (SLR) for B > 1, is optimal in the following sense min for P 0 (T SLR < ∞) = α, where c = log B and 0 < α < 1. In particular, if P 0 is the standard normal distribution with mean shift µ > 0 after changepoint, we have Z j − µ 0 = µX j , where µ 0 = −µ 2 /2. It follows from proposition 4 in that the SLR test T SLR in (4) is also optimal (minimal ARL 1 ) with the same false alarm probability P 0 (T < τ ). It can be seen that the in-control average run length of T SLR is infinite, that is, ARL 0 = E 0 (T SLR ) = ∞. However, the minimal ARL 1 with finite ARL 0 is a widely used optimality criterion in statistical quality control (see ) and detection of abrupt changes (see . In order to get finite ARL 0 for T SLR , we replace the constant control limit c of T SLR in (3) or (4) with the dynamic control limit n(µ 0 − r) and obtain a modified SLR test T SLR (r) in the following for r ≥ 0. For comparison, the in-control ARL 0 of all candidate sequential tests are constrained to be equal to the same desired level of type I error, the test with the lowest out-of-control ARL v has the highest power or the fastest monitoring (detection) speed. In the following example 1, the numerical simulations of the out-of-control ARLs of the CUSUM-OAL tests T C (cg u,0 ) in detecting the mean shifts of observations with normal distribution will be compared with that of the SLR tests T * (r) and T * (0), and that of the CUSUM-SLR test T C (c) ∧ T * (0) := min{T C (c), T * (0)} in the following Table . These comparisons lead us to guess that there are some limiting relationships between T C (cg u,r ) and T * (r), and T C (c g u ) and T C (c) ∧ T * (0), respectively. Example 1. Let X 1 , X 2 , .... be mutually independent following the normal distribution N(0, 1) if there is no change. After the change-point τ = 1, the mean E µ (X k ) ( k ≥ 1 ) will change from v 0 = 0 to v = 0.1, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 3. Here, we let , where v 1 = 1 is a given reference value which for the CUSUM test is the magnitude of a shift in the process mean to be detected quickly. We conducted the numerical simulation based on 1,000,000 repetitions. The following Table lists the simulation results of the ARLs of the tests T C (c), T C (c g u ) for u = 1, 10, 10 2 , 10 3 , 10 4 , T * (0.0007), T C (c) ∧ T * (0) and T * (0) for detecting the mean shifts, where the mean shift 0.0 means that there is no change which corresponds to the in-control ARL 0 and all tests have the common ARL 0 ≈ 1000 except the test T * (0) which has ARL 0 = ∞. The values in the parameters are the standard deviations of the tests. From the last row in Table , it's a little surprising that though the ARL 0 of T * (0) is infinite, that is, E 0 (T * (0)) = ∞, the detection speed of T * (0) is faster than that of the CUSUM chart T C for all mean shifts, in particular, for detecting the small mean shift 0.1, the speed of T * (0) is only 7.47 which is very faster than the speed, 439, of the CUSUM test. Moreover, both control charts T * (0.0007) and T C (11.9271) ∧ T * (0) not only have the nearly same detection performance as T * (0) but also can have the finite in-control ARL 0 . 
Note particularly that when the number u in g u is taken from 0 to 1, 10, 10 2 , 10 3 , 10 4 , the detection speed of T C (c g u ) is getting faster and faster, approaching to that of T C (c) ∧ T * (0). This inspires us to prove the following theoretic results. Let τ = 1 and {X k , k ≥ 1} be an i.i.d. observations sequence with Theorem 2 shows that when the constant control limit c of the CUSUM test T C (c) is replaced with the observation-adjusted control limits {cg u,r ( Ẑn )} and {c g u ( Ẑn )} respectively, the corresponding two CUSUM-OAL tests {T C (cg u,r )} and {T C (c g u )} will converge to the optimal SLR test T * (r) and the CUSUM-SLR test T C (c) ∧ T * (0) as u → ∞, respectively. In other words, the fastest alarm times that {T C (cg u,r )} and {T C (c g u )} can be reached are T * (r) and T C (c) ∧ T * (0), respectively. u ≥ 0} can be seen as two "long bridges" connecting T C (c) and T * (r), and T C (c) and T C (c) ∧ T * (0), respectively. ESTIMATION AND COMPARISON OF ARL OF THE CUSUM-OAL TEST In this section we will give an estimation of the ARLs of the following CUSUM-OAL test that can be written as where g(.) is a decreasing function, Ẑn (ac x] denotes the smallest integer greater than or equal to x. Here Ẑn (ac) is a sliding average of the statistics, Next we discuss on the the post-change probability distribution in order to estimate the ARLs of T C (cg). Usually we rarely know the post-change probability distribution P v of the observation process before it is detected. But the possible change domain and its boundary (including the size and form of the boundary) about v may be determined by engineering knowledge, practical experience or statistical data. So we may assume that the region of parameter space V and a probability distribution Q on V are known. If we have no prior knowledge of the possible value of v after the change time τ , we may assume that v occurs equally on V , that is, the probability distribution Q is an equal probability distribution (or uniform distribution ) on V . For example, let P v be the normal distribution and v = (µ, σ), where µ and σ denote the mean and standard deviation respectively, we can take the set V = {(µ, σ) : and Q is subject to the uniform distribution U(V ) on V if v occurs equally on V , where the numbers µ 1 , µ 2 , σ 1 and σ 2 are known. It means that we know the domain of the possible post-change distributions, P v , v ∈ V , i.e., the boundary ∂V of the parameter space V is known. Next we shall divide the parameter space V into three subsets V + , V 0 and V − by the Kullback-Leibler information distance. Let and are two Kullblak-Leibler information distances between P v , P v 0 and P v , P v 1 . Since I(p|q) = 0 if and only if p = q, where p and q are two probability measures, it follows that , it means that P v is closer to P v 0 than to P v 1 according to the Kullblak-Leibler information distance. There is a similar explanation for v ∈ V + or ∈ V 0 . Suppose the post-change distribution P v and the function g(x) satisfy the following conditions: (I) The probability P v is not a point mass at E v (Z 1 ) and P v (Z 1 > 0) > 0. (II) The moment-generating function h v (θ) = E v (e θZ 1 ) satisfies h v (θ) < ∞ for some θ > 0. (III) The function g(x) is decreasing, its second order derivative function g ′′ (x) is continuous and bounded, and there is a positive number x * such that g(x * ) = 0. ) and and therefore, Θ ′ (θ(u)) = −H(θ(u)) = −H(θ * v ) = 0, Θ ′ (θ(1/x)) > 0 for x > 1/u and Θ ′ (θ(1/x)) < 0 for x > 1/u. 
Hence, there exists a positive number b defined in (??). It can be seen, the main part of ARL v (T c (g)) will be an exponential function, square function, and linear function of c when the process {Z k : k ≥ 0} has no change or a "small change", a "medium change" and a "large change" from P v 0 to P v , respectively. Here, the "small change" (v ∈ V − ) means that P v is closer to P v 0 than to P v 1 , i.e., I(P v |P v 0 ) < I(P v |P v 1 ), and the "large change" is just the opposite. The "medium change" (v ∈ V 0 ) corresponds to In this paper, we will use another method to prove Theorem 3 since Wald's identity and the martingale method do not hold or can not work for showing the ARLs estimation of the test T c (g) when g is not constant. Next we compare the detection performance of the CUSUM-OAL test (ARL v (T c ′ (g))) with that of the CUSUM test (ARL v (T C (c))) by using (??) in Theorem 4.1. ) when µ 0 < µ < 0 and for θ * v 0 > g(µ)/g(µ 0 ) when µ ≥ 0. This means that ARL v (T c (g)) can be smaller than ARL v (T C (c)) as long as g(µ)/g(µ 0 ) is small for all µ > µ 0 . NUMERICAL SIMULATION AND A REAL EX-AMPLE ILLUSTRATION 4.1 Numerical Simulation of ARLs for τ ≥ 1 By the simulation results of ARLs in Table , we see that the detection performance of T * (r), T C (c)∧T * (0), T * (0) and T C (c g u ) for large u is much better than that of the conventional CUSUM test T C for τ = 1. The following Table illustrates the simulation values of E τ i ,v and J ACE of nine tests in detecting two mean shifts v = 0.1 and v = 1 after six change-points, τ i , 1 ≤ i ≤ 6 with ARL 0 (T ) = E 0 (T ) ≈ 500. Note that H v (θ) is a convex function and H ′ v (0) = µ < 0. This means that there is a unique positive number . It follows from (A.9) that for a large c. Taking θ ց θ * v and u ′ ց u, we have for a large c. Thus, by (A.11) we have as c → ∞. By the properties of exponential distribution, we have for a large c. To prove the downward inequality of (A.10), let where b is defined in (??) and without loss of generality, we assume that b > a. Obviously, Let k = xcg(µ). By Chebyshev's inequality, we have Since Hv (θ) and H v (θ) are two convex functions and Let m = tcg(µ)θ * v /bc for t > 0. By (A.13), (A.14), (A.15) and Theorem 5.1 in Esary, Proschan and Walkup (1967) we have Finally, as c → +∞, where θ 0 > 0 satisfies h v (θ 0 ) = 1. Thus as c → ∞. This implies that for a large c. This completes the proof of (A.10). Let v ∈ V 0 . Let m 1 = (cg(0)) 2 /σ 2 . It follows that Note that for a large c, where A = |g ′ (0)|/a, and , where Φ(.) is the standard normal distribution. Let m 2 = (cg(0)) 2 /(8σ 2 ln c). Note that as c → ∞, where the third inequality comes from Theorem 5.1 in Esary, Proschan and Walkup (1967). Thus, we have Let v ∈ V + and let The uniform integrability of {T c (g)/c} for c ≥ 1, follows from the well-known uniform integrability of {T 0 /c} (see Gut (1988)).
What are the three subsets into which the parameter space V is divided?
The three subsets are V+, V0, and V-, determined by the Kullback-Leibler information distance.
\section{Introduction}\label{sec1} \setcounter{equation}{0} Transport problems with highly forward-peaked scattering are prevalent in a variety of areas, including astrophysics, medical physics, and plasma physics \cite{HGK,aristova,multiphysics}. For these problems, solutions of the transport equation converge slowly when using conventional methods such as source iteration (SI) \cite{adamslarsen} and the generalized minimal residual method (GMRES) \cite{gmres}. Moreover, diffusion-based acceleration techniques like diffusion synthetic acceleration (DSA) \cite{alcouffe} and nonlinear diffusion acceleration (NDA) \cite{smithetall} are generally inefficient when tackling these problems, as they only accelerate up to the first moment of the angular flux \cite{JapanFPSA}. In fact, higher-order moments carry important information in problems with highly forward-peaked scattering and can be used to further accelerate convergence \cite{japanDiss}. This paper focuses on solution methods for the monoenergetic, steady-state transport equation in homogeneous slab geometry. Under these conditions, the transport equation is given by \begin{subequations}\label[pluraleq]{eq1} \begin{equation} \label{t1} \mu\frac{\partial}{\partial x} \psi(x,\mu) + \sigma_t \psi(x,\mu) = \int_{-1}^{1} d\mu' \sigma_s(\mu,\mu') \psi(x,\mu') + Q(x, \mu), \,\,\, x\in [0, X],-1\leq\mu\leq 1 ,\\ \end{equation} with boundary conditions \begin{align} \label{t2} \psi(0,\mu) &= \psi_L(\mu), \quad \mu > 0,\\ \label{t3} \psi(X,\mu) &= \psi_R(\mu), \quad \mu < 0. \end{align} \end{subequations} Here, $\psi(x,\mu)$ represents the angular flux at position $x$ and direction $\mu$, $\sigma_t$ is the macroscopic total cross section, $\sigma_s(\mu,\mu')$ is the differential scattering cross section, and $Q$ is an internal source. New innovations have paved the way to better solve this equation in systems with highly forward-peaked scattering. For instance, work has been done on modified $P_L$ equations and modified scattering cross section moments to accelerate convergence of anisotropic neutron transport problems \cite{khattab}. In order to speed up the convergence of radiative transfer in clouds, a quasi-diffusion method has been developed \cite{aristova}. In addition, the DSA-multigrid method was developed to solve problems in electron transport more efficiently \cite{trucksin}. One of the most recent convergence methods developed is Fokker-Planck Synthetic Acceleration (FPSA) \cite{JapanFPSA,japanDiss}. FPSA accelerates up to $N$ moments of the angular flux and has shown significant improvement in the convergence rate for the types of problems described above. The method returns a speed-up of several orders of magnitude with respect to wall-clock time when compared to DSA \cite{JapanFPSA}. In this paper, we introduce a new acceleration technique, called \textit{Nonlinear Fokker-Planck Acceleration} (NFPA). This method returns a modified Fokker-Planck (FP) equation that preserves the angular moments of the flux given by the transport equation. This preservation of moments is particularly appealing for applications to multiphysics problems \cite{multiphysics}, in which the coupling between the transport physics and the other physics can be done through the (lower-order) FP equation. To our knowledge, this is the first implementation of a numerical method that returns a Fokker-Planck-like equation that is discretely consistent with the linear Boltzmann equation. This paper is organized as follows. 
\Cref{sec2} starts with a brief description of FPSA. Then, we derive the NFPA scheme. In \cref{sec3}, we discuss the discretization schemes used in this work and present numerical results. These are compared against standard acceleration techniques. We conclude with a discussion in \cref{sec4}. \section{Fokker-Planck Acceleration}\label{sec2} \setcounter{equation}{0} In this section we briefly outline the theory behind FPSA, describe NFPA for monoenergetic, steady-state transport problems in slab geometry, and present the numerical methodology behind NFPA. The theory given here can be easily extended to higher-dimensional problems. Moreover, extending the method to energy-dependence shall not lead to significant additional theoretical difficulties. To solve the transport problem given by \cref{eq1} we approximate the in-scattering term in \cref{t1} with a Legendre moment expansion: \begin{equation} \label{transport1} \mu\frac{\partial}{\partial x} \psi(x,\mu) + \sigma_t \psi(x,\mu) = \sum_{l=0}^L \frac{(2l+1)}{2} P_l(\mu) \sigma_{s,l} \phi_l(x) + Q(x, \mu), \end{equation} with \begin{equation} \label{transport2} \phi_l(x) = \int_{-1}^{1} d\mu P_l(\mu) \psi(x,\mu). \end{equation} Here, $\phi_l$ is the $l^{th}$ Legendre moment of the angular flux, $ \sigma_{s,l}$ is the $l^{th}$ Legendre coefficient of the differential scattering cross section, and $P_l$ is the $l^{th}$-order Legendre polynomial. For simplicity, we will drop the notation $(x,\mu)$ in the remainder of this section. The solution to \cref{transport1} converges asymptotically to the solution of the following Fokker-Planck equation in the forward-peaked limit \cite{pomraning1}: \begin{equation} \label{fp1} \mu\frac{\partial \psi}{\partial x} + \sigma_a \psi = \frac{\sigma_{tr}}{2}\frac{\partial }{\partial \mu} (1-\mu^2) \frac{\partial \psi}{\partial \mu} + Q\,, \end{equation} where $\sigma_{tr}= \sigma_{s,0} -\sigma_{s,1}$ is the momentum transfer cross section and $\sigma_a = \sigma_t-\sigma_{s,0}$ is the macroscopic absorption cross section. Source Iteration \cite{adamslarsen} is generally used to solve \cref{transport1}, which can be rewritten in operator notation: \begin{equation} \label{si1} \mathcal{L} \psi^{m+1} = \mathcal{S} \psi^{m} + Q\,, \end{equation} where \begin{equation} \mathcal{L} = \mu \frac{\partial}{\partial x} + \sigma_t, \quad \mathcal{S} = \sum_{l=0}^L \frac{(2l+1)}{2} P_l(\mu) \sigma_{s,l} \int_{-1}^{1}d\mu P_l(\mu) , \label{trans1} \end{equation} and $m$ is the iteration index. This equation is solved iteratively until a tolerance criterion is met. The FP approximation shown in \cref{fp1} can be used to accelerate the convergence of \cref{transport1}. \subsection{FPSA: Fokker-Planck Synthetic Acceleration}\label{FPSA} In the FPSA scheme \cite{JapanFPSA,japanDiss}, the FP approximation is used as a preconditioner to synthetically accelerate convergence when solving \cref{transport1} (cf. \cite{adamslarsen} for a detailed description of synthetic acceleration). When solving \cref{si1}, the angular flux at each iteration $m$ has an error associated with it. FPSA systematically follows a predict, correct, iterate scheme. A transport sweep, one iteration in \cref{si1}, is made for a prediction. The FP approximation is used to correct the error in the prediction, and this iteration is performed until a convergence criterion is met. 
The equations used are: \begin{subequations} \label{fpsaeq} \begin{align} \label{predict} \mathrm{Predict}&: \mathcal{L} \psi^{m+\frac{1}{2}} = \mathcal{S} \psi^{m} + Q\,,\\ \label{correct} \mathrm{Correct}&: \psi^{m+1} = \psi^{m+\frac{1}{2}} + \mathcal{P}^{-1} \mathcal{S} \left( \psi^{m+\frac{1}{2}} - \psi^{m}\right), \end{align} \end{subequations} where we define $\mathcal{P}$ as \begin{equation} \label{FPSAsi1} \mathcal{P} = \mathcal{A}-\mathcal{F} =\underbrace{\left(\mu\frac{\partial}{\partial x} + \sigma_a\right)}_\mathcal{A} - \underbrace{\left(\frac{\sigma_{tr}}{2}\frac{\partial }{\partial \mu} (1-\mu^2) \frac{\partial }{\partial \mu}\right)}_\mathcal{F}. \end{equation} In this synthetic acceleration method, the FP approximation is used to correct the error in each iteration of the high-order (HO) equation (\ref{predict}). Therefore, there is no consistency between the angular moments of the flux in the HO and low-order (LO) equations. \subsection{NFPA: Nonlinear Fokker-Planck Acceleration}\label{NFPA} Similar to FPSA, NFPA uses the FP approximation to accelerate the convergence of the solution. We introduce the additive term $\hat{D}_F$ to \cref{fp1}, obtaining the modified FP equation \begin{equation} \label{mfp1} \mu\frac{\partial \psi}{\partial x} + \sigma_a \psi = \frac{\sigma_{tr}}{2}\frac{\partial }{\partial \mu} (1-\mu^2) \frac{\partial \psi}{\partial \mu} + \hat{D}_F + Q\,. \end{equation} The role of $\hat{D}_F$ is to force the transport and modified FP equations to be consistent. Subtracting \cref{mfp1} from \cref{transport1} and rearranging, we obtain the consistency term \begin{equation} \label{dfp} \hat{D}_F = \sum_{l=0}^L \frac{(2l+1)}{2} P_l \sigma_{s,l} \phi_l - \frac{\sigma_{tr}}{2}\frac{\partial}{\partial \mu} (1-\mu^2) \frac{\partial \psi}{\partial \mu} - \sigma_{s,0} \psi\,. \end{equation} The NFPA method is given by the following equations: \begin{subequations}\label[pluraleq]{holocons} \begin{align} \label{HO1} \text{HO}&: \mu\frac{\partial \psi_{HO}}{\partial x} + \sigma_t \psi_{HO} = \sum_{l=0}^L \frac{(2l+1)}{2} P_l \sigma_{s,l} \phi_{l, LO} + Q\,,\\ \label{LO11} \text{LO}&: \mu\frac{\partial \psi_{LO}}{\partial x} + \sigma_a \psi_{LO} = \frac{\sigma_{tr}}{2}\frac{\partial }{\partial \mu} (1-\mu^2) \frac{\partial \psi_{LO}}{\partial \mu} + \hat{D}_F + Q\,,\\ \label{con1} \text{Consistency term}&: \hat{D}_F = \sum_{l=0}^L \frac{(2l+1)}{2} P_l \sigma_{s,l} \phi_{l, HO} - \frac{\sigma_{tr}}{2}\frac{\partial }{\partial \mu} (1-\mu^2) \frac{\partial \psi_{HO}}{\partial \mu} - \sigma_{s,0} \psi_{HO}\,, \end{align} \end{subequations} where $\psi_{HO}$ is the angular flux obtained from the HO equation and $\psi_{LO}$ is the angular flux obtained from the LO equation. The nonlinear HOLO-plus-consistency system given by \cref{holocons} can be solved using any nonlinear solution technique \cite{kelley}. Note that the NFPA scheme returns an FP equation that is consistent with HO transport. Moreover, this modified FP equation accounts for large-angle scattering, which the standard FP equation does not. The LO equation (\ref{LO11}) can then be integrated into multiphysics models in a similar fashion to standard HOLO schemes \cite{patelFBR}.
To solve the HOLO-plus-consistency system above, we use Picard iteration \cite{kelley}: \begin{subequations} \begin{align} \label{H1} \text{Transport Sweep for HO}&: \mathcal{L} \psi_{HO}^{k+1} = \mathcal{S} \psi_{LO}^{k} + Q, \\ \label{L1} \text{Evaluate Consistency Term}&: \hat{D}_F^{k+1} = \left(\mathcal{S} - \mathcal{F} - \sigma_{s,0}\mathcal{I}\right) \psi_{HO}^{k+1}, \\ \label{c1} \text{Solve LO Equation}&: \psi_{LO}^{k+1} = \mathcal{P}^{-1} \left(\hat{D}_F^{k+1} + Q\right), \end{align} \end{subequations} where $\mathcal{L}$ and $\mathcal{S}$ are given in \cref{trans1}, $\mathcal{P}$ and $\mathcal{F}$ are given in \cref{FPSAsi1}, $\mathcal{I}$ is the identity operator, and $k$ is the iteration index. Iteration is done until a convergence criterion is met. The main advantage of setting up the LO equation in this fashion is that the stiffness matrix for LO needs to be setup and inverted \textit{only once}, just as with FPSA \cite{JapanFPSA, japanDiss}. This has a large impact on the method's performance. A flowchart of this algorithm is shown in \cref{Nalgorithm}. \begin{figure}[H] \centering \begin{tikzpicture}[node distance = 3cm, auto] \node [block] (init) {Initial guess of flux moments}; \node [cloud_HO, right of=init, node distance=4cm] (HOm) {HO}; \node [cloud_LO, below of=HOm, node distance=2cm] (LOm) {LO}; \node [HO, below of=init] (transport) {One sweep in transport equation}; \node [decision, below of=transport,node distance=4cm] (decide) {Flux moments converged?}; \node [LO, left of=decide, node distance=4cm] (dterm) {Solve for consistency term}; \node [LO, left of=dterm, node distance=3cm] (MFP) {Solve for FP angular flux}; \node [LO, above of=MFP, node distance=4cm] (moments) {Convert angular flux to moments}; \node [block, right of=decide, node distance=4cm] (stop) {Stop}; \path [line] (init) -- (transport); \path [line] (transport) -- (decide); \path [line] (decide) -- node {no} (dterm); \path [line] (dterm) -- (MFP); \path [line] (MFP) -- (moments); \path [line] (moments) -- (transport); \path [line] (decide) -- node {yes}(stop); \end{tikzpicture} \caption{NFPA algorithm} \label{Nalgorithm} \end{figure} \section{Numerical Experiments}\label{sec3} In \cref{sec31} we describe the discretization methods used to implement the algorithms. In \cref{sec32} we provide numerical results for 2 different choices of source $Q$ and boundary conditions. For each choice we solve the problem using 3 different scattering kernels, applying 3 different choices of parameters for each kernel. We provide NFPA numerical results for these 18 cases and compare them against those obtained from FPSA and other standard methods. All numerical experiments were performed using MATLAB. Runtime was tracked using the tic-toc functionality \cite{matlab17}, with only the solver runtime being taken into consideration in the comparisons. A 2017 MacBook Pro with a 2.8 GHz Quad-Core Intel Core i7 and 16 GB of RAM was used for all simulations. \subsection{Discretization}\label{sec31} The Transport and FP equations were discretized using linear discontinuous finite element discretization in space \cite{mpd1}, and discrete ordinates (S$_N$) in angle \cite{landm}. The Fokker-Planck operator $\mathcal{F}$ was discretized using moment preserving discretization (MPD) \cite{mpd1}. Details of the derivation of the linear discontinuous finite element discretization can be seen in \cite{japanDiss,martin}. The finite element discretization for the Fokker-Planck equation follows the same derivation. 
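Once the operators above have been discretized as just described, the NFPA Picard iteration of \cref{NFPA} can be organized as in the following minimal sketch. This is an illustration only, written in Python/NumPy; it is not the MATLAB implementation used for the results in \cref{sec3}, all variable names are ours, the operators are assumed to have already been assembled as dense matrices acting on the discretized angular flux, and the convergence test is deliberately simplified relative to the criterion discussed in \cref{sec32}.
\begin{verbatim}
import numpy as np

def nfpa_solve(L_op, S, F, sigma_s0, Q, tol=1e-8, max_iter=100000):
    # Minimal sketch of the NFPA Picard iteration (illustration only).
    # L_op : discrete streaming-plus-collision operator (mu d/dx + sigma_t)
    # S    : discrete scattering operator (Legendre expansion)
    # F    : discrete Fokker-Planck operator (MPD)
    # Q    : discrete source vector
    n = Q.size
    A = L_op - sigma_s0 * np.eye(n)      # mu d/dx + sigma_a
    P = A - F                            # LO operator
    P_inv = np.linalg.inv(P)             # set up and inverted ONCE
    L_inv = np.linalg.inv(L_op)          # stands in for a transport sweep

    psi_lo = np.zeros(n)
    for k in range(max_iter):
        psi_ho = L_inv @ (S @ psi_lo + Q)                       # HO sweep
        D_F = S @ psi_ho - F @ psi_ho - sigma_s0 * psi_ho       # consistency term
        psi_new = P_inv @ (D_F + Q)                             # LO solve
        if np.linalg.norm(psi_new - psi_lo) <= tol * np.linalg.norm(psi_new):
            return psi_new           # simplified test; cf. the criterion in Sec. 3.2
        psi_lo = psi_new
    return psi_lo
\end{verbatim}
In a practical implementation the dense inverses would be replaced by a sparse factorization of $\mathcal{P}$ and by an actual transport sweep, but the structure of the loop (one sweep, one evaluation of $\hat{D}_F$, one back-substitution with the prefactorized LO operator) is unchanged.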
A brief review of the angular discretization used for the FP equation is given below. First, we use Gauss-Legendre quadrature to discretize the FP equation in angle: \begin{equation} \mu_n\frac{\partial \psi_n(x)}{\partial x} + \sigma_a \psi_n(x) - \frac{\sigma_{tr}}{2}\nabla^2_n \psi_n(x) = Q_n(x), \end{equation} for $n=1,\ldots,N$. Here, the $\nabla^2_n$ term is the discrete form of the angular Laplacian operator evaluated at angle $n$. The MPD scheme is then given by \begin{equation} \nabla^2_n \psi_n = M \psi_n = V^{-1} L V \psi_n, \end{equation} where $M$ is the MPD discretized operator defined by \begin{subequations} \begin{equation} V_{i,j} = P_{i-1}(\mu_j)w_j, \end{equation} and \begin{equation} L_{i,j} = -i(i-1), \end{equation} \end{subequations} for $i,j=1,\ldots,N$. Here, $P_{i-1}(\mu_j)$ are the Legendre polynomials evaluated at the quadrature angles $\mu_j$, and $w_j$ are the corresponding quadrature weights. $M$ is an $(N \times N)$ operator acting on the vector of $N$ angular fluxes $\psi(x)$ at spatial location $x$. In summary, if we write the FP equation as \begin{equation} \mathcal{H} \frac{\partial \psi}{\partial x}(x) + \sigma_a \psi(x) - \mathcal{F} \psi(x) = Q(x), \end{equation} then $\mathcal{H} = \mathrm{diag}(\mu_n)$ for $n=1,\ldots,N$, $Q(x)$ is a vector of source terms $Q_n(x)$, and $\mathcal{F}$ is represented by $\frac{\sigma_{tr}}{2}M$. \subsection{Numerical Results}\label{sec32} It has been shown that, for slowly converging problems, convergence criteria based only on the difference between successive iterates (such as the $L_\infty$ norm of that difference) can suffer from false convergence \cite{adamslarsen}. To work around this issue, the criterion is modified to use information from the current and previous iterations: \begin{equation} \label{falseconverge} \frac{|| \phi^{m}_0(x) - \phi^{m-1}_0(x) ||_2}{1-\frac{|| \phi^{m+1}_0(x) - \phi^{m}_0(x) ||_2}{|| \phi^{m}_0(x) - \phi^{m-1}_0(x) ||_2}} < 10^{-8}. \end{equation} Two problems were tested using 200 spatial cells, $X$ = 400, $\sigma_a = 0$, $L$ = 15, and $N$ = 16. Problem 1 has vacuum boundaries and a homogeneous isotropic source $Q$ for $0 < x < X$. Problem 2 has no internal source and an incoming beam at the left boundary. The source and boundary conditions used are shown in \cref{parameters}. \begin{table}[H] \begin{center} \scalebox{0.9}{ \begin{tabular}{c | c | c} \hline & Problem 1 & Problem 2 \\ \hline \hline Q(x) & 0.5 & 0 \\ $\psi_L$ & 0 & $\delta(\mu - \mu_N)$ \\ $\psi_R$ & 0 & 0 \\ \end{tabular}} \end{center} \caption{Problem Parameters} \label{parameters} \end{table} We consider three scattering kernels in this paper: Screened Rutherford \cite{pomraning1}, Exponential \cite{pomraning2}, and Henyey-Greenstein \cite{HGK}. Three cases for each kernel were tested. The results obtained with NFPA are compared with those obtained using GMRES, DSA, and FPSA with the MPD scheme. \subsubsection{SRK: Screened Rutherford Kernel} The Screened Rutherford Kernel \cite{pomraning1, JapanFPSA} is widely used for modeling the scattering behavior of electrons \cite{SRK}. The kernel depends on the parameter $\eta$, such that \begin{equation} \sigma^{SRK}_{s,l} = \sigma_s \int_{-1}^{1} d\mu P_l(\mu) \frac{\eta (\eta+1)}{(1+2\eta-\mu)^2}. \end{equation} The SRK has a valid FP limit as $\eta$ approaches 0 \cite{patelFBR}. Three different values of $\eta$ were used to generate the scattering kernels shown in \cref{SRK}. GMRES, DSA, FPSA, and NFPA all converged to the same solution for problems 1 and 2. \Cref{SRK_plots} shows the solutions for SRK with $\eta = 10^{-7}$.
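As an illustration of the quantities defined above, the following sketch (Python/NumPy, ours, not the authors' MATLAB implementation) assembles the MPD operator $M = V^{-1} L V$ on a Gauss-Legendre quadrature set and evaluates the SRK moments $\sigma^{SRK}_{s,l}$ with the same quadrature; the variable names and the choice of quadrature routines are assumptions made for the example.
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def legendre_P(l, x):
    c = np.zeros(l + 1); c[l] = 1.0
    return legval(x, c)                      # P_l evaluated at x

N = 16
mu, w = leggauss(N)                          # S_N nodes and weights on [-1, 1]

# MPD operator: V_ij = P_{i-1}(mu_j) w_j,  L_ii = -i(i-1),  M = V^{-1} L V
V = np.array([[legendre_P(i - 1, mu[j]) * w[j] for j in range(N)]
              for i in range(1, N + 1)])
Lmat = np.diag([-float(i * (i - 1)) for i in range(1, N + 1)])
M = np.linalg.solve(V, Lmat @ V)

# Screened Rutherford moments evaluated with the same quadrature
def srk_moments(sigma_s, eta, L_max):
    kern = eta * (eta + 1.0) / (1.0 + 2.0 * eta - mu) ** 2
    return np.array([sigma_s * np.dot(w, legendre_P(l, mu) * kern)
                     for l in range(L_max + 1)])

sig = srk_moments(sigma_s=1.0, eta=1.0e-5, L_max=15)
sigma_tr = sig[0] - sig[1]                   # momentum transfer cross section
\end{verbatim}
The discrete operator $\frac{\sigma_{tr}}{2}M$ then plays the role of $\mathcal{F}$ in the iteration sketched earlier.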
\begin{figure}[t] \begin{center} \includegraphics[scale=0.1,angle=0]{SRK.jpg} \caption{Screened Rutherford Kernels} \label{SRK} \end{center} \end{figure} \begin{figure}[H] \centering \subfloat[Problem 1]{{\includegraphics[width=7cm]{s7_iso.jpg} }} \qquad \subfloat[Problem 2]{{\includegraphics[width=7cm]{s7_beam.jpg} }} \caption{Results for SRK Problems with $\eta = 10^{-7}$} \label{SRK_plots} \end{figure} \begin{table}[H] \begin{center} \scalebox{0.8}{ \begin{tabular}{c || c || c || c} \hline Parameter & Solver & Runtime (s) & Iterations \\ \hline \hline \multirow{4}{*}{$\eta = 10^{-5}$} & GMRES & 98.8 & 12 \\ & DSA & 2380 & 53585 \\ & FPSA & 1.21 & 26 \\ & NFPA & 1.39 & 26 \\ \hline \multirow{4}{*}{$\eta = 10^{-6}$} & GMRES & 208 & 84 \\ & DSA & 3040 & 69156 \\ & FPSA & 0.747 & 16 \\ & NFPA & 0.857 & 16 \\ \hline \multirow{4}{*}{$\eta = 10^{-7}$} & GMRES & 174 & 124 \\ & DSA & 3270 & 73940 \\ & FPSA & 0.475 & 10 \\ & NFPA & 0.542 & 10 \\ \hline \end{tabular}} \end{center} \caption{Runtime and Iteration Counts for Problem 1 with SRK} \label{SRKresults1} \end{table} \begin{table}[H] \begin{center} \scalebox{0.8}{ \begin{tabular}{c || c || c || c} \hline Parameter & Solver & Runtime (s) & Iterations \\ \hline \hline \multirow{4}{*}{$\eta = 10^{-5}$} & GMRES & 52.4 & 187 \\ & DSA & 1107 & 25072 \\ & FPSA & 0.953 & 20 \\ & NFPA & 1.14 & 20 \\ \hline \multirow{4}{*}{$\eta = 10^{-6}$} & GMRES & 108 & 71 \\ & DSA & 1434 & 32562 \\ & FPSA & 0.730 & 14 \\ & NFPA & 0.857 & 14 \\ \hline \multirow{4}{*}{$\eta = 10^{-7}$} & GMRES & 94.1 & 185 \\ & DSA & 1470 & 33246 \\ & FPSA & 0.438 & 8 \\ & NFPA & 0.484 & 8 \\ \hline \end{tabular}} \end{center} \caption{Runtime and Iteration Counts for Problem 2 with SRK} \label{SRKresults2} \end{table} The results of all solvers are shown in \cref{SRKresults1,SRKresults2}. We see that NFPA and FPSA tremendously outperform GMRES and DSA in runtime for all cases. FPSA is a simpler method than NFPA, requiring less calculations per iteration; therefore, it is expected that it outperforms NFPA in runtime. We see a reduction in runtime and iterations for FPSA and NFPA as the FP limit is approached, with DSA and GMRES requiring many more iterations by comparison as $\eta$ approaches 0. An advantage that NFPA offers is that the angular moments of the flux in the LO equation will remain consistent with those of the transport equation even as a problem becomes less forward-peaked. On the other hand, the moments found using only the FP equation and source iteration lose accuracy. To illustrate this, Problem 1 was tested using different Screened Rutherford Kernels with increasing $\eta$ parameters. The percent errors (relative to the transport solution) for the scalar flux obtained with the LO equation and with the standard FP equation at the center of the slab are shown in \cref{momcomp}. It can be seen that the percent relative errors in the scalar flux of the FP solution is orders of magnitude larger than the error produced using the LO equation. The same trend can be seen when using the exponential and Henyey-Greenstein kernels. \begin{figure}[H] \begin{center} \includegraphics[scale=0.15,angle=0]{relerrorlog.jpg} \caption{Log Scale of $\%$ Relative Error vs $\eta$ for Problem 1 at the Center of the Slab with SRK} \label{momcomp} \end{center} \end{figure} \subsubsection{EK: Exponential Kernel} The exponential kernel \cite{pomraning2, JapanFPSA} is a fictitious kernel made for problems that have a valid Fokker-Planck limit \cite{pomraning1}. 
The zero$^{\text{th}}$ moment, $\sigma^{EK}_{s,0}$, is chosen arbitrarily; we define $\sigma^{EK}_{s,0}$ as the same zero$^{\text{th}}$ moment from the SRK. The $\Delta$ parameter determines the kernel: the first and second moments are given by \begin{subequations} \begin{align} \sigma^{EK}_{s,1} &= \sigma^{EK}_{s,0} (1-\Delta),\\ \sigma^{EK}_{s,2} &= \sigma^{EK}_{s,0} (1-3\Delta+3\Delta^2), \end{align} and the relationship for $l\geq 3$ is \begin{equation} \sigma^{EK}_{s,l} = \sigma^{EK}_{s,l-2} - \Delta(2l+1) \sigma^{EK}_{s,l-1}. \end{equation} \end{subequations} As $\Delta$ is reduced, the scattering kernel becomes more forward-peaked. The EK has a valid FP limit as $\Delta$ approaches 0 \cite{patelFBR}. Three different values of $\Delta$ were used to generate the scattering kernels shown in \cref{EXP}. The generated scattering kernels are shown in \cref{EXP}. GMRES, DSA, FPSA, and NFPA all converged to the same solution for problems 1 and 2. \Cref{EK_plots} shows the solutions for EK with $\Delta = 10^{-7}$. \begin{figure}[t] \begin{center} \includegraphics[scale=0.1,angle=0]{EXP.jpg} \caption{Exponential Kernels} \label{EXP} \end{center} \end{figure} \begin{figure}[H] \centering \subfloat[Problem 1]{{\includegraphics[width=7cm]{dta7_iso.jpg} }} \qquad \subfloat[Problem 2]{{\includegraphics[width=7cm]{dta7_beam.jpg} }} \caption{Results for EK Problems with $\Delta = 10^{-7}$} \label{EK_plots} \end{figure} The runtimes and iterations for GMRES, DSA, FPSA, and NFPA are shown in \cref{Expresults1,Expresults2}. We see a similar trend with the EK as seen with SRK. Smaller $\Delta$ values lead to a reduction in runtime and iterations for NFPA and FPSA, which greatly outperform DSA and GMRES in both categories. \begin{table}[h] \begin{center} \scalebox{0.8}{ \begin{tabular}{c || c || c || c} \hline Parameter & Solver & Runtime (s) & Iterations \\ \hline \hline \multirow{4}{*}{$\Delta = 10^{-5}$} & GMRES & 196 & 142 \\ & DSA & 3110 & 70140 \\ & FPSA & 0.514 & 11 \\ & NFPA & 0.630 & 11 \\\hline \multirow{4}{*}{$\Delta = 10^{-6}$} & GMRES & 156 & 132 \\ & DSA & 3120 & 70758 \\ & FPSA & 0.388 & 7 \\ & NFPA & 0.393 & 7 \\ \hline \multirow{4}{*}{$\Delta = 10^{-7}$} & GMRES & 81 & 127 \\ & DSA & 3120 & 70851 \\ & FPSA & 0.292 & 6 \\ & NFPA & 0.318 & 6 \\ \hline \end{tabular}} \end{center} \caption{Runtime and Iteration Counts for Problem 1 with EK} \label{Expresults1} \end{table} \begin{table}[h] \begin{center} \scalebox{0.8}{ \begin{tabular}{c || c || c || c} \hline Parameter & Solver & Runtime (s) & Iterations \\ \hline \hline \multirow{4}{*}{$\Delta = 10^{-5}$} & GMRES & 110 & 73 \\ & DSA & 1455 & 33033 \\ & FPSA & 0.492 & 10 \\ & NFPA & 0.613 & 10 \\ \hline \multirow{4}{*}{$\Delta = 10^{-6}$} & GMRES & 82.7 & 79 \\ & DSA & 1470 & 33309 \\ & FPSA & 0.358 & 7 \\ & NFPA & 0.431 & 7 \\ \hline \multirow{4}{*}{$\Delta = 10^{-7}$} & GMRES & 56.8 & 90 \\ & DSA & 1470 & 33339 \\ & FPSA & 0.273 & 5 \\ & NFPA & 0.319 & 5 \\ \hline \end{tabular}} \end{center} \caption{Runtime and Iteration Counts for Problem 2 with EK} \label{Expresults2} \end{table} \subsubsection{HGK: Henyey-Greenstein Kernel} The Henyey-Greenstein Kernel \cite{HGK,JapanFPSA} is most commonly used in light transport in clouds. It relies on the anisotropy factor $g$, such that \begin{equation} \sigma^{HGK}_{s,l} = \sigma_s g^l. \end{equation} As $g$ goes from zero to unity, the scattering shifts from isotropic to highly anisotropic. 
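For reference, the EK recursion and the HGK moments defined above can be generated directly from the stated relations; the short sketch below (Python/NumPy, ours, for illustration only) does exactly that.
\begin{verbatim}
import numpy as np

def ek_moments(sigma_s0, delta, L_max):
    # Exponential-kernel moments from the relations above
    s = np.zeros(L_max + 1)
    s[0] = sigma_s0
    if L_max >= 1:
        s[1] = sigma_s0 * (1.0 - delta)
    if L_max >= 2:
        s[2] = sigma_s0 * (1.0 - 3.0 * delta + 3.0 * delta ** 2)
    for l in range(3, L_max + 1):
        s[l] = s[l - 2] - delta * (2 * l + 1) * s[l - 1]
    return s

def hgk_moments(sigma_s, g, L_max):
    # Henyey-Greenstein moments: sigma_{s,l} = sigma_s * g**l
    return sigma_s * g ** np.arange(L_max + 1)
\end{verbatim}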
\begin{figure}[H] \begin{center} \includegraphics[scale=0.1,angle=0]{HGK.jpg} \caption{Henyey-Greenstein Kernels} \label{HGK} \end{center} \end{figure} \begin{figure}[H] \centering \subfloat[Problem 1]{{\includegraphics[width=7cm]{g099_iso.jpg} }} \qquad \subfloat[Problem 2]{{\includegraphics[width=7cm]{g099_beam.jpg} }} \caption{Results for HGK Problems with $g = 0.99$} \label{HGK_plots} \end{figure} The HGK does not have a valid FP limit \cite{patelFBR}. The three kernels tested are shown in \cref{HGK}. GMRES, DSA, FPSA, and NFPA all converged to the same solution for problems 1 and 2. \Cref{HGK_plots} shows the solutions for HGK with $g = 0.99$. The results of each solver are shown in \cref{HGKresults1,HGKresults2}. \begin{table}[h] \begin{center} \scalebox{0.8}{ \begin{tabular}{c || c || c || c} \hline Parameter & Solver & Runtime (s) & Iterations \\ \hline \hline \multirow{4}{*}{$g=0.9$} & GMRES & 9.88 & 76 \\ & DSA & 24.5 & 554 \\ & FPSA & 1.50 & 32 \\ & NFPA & 1.39 & 27 \\ \hline \multirow{4}{*}{$g=0.95$} & GMRES & 12.2 & 131 \\ & DSA & 47.7 & 1083 \\ & FPSA & 1.75 & 38 \\ & NFPA & 1.83 & 35 \\ \hline \multirow{4}{*}{$g=0.99$} & GMRES & 40.0 & 27 \\ & DSA & 243 & 5530 \\ & FPSA & 3.38 & 74 \\ & NFPA & 3.93 & 73 \\ \hline \end{tabular}} \end{center} \caption{Runtime and Iteration Counts for Problem 1 with HGK} \label{HGKresults1} \end{table} \begin{table}[h] \begin{center} \scalebox{0.8}{ \begin{tabular}{c || c || c || c} \hline Parameter & Solver & Runtime (s) & Iterations \\ \hline \hline \multirow{4}{*}{$g=0.9$} & GMRES & 24.3 & 135 \\ & DSA & 14.8 & 336 \\ & FPSA & 1.15 & 23 \\ & NFPA & 1.35 & 24 \\ \hline \multirow{4}{*}{$g=0.95$} & GMRES & 31.3 & 107 \\ & DSA & 29.7 & 675 \\ & FPSA & 1.56 & 32 \\ & NFPA & 1.90 & 33 \\ \hline \multirow{4}{*}{$g=0.99$} & GMRES & 41.4 & 126 \\ & DSA & 146 & 3345 \\ & FPSA & 3.31 & 67 \\ & NFPA & 3.99 & 67 \\ \hline \end{tabular}} \end{center} \caption{Runtime and Iteration Counts for Problem 2 with HGK} \label{HGKresults2} \end{table} Here we see that NFPA and FPSA do not perform as well compared to their results for the SRK and EK. Contrary to what happened in those cases, both solvers require more time and iterations as the problem becomes more anisotropic. This is somewhat expected, due to HGK not having a valid Fokker-Planck limit. However, both NFPA and FPSA continue to greatly outperform GMRES and DSA. Moreover, NFPA outperforms FPSA in iteration count for problem 1. \section{Discussion}\label{sec4} This paper introduced the Nonlinear Fokker-Planck Acceleration technique for steady-state, monoenergetic transport in homogeneous slab geometry. To our knowledge, this is the first nonlinear HOLO method that accelerates \textit{all $L$ moments} of the angular flux. Upon convergence, the LO and HO models are consistent; in other words, the (lower-order) modified Fokker-Planck equation \textit{preserves the same angular moments} of the flux obtained with the (higher-order) transport equation. NFPA was tested on a homogeneous medium with an isotropic internal source with vacuum boundaries, and in a homogeneous medium with no internal source and an incoming beam boundary. For both problems, three different scattering kernels were used. The runtime and iterations of NFPA and FPSA were shown to be similar. They both vastly outperformed DSA and GMRES for all cases by orders of magnitude. 
However, NFPA preserves the angular moments of the flux in both the HO and LO equations, which makes the LO model well suited for integration into multiphysics models. In the future, we intend to test NFPA's capabilities on a variety of multiphysics problems and analyze its performance. To apply NFPA to more realistic problems, it needs to be extended to include time and energy dependence. Additionally, the method needs to be adapted to multidimensional geometries. Finally, for the NFPA method to become mathematically ``complete'', a full convergence examination using Fourier analysis must be performed. However, this is beyond the scope of this paper and must be left for future work. \section*{Acknowledgements} The authors acknowledge support under award number NRC-HQ-84-15-G-0024 from the Nuclear Regulatory Commission. The statements, findings, conclusions, and recommendations are those of the authors and do not necessarily reflect the view of the U.S. Nuclear Regulatory Commission. J.~K. Patel would like to thank Dr.~James Warsa for his wonderful transport class at UNM, as well as his synthetic acceleration codes. The authors would also like to thank Dr.~Anil Prinja for discussions involving Fokker-Planck acceleration.
How do the runtimes and iteration counts of NFPA and FPSA compare to GMRES and DSA in the numerical experiments?
NFPA and FPSA greatly outperform GMRES and DSA.
3,996
multifieldqa_en
4k
Paper Info Title: Crossed Nonlinear Dynamical Hall Effect in Twisted Bilayers Publish Date: 17 Mar 2023 Author List: Figure FIG. 1.(a) Schematics of experimental setup.(b, c) Valence band structure and intrinsic Hall conductivity with respect to in-plane input for tMoTe2 at twist angles (b) θ = 1.2 • and (c) θ = 2 • in +K valley.Color coding in (b) and (c) denotes the layer composition σ z n (k). FIG. 2. (a) The interlayer BCP G, and (b) its vorticity [∂ k × G]z on the first valence band from +K valley of 1.2 • tMoTe2.Background color and arrows in (a) denote the magnitude and vector flow, respectively.Grey curves in (b) show energy contours at 1/2 and 3/4 of the band width.The black dashed arrow denotes direction of increasing hole doping level.Black dashed hexagons in (a, b) denote the boundary of moiré Brillouin zone (mBZ). FIG. 3. (a-c) Three high-symmetry stacking registries for tBG with a commensurate twist angle θ = 21.8 • .Lattice geometries with rotation center on an overlapping atomic site (a, b) and hexagonal center (c).(d) Schematic of the moiré pattern when the twist angle slightly deviates from 21.8 • , here θ = 21 • .Red squares marked by A, B and C are the local regions that resemble commensurate 21.8 • patterns in (a), (b) and (c), respectively.(e, f) Low-energy band structures and intrinsic Hall conductivity of the two geometries [(a) and (b) are equivalent].The shaded areas highlight energy windows ∼ ω around band degeneracies where interband transitions, not considered here, may quantitatively affect the conductivity measured. FIG. S4.Band structure and layer composition σ z n in +K valley of tBG (left panel) and the intrinsic Hall conductivity (right panel) at three different twist angle θ.The shaded areas highlight energy windows ∼ ω around band degeneracies in which the conductivity results should not be considered.Here σH should be multiplied by a factor of 2 accounting for spin degeneracy. abstract We propose an unconventional nonlinear dynamical Hall effect characteristic of twisted bilayers. The joint action of in-plane and out-of-plane ac electric fields generates Hall currents j ∼ Ė⊥ × E in both sum and difference frequencies, and when the two orthogonal fields have common frequency their phase difference controls the on/off, direction and magnitude of the rectified dc Hall current. This novel intrinsic Hall response has a band geometric origin in the momentum space curl of interlayer Berry connection polarizability, arising from layer hybridization of electrons by the twisted interlayer coupling. The effect allows a unique rectification functionality and a transport probe of chiral symmetry in bilayer systems. We show sizable effects in twisted homobilayer transition metal dichalcogenides and twisted bilayer graphene over broad range of twist angles. Nonlinear Hall-type response to an in-plane electric field in a two dimensional (2D) system with time reversal symmetry has attracted marked interests . Intensive studies have been devoted to uncovering new types of nonlinear Hall transport induced by quantum geometry and their applications such as terahertz rectification and magnetic information readout . Restricted by symmetry , the known mechanisms of nonlinear Hall response in quasi-2D nonmagnetic materials are all of extrinsic nature, sensitive to fine details of disorders , which have limited their utilization for practical applications. 
Moreover, having a single driving field only, the effect has not unleashed the full potential of nonlinearity for enabling controlled gate in logic operation, where separable inputs (i.e., in orthogonal directions) are desirable. The latter, in the context of Hall effect, calls for control by both out-of-plane and in-plane electric fields. A strategy to introduce quantum geometric response to out-of-plane field in quasi-2D geometry is made possible in van der Waals (vdW) layered structures with twisted stacking . Taking homobilayer as an example, electrons have an active layer degree of freedom that is associated with an out-of-plane electric dipole , whereas interlayer quantum tunneling rotates this pseudospin about in-plane axes that are of topologically nontrivial textures in the twisted landscapes . Such layer pseudospin structures can underlie novel quantum geometric properties when coupled with out-ofplane field. Recent studies have found layer circular photogalvanic effect and layer-contrasted time-reversaleven Hall effect , arising from band geometric quantities. In this work we unveil a new type of nonlinear Hall effect in time-reversal symmetric twisted bilayers, where an intrinsic Hall current emerges under the combined action of an in-plane electric field E and an out-of-plane ac field E ⊥ (t): j ∼ Ė⊥ × E [see Fig. ]. Having the two driving fields (inputs) and the current response (output) all orthogonal to each other, the effect is dubbed as the crossed nonlinear dynamical Hall effect. This is also the first nonlinear Hall contribution of an intrinsic nature in nonmagnetic materials without external magnetic field, determined solely by the band structures, not relying on extrinsic factors such as disorders and relaxation times. The effect arises from the interlayer hybridization of electronic states under the chiral crystal symmetry characteristic of twisted bilayers, and has a novel band geometric origin in the momentum space curl of interlayer Berry connection polarizability (BCP). Having two driving fields of the same frequency, a dc Hall current develops, whose on/off, direction and magnitude can all be controlled by the phase difference of the two fields, which does not affect the magnitude of the double-frequency component. Such a characteristic tunability renders this effect a unique approach to rectification and transport probe of chiral bilayers. As examples, we show sizable effects in small angle twisted transition metal dichalcogenides (tTMDs) and twisted bilayer graphene (tBG), as well as tBG of large angles where Umklapp interlayer tunneling dominates. Geometric origin of the effect. A bilayer system couples to in-plane and out-of-plane driving electric fields in completely different ways. The in-plane field couples to the 2D crystal momentum, leading to Berry-phase effects in the 2D momentum space . In comparison, the outof-plane field is coupled to the interlayer dipole moment p in the form of −E ⊥ p, where p = ed 0 σz with σz as the Pauli matrix in the layer index subspace and d 0 the interlayer distance. When the system has a more than twofold rotational axis in the z direction, as in tBG and tTMDs, any in-plane current driven by the out-of-plane field alone is forbidden. It also prohibits the off-diagonal components of the symmetric part of the conductivity tensor σ ab = ∂j a /∂E ||,b with respect to the in-plane input and output. 
Since the antisymmetric part of σ ab is not allowed by the Onsager reciprocity in nonmagnetic systems, all the off-diagonal components of σ ab is forbidden, irrespective of the order of out-of-plane field. On the other hand, as we will show, an in-plane Hall conductivity σ xy = −σ yx can still be driven by the product of an in-plane field and the time variation rate of an outof-plane ac field, which is a characteristic effect of chiral bilayers. To account for the effect, we make use of the semiclassical theory . The velocity of an electron in a bilayer system is given by where k is the 2D crystal momentum. Here and hereafter we suppress the band index for simplicity, unless otherwise noted. The three contributions in this equation come from the band velocity, the anomalous velocities induced by the k -space Berry curvature Ω k and by the hybrid Berry curvature Ω kE ⊥ in the (k, E ⊥ ) space. For the velocity at the order of interest, the k-space Berry curvature is corrected to the first order of the variation rate of out-of-plane field Ė⊥ as Here A = u k |i∂ k |u k is the unperturbed k-space Berry connection, with |u k being the cell-periodic part of the Bloch wave, whereas is its gauge invariant correction , which can be identified physically as an in-plane positional shift of an electron induced by the time evolution of the out-of-plane field. For a band with index n, we have whose numerator involves the interband matrix elements of the interlayer dipole and velocity operators, and ε n is the unperturbed band energy. Meanwhile, up to the first order of in-plane field, the hybrid Berry curvature reads Here A E || is the k-space Berry connection induced by E || field , which represents an intralayer positional shift and whose detailed expression is not needed for our purpose. and is its first order correction induced by the in-plane field. In addition, ε = ε + δε, where δε = eE • G Ė⊥ is the field-induced electron energy . Given that A E || is the E ⊥ -space counterpart of intralayer shift A E || , and that E ⊥ is conjugate to the interlayer dipole moment, we can pictorially interpret A E || as the interlayer shift induced by in-plane field. It indeed has the desired property of flipping sign under the horizontal mirror-plane reflection, hence is analogous to the so-called interlayer coordinate shift introduced in the study of layer circular photogalvanic effect , which is nothing but the E ⊥ -space counterpart of the shift vector well known in the nonlinear optical phenomenon of shift current. Therefore, the E ⊥ -space BCP eG/ can be understood as the interlayer BCP. This picture is further augmented by the connotation that the interlayer BCP is featured exclusively by interlayer-hybridized electronic states: According to Eq. ( ), if the state |u n is fully polarized in a specific layer around some momentum k, then G (k) is suppressed. With the velocity of individual electrons, the charge current density contributed by the electron system can be obtained from where [dk] is shorthand for n d 2 k/(2π) 2 , and the distribution function is taken to be the Fermi function f 0 as we focus on the intrinsic response. The band geometric contributions to ṙ lead to a Hall current where is intrinsic to the band structure. This band geometric quantity measures the k-space curl of the interlayer BCP over the occupied states, and hence is also a characteristic of layer-hybridized electronic states. Via an integration by parts, it becomes clear that χ int is a Fermi surface property. 
Since χ int is a time-reversal even pseudoscalar, it is invariant under rotation, but flips sign under space inversion, mirror reflection and rotoreflection symmetries. As such, χ int is allowed if and only if the system possesses a chiral crystal structure, which is the very case of twisted bilayers . Moreover, since twisted structures with opposite twist angles are mirror images of each other, whereas the mirror reflection flips the sign of χ int , the direction of Hall current can be reversed by reversing twist direction. Hall rectification and frequency doubling. This effect can be utilized for the rectification and frequency doubling of an in-plane ac input E = E 0 cos ωt, provided that the out-of-plane field has the same frequency, namely E ⊥ = E 0 ⊥ cos (ωt + ϕ). The phase difference ϕ between the two fields plays an important role in determining the Hall current, which takes the form of j = j 0 sin ϕ + j 2ω sin(2ωt + ϕ). ( Here ω is required to be below the threshold for direct interband transition in order to validate the semiclassical treatment, and σ H has the dimension of conductance and quantifies the Hall response with respect to the in-plane input. In experiment, the Hall output by the crossed nonlinear dynamic Hall effect can be distinguished readily from the conventional nonlinear Hall effect driven by in-plane field alone, as they are odd and even, respectively, in the inplane field. One notes that while the double-frequency component appears for any ϕ, the rectified output is allowed only if the two crossed driving fields are not in-phase or antiphase. Its on/off, chirality (right or left), and magnitude are all controlled by the phase difference of the two fields. Such a unique tunability provides not only a prominent experimental hallmark of this effect, but also a controllable route to Hall rectification. In addition, reversing the direction of the out-of-plane field switches that of the Hall current, which also serves as a control knob. Application to tTMDs. We now study the effect quantitatively in tTMDs, using tMoTe 2 as an example (see details of the continuum model in ). For illustrative purposes, we take ω/2π = 0.1 THz and E 0 ⊥ d 0 = 10 mV in what follows. Figures ) and (c) present the electronic band structures along with the layer composition σ z n (k) at twist angles θ = 1.2 • and θ = 2 • . In both cases, the energy spectra exhibit isolated narrow bands with strong layer hybridization. At θ = 1.2 • , the conductivity shows two peaks ∼ 0.1e 2 /h at low energies associated with the first two valence bands. The third band does not host any sizable conductivity signal. At higher hole-doping levels, a remarkable conductivity peak ∼ e 2 /h appears near the gap separating the fourth and fifth bands. At θ = 2 • , the conductivity shows smaller values, but the overall trends are similar: A peak ∼ O(0.01)e 2 /h appears at low energies, while larger responses ∼ O(0.1)e 2 /h can be spotted as the Fermi level decreases. One can understand the behaviors of σ H from the interlayer BCP in Eq. ( ). It favors band near-degeneracy regions in k -space made up of strongly layer hybridized electronic states. As such, the conductivity is most pro- nounced when the Fermi level is located around such regions, which directly accounts for the peaks of response in Fig. that [∂ k × G] z is negligible at lower energies, and it is dominated by positive values as the doping increases, thus the conductivity rises initially. 
When the doping level is higher, regions with [∂ k × G] z < 0 start to contribute, thus the conductivity decreases after reaching a maximum. Application to tBG. The second example is tBG. We focus on commensurate twist angles in the large angle limit in the main text , which possess moiré-lattice assisted strong interlayer tunneling via Umklapp processes . This case is appealing because the Umklapp interlayer tunneling is a manifestation of discrete translational symmetry of moiré superlattice, which is irrelevant at small twist angles and not captured by the continuum model but plays important roles in physical contexts such as higher order topological insulator and moiré excitons . The Umklapp tunneling is strongest for the commensurate twist angles of θ = 21.8 • and θ = 38.2 • , whose corresponding periodic moiré superlattices have the smallest lattice constant ( √ 7 of the monolayer counterpart). Such a small moiré scale implies that the exact crystalline symmetry, which depends sensitively on fine details of rotation center, has critical influence on lowenergy response properties. To capture the Umklapp tunneling, we employ the tight-binding model . Figures ) and (c) show two distinct commensurate structures of tBG at θ = 21.8 • belonging to chiral point groups D 3 and D 6 , respectively. The atomic configurations in Figs. ) are equivalent, which are constructed by twisting AA-stacked bilayer graphene around an overlapping atom site, and that in Fig. ) is obtained by rotating around a hexagonal center. Band structures of these two configurations are drastically different within a low-energy window of ∼ 10 meV around the κ point . Remarkably, despite large θ, we still get σ H ∼ O(0.001) e 2 /h (D 3 ) and ∼ O(0.1) e 2 /h (D 6 ), which are comparable to those at small angles (cf. Fig. in the Supplemental Material ). Such sizable responses can be attributed to the strong interlayer coupling enabled by Umklapp processes . Apart from different intensities, the Hall conductivities in the two stacking configurations have distinct energy dependence: In Fig. , σ H shows a single peak centered at zero energy; In Fig. (f), it exhibits two antisymmetric peaks around zero. The peaks are centered around band degeneracies, and their profiles can be understood from the distribution of [∂ k × G] z . Figure (d) illustrates the atomic structure of tBG with a twist angle slightly deviating from θ = 21.8 • , forming a supermoiré pattern. In short range, the local stacking geometries resemble the commensurate configurations at θ = 21.8 • , while the stacking registries at different locales differ by a translation. Similar to the moiré landscapes in the small-angle limit, there also exist high-symmetry locales: Regions A and B enclose the D 3 structure, and region C contains the D 6 configuration. Position-dependent Hall response is therefore expected in such a supermoiré. As the intrinsic Hall signal from the D 6 configuration dominates [see Figs. 3(e) vs (f)], the net response mimics that in Fig. . Discussion. We have uncovered the crossed nonlinear dynamical intrinsic Hall effect characteristic of layer hybridized electronic states in twisted bilayers, and elucidated its geometric origin in the k -space curl of interlayer BCP. It offers a new tool for rectification and frequency doubling in chiral vdW bilayers, and is sizable in tTMD and tBG. Here our focus is on the intrinsic effect, which can be evaluated quantitatively for each material and provides a benchmark for experiments. 
There may also be extrinsic contributions, similar to the side jump and skew scattering ones in anomalous Hall effect. They typically have distinct scaling behavior with the relaxation time τ from the intrinsic effect, hence can be distinguished from the latter in experiments . Moreover, they are suppressed in the clean limit ωτ 1 [(ωτ ) 2 1, more precisely] . In high-quality tBG samples, τ ∼ ps at room temperature . Much longer τ can be obtained at lower temperatures. In fact, a recent theory explaining well the resistivity of tBG predicted τ ∼ 10 −8 s at 10 K . As such, high-quality tBG under low temperatures and sub-terahertz input (ω/2π = 0.1 THz) is located in the clean limit, rendering an ideal platform for isolating the intrinsic effect. This work paves a new route to driving in-plane response by out-of-plane dynamical control of layered vdW structures . The study can be generalized to other observables such as spin current and spin polarization, and the in-plane driving can be statistical forces, like temperature gradient. Such orthogonal controls rely critically on the nonconservation of layer pseudospin degree of freedom endowed by interlayer coupling, and constitute an emerging research field at the crossing of 2D vdW materials, layertronics, twistronics and nonlinear electronics. This work is supported by the Research Grant Council of Hong Kong (AoE/P-701/20, HKU SRFS2122-7S05), and the Croucher Foundation. W.Y. also acknowledges support by Tencent Foundation. Cong Chen, 1, 2, * Dawei Zhai, 1, 2, * Cong Xiao, 1, 2, † and Wang Yao 1, 2, ‡ 1 Department of Physics, The University of Hong Kong, Hong Kong, China 2 HKU-UCAS Joint Institute of Theoretical and Computational Physics at Hong Kong, China Extra figures for tBG at small twist angles Figure (a) shows the band structure of tBG with θ = 1.47 • obtained from the continuum model . The central bands are well separated from higher ones, and show Dirac points at κ/κ points protected by valley U (1) symmetry and a composite operation of twofold rotation and time reversal C 2z T . Degeneracies at higher energies can also be identified, for example, around ±75 meV at the γ point. As the two Dirac cones from the two layers intersect around the same area, such degeneracies are usually accompanied by strong layer hybridization [see the color in the left panel of Fig. ]. Additionally, it is well-known that the two layers are strongly coupled when θ is around the magic angle (∼ 1.08 • ), rendering narrow bandwidths for the central bands. As discussed in the main text, coexistence of strong interlayer hybridization and small energy separations is expected to contribute sharp conductivity peaks near band degeneracies, as shown in Fig. . In this case, the conductivity peak near the Dirac point can reach ∼ 0.1e 2 /h, while the responses around ±0.08 eV are smaller at ∼ 0.01e 2 /h. The above features are maintained when θ is enlarged, as illustrated in Figs. ) and (c) using θ = 2.65 • and θ = 6.01 • . Since interlayer coupling becomes weaker and the bands are more separated at low energies when θ increases, intensity of the conductivity drops significantly. We stress that G is not defined at degenerate points, and interband transitions may occur when energy separation satisfies |ε n − ε m | ∼ ω, the effects of which are not included in the current formulations. Consequently, the results around band degeneracies within energy ∼ ω [shaded areas in Fig. ] should be excluded.
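Supplementary note (a worked step added here for clarity; the overall prefactor and sign are absorbed into σH as in the main text): with E = E0 cos ωt and E⊥ = E⊥0 cos(ωt + ϕ), the driving combination entering j ∼ Ė⊥ × E is Ė⊥ E = -ω E⊥0 E0 sin(ωt + ϕ) cos ωt = -(ω E⊥0 E0/2)[sin ϕ + sin(2ωt + ϕ)]. The rectified (dc) part is therefore proportional to sin ϕ and vanishes for in-phase or antiphase driving, while the double-frequency part has a ϕ-independent amplitude, consistent with the form j = j0 sin ϕ + j2ω sin(2ωt + ϕ) quoted above.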
What is the significance of the interlayer Berry connection polarizability?
The momentum space curl of the interlayer Berry connection polarizability generates the crossed nonlinear dynamical Hall effect.
3,508
multifieldqa_en
4k
consumption influences mercury: Topics by WorldWideScience.org Sample records for consumption influences mercury Epidemiologic confirmation that fruit consumption influences mercury exposure in riparian communities in the Brazilian Amazon Sousa Passos, Carlos Jose; Mergler, Donna; Fillion, Myriam; Lemire, Melanie; Mertens, Frederic; Guimaraes, Jean Remy Davee; Philibert, Aline Since deforestation has recently been associated with increased mercury load in the Amazon, the problem of mercury exposure is now much more widespread than initially thought. A previous exploratory study suggested that fruit consumption may reduce mercury exposure. The objectives of the study were to determine the effects of fruit consumption on the relation between fish consumption and bioindicators of mercury (Hg) exposure in Amazonian fish-eating communities. A cross-sectional dietary survey based on a 7-day recall of fish and fruit consumption frequency was conducted within 13 riparian communities from the Tapajos River, Brazilian Amazon. Hair samples were collected from 449 persons, and blood samples were collected from a subset of 225, for total and inorganic mercury determination by atomic absorption spectrometry. On average, participants consumed 6.6 fish meals/week and ate 11 fruits/week. The average blood Hg (BHg) was 57.1±36.3 μg/L (median: 55.1 μg/L), and the average hair-Hg (HHg) was 16.8±10.3 μg/g (median: 15.7 μg/g). There was a positive relation between fish consumption and BHg (r=0.48; P 2 =36.0%) and HHg levels (fish: β=1.2, P 2 =21.0%). ANCOVA models showed that for the same number of fish meals, persons consuming fruits more frequently had significantly lower blood and HHg concentrations. For low fruit consumers, each fish meal contributed 9.8 μg/L Hg increase in blood compared to only 3.3 μg/L Hg increase for the high fruit consumers. In conclusion, fruit consumption may provide a protective effect for Hg exposure in Amazonian riparians. Prevention strategies that seek to maintain fish consumption while reducing Hg exposure in fish-eating communities should be pursued Influence of mercury bioaccessibility on exposure assessment associated with consumption of cooked predatory fish in Spain. Torres-Escribano, Silvia; Ruiz, Antonio; Barrios, Laura; Vélez, Dinoraz; Montoro, Rosa Predatory fish tend to accumulate high levels of mercury (Hg). Food safety assessment of these fish has been carried out on the raw product. However, the evaluation of the risk from Hg concentrations in raw fish might be modified if cooking and bioaccessibility (the contaminant fraction that solubilises from its matrix during gastrointestinal digestion and becomes available for intestinal absorption) were taken into account. Data on Hg bioaccessibility in raw predatory fish sold in Spain are scarce and no research on Hg bioaccessibility in cooked fish is available. The aim of the present study was to evaluate Hg bioaccessibility in various kinds of cooked predatory fish sold in Spain to estimate their health risk. Both Hg and bioaccessible Hg concentrations were analysed in raw and cooked fish (swordfish, tope shark, bonito and tuna). There were no changes in Hg concentrations during cooking. However, Hg bioaccessibility decreased significantly after cooking (42 ± 26% in raw fish and 26 ± 16% in cooked fish), thus reducing in swordfish and tope shark the Hg concentration to which the human organism would be exposed. 
In future, cooking and bioaccessibility should be considered in risk assessment of Hg concentrations in predatory fish. Copyright © 2011 Society of Chemical Industry. Intake of mercury through fish consumption Sarmani, S.B.; Kiprawi, A.Z.; Ismail, R.B.; Hassan, R.B.; Wood, A.K.; Rahman, S.A. Fish has been known as a source of non-occupational mercury exposure to fish consuming population groups, and this is shown by the high hair mercury levels. In this study, hair samples collected from fishermen and their families, and commercial marine fishes were analyzed for mercury and methylmercury by neutron activation and gas chromatography. The results showed a correlation between hair mercury levels and fish consumption patterns. The levels of mercury found in this study were similar to those reported by other workers for fish consuming population groups worldwide. (author) Fish consumption limit for mercury compounds Abbas Esmaili-Sari Full Text Available Background and objectives: Methyl mercury can carry out harmful effects on the reproductive, respiratory, and nervous system of human. Moreover, mercury is known as the most toxic heavy metal in nature. Fish and seafood consumption is the major MeHg exposure route for human. The present study tries to cover researches which have been conducted on mercury levels in 21 species of fish from Persian Gulf, Caspian Sea and Anzali Wetland during the past 6 years, and in addition to stating mercury level, it provides recommendations about the restriction of monthly fish consumption for each species separately. Material and methods: Fish samples were transferred to the laboratory and stored in refrigerator under -20oC until they were dissected. Afterwards, the muscle tissues were separated and dried. The dried samples were ground and changed into a homogenous powder and then the mercury concentration rate has been determined by advanced mercury analyzer, model 254. Results: In general, mercury contamination in fishes caught from Anzali Wetland was much more than fishes from Caspian Sea. Also, from among all studied fishes, oriental sole (Euryglossa orientalis, caught from Persian Gulf, allocated the most mercury level to itself with the rate of 5.61ml per kg., therefore, it exercises a severe consumption restriction for pregnant women and vulnerable groups. Conclusion: Based on the calculations, about 50% of fishes, mostly with short food chain, can be easily consumed during the year. However, with regard to Oriental sole (Euryglossa orientalis and shark (Carcharhinus dussumieri, caught from Persian Gulf, special consideration should be taken in their consumption. On the other hand, careful planning should be made for the high rate of fish consumption among fishing community. Hair Mercury Concentrations and Fish Consumption Patterns in Florida Residents Adam M. Schaefer Full Text Available Mercury exposure through the consumption of fish and shellfish represents a significant public health concern in the United States. Recent research has demonstrated higher seafood consumption and subsequent increased risk of methylmercury exposure among subpopulations living in coastal areas. The identification of high concentrations of total mercury in blood and skin among resident Atlantic bottlenose dolphins (Tursiops truncatus in the Indian River Lagoon (IRL, a coastal estuary in Florida, alerted us to a potential public health hazard in the contiguous human population. 
Therefore, we analyzed hair mercury concentrations of residents living along the IRL and ascertained their sources and patterns of seafood consumption. The total mean mercury concentration for 135 residents was 1.53 ± 1.89 µg/g. The concentration of hair mercury among males (2.02 ± 2.38 µg/g was significantly higher than that for females (0.96 ± 0.74 µg/g (p < 0.01. Log transformed hair mercury concentration was significantly associated with the frequency of total seafood consumption (p < 0.01. Individuals who reported consuming seafood once a day or more were 3.71 (95% CI 0.84–16.38 times more likely to have a total hair mercury concentration over 1.0 µg/g, which corresponds approximately to the U.S. EPA reference dose, compared to those who consumed seafood once a week or less. Hair mercury concentration was also significantly higher among individuals who obtained all or most of their seafood from local recreational sources (p < 0.01. The elevated human mercury concentrations mirror the elevated concentrations observed in resident dolphins in the same geographical region. The current study is one of the first to apply the concept of a sentinel animal to a contiguous human population. Fish consumption and bioindicators of inorganic mercury exposure Sousa Passos, Carlos Jose; Mergler, Donna; Lemire, Melanie; Fillion, Myriam; Guimaraes, Jean Remy Davee Background: The direct and close relationship between fish consumption and blood and hair mercury (Hg) levels is well known, but the influence of fish consumption on inorganic mercury in blood (B-IHg) and in urine (U-Hg) is unclear. Objective: Examine the relationship between fish consumption, total, inorganic and organic blood Hg levels and urinary Hg concentration. Methods: A cross-sectional study was carried out on 171 persons from 7 riparian communities on the Tapajos River (Brazilian Amazon), with no history of inorganic Hg exposure from occupation or dental amalgams. During the rising water season in 2004, participants responded to a dietary survey, based on a seven-day recall of fish and fruit consumption frequency, and socio-demographic information was recorded. Blood and urine samples were collected. Total, organic and inorganic Hg in blood as well as U-Hg were determined by Atomic Absorption Spectrometry. Results: On average, participants consumed 7.4 fish meals/week and 8.8 fruits/week. Blood total Hg averaged 38.6 ± 21.7 μg/L, and the average percentage of B-IHg was 13.8%. Average organic Hg (MeHg) was 33.6 ± 19.4 μg/L, B-IHg was 5.0 ± 2.6 μg/L, while average U-Hg was 7.5 ± 6.9 μg/L, with 19.9% of participants presenting U-Hg levels above 10 μg/L. B-IHg was highly significantly related to the number of meals of carnivorous fish, but no relation was observed with non-carnivorous fish; it was negatively related to fruit consumption, increased with age, was higher among those who were born in the Tapajos region, and varied with community. U-Hg was also significantly related to carnivorous but not non-carnivorous fish consumption, showed a tendency towards a negative relation with fruit consumption, was higher among men compared to women and higher among those born in the region. U-Hg was strongly related to I-Hg, blood methyl Hg (B-MeHg) and blood total Hg (B-THg). 
The Odds Ratio (OR) for U-Hg above 10 μg/L for those who ate > 4 carnivorous fish Methyl mercury exposure in Swedish women with high fish consumption Bjoernberg, Karolin Ask [Division of Metals and Health, Institute of Environmental Medicine, Karolinska Institutet, Box 210, SE-171 77, Stockholm (Sweden)]; Vahter, Marie [Division of Metals and Health, Institute of Environmental Medicine, Karolinska Institutet, Box 210, SE-171 77, Stockholm (Sweden)]; Grawe, Kierstin Petersson [Toxicology Division, National Food Administration, Box 622, SE-751 26 Uppsala (Sweden)]; Berglund, Marika [Division of Metals and Health, Institute of Environmental Medicine, Karolinska Institutet, Box 210, SE-171 77, Stockholm (Sweden)]. E-mail: [email protected] We studied the exposure to methyl mercury (MeHg) in 127 Swedish women of childbearing age with high consumption of various types of fish, using total mercury (T-Hg) in hair and MeHg in blood as biomarkers. Fish consumption was assessed using a food frequency questionnaire (FFQ), including detailed information about consumption of different fish species, reflecting average intake during 1 year. We also determined inorganic mercury (I-Hg) in blood, and selenium (Se) in serum. The average total fish consumption, as reported in the food frequency questionnaire, was approximately 4 times/week (range 1.6-19 times/week). Fish species potentially high in MeHg, included in the Swedish dietary advisories, were consumed by 79% of the women. About 10% consumed such species more than once a week, i.e., more than what is recommended. Other fish species potentially high in MeHg, not included in the Swedish dietary advisories, were consumed by 54% of the women. Eleven percent never consumed fish species potentially high in MeHg. T-Hg in hair (median 0.70 mg/kg; range 0.08-6.6 mg/kg) was associated with MeHg in blood (median 1.7 µg/L; range 0.30-14 µg/L; r_s = 0.78; p < 0.001). Hair T-Hg, blood MeHg and serum Se (median 70 µg/L; range 46-154 µg/L) increased with increasing total fish consumption (r_s = 0.32; p < 0.001, r_s = 0.37; p < 0.001 and r_s = 0.35; p = 0.002, respectively). I-Hg in blood (median 0.24 µg/L; range 0.01-1.6 µg/L) increased with increasing number of dental amalgam fillings. We found no statistically significant associations between the various mercury species measured and the Se concentration in serum. Hair mercury levels exceeded the levels corresponding to the EPA reference dose (RfD) of 0.1 µg MeHg/kg b.w. per day in 20% of the women. Thus, there seems to be no margin of safety for neurodevelopmental effects in the fetus for women with high fish consumption, unless they decrease their intake of certain fish species. Fish Consumption and Mercury Exposure among Louisiana Recreational Anglers Lincoln, Rebecca A; Shine, James P; Chesney, Edward J Background: Methylmercury (MeHg) exposure assessments among average fish consumers in the U.S. may underestimate exposures among U.S. subpopulations with high intakes of regionally specific fish. Objectives: We examined relationships between fish consumption, estimated mercury (Hg) intake, and measured Hg exposure among one such potentially highly-exposed group, recreational anglers in Louisiana USA. Methods: We surveyed 534 anglers in 2006 using interviews at boat launches and fishing tournaments combined with an internet-based survey method. Hair samples from 402 of these anglers were collected and analyzed for total Hg. Questionnaires provided information on species-specific fish consumption over 3 months prior to the survey. Results: Anglers' median hair-Hg concentration was 0.81 µg/g (n=398; range: 0.02-10.7 µg/g), with 40% of participants above 1 µg/g, the level that approximately... Umbilical cord blood and placental mercury, selenium and selenoprotein expression in relation to maternal fish consumption Gilman, Christy L.; Soon, Reni; Sauvage, Lynnae; Ralston, Nicholas V.C.; Berry, Marla J. Seafood is an important source of nutrients for fetal neurodevelopment. Most individuals are exposed to the toxic element mercury through seafood. Due to the neurotoxic effects of mercury, United States government agencies recommend no more than 340 g (12 oz) per week of seafood consumption during pregnancy. However, recent studies have shown that selenium, also abundant in seafood, can have protective effects against mercury toxicity. In this study, we analyzed mercury and selenium levels an... Factors that negatively influence consumption of traditionally fermented milk ... in various countries of sub-Saharan Africa and a number of health benefits to human ... influence consumption of Mursik, a traditionally fermented milk product from ... Mercury exposure as a function of fish consumption in two Asian communities in coastal Virginia, USA. Xu, Xiaoyu; Newman, Michael C Fish consumption and associated mercury exposure were explored for two Asian-dominated church communities in coastal Virginia and compared with that of two non-Asian church communities. 
Seafood-consumption rates for the Chinese (36.9 g/person/day) and Vietnamese (52.7 g/person/day) church communities were greater than the general United States fish-consumption rate (12.8 g/person/day). Correspondingly, hair mercury concentrations for people from the Chinese (0.52 µg/g) and the Vietnamese church (1.46 µg/g) were greater than the overall level for United States women (0.20 µg/g) but lower than the published World Health Organization exposure threshold (14 µg/g). A conventional regression model indicated a positive relationship between seafood consumption rates and hair mercury concentrations suggesting the importance of mercury exposure through seafood consumption. The annual-average daily methylmercury intake rate for the studied communities calculated by Monte Carlo simulations followed the sequence: Vietnamese community > Chinese community > non-Asian communities. Regardless, their daily methylmercury intake rates were all lower than the United States Environmental Protection Agency reference dose of 0.1 µg/kg body weight-day. In conclusion, fish-consumption patterns differed among communities, which resulted in different levels of mercury exposure. The greater seafood and mercury ingestion rates of studied Asian groups compared with non-Asian groups suggest the need for specific seafood consumption advice for ethnic communities in the United States. Otherwise the health benefits from fish consumption could be perceived as trivial compared with the ill-defined risk of mercury exposure. Feather growth influences blood mercury level of young songbirds. Condon, Anne M; Cristol, Daniel A Dynamics of mercury in feathers and blood of free-living songbirds is poorly understood. Nestling eastern bluebirds (Sialia sialis) living along the mercury-contaminated South River (Virginia, USA) had blood mercury levels an order of magnitude lower than their parents (nestling: 0.09 +/- 0.06 mg/kg [mean +/- standard deviation], n = 156; adult: 1.21 +/- 0.57 mg/kg, n = 86). To test whether this low blood mercury was the result of mercury sequestration in rapidly growing feathers, we repeatedly sampled free-living juveniles throughout the period of feather growth and molt. Mean blood mercury concentrations increased to 0.52 +/- 0.36 mg/kg (n = 44) after the completion of feather growth. Some individuals had reached adult blood mercury levels within three months of leaving the nest, but levels dropped to 0.20 +/- 0.09 mg/kg (n = 11) once the autumn molt had begun. Most studies of mercury contamination in juvenile birds have focused on recently hatched young with thousands of rapidly growing feathers. However, the highest risk period for mercury intoxication in young birds may be during the vulnerable period after fledging, when feathers no longer serve as a buffer against dietary mercury. We found that nestling blood mercury levels were not indicative of the extent of contamination because a large portion of the ingested mercury ended up in feathers. The present study demonstrates unequivocally that in songbirds blood mercury level is influenced strongly by the growth and molt of feathers. High mercury seafood consumption associated with fatigue at specialty medical clinics on Long Island, NY Shivam Kothari Full Text Available We investigated the association between seafood consumption and symptoms related to potential mercury toxicity in patients presenting to specialty medical clinics at Stony Brook Medical Center on Long Island, New York. 
We surveyed 118 patients from April–August 2012 about their seafood consumption patterns, specifically how frequently they were eating each type of fish, to assess mercury exposure. We also asked about symptoms associated with mercury toxicity including depression, fatigue, balance difficulties, or tingling around the mouth. Of the 118 adults surveyed, 14 consumed high mercury seafood (tuna steak, marlin, swordfish, or shark) at least weekly. This group was more likely to suffer from fatigue than other patients (p = 0.02). Logistic regression confirmed this association of fatigue with frequent high mercury fish consumption in both unadjusted analysis (OR = 5.53; 95% CI: 1.40–21.90) and analysis adjusted for age, race, sex, income, and clinic type (OR = 7.89; 95% CI: 1.63–38.15). No associations were observed between fish intake and depression, balance difficulties, or tingling around the mouth. Findings suggest that fatigue may be associated with eating high mercury fish but sample size is small. Larg
What was the conclusion of the study?
The conclusion was that fruit consumption may provide a protective effect for mercury exposure in Amazonian riparians.
Sir Simon William English (born 30 December 1961) is a New Zealand former National Party politician who served as the 39th prime minister of New Zealand from 2016 to 2017. He had previously served as the 17th deputy prime minister of New Zealand and minister of finance from 2008 to 2016 under John Key and the Fifth National Government. A farmer and public servant before entering politics, English was elected to the New Zealand Parliament in 1990 as the National Party's candidate in the Wallace electorate. He was elevated to Cabinet in 1996 and in 1999 was made minister of finance, although he served for less than a year due to his party's loss at the 1999 general election. In October 2001, English replaced Jenny Shipley as the leader of the National Party (and consequently as Leader of the Opposition). He led the party to its worst defeat at the 2002 general election, and as a consequence, in October 2003 he was replaced as leader by Don Brash. In November 2006, after Brash's resignation, English became deputy leader under John Key. After National's victory at the 2008 general election, he became deputy prime minister and was also made minister of finance for the second time. Under English's direction, New Zealand's economy maintained steady growth during National's three terms of government. He became a list-only MP after stepping down as an electorate MP at the 2014 general election. John Key resigned as leader of the National Party and prime minister in December 2016. English won the resulting leadership election unopposed and was sworn in as prime minister on 12 December 2016. His tenure was only ten months, and included a three-month election campaign. In the 2017 general election, National won the largest number of seats but fell short of a majority. The parties holding the balance of power declined to support the existing government, and English was subsequently replaced as prime minister by Jacinda Ardern, leader of the Labour Party. English initially continued on as Leader of the Opposition, but resigned as leader of the National Party on 27 February 2018 and left parliament two weeks later. Early life English was born on 30 December 1961 at Lumsden Maternity Centre in Lumsden. He is the eleventh of twelve children of Mervyn English and Norah (née O'Brien) English. His parents purchased Rosedale, a mixed sheep and cropping farm in Dipton, Southland, from Mervyn's uncle, Vincent English, a bachelor, in 1944. English attended St Thomas's School in Winton, then boarded at St. Patrick's College in Upper Hutt, where he became head boy. He played in the first XV of the school's rugby team. English went on to study commerce at the University of Otago, where he was a resident at Selwyn College, and then completed an honours degree in English literature at Victoria University of Wellington. After finishing his studies, English returned to Dipton and farmed for a few years. From 1987 to 1989, he worked in Wellington as a policy analyst for the New Zealand Treasury, at a time when the free market policies favoured by Labour's finance minister Roger Douglas (known collectively as "Rogernomics") were being implemented. English joined the National Party in 1980, while at Victoria University. He served for a period as chairman of the Southland branch of the Young Nationals, and became a member of the Wallace electorate committee. After moving to Wellington, he served for periods on the Island Bay and Miramar electorate committees, respectively. 
Fourth National Government (1990–1999) At the 1990 general election, English stood as the National candidate in Wallace, replacing the retiring Derek Angus, and was elected with a large majority. He would hold this seat, renamed Clutha-Southland in 1996, until 2014. He and three other newly elected National MPs (Tony Ryall, Nick Smith, and Roger Sowry) were soon identified as rising stars in New Zealand politics, and at various points were dubbed the "brat pack", the "gang of four", and the "young Turks". In his first term in parliament, English chaired a select committee into social services. He was made a parliamentary under-secretary in 1993, serving under the Minister of Health. First period in cabinet (1996–1999) In early 1996, English was elevated to cabinet by Prime Minister Jim Bolger, becoming the Minister for Crown Health Enterprises and Associate Minister of Education (to Wyatt Creech). He was 34 at the time, becoming the cabinet's youngest member. After the 1996 general election, the National Party was forced into a coalition with New Zealand First to retain government. In the resulting cabinet reshuffle, English emerged as Minister of Health. However, as a condition of the coalition agreement, NZ First's Neil Kirton (a first-term MP) was made Associate Minister of Health, effectively becoming English's deputy. This arrangement was described in the press as a "shotgun marriage", and there were frequent differences of opinion between the two ministers. After their relationship became unworkable, Kirton was sacked from the role in August 1997, with the agreement of NZ First leader Winston Peters. As Minister of Health, English was responsible for continuing the reforms to the public health system that National had begun after the 1990 general election. The reforms were unpopular, and health was perceived as one of the government's weaknesses, with the health portfolio consequently being viewed as a challenge. English believed that the unpopularity of the reforms was in part due to a failure in messaging, and encouraged his National colleagues to avoid bureaucratic and money-focused language (such as references to "balance sheets" and "user charges") and instead talk about the improvements to services the government's reforms would bring. He also rejected the idea that public hospitals could be run as commercial enterprises, a view which some of his colleagues had previously promoted. By early 1997, as dissatisfaction with Bolger's leadership began to grow, English was being touted as a potential successor, along with Jenny Shipley and Doug Graham. His age (35) was viewed as the main impediment to a successful leadership run. National's leadership troubles were resolved in December 1997, when Bolger resigned and Shipley was elected to the leadership unopposed. English had been a supporter of Bolger as leader, but Shipley reappointed him Minister of Health in her new cabinet. English was promoted to Minister of Finance in a reshuffle in January 1999, a position which was at the time subordinate to the Treasurer, Bill Birch. After a few months, the pair switched positions as part of Birch's transition to retirement, with English assuming the senior portfolio. In early interviews, he emphasised his wish to be seen as a pragmatist rather than an ideologue, and said that the initiatives of some of his predecessors (Roger Douglas's "Rogernomics" and Ruth Richardson's "Ruthanasia") had focused on "fruitless, theoretical debates" when "people just want to see problems solved". 
Opposition (1999–2008) After the National Party lost the 1999 election to Helen Clark's Labour Party, English continued on in the shadow cabinet as National's spokesperson for finance. He was elected deputy leader of the party in February 2001, following the resignation of Wyatt Creech, with Gerry Brownlee being his unsuccessful opponent. Leader of the Opposition In October 2001, after months of speculation, Jenny Shipley resigned as leader of the National Party after being told she no longer had the support of the party caucus. English was elected as her replacement unopposed (with Roger Sowry as his deputy), and consequently became Leader of the Opposition. However, he did not openly organise against Shipley, and according to The Southland Times "there was almost an element of 'aw, shucks, I'll do it then' about Mr English's ascension". Aged 39 when he was elected, English became the second-youngest leader in the National Party's history, after Jim McLay (who was 38 when elected in 1984). He also became only the third Southlander to lead a major New Zealand political party, after Joseph Ward and Adam Hamilton. However, English failed to improve the party's performance. In the 2002 election, National suffered its worst electoral defeat ever, gaining barely more than twenty percent of the vote. English described it as "the worst day of my political life". Both party insiders and the general public were split as to how much to blame him for the loss, but most of the party believed that English would be able to rebuild National's support. By late 2003, however, National's performance in opinion polls remained poor. The party had briefly increased its popularity in the year following the election, but by October its support had fallen to levels only slightly better than what it achieved in the last ballot. English also appeared in a boxing match for a charity against entertainer Ted Clarke. This did not boost his polling or that of the National party either, with suggestions that it devalued his image as a serious politician. Don Brash, former governor of the Reserve Bank and a relative newcomer to politics, began to build up support to replace English. On 28 October, Brash gained sufficient backing in Caucus to defeat English in a leadership contest. Shadow cabinet roles and deputy leader On 2 November 2003, when Brash changed responsibilities for certain MPs, English became National's spokesman for education, ranked at fifth place in the party's parliamentary hierarchy. He remained in parliament after the 2005 election. In his new shadow education portfolio, English performed strongly, and remained a party favourite despite his election defeat as leader in 2002, eventually being returned to the finance portfolio in August 2004 as deputy spokesman (while still retaining responsibility for education). In November 2006, Brash resigned as leader. English was considered as a potential replacement leader (running against John Key) or deputy leader (against incumbent Gerry Brownlee) in the ensuing leadership election. However, a contest was avoided when the MPs agreed a Key/English ticket would run unopposed in a display of party unity. English took over the deputy leadership and the finance portfolio in the Key shadow cabinet. Fifth National Government (2008–2017) Deputy Prime Minister and Minister of Finance (2008–2016) At the 2008 election, English was re-elected by his electorate, winning by a margin of about 15,500 votes. 
He became Deputy Prime Minister of New Zealand and Minister of Finance in the Fifth National Government, was sworn into office on 19 November 2008, and continued to serve in those roles until becoming Prime Minister on 12 December 2016. He was also made Minister of Infrastructure in National's first term of government and Minister responsible for Housing New Zealand Corporation and minister responsible for the New Zealand flag consideration process in its third. He was comfortably re-elected in Clutha-Southland in the 2011 election but opted to run as a party-list candidate in 2014. The pairing of John Key as leader of the National Party and English as his deputy has been compared to that of Bob Hawke and Paul Keating (in Australia) and Tony Blair and Gordon Brown (in the UK). English acceded to the role of Finance Minister in the continuing wake of the financial crisis. In response to New Zealand's rising debt, English made budget deficit-reduction his main priority. His first budget outlined three focuses in New Zealand's financial recovery: "improving the business environment and removing roadblocks to growth; investment in productive infrastructure; and improving the way government works". One of his first acts was creating the National Infrastructure Unit, charged with formulating a plan for infrastructure projects and investments. He commissioned a government-wide spending review, with the aim of reducing government expenditure—with the exceptions of a two-year stimulus package and long-term increases in infrastructure spending. In April 2011, the Opposition criticised English for suggesting that New Zealand businesses could use New Zealand's low wages to help them compete with Australia. The National Government campaigned for re-election in 2011 on its economic record. The Government boasted growth for five consecutive quarters up to mid-2010, totalling 1.6% of real GDP. Strong growth resulted in a surplus of $473 million for the 2015/16 financial year, projected to rise to $8.5 billion by 2020/21. In his 2016 Economic and Fiscal Update address, English stated that reducing debt and tackling the costs of the 2016 Kaikōura earthquake were higher priorities than reducing rates of tax. Allowances issue In 2009, the media, including TVNZ and TV3, revealed that English was receiving about NZ$900 a week as part of a living allowance for ministers, to live in his own NZ$1.2 million Wellington home. At the time, English also received $276,200 in his annual salary as Deputy Prime Minister. It was also revealed other ministers with homes in the capital city were also claiming accommodation allowances. On 3 August 2009, Prime Minister John Key started a review of the housing allowances claimed by cabinet ministers. English subsequently paid back $12,000 and only claimed about $24,000 a year in living allowances. The Auditor-General's office said in September 2009 that they were making "preliminary enquiries" into parliamentary housing expenses in response to a letter of complaint from Progressive party leader Jim Anderton. Two days later English stated that he would no longer take up any housing allowance and had paid back all the allowance he had received since the November 2008 election. Prime Minister (2016–2017) John Key resigned on 12 December, and endorsed English as his successor in the resulting leadership election. 
Following the drop-out of both Judith Collins and Jonathan Coleman from the leadership election, English was sworn in as the 39th Prime Minister of New Zealand on 12 December 2016. English appointed his first cabinet on 18 December. In a reshuffle, he appointed Steven Joyce to succeed him as Finance Minister, while most ministerial portfolios remained the same. In February 2017, English did not attend Waitangi Day commemorations at the historic treaty grounds, reportedly in response to the Ngāpuhi iwi's decision to stop the Prime Minister from speaking at the marae. Ngāpuhi have protested the Government's negotiation of the Trans Pacific Partnership Agreement (TPPA), which the iwi believe infringes upon Māori sovereignty, and thus does not adhere to the Treaty of Waitangi. English had been invited to attend in an official capacity; his non-attendance was criticised by a Ngāpuhi elder and Opposition leader Andrew Little. In his first overseas trip as Prime Minister, English travelled to Europe to discuss trade ties, including a prospective New Zealand–European Union free trade agreement. He first travelled to London on 13 January 2017 to meet British Prime Minister Theresa May. Discussing trade relations, English said the two nations were "natural partners" and would "continue to forge ties" after the UK's withdrawal from the EU. He also arranged to meet with London Mayor Sadiq Khan, Belgian Prime Minister Charles Michel and German Chancellor Angela Merkel. In a meeting with Merkel, English received crucial backing from Germany for a trade deal with the EU. On 16 January, English stated that his government would continue to promote TPPA, despite the United States' decision to withdraw from the agreement. He explained that Southeast Asian countries would now be treated as a priority in negotiations—he also asserted that the United States was ceding influence to China by its rejection of the trade pact. At a press conference at the Beehive on 1 February 2017, English announced that the 2017 general election would be held on 23 September. The Prime Minister later confirmed that his party would approach ACT, United Future and the Māori Party if confidence and supply agreements were required to form a government following the election. In his second cabinet reshuffle on 24 April, English appointed Gerry Brownlee as his new Foreign Affairs Minister; he also promoted Nikki Kaye to the portfolio of Education Minister, and moved Mark Mitchell into the cabinet to become Defence Minister. The reshuffle was perceived as an election preparation. On 13 February 2017, English welcomed Australian Prime Minister Malcolm Turnbull to Wellington. The two leaders reaffirmed their shared trade agenda, and discussed changes to the Australian citizenship pathway which will affect permanent residents originating from New Zealand. On 19 June, it was reported that Todd Barclay, who succeeded English as MP for Clutha-Southland, had clandestinely recorded one of his employee's conversations the previous year, and that John Key's leaders' budget was used to pay a confidential settlement after the employee resigned. English admitted that he had been aware of the illegal recording and the settlement, and thus implicated in the scandal. 
During the 2017 National campaign launch, English introduced a $379 million social investment package including digital learning academies for high school students, more resources for mathematics, and boosting support for teaching second languages in schools, and maintaining National Standards in the school curriculum. Prime Minister English also sought to defend National's financial management and economic track record and claimed that the opposition Labour Party would raise taxes. Early opinion polling had forecast a poor showing in the election for the Labour Party, but in early August 37-year-old Jacinda Ardern took over as Labour leader and seemingly energised younger voters. At the 2017 general election, National won the largest share of the party vote (44.4%) and the largest number of seats (56) in the House Representatives. However, National lacked enough seats to govern alone due to two of the party's support partners, the Māori Party and United Future, losing their parliamentary seats. In response, English stated that the party would be entering into talks to form a coalition with New Zealand First. Following talks with the two largest parties, New Zealand First entered a coalition arrangement with the Labour Party. English was succeeded as prime minister by Jacinda Ardern on 26 October. Opposition (2017–2018) Leader of the Opposition English was re-elected as National Party leader on 24 October 2017. At the time of his re-election, English announced his intention to stay on as leader until the next general election. On 13 February 2018, however, he stood down as National Party leader due to personal reasons, and instructed the party to put into motion the processes to elect a new leader. He also retired from Parliament. English's resignation followed weeks of speculation that he would step aside for a new leader. On 27 February, he was succeeded as party leader by Simon Bridges as the result of the leadership election held that day. Post-premiership In 2018, English joined the board of Australian conglomerate, Wesfarmers. English serves in Chairmanships of Mount Cook Alpine Salmon, Impact Lab Ltd and Manawanui Support Ltd. He is also a director of The Instillery, Centre for Independent Studies and The Todd Corporation Limited, and is a member of the Impact Advisory Group of Macquarie Infrastructure and Real Assets. Political and social views English is regarded as more socially conservative than his predecessor, John Key. He has stated his opposition to voluntary euthanasia and physician-assisted suicide, same-sex civil unions, and the decriminalisation of prostitution. As Prime Minister he opposed any "liberalisation" of abortion law. In 2004, English voted against a bill to establish civil unions for both same-sex and opposite-sex couples. In 2005, he voted for the Marriage (Gender Clarification) Amendment Bill, which would have amended the Marriage Act to define marriage as only between a man and a woman. English voted against the Marriage (Definition of Marriage) Amendment Bill, a bill that legalised same-sex marriage in New Zealand. However, in December 2016 he stated, "I'd probably vote differently now on the gay marriage issue. I don't think that gay marriage is a threat to anyone else's marriage". In 2009, English voted against the Misuse of Drugs (Medicinal Cannabis) Amendment Bill, a bill aimed at amending the Misuse of Drugs Act so that cannabis could be used for medical purposes. Personal life English met his future wife, Mary Scanlon, at university. 
She was studying medicine at the time, and became a general practitioner. Both her parents were immigrants, her father being Samoan and her mother Italian, born on the island of Stromboli. They have six children: a daughter and five sons. English is a practising Roman Catholic, but has stated that he considers his religious beliefs personal and thus separate from politics. In June 2002, English took part in TV3's Fight For Life, a celebrity boxing fundraiser to raise money for the Yellow Ribbon anti-youth-suicide campaign, influenced by the death of a teenage nephew in 1997. He lost a split decision to former university colleague Ted Clarke. Honours In the 2018 Queen's Birthday Honours, English was appointed a Knight Companion of the New Zealand Order of Merit, for services of over 27 years to the State. See also: List of New Zealand governments; Politics of New Zealand. External links: Profile at National Party; Profile on Parliament.nz; Releases and speeches at Beehive.govt.nz.
When did Simon English become the leader of the National Party?
October 2001.
Paper Info Title: Interpretable reduced-order modeling with time-scale separation Interpretable reduced-order modeling with time-scale separation Publish Date: 7 March 2023 Author List: Sebastian Kaltenbach (from CSE-Lab, ETH Zurich, Harvard SEAS), Phaedon-Stelios Koutsourelakis (from CSE-Lab, ETH Zurich, Harvard SEAS), Petros Koumoutsakos (from CSE-Lab, ETH Zurich, Harvard SEAS), Harvard Seas (from CSE-Lab, ETH Zurich, Harvard SEAS) Figure FIG. 5. Comparison between the phase-space of the reference solution (left) and the phase-space of the predictions FIG. 7. Comparison between predictions and reference solutions for a new initial condition fort = 1.25, 3.75, 7.5, 12.5, 20, 30 (from left to right and top to down).We note that with longer prediction time the uncertainty bounds increases.Despite the chaotic nature of the KS equation, the predictive posterior mean is close to the reference solution for t ≤ 12.5 abstract Partial Differential Equations (PDEs) with high dimensionality are commonly encountered in computational physics and engineering. However, finding solutions for these PDEs can be computationally expensive, making model-order reduction crucial. We propose such a data-driven scheme that automates the identification of the time-scales involved and, can produce stable predictions forward in time as well as under different initial conditions not included in the training data. To this end, we combine a non-linear autoencoder architecture with a time-continuous model for the latent dynamics in the complex space. It readily allows for the inclusion of sparse and irregularly sampled training data. The learned, latent dynamics are interpretable and reveal the different temporal scales involved. We show that this data-driven scheme can automatically learn the independent processes that decompose a system of linear ODEs along the eigenvectors of the system's matrix. Apart from this, we demonstrate the applicability of the proposed framework in a hidden Markov Model and the (discretized) Kuramoto-Shivashinsky (KS) equation. Additionally, we propose a probabilistic version, which captures predictive uncertainties and further improves upon the results of the deterministic framework. INTRODUCTION High-fidelity simulations of critical phenomena such as ocean dynamics and epidemics have become essential for decision-making. They are based on physically-motivated PDEs expressing system dynamics that span multiple spatiotemporal scales and which necessitate cumbersome computations . In recent years there is increased attention to the development of data-driven models that can accelerate the solution of these PDEs as well as reveal salient, lower-dimensional features that control the long-term evolution. In most cases, data-driven reduced-order models are not interpretable. In particular, models based on neural networks despite good predictive capabilities , they offer a black-box description of the system dynamics. A possible remedy is applying a symbolic regression to the learned neural network representation , but this adds additional computational cost due to the two-step procedure. A number of frameworks such as SINDy allows to learn interpretable dynamics but it relies on the a-priori availability of lower-dimensional descriptors and of time-derivatives which can be very noisy for both simulation and experimental data. Other frameworks are tailored to specific problems such as molecular dynamics . 
Here, we present a framework that only needs the value of the observables, and not their derivatives, as training data and is capable of identifying interpretable latent dynamics. The deployment of interpretable latent dynamics ensures that the conservation of important properties is reflected in the reduced-order model. The present method is related to approaches based on the Koopman-operator extended Dynamic Mode Decomposition (eDMD) but uses continuous complex-valued latent space dynamics and only requires one scalar variable per latent dimension to describe the latent space dynamics. Therefore we do not have to enforce any parametrizations on the Koopman matrix. The time-continuous formulation moreover allows us to incorporate sparse and irregularly sampled training data and fast generation of predictions after the training phase. By using a complex-valued latent space we can also incorporate harmonic effects and reduce the number of latent variables needed. Linear and non-linear autoencoders are used to map the observed, high-dimensional time-series to the lower-dimensional, latent representation, and we identify simultaneously the autoencoder as well as the latent dynamics by optimizing a combined loss function. Hence the two tasks of dimensionality reduction and discovery of the reduced dynamics are unified, while other frameworks treat the two parts separately. Apart from using an architecture based on autoencoders to identify the latent space, projection-based methods could also be employed. We are also proposing a probabilistic version of our algorithm that makes use of probabilistic Slow Feature Analysis. This allows for a latent representation that, apart from being time-continuous, can quantify the predictive uncertainty and hierarchically decompose the dynamics into their pertinent scales while promoting the discovery of slow processes that control the system's evolution over long time horizons. The rest of the paper is structured as follows: We introduce the methodological framework as well as algorithmic details in section II. Particular focus is placed on the interpretability of the inferred lower-dimensional dynamics. In section III we present three numerical illustrations, i.e. a system of linear ODEs, a hidden Markov Model and the discretized KS-equation. We then present in section IV the probabilistic extension of the framework and apply it to the KS-equation. We conclude with a summary and a short discussion about possible next steps. We introduce the autoencoders deployed in this work, followed by the interpretable latent space dynamics, and discuss the training process. We consider data from high-dimensional time series x n ∈ R f with n = 1, ..., T . We remark that the intervals between the different states do not need to be uniformly spaced. Autoencoder A core assumption of the method is that each high-dimensional state x n can be compressed to a lower-dimensional representation z n ∈ C c with c << f . We identify this lower-dimensional representation by an autoencoder consisting of a parameterized encoder and decoder. The encoder maps the high-dimensional representation to the latent space as: The latent space is complex-valued. The decoder reconstructs the high-dimensional representation based on the latent variables as: We denote the parameters of the encoder as well as the decoder by θ. As discussed later in Section II C, both sets of parameters are optimized simultaneously during training and therefore there is no need for differentiating them. 
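To make the encoder-decoder pair above concrete, the following sketch represents the complex latent vector through its real and imaginary parts so that the networks themselves remain real-valued. This is a minimal illustration under stated assumptions: the use of PyTorch, the hidden-layer width, and the ReLU activations are choices made here for illustration, not the authors' actual implementation (the paper uses purely linear layers for some examples and deeper networks for others).

```python
import torch
import torch.nn as nn

class ComplexLatentAutoencoder(nn.Module):
    """Maps x in R^f to a complex latent z in C^c and back, as in the encoder/decoder above."""

    def __init__(self, f_dim, c_dim, hidden=64):
        super().__init__()
        self.c_dim = c_dim
        # Encoder outputs 2*c real numbers, interpreted as real and imaginary parts of z.
        self.encoder = nn.Sequential(
            nn.Linear(f_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * c_dim),
        )
        # Decoder consumes the real/imaginary parts of z and reconstructs x.
        self.decoder = nn.Sequential(
            nn.Linear(2 * c_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, f_dim),
        )

    def encode(self, x):
        h = self.encoder(x)
        return torch.complex(h[..., :self.c_dim], h[..., self.c_dim:])

    def decode(self, z):
        return self.decoder(torch.cat([z.real, z.imag], dim=-1))

    def forward(self, x):
        z = self.encode(x)
        return self.decode(z), z
```

Splitting z into real and imaginary channels keeps the networks real-valued while still exposing a genuinely complex latent state to the propagator described next.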
Interpretable Latent Space Dynamics We employ a propagator in the latent space to capture the reduced-order dynamics of the system. In contrast to other time-extended variational autoencoder frameworks, our representation uses complex-valued latent variables. In addition, the latent variables are treated independently. The latter feature enables us to have interpretable latent dynamics as well as a model that is especially suitable for being trained in the Small Data regime due to the small number of required parameters. This is in contrast to temporal propagators such as LSTMs. For each dimension i of the latent variable z we are using the following continuous ODE in the complex plane: dz_i/dt = λ_i z_i. By solving this ODE, we can define the operator z(t_n + Δt_n) = exp(λ Δt_n) ⊙ z(t_n). Here, λ is a vector containing all the individual λ's and Δt_n indicates the time-step between the latent states. The symbol ⊙ is used to indicate a component-wise multiplication. We remark that the latent variables and the parameters governing the temporal evolution are complex numbers and their role in describing the system dynamics is similar to that of an eigenvalue. The real part is associated with growth and decay whereas the imaginary part represents the periodic component. This approach has similarities with the Koopman-operator based methods and the extended dynamic mode decomposition. In contrast to the methods mentioned before, we are using a continuous formulation in the latent space that allows us to incorporate scarce and irregularly sampled training data and directly rely on complex numbers in the latent space. Training and Predictions We optimize a loss function that combines both a reconstruction loss as well as a loss associated with the error of our learned propagator in the latent space (Eq. (5)). We note that we could directly incorporate mini-batch training by only taking the summation over a subset of the N available training data. For new predictions of unseen states, we use the encoder to generate a latent representation which is then advanced in time by the learned propagator. At a designated time step we are using the decoder to reconstruct the high-dimensional solution. We applied our algorithm to three systems. First, we show that the algorithm is capable of exactly reproducing the solution of a linear ODE and of identifying its eigenvalues. Afterwards we apply the framework to a high-dimensional process generated by a complex latent dynamics, which is correctly identified. As a final test case, we apply the algorithm to the Kuramoto-Sivashinsky (KS) equation. Linear ODE We are considering a two-dimensional ODE system for x = (y_1, y_2)^T. Based on the obtained training data we run our algorithm using a linear encoder and decoder structure as well as two latent variables z. The loss function was optimized using the Adam algorithm. As we consider a linear ODE we can analytically compute the eigenvalues involved and compare them with the parameters λ identified by our algorithm. We observe in Figure that the algorithm was able to recover the correct values, i.e. the eigenvalues 7 and 3 of the given linear ODE. The system does not have a periodic component and the two imaginary parts correctly go to zero, whereas the real parts converge to the reference value.
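The component-wise propagator and a loss in the spirit of Eq. (5) can be sketched as follows. This is a hedged reconstruction from the text above: the equal weighting of the reconstruction and propagator terms and the pairing of consecutive snapshots are assumptions, and the model object is any module with encode/decode methods such as the sketch given earlier.

```python
import torch

def propagate(z, lam, dt):
    """Advance each complex latent component: z_i(t + dt) = exp(lam_i * dt) * z_i(t)."""
    return torch.exp(lam * dt) * z          # lam: complex tensor of shape (c,)

def combined_loss(model, lam, x_seq, t_seq):
    """Reconstruction loss plus latent-propagator loss over one trajectory.

    x_seq: (T, f) snapshots, t_seq: (T,) possibly non-uniform observation times.
    The 1:1 weighting of the two terms is an illustrative assumption.
    """
    z_seq = model.encode(x_seq)                              # (T, c), complex
    recon = ((model.decode(z_seq) - x_seq) ** 2).sum()       # autoencoder reconstruction error
    dt = (t_seq[1:] - t_seq[:-1]).unsqueeze(-1)              # (T-1, 1) time steps
    z_pred = propagate(z_seq[:-1], lam, dt)                  # one-step latent predictions
    prop = (torch.abs(z_pred - z_seq[1:]) ** 2).sum()        # propagator error in latent space
    return recon + prop
```

In practice λ can be stored as two real vectors (real and imaginary parts) and optimized jointly with the autoencoder parameters using Adam, mirroring the simultaneous training described above.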
Moreover, for the linear mapping between our latent variables z and the training data, we are also able to identify a matrix consisting of a multiple of the eigenvectors (1,1) and (1,-1), and thus the correct solution. This example was chosen to show that the algorithm is able to quickly identify the exact solution of a linear ODE in terms of its linearly independent components. Hidden multiscale dynamics We consider eight-dimensional synthetic time series data produced by an underlying two-dimensional complex-valued process. In particular, the data points x are generated by first solving for the temporal evolution of the two complex-valued processes p_1 and p_2 and then mapping to the eight-dimensional space by using a randomly sampled linear mapping W. One of the two processes used to generate the data is chosen to be much slower than the other one and both processes have a periodic component. dp_2/dt = (−0.9 + 1.5i) p_2 (8) As training data we consider 40 time series with 150 data points each, obtained by simulating the described processes for a maximum of t = 15 s and then sampling from the obtained data points. Hence the training data consists of: • 40 time-series • each consisting of 150 observations of x at a uniform time-step Δt = 0.0025 The autoencoder consists of one linear layer for both the decoder as well as the encoder. The model is trained for 5000 iterations using the Adam optimizer and a learning rate of 10^−3. The results for the convergence of the parameters λ_1 and λ_2 can be found in Figure . We note that the process which decays more slowly, and is thus more responsible for the long-term evolution of the system, has a higher convergence rate than the faster process. With the obtained parameters λ as well as the trained autoencoder, we compute predictions based on the last time step used for training, i.e. we apply the encoder to obtain our latent representation and then use the latent dynamics to advance the latent representation in time. Afterwards, we employ the decoder to reconstruct the full high-dimensional system. The results can be found in Figure and show very good agreement between predictions and reference data. This example shows that our model is successfully able to carry out dimensionality reduction and moreover indicates that the convergence rate between latent processes can be different. The latter is relevant when training models, as for accurate predictions all latent processes and their dynamics should be converged. Kuramoto-Sivashinsky Finally, we applied our algorithm to the KS equation and aim to identify a reduced-order model for the solution u(y, t): We employed periodic boundary conditions, µ = 1 and a domain size y ∈ [0, 22]. For this domain size, the KS equation exhibits a structurally stable chaotic attractor as discussed in ; . [Figure caption: The black lines divide the area for which training data was given from the area without training data.] The equation is discretized in space using a discretization step of 22/64, resulting in a state vector x of dimension 64 and a nonlinear system of coupled ODEs. This is solved using a stiff fourth-order solver. We employed a non-linear encoder and decoder with four fully-connected layers each and ReLU activation functions as well as dropout layers between the fully-connected layers. We trained the model for 200000 iterations using Adam and a learning rate of 5 · 10^−4, assuming a five-dimensional latent space. We obtained the λ's in Figure . 
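A minimal sketch of the spatial discretization of the KS equation described just above is given below, assuming the form u_t = −u u_y − u_yy − µ u_yyyy with periodic boundaries on [0, 22] and 64 grid points. The finite-difference stencil, the initial condition, and the use of SciPy's BDF integrator are illustrative assumptions; the authors' exact discretization and their stiff fourth-order solver are not specified in the excerpt.

```python
import numpy as np
from scipy.integrate import solve_ivp

L, N = 22.0, 64                      # domain size and number of grid points
dy = L / N
y = np.arange(N) * dy

def ks_rhs(t, u, mu=1.0):
    """Right-hand side of u_t = -u u_y - u_yy - mu u_yyyy on a periodic grid
    using simple central finite differences."""
    u_p, u_m = np.roll(u, -1), np.roll(u, 1)
    u_y = (u_p - u_m) / (2 * dy)
    u_yy = (u_p - 2 * u + u_m) / dy**2
    u_yyyy = (np.roll(u_yy, -1) - 2 * u_yy + np.roll(u_yy, 1)) / dy**2
    return -u * u_y - u_yy - mu * u_yyyy

# Placeholder initial condition; any smooth periodic perturbation works for illustration.
u0 = 0.1 * np.cos(2 * np.pi * y / L) * (1 + np.sin(2 * np.pi * y / L))
sol = solve_ivp(ks_rhs, (0.0, 100.0), u0, method="BDF",
                t_eval=np.linspace(0.0, 100.0, 400))
snapshots = sol.y.T                  # (time, 64) snapshots usable as training data
```

The resulting snapshot matrix plays the role of the high-dimensional time series x_n that the autoencoder and latent propagator are trained on.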
Four latent variables have λ's close to zero and thus a slow temporal dynamic that is responsible for the long-term evolution, whereas one latent variable is quickly decaying. Based on the obtained parameters, we compute predictions for an unseen initial condition not contained in the training data. We are able to reconstruct the correct phase space based on our predictions despite only using a very limited amount of training data. The results for the phase space can be seen in Figure . Although the small-scale fluctuations in the temporal dynamics are not well captured, the model identifies the correct manifold, which has a good accuracy compared to the reference solution. All phase spaces were obtained by using a finite-difference operator on the data or predictions. These results are in accordance with , whose LSTM-based temporal dynamic model was also able to find the correct phase space but not to track the actual dynamics for long-term predictions. Our model is not able to account for noise in the temporal evolution and thus dealing with chaotic, small-scale fluctuations is challenging. We believe that a probabilistic version of our algorithm could be advantageous here. This section contains a fully probabilistic formulation of the deterministic model discussed before. We replace the autoencoder with a variational autoencoder and the ODE in the latent space with an SDE. The loss function which we optimize is the Evidence Lower Bound (ELBO). Model Structure We postulate the following relations for our probabilistic model, using an Ornstein-Uhlenbeck (OU) process for each dimension i of the latent space and a Wiener process W_t in the latent space: We again assume that the latent variables z_t are complex-valued and a priori independent. Complex variables were chosen as their evolution includes a harmonic component, which is observed in many physical systems. We assume an initial condition z_0,i ∼ CN(0, σ^2_0,i). The total parameters associated with the latent space dynamics of our model are thus {σ^2_0,i, σ^2_i, λ_i}, i = 1, ..., c, and will be denoted by θ together with all parameters responsible for the decoder mapping G (see next section). These parameters, along with the state variables z_t, have to be inferred from the data x_t. Based on probabilistic Slow Feature Analysis (SFA), we set σ^2_0,i = 1 and fix σ^2_i in terms of ℜ(λ_i). As a consequence, a priori, the latent dynamics are stationary. A derivation and reasoning for this choice can be found in Appendix A. Hence the only independent parameters are the λ_i, the imaginary part of which can account for periodic effects in the latent dynamics. Variational Autoencoder We employ a variational autoencoder to account for a probabilistic mapping from the lower-dimensional representation z_n to the high-dimensional system x_n. In particular we are employing a probabilistic decoder. The encoder is used to infer the state variables z based on the given data and is thus defined in the inference and learning section. Inference and Learning Given the probabilistic relations, our goal is to infer the latent variables z_0:T as well as all model parameters θ. We follow a hybrid Bayesian approach in which the posterior of the state variables is approximated using amortized Variational Inference and Maximum-A-Posteriori (MAP) point-estimates for θ are computed.
The application of Bayes' rule for each data sequence x 0:T leads to the following posterior: where p(θ) denotes the prior on the model parameters. In the context of variational inference, we use the following factorization of the approximate posterior i.e. we infer only the mean µ and variance σ for each state variable based on the given data points. This conditional density used for inference is the encoder-counterpart to the probabilistic decoder defined in the section before. It can be readily shown that the optimal parameter values are found by maximizing the Evidence Lower Bound (ELBO) F(q φ (z 0:T ), θ) which is derived in Appendix B. We compute Monte Carlo estimates of the gradient of the ELBO with respect to φ and θ with the help of the reparametrization trick and carry out stochastic optimization with the ADAM algorithm . Results for the probabilistic extension We applied our probabilistic version to the KS-equation. We used the same settings as for the deterministic approach but considered up to 10 complex latent variables. The obtained λ's are in Figure . The probabilistic model allows us to quantify the uncertainty in predictions. In Figure predictions for various time-steps and the respective uncertainty bounds are shown for an unseen initial condition. Due to the chaotic nature of the KS-equation and the small amount of training data, the underlying linear dynamic of our model is only able to capture the full dynamics for a limited time horizon. Fortunately, due to the probabilistic approach the model is capable of capturing chaotic fluctuations with increasingly wide uncertainty bounds. We also computed the phase space representation for the KS-equation based on the predictions obtained by our model and compare it with the reference solution. The probabilistic model identifies the correct manifold with a better accuracy than the deterministic model. As some of the small-scale fluctuations are accounted as noise, the resulting manifold is more concentrated at the origin and the obtained values are slightly smaller than the reference manifold although their shape is very similar.
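A minimal sketch of the reparametrized, single-sample Monte Carlo ELBO estimate described above is given below. It is written for real-valued Gaussians for brevity, whereas the actual model uses complex-valued latent variables and a probabilistic decoder; the unit observation noise and the specific OU parameter values are illustrative assumptions, not the authors' settings.

```python
import math
import torch

def ou_log_prior(z, t, lam=-0.5, sigma2=1.0, sigma0_2=1.0):
    """Log-density of a real-valued OU prior dz = lam*z dt + sqrt(sigma2) dW,
    evaluated with its exact one-step transition densities (possibly non-uniform steps)."""
    dt = (t[1:] - t[:-1]).unsqueeze(-1)
    mean = torch.exp(lam * dt) * z[:-1]
    var = sigma2 * (1.0 - torch.exp(2.0 * lam * dt)) / (-2.0 * lam)
    lp = -0.5 * (((z[1:] - mean) ** 2) / var + torch.log(2 * math.pi * var)).sum()
    lp -= 0.5 * (((z[0] ** 2) / sigma0_2).sum()
                 + z.shape[-1] * math.log(2 * math.pi * sigma0_2))
    return lp

def elbo_mc(x_seq, t_seq, mu, log_sigma, decoder):
    """Single-sample Monte Carlo ELBO with the reparametrization trick.

    mu, log_sigma: (T, c) variational parameters of the factorized Gaussian posterior.
    decoder: callable mapping a latent sample to the mean of p(x_t | z_t).
    """
    eps = torch.randn_like(mu)
    z = mu + torch.exp(log_sigma) * eps                       # reparametrized sample
    log_lik = -0.5 * ((x_seq - decoder(z)) ** 2).sum()        # Gaussian likelihood, unit noise assumed
    entropy = (log_sigma + 0.5 * math.log(2 * math.pi * math.e)).sum()
    return log_lik + ou_log_prior(z, t_seq) + entropy
```

Gradients of this estimate with respect to the variational parameters and the model parameters can then be passed to Adam, as in the stochastic optimization described above.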
How does the framework capture the reduced-order dynamics?
By using a propagator in the latent space.
Paper Info Title: Nuclear Liquid-Gas Transition in the Strong Coupling Regime of Lattice QCD Publish Date: 28 Mar 2023 Author List: J Kim (from Institute for Advanced Simulation (IAS-4), Forschungszentrum Jülich), P Pattanaik (from Fakultät für Physik, Bielefeld University), W Unger (from Fakultät für Physik, Bielefeld University) Figure FIG. 1.Typical 2-dimension configuration at β = 1.0, at non-zero quark mass, temperature, chemical potential.The black dots are monomers, the blue lines are dimers, the red arrows are baryon loop segments (or triplets g b + f b = ±3 if adjacent to a non-trivial plaquette), and the green squares are plaquette occupations ±1.The actual configurations are 3+1-dimensional. FIG.2.Chiral susceptibility on a 2 4 volume for various quark masses, as a function of the bare anisotropy γ (with aT = γ 2 /2), analytic results from enumeration compared to numerical data from simulations via the worm algorithm. FIG.3.Various observables in the µB-T plane on a 2 4 volume at amq = 0.1.The back-bending of the first order transition at temperatures below aT = 0.5 in all observables is an artifact of the small volume, and vanishes in the thermodynamic limit.The temperature aT = 1/2 corresponds to the isotropic lattice here. FIG. 4. The chiral condensate (left) and the baryon density (right) for quark mass m = 1.5 as a function of the chemical potential and for various temperatures. FIG. 7. ∆f at amq = 0.2 as a function of chemical potential and β the on a 6 3 × 4 lattice FIG. 8. Baryon mass from ∆E as a function of the quark mass amq, and contributions from different dual variables: monomers, dimers and baryon segments. FIG. 9. Baryon density for volume 4 3 × 8 in the full µB − mq plane, illustrating the strong quark mass dependence of the onset to nuclear matter. FIG. 10.Baryonic observables on various volumes in the first order region amq = 1.5.Vertical bands indicate the mean and error of the nuclear transition. FIG. 12. Left: Extrapolation of the pseudo-critical values of µB for the various volumes into the thermodynamic limit.Right: Critical baryon chemical potential for different quark masses.The first order transition region is shown in blue, the crossover region is shown in red and the range for critical end point is marked in black. FIG. 17. Nuclear interaction scaled with baryon mass.As the quark mass increases, it tends to zero. FIG. 18. Critical baryon chemical potential and baryon mass from different approaches. Parameters for the Monte Carlo runs to determine the nuclear transition at strong coupling, with statistics after thermalization. abstract The nuclear liquid-gas transition from a gas of hadrons to a nuclear phase cannot be determined numerically from conventional lattice QCD due to the severe sign problem at large values of the baryon chemical potential. In the strong coupling regime of lattice QCD with staggered quarks, the dual formulation is suitable to address the nuclear liquid gas transition. We determine this first order transition at low temperatures and as a function of the quark mass and the inverse gauge coupling β. We also determine the baryon mass and discuss the nuclear interactions as a function of the quark mass, and compare to mean field results. It is known from experiments that at low temperatures, there is a phase transition between dilute hadron gas and dense nuclear matter as the baryon chemical potential increases. This transition is of first order and terminates at about T c = 16 MeV in a critical end point. 
The value of the chemical potential µ 1st B at zero temperature is given roughly by the baryon mass m B , where the difference of µ 1st B −m B is due to nuclear interactions. For a review on nuclear interactions see . As the nuclear force between baryons to form nuclear matter is due to the residual strong interactions between quarks and gluons, it should be accurately described by QCD. We choose to study the nuclear transition and nuclear interaction via lattice QCD , with its Lagrangian being a function of the quark mass and the inverse gauge coupling. In order to understand the nature of the transition, it is helpful to study its dependence on these parameters. However, at finite baryon density, lattice QCD has the infamous sign problem which does not allow us to perform direct Monte Carlo simulations on the lattice. Various methods have been proposed to overcome the numerical sign problem, but they are either limited to µ B /T 3 or can not yet address full QCD in 3+1 dimensions in the whole µ B − T plane , in particular the nuclear transition is out of reach. An alternative method is to study lattice QCD via the strong coupling expansion. There are two established effective theories for lattice QCD based on this: (1) the 3-dim. effective theory for Wilson fermions in terms of Polyakov loops, arising from a joint strong coupling and hopping parameter expansion , the dual representation for staggered fermions in 3+1 dimensions, with dual degrees of freedom describing mesons and baryons. Both effective theories have their limitations: is limited to rather heavy quarks (but is valid for large values of β) whereas ( ) is limited to the strong coupling regime β 1 (but is valid for any quark mass). We study lattice QCD in the dual formulation, both at infinite bare gauge coupling, β = 0, and at leading order of the strong coupling expansion in the regime β < 1, which is far from the continuum limit. But since strong coupling lattice QCD shares important features with QCD, such as confinement, and chiral symmetry breaking and its restoration at the chiral transition temperature, and a nuclear liquid gas transition, we may get insights into the mechanisms, in particular as the dual variables give more information in terms of its world lines, as compared to the usual fermion determinant that depends on the gauge variables. To establish a region of overlap of both effective theories, we have chosen to perform the Monte Carlo simulations in the dual formulation extending to rather large quark masses. This paper is organized as follows: in the first part we explain the dual formulation in the strong coupling regime, in the second part we provide analytic results based on exact enumeration and mean field theory, in the third part we explain the setup of our Monte Carlo simulations and present result on the m q -and β-dependence of the nuclear transition. Since the strong coupling regime does not have a well defined lattice spacing, we also determine the baryon mass am B to set the parameters of the grand-canonical partition function, aT and aµ B , in units of am B . We conclude by discussing the resulting nuclear interactions, and compare our findings with other results. Staggered action of strong coupling QCD and its dual representation In the strong coupling regime, the gauge integration is performed first, followed by the Grassmann integration to obtain a dual formulation. This was pioneered for the strong coupling limit in and has been extended by one of us to include gauge corrections . 
The sign problem is mild in the strong coupling limit and still under control for β < 1, where we can apply sign reweighting. The dual degrees of freedom are color-singlet mesons and baryons, which are point-like in the strong coupling limit and become extended over about one lattice spacing when the leading order gauge corrections are incorporated. The partition function of lattice QCD is the path integral Z = ∫ DU Dχ̄ Dχ exp(−S_G[U] − S_F[χ̄, χ, U]), where DU is the Haar measure, U ∈ SU(3) are the gauge fields on the lattice links (x, µ) and {χ̄_x, χ_x} are the unrooted staggered fermions at the lattice sites x. The gauge action S_G[U] is given by the Wilson plaquette action and depends on the inverse gauge coupling β = 2N_c/g^2; the staggered fermion action S_F[χ̄, χ, U] depends on the quark chemical potential aµ_q, which favors quarks in the positive temporal direction, and on the bare quark mass am_q. First we consider the strong coupling limit, where the inverse gauge coupling is β = 0 and hence the gauge action S_G[U] drops out of the partition function. The gauge integration is then over terms depending only on the individual links (x, µ), so the partition function factorizes into a product of one-link integrals z(x, µ), which can be evaluated by invariant integration and expressed in terms of new hadronic variables: only terms of the form (M(x)M(y))^{k_{x,µ}} (with M(x) = χ̄_x χ_x the mesonic composite and k_{x,µ} called dimers, counting the number of meson hoppings) and B̄(y)B(x), B̄(x)B(y) (called baryon links) are present in the solution of the one-link integral. The sites x and y = x + µ are adjacent lattice sites. It remains to perform the Grassmann integral over the fermion fields χ, χ̄. This requires expanding the exponential containing the quark mass term, which results in terms (2am_q M(x))^{n_x}, with n_x called monomers. To obtain non-vanishing results, at every site the 2N_c Grassmann variables χ_{x,i} and χ̄_{x,i} have to appear exactly once, resulting in the Grassmann constraint (GC) n_x + Σ_µ ( k_{x,µ} + (N_c/2) |ℓ_{x,µ}| ) = N_c, with the sum running over all bonds attached to x. Here n_x is the number of monomers, k_{x,µ} is the number of dimers, and the baryons form self-avoiding loops ℓ which, due to the constraint, cannot coexist with monomers or dimers. With this, we obtain an exact rewriting of the partition function for N_c = 3 in terms of integer-valued dual degrees of freedom {n, k, ℓ}, where the sum over valid configurations has to respect the constraint (GC); the configuration weight contains a contribution from dimers and a contribution from monomers. The weight factor w(ℓ) for each baryon loop ℓ depends on the baryon chemical potential µ_B = 3µ_q and induces a sign factor σ(ℓ) which depends on the geometry of ℓ, in particular on the winding number ω of the loop in the temporal direction. The total sign factor σ(ℓ) ∈ {±1} is explicitly calculated for every configuration. We apply sign reweighting, as the dual formulation has only a mild sign problem: baryons are non-relativistic and usually have loop geometries with positive sign. The dual partition function of the strong coupling limit is simulated with the worm algorithm (see Section III A), and the sign problem is essentially solved in this limit.
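To make the combinatorial content of the dual representation concrete, here is a minimal Python sketch (our own illustration, not code from the paper) that checks whether a candidate dual configuration satisfies the saturation rule described above: at every site, monomers and attached dimers must add up to N_c = 3, unless the site is traversed by a baryon loop, in which case neither monomers nor dimers may touch it. The data layout and all helper names are assumptions made for the example.

```python
# Minimal illustration (toy layout, not the paper's code): check the Grassmann
# constraint of the strong coupling dual representation with N_c = 3.
from itertools import product

NC = 3                # number of colors
DIMS = (2, 2, 2, 2)   # a 2^4 lattice, as used for the exact enumeration checks

def sites():
    return list(product(*[range(L) for L in DIMS]))

def attached_dimers(x, k):
    """Sum of dimer numbers on all bonds touching site x (forward and backward)."""
    total = 0
    for mu in range(len(DIMS)):
        total += k.get((x, mu), 0)            # forward bond (x, mu)
        y = list(x)
        y[mu] = (y[mu] - 1) % DIMS[mu]
        total += k.get((tuple(y), mu), 0)     # bond coming into x from behind
    return total

def satisfies_constraint(n, k, on_baryon_loop):
    """n[x]: monomer number; k[(x, mu)]: dimer number on the forward bond (x, mu);
    on_baryon_loop[x]: True if a self-avoiding baryon loop passes through x."""
    for x in sites():
        d = attached_dimers(x, k)
        if on_baryon_loop.get(x, False):
            if n.get(x, 0) != 0 or d != 0:    # a baryon saturates all quarks at x
                return False
        elif n.get(x, 0) + d != NC:           # mesonic saturation: n_x + dimers = N_c
            return False
    return True
```

A configuration that passes this check contributes the weight factors quoted above, (2am_q)^{n_x} per monomer, a bond factor per dimer, and w(ℓ)σ(ℓ) per baryon loop; the worm algorithm samples exactly this constrained ensemble.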
Extension to finite β The leading order gauge corrections O(β) to the strong coupling limit are obtained by expanding the Wilson gauge action to first order in β before integrating out the gauge links. A formal expression is obtained by changing the order of integration (first over the gauge links, then over the Grassmann-valued fermions) within the QCD partition function; this yields the O(β) partition function Z^(1). The challenge in computing Z^(1) is to address the SU(N_c) integrals that receive contributions from the elementary plaquette U_P. Link integration no longer factorizes, but tr[U_P] can be decomposed before integration. Integrals of the type J_{ij} with two open color indices (as compared to the link integration at strong coupling) have been derived from generating functions, both for the case J = 0 and for the gauge group G = U(N_c). The SU(3) result has also been worked out; in terms of the dual variables, and neglecting rotation and reflection symmetries, there are 19 distinct diagrams to be considered. The resulting partition function, valid to O(β), contains plaquette occupation numbers q_P ∈ {0, ±1}; the site weights w_x → ŵ_x, bond weights w_b → ŵ_b and baryon loop weights w_ℓ → ŵ_ℓ receive modifications compared to the strong coupling limit for sites and bonds adjacent to an excited plaquette (q_P ≠ 0). The weights are given in the original derivation and have been rederived for arbitrary gauge groups. The configurations {n, k, ℓ, q_P} must satisfy at each site x the constraint inherited from the Grassmann integration, which is the modified version of (GC) with q_x = 1 if the site is located at the corner of an excited plaquette (q_P ≠ 0) and q_x = 0 otherwise. A more general expression, which we obtained via group theory and which is valid to higher orders of the strong coupling expansion, is discussed in terms of tensor networks. A typical 2-dimensional configuration that arises at β = 1 in the Monte Carlo simulations is shown in Fig. 1. Note that if a baryon loop enters a non-trivial plaquette, one quark is separated from the two other quarks, resulting in the baryon being an extended object rather than point-like as in the strong coupling limit. The O(β) partition function has been used in the chiral limit to study the full µ_B − T plane via reweighting from the strong coupling ensemble. Whereas the second order chiral transition temperature decreased with aµ_B up to the tri-critical point, the first order nuclear transition was invariant: aµ_B^1st ≈ 1.78(1) at zero temperature has no β-dependence. For the ratio T_c(µ_B = 0)/µ_B^1st(T ≈ 0) we found the values 0.787 for β = 0 and 0.529 for β = 1, which should be compared to T_c/µ_B^1st ≈ 0.165 for full QCD. However, since reweighting cannot be fully trusted across a first order boundary, direct simulations at non-zero β are necessary. The Monte Carlo technique used to update the plaquette variables is discussed in Section III A. In this section, we provide analytic results from exact enumeration for small volumes, and mean field results based on the 1/d expansion, valid in the thermodynamic limit. The main purpose is to compare our Monte Carlo results to these analytic predictions. Exact enumeration To establish that our Monte Carlo simulations indeed sample the partition functions of the strong coupling limit and of the O(β) truncation, we have obtained analytic results on a 2^4 volume at strong coupling, and at finite β in two dimensions on a 4 × 4 volume, comparing the O(β) and O(β^2) truncations. Our strategy for an exact enumeration of the partition function Z is to enumerate the plaquette configurations first, then to fix the fermion fluxes which, together with the gauge fluxes induced by the plaquettes, form a singlet, triplet or anti-triplet, i.e.
on a given bond b, g_b + f_b ∈ {−3, 0, 3}; last, we perform the monomer-dimer enumeration on the available sites not yet saturated by fermions, using a depth-first algorithm. At strong coupling, with no plaquettes, g_b = 0 and the f_b are purely baryonic fluxes. All observables that can be written in terms of derivatives of log(Z), such as the baryon density, the chiral condensate, the energy density, and also the average sign, are shown in Fig. 3. Expectations from mean field theory Another analytical method to study strong coupling lattice QCD is the mean field approach, where the partition function is expanded in 1/d (with d the spatial dimension) and a Hubbard-Stratonovich transformation is then performed. After this procedure, the free energy is a function of the temperature T, the chiral condensate σ and the chemical potential µ_B; it involves the one-dimensional quark excitation energy E[m], which is a function of the quark mass m = am_q. For N_c = 3 and d = 3 we determined the minimum of the free energy with respect to the chiral condensate. This gives us the equilibrium chiral condensate as a function of (T, m, µ_B). The chiral condensate and the baryon density as a function of the baryon chemical potential in lattice units aµ_B, for various temperatures at quark mass m = 1.5, are shown in Fig. 4. We have determined the critical temperature to be aT_c = 0.23, which is characterized by an infinite slope of the chiral condensate. For lower temperatures, there is a clear discontinuity of the chiral condensate, separating the low density phase from the high density phase. For temperatures above and in the vicinity of aT_c, the chiral condensate and the baryon density have no discontinuity but change rapidly, corresponding to a crossover transition. With this method the phase diagram can be mapped out for different quark masses: the second order phase transition in the chiral limit is plotted as a solid blue line, the dotted lines show the first order phase transition for the different quark masses, and the solid red line indicates the critical end point for the different quark masses. Mean field theory also provides expressions for the pion mass am_π and the baryon mass am_B; the mean field baryon mass for N_c = 3, d = 3 is also plotted in red in the figure comparing the different determinations. Whereas the baryon mass is around N_c in the chiral limit (am_B ≈ 3.12 for N_c = 3), it approximately doubles at m = 3.5 (am_B ≈ 6.28), which corresponds to the pion mass am_π = 4.45, i.e. m_π/m_B = 0.708. Hence, at around bare mass m = 3.5, the valence quark mass of the baryon corresponds roughly to 1/3 of the chiral limit value of the baryon mass. The first Monte Carlo simulations that could extend into the µ_B − T plane used the MDP (monomer-dimer-polymer) algorithm, but it required the introduction of the worm algorithm to make substantial progress. First studies of the worm algorithm applied to strong coupling limit QCD were carried out for the gauge group U(3), and subsequently for the gauge group SU(3). Monte Carlo simulations extending the worm algorithm to incorporate the leading order gauge corrections were proposed afterwards. We will shortly review the setup of our Monte Carlo strategy for the nuclear transition, with an emphasis on the challenges in addressing large quark masses. Strong Coupling Without any further resummation, there is a mild sign problem in the dual formulation of lattice QCD in the strong coupling limit. When the average sign ⟨σ⟩ is not too close to zero, most of the configurations have a positive weight, which allows us to perform sign reweighting.
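As a concrete illustration of the sign reweighting just mentioned, the following sketch (ours, with assumed array names) implements the standard reweighting estimator ⟨O⟩ = ⟨O σ⟩_{||} / ⟨σ⟩_{||} from measurements taken in the sign-quenched ensemble, and extracts the free energy difference ∆f from the average sign as used further below.

```python
# Minimal sketch, assuming per-configuration data from a sign-quenched ensemble:
# obs[i] is an observable on configuration i, sign[i] = +/-1 its sign factor.
import numpy as np

def reweighted_mean(obs, sign):
    """<O> in the full ensemble, obtained from sign-quenched sampling."""
    obs, sign = np.asarray(obs, float), np.asarray(sign, float)
    return np.mean(obs * sign) / np.mean(sign)

def delta_f(sign, n_s, n_tau):
    """Free energy density difference (lattice units) between the full and the
    sign-quenched ensemble: a^4 * Delta_f = -log(<sigma>) / (N_s^3 * N_tau)."""
    return -np.log(np.mean(sign)) / (n_s**3 * n_tau)

# Illustration with made-up numbers only (a very mild sign problem):
rng = np.random.default_rng(0)
sign = rng.choice([1.0, -1.0], size=100_000, p=[0.9995, 0.0005])
obs = rng.normal(0.5, 0.1, size=100_000)
print(reweighted_mean(obs, sign), delta_f(sign, n_s=6, n_tau=8))
```

With ⟨σ⟩ close to one both estimators are stable; as ⟨σ⟩ approaches zero the statistical error of the reweighted mean blows up, which is why a small ∆f is the relevant criterion for the feasibility of the simulations.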
At strong coupling, ∆f (the free energy difference between the full and the sign-quenched ensemble, defined below) has been computed as a function of the baryon chemical potential and the quark mass: it is close to zero in most of the parameter space, except near the critical chemical potential and for small quark masses, and it never exceeds 5 × 10^-4. Hence sign reweighting can be performed in the full parameter space. The fact that the sign problem becomes even milder with increasing mass is related to the larger critical chemical potential, which results in a larger fraction of static baryons (spatial baryon hoppings become rare). [Figure: ∆f at strong coupling as a function of the chemical potential and the quark mass on a 6^3 × 8 lattice; the sign problem becomes milder as the quark mass increases.] Finite β All runs at finite β have been obtained for N_τ = 4, which corresponds to a moderately low temperature aT = 0.25, compared to the chiral transition temperature aT_c ≈ 1.54. These simulations were too expensive to attempt N_τ = 8 runs, in particular as higher statistics were required. The spatial volumes are 4^3, 6^3 and 8^3. The β values range from 0.0 to 1.0 with step size 0.1, and the quark masses am_q from 0.00 to 1.00 with step size 0.01. The values of aµ were chosen close to the nuclear transition, with the scanning range shifted to larger values as am_q increases: at small quark masses the scanning range is from aµ = 0.4 to 1.0, and for the large quark masses it is from 0.6 to 1.2, with step size 0.01. The statistics used are 15 × 10^4 measurements, with 40 × N_s^3 worm updates between measurements. Residual sign problem Although it is possible to resum baryon and pion world lines such that the sign problem disappears at strong coupling, this is not possible when including gauge corrections. In order to compare both sign problems, we kept the original dual formulation to monitor the severity of the sign problem. This is done via the relation between the average sign ⟨σ⟩ and the difference ∆f = f − f_{||} of the free energy densities of the full ensemble (f) and of the sign-quenched ensemble (f_{||}), ⟨σ⟩ = exp(−(V/T) ∆f). Nuclear interactions We have found that aµ_B^1st is very different from the baryon mass. This must be due to strong attractive interactions between nucleons. In contrast to continuum physics, in the strong coupling limit there is no pion exchange, due to the Grassmann constraint. Instead, nucleons are point-like and hard-core repulsive. However, the pion bath, which is modified by the presence of static baryons, results in an attractive interaction. This has been analyzed in the chiral limit using the snake algorithm, where it was found that the attractive force is of entropic origin. Here, we do not quantify the nuclear interaction via the nuclear potential, but via the difference between the critical baryon chemical potential and the baryon mass, in units of the baryon mass, as shown in Fig. 17, using the values of am_B measured in Section III C. This quantity compares more directly to the 3-dim. effective theory. The nuclear interaction is maximal in the chiral limit, where it exceeds 40%; this is related to the pions being massless, so that the modification of the pion bath is maximal. We clearly find that the nuclear interaction decreases drastically and almost linearly until it approaches zero at about am_q = 2.0, corresponding to a pion mass am_π = 3.36, see Section II B. The large error bars at larger quark masses, which are due to the subtraction of two almost equal quantities, make it difficult to extract a non-zero nuclear interaction at the largest quark masses.
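Since the nuclear interaction is extracted as the difference of two nearly equal numbers, its statistical error grows rapidly with the quark mass. The sketch below (our own, with hypothetical sample arrays) combines a simple jackknife estimate of aµ_B^1st and am_B with leading-order error propagation for the scaled interaction (aµ_B^1st − am_B)/am_B; it reproduces the qualitative behaviour of the error bars discussed above.

```python
# Minimal sketch: jackknife errors and propagation into the nuclear interaction
# epsilon = (a*muB_1st - a*mB) / a*mB.  All sample arrays here are hypothetical.
import numpy as np

def jackknife_mean(samples):
    samples = np.asarray(samples, float)
    n = len(samples)
    loo = np.array([np.mean(np.delete(samples, i)) for i in range(n)])
    mean = np.mean(samples)
    err = np.sqrt((n - 1) / n * np.sum((loo - mean) ** 2))
    return mean, err

def nuclear_interaction(mu_c_samples, m_b_samples):
    mu_c, dmu = jackknife_mean(mu_c_samples)
    m_b, dm = jackknife_mean(m_b_samples)
    eps = (mu_c - m_b) / m_b
    # leading-order propagation for independent estimates; the error stays finite
    # while the central value shrinks as mu_c approaches m_B at large quark mass
    deps = np.sqrt((dmu / m_b) ** 2 + (mu_c * dm / m_b**2) ** 2)
    return eps, deps

# Hypothetical numbers mimicking the large-quark-mass regime:
rng = np.random.default_rng(1)
mu_c = rng.normal(6.25, 0.05, size=50)   # samples of a*muB_1st (made up)
m_b = rng.normal(6.28, 0.05, size=50)    # samples of a*mB      (made up)
print(nuclear_interaction(mu_c, m_b))
```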
In this work, we have determined the baryon mass and the nuclear transition via Monte Carlo simulations: the worm algorithm based on the dual formulation, equipped with additional plaquette updates at finite β. All these numerical results and the various analytic expressions are summarized in Fig. 18. We find that as the quark mass becomes large, spatial meson hoppings (i.e. spatial dimers) become rare, which makes this 3+1-dimensional system closer to 1-dimensional QCD. Also, both the baryon mass and the critical baryon chemical potential obtained in our dual representation, i.e. for staggered fermions, approach the corresponding values of the 3-dim. effective theory, which is based on Wilson fermions. A further comparison addresses the validity of the mean field approach discussed in Section II B: mean field theory shows strong deviations for small quark masses, but the discrepancy becomes smaller for larger quark masses. The extension of the study of the nuclear transition to finite inverse gauge coupling β shows the β-dependence of aµ_B^c for the various quark masses: for all quark masses ranging from am_q = 0 to am_q = 1.0, there is only a very weak β-dependence, confirming the expectation from mean field theory. This work was restricted to isotropic lattices, ξ = a/a_t = 1, i.e. we performed simulations at fixed temperature. Non-isotropic lattices are necessary to vary the temperature at fixed values of β. This requires including two bare anisotropies, γ for the fermionic action and γ_G for the gauge action; so far, finite β on anisotropic lattices has only been studied by us in the chiral limit. Clearly, it is interesting to study the location of the nuclear critical point also at finite quark mass and including higher order gauge corrections. Simulations including O(β^2) corrections are under preparation.
What is the main focus of the research paper?
Nuclear liquid-gas transition in lattice QCD.
4,017
multifieldqa_en
4k
Sir Simon William English (born 30 December 1961) is a New Zealand former National Party politician who served as the 39th prime minister of New Zealand from 2016 to 2017. He had previously served as the 17th deputy prime minister of New Zealand and minister of finance from 2008 to 2016 under John Key and the Fifth National Government. A farmer and public servant before entering politics, English was elected to the New Zealand Parliament in as the National Party's candidate in the Wallace electorate. He was elevated to Cabinet in 1996 and in 1999 was made minister of finance, although he served for less than a year due to his party's loss at the 1999 general election. In October 2001, English replaced Jenny Shipley as the leader of the National Party (and consequently as Leader of the Opposition). He led the party to its worst defeat at the 2002 general election, and as a consequence, in October 2003 he was replaced as leader by Don Brash. In November 2006, after Brash's resignation, English became deputy leader under John Key. After National's victory at the 2008 general election, he became deputy prime minister and was also made minister of finance for the second time. Under English's direction New Zealand's economy maintained steady growth during National's three terms of government. He became a list-only MP after stepping down as an electorate MP at the 2014 general election. John Key resigned as leader of the National Party and prime minister in December 2016. English won the resulting leadership election unopposed and was sworn in as prime minister on 12 December 2016. His tenure was only ten months, and included a three-month election campaign. In the 2017 general election, National won the largest number of seats but fell short of a majority. The parties holding the balance of power declined to support the existing government, and English was subsequently replaced as prime minister by Jacinda Ardern, leader of the Labour Party. English initially continued on as Leader of the Opposition, but resigned as leader of the National Party on 27 February 2018 and left parliament two weeks later. Early life English was born on 30 December 1961 at Lumsden Maternity Centre in Lumsden. He is the eleventh of twelve children of Mervyn English and Norah (née O'Brien) English. His parents purchased Rosedale, a mixed sheep and cropping farm in Dipton, Southland from Mervyn's uncle, Vincent English, a bachelor, in 1944. English was born in the maternity unit at Lumsden. English attended St Thomas's School in Winton, then boarded at St. Patrick's College in Upper Hutt, where he became head boy. He played in the first XV of the school's rugby team. English went on to study commerce at the University of Otago, where he was a resident at Selwyn College, and then completed an honours degree in English literature at Victoria University of Wellington. After finishing his studies, English returned to Dipton and farmed for a few years. From 1987 to 1989, he worked in Wellington as a policy analyst for the New Zealand Treasury, at a time when the free market policies favoured by Labour's finance minister Roger Douglas (known collectively as "Rogernomics") were being implemented. English joined the National Party in 1980, while at Victoria University. He served for a period as chairman of the Southland branch of the Young Nationals, and became a member of the Wallace electorate committee. After moving to Wellington, he served for periods on the Island Bay and Miramar electorate committees, respectively. 
Fourth National Government (1990–1999) At the 1990 general election, English stood as the National candidate in Wallace, replacing the retiring Derek Angus, and was elected with a large majority. He would hold this seat, renamed Clutha-Southland in 1996, until 2014. He and three other newly elected National MPs (Tony Ryall, Nick Smith, and Roger Sowry) were soon identified as rising stars in New Zealand politics, and at various points were dubbed the "brat pack", the "gang of four", and the "young Turks". In his first term in parliament, English chaired a select committee into social services. He was made a parliamentary under-secretary in 1993, serving under the Minister of Health. First period in cabinet (1996–1999) In early 1996, English was elevated to cabinet by Prime Minister Jim Bolger, becoming the Minister for Crown Health Enterprises and Associate Minister of Education (to Wyatt Creech). He was 34 at the time, becoming the cabinet's youngest member. After the 1996 general election, the National Party was forced into a coalition with New Zealand First to retain government. In the resulting cabinet reshuffle, English emerged as Minister of Health. However, as a condition of the coalition agreement, NZ First's Neil Kirton (a first-term MP) was made Associate Minister of Health, effectively becoming English's deputy. This arrangement was described in the press as a "shotgun marriage", and there were frequent differences of opinion between the two ministers. After their relationship became unworkable, Kirton was sacked from the role in August 1997, with the agreement of NZ First leader Winston Peters. As Minister of Health, English was responsible for continuing the reforms to the public health system that National had begun after the 1990 general election. The reforms were unpopular, and health was perceived as one of the government's weaknesses, with the health portfolio consequently being viewed as a challenge. English believed that the unpopularity of the reforms was in part due to a failure in messaging, and encouraged his National colleagues to avoid bureaucratic and money-focused language (such as references to "balance sheets" and "user charges") and instead talk about the improvements to services the government's reforms would bring. He also rejected the idea that public hospitals could be run as commercial enterprises, a view which some of his colleagues had previously promoted. By early 1997, as dissatisfaction with Bolger's leadership began to grow, English was being touted as a potential successor, along with Jenny Shipley and Doug Graham. His age (35) was viewed as the main impediment to a successful leadership run. National's leadership troubles were resolved in December 1997, when Bolger resigned and Shipley was elected to the leadership unopposed. English had been a supporter of Bolger as leader, but Shipley reappointed him Minister of Health in her new cabinet. English was promoted to Minister of Finance in a reshuffle in January 1999, a position which was at the time subordinate to the Treasurer, Bill Birch. After a few months, the pair switched positions as part of Birch's transition to retirement, with English assuming the senior portfolio. In early interviews, he emphasised his wish to be seen as a pragmatist rather than an ideologue, and said that the initiatives of some of his predecessors (Roger Douglas's "Rogernomics" and Ruth Richardson's "Ruthanasia") had focused on "fruitless, theoretical debates" when "people just want to see problems solved". 
Opposition (1999–2008) After the National Party lost the 1999 election to Helen Clark's Labour Party, English continued on in the shadow cabinet as National's spokesperson for finance. He was elected deputy leader of the party in February 2001, following the resignation of Wyatt Creech, with Gerry Brownlee being his unsuccessful opponent. Leader of the Opposition In October 2001, after months of speculation, Jenny Shipley resigned as leader of the National Party after being told she no longer had the support of the party caucus. English was elected as her replacement unopposed (with Roger Sowry as his deputy), and consequently became Leader of the Opposition. However, he did not openly organise against Shipley, and according to The Southland Times "there was almost an element of 'aw, shucks, I'll do it then' about Mr English's ascension". Aged 39 when he was elected, English became the second-youngest leader in the National Party's history, after Jim McLay (who was 38 when elected in 1984). He also became only the third Southlander to lead a major New Zealand political party, after Joseph Ward and Adam Hamilton. However, English failed to improve the party's performance. In the 2002 election, National suffered its worst electoral defeat ever, gaining barely more than twenty percent of the vote. English described it as "the worst day of my political life". Both party insiders and the general public were split as to how much to blame him for the loss, but most of the party believed that English would be able to rebuild National's support. By late 2003, however, National's performance in opinion polls remained poor. The party had briefly increased its popularity in the year following the election, but by October its support had fallen to levels only slightly better than what it achieved in the last ballot. English also appeared in a boxing match for a charity against entertainer Ted Clarke. This did not boost his polling or that of the National party either, with suggestions that it devalued his image as a serious politician. Don Brash, former governor of the Reserve Bank and a relative newcomer to politics, began to build up support to replace English. On 28 October, Brash gained sufficient backing in Caucus to defeat English in a leadership contest. Shadow cabinet roles and deputy leader On 2 November 2003, when Brash changed responsibilities for certain MPs, English became National's spokesman for education, ranked at fifth place in the party's parliamentary hierarchy. He remained in parliament after the 2005 election. In his new shadow education portfolio, English performed strongly, and remained a party favourite despite his election defeat as leader in 2002, eventually being returned to the finance portfolio in August 2004 as deputy spokesman (while still retaining responsibility for education). In November 2006, Brash resigned as leader. English was considered as a potential replacement leader (running against John Key) or deputy leader (against incumbent Gerry Brownlee) in the ensuing leadership election. However, a contest was avoided when the MPs agreed a Key/English ticket would run unopposed in a display of party unity. English took over the deputy leadership and the finance portfolio in the Key shadow cabinet. Fifth National Government (2008–2017) Deputy Prime Minister and Minister of Finance (2008–2016) At the 2008 election, English was re-elected by his electorate, winning by a margin of about 15,500 votes. 
He became Deputy Prime Minister of New Zealand and Minister of Finance in the fifth National Government, being sworn into office on 19 November 2008 and continued to serve in those roles until becoming Prime Minister on 12 December 2014. He was also made Minister of Infrastructure in National's first term of government and Minister responsible for Housing New Zealand Corporation and minister responsible for the New Zealand flag consideration process in its third. He was comfortably re-elected in Clutha-Southland in the 2011 election but opted to run as a party-list candidate in 2014. The pairing of John Key as leader of the National Party and English as his deputy has been compared to that of Bob Hawke and Paul Keating (in Australia) and Tony Blair and Gordon Brown (in the UK). English acceded to the role of Finance Minister in the continuing wake of the financial crisis. In response to New Zealand's rising debt, English made budget deficit-reduction his main priority. His first budget outlined three focuses in New Zealand's financial recovery: "improving the business environment and removing roadblocks to growth; investment in productive infrastructure; and improving the way government works". One of his first acts was creating the National Infrastructure Unit, charged with formulating a plan for infrastructure projects and investments. He commissioned a government-wide spending review, with an aim to reducing government expenditure—with the exceptions of a two-year stimulus package and long-term increases on infrastructure spending. In April 2011, the Opposition criticised English for suggesting that New Zealand businesses could use New Zealand's low wages to help it compete with Australia. The National Government campaigned for re-election in 2011 on its economic record. The Government boasted growth for five consecutive quarters up to mid-2010, totalling 1.6% of real GDP. Strong growth resulted in a surplus of $473 million for the 2015/16 financial year, projected to rise to $8.5 billion by 2020/21. In his 2016 Economic and Fiscal Update address, English stated that reducing debt and tackling the costs of the 2016 Kaikōura earthquake were higher priorities than reducing rates of tax. Allowances issue In 2009, the media, including TVNZ and TV3 revealed that English was receiving about NZ$900 a week as part of a living allowance for ministers, to live in his own NZ$1.2 million Wellington home. At the time, English also received $276,200 in his annual salary as Deputy Prime Minister. It was also revealed other ministers with homes in the capital city were also claiming accommodation allowances. On 3 August 2009, Prime Minister John Key started a review of the housing allowances claimed by cabinet ministers. English subsequently paid back $12,000 and only claimed about $24,000 a year in living allowances. The Auditor-General's office said in September 2009 that they were making "preliminary enquiries" into parliamentary housing expenses in response to a letter of complaint from Progressive party leader Jim Anderton. Two days later English stated that he would no longer take up any housing allowance and had paid back all the allowance he had received since the November 2008 election. Prime Minister (2016–2017) John Key resigned on 12 December, and endorsed English as his successor in the resulting leadership election. 
Following the drop-out of both Judith Collins and Jonathan Coleman from the leadership election, English was sworn in as the 39th Prime Minister of New Zealand on 12 December 2016. English appointed his first cabinet on 18 December. In a reshuffle, he appointed Steven Joyce to succeed him as Finance Minister, while most ministerial portfolios remained the same. In February 2017, English did not attend Waitangi Day commemorations at the historic treaty grounds, reportedly in response to the Ngāpuhi iwi's decision to stop the Prime Minister from speaking at the marae. Ngāpuhi have protested the Government's negotiation of the Trans Pacific Partnership Agreement (TPPA), which the iwi believe infringes upon Māori sovereignty, and thus does not adhere to the Treaty of Waitangi. English had been invited to attend in an official capacity; his non-attendance was criticised by a Ngāpuhi elder and Opposition leader Andrew Little. In his first overseas trip as Prime Minister, English travelled to Europe to discuss trade ties, including a prospective New Zealand–European Union free trade agreement. He first travelled to London on 13 January 2017 to meet British Prime Minister Theresa May. Discussing trade relations, English said the two nations were "natural partners" and would "continue to forge ties" after the UK's withdrawal from the EU. He also arranged to meet with London Mayor Sadiq Khan, Belgian Prime Minister Charles Michel and German Chancellor Angela Merkel. In a meeting with Merkel, English received crucial backing from Germany for a trade deal with the EU. On 16 January, English stated that his government would continue to promote TPPA, despite the United States' decision to withdraw from the agreement. He explained that Southeast Asian countries would now be treated as a priority in negotiations—he also asserted that the United States was ceding influence to China by its rejection of the trade pact. At a press conference at the Beehive on 1 February 2017, English announced that the 2017 general election would be held on 23 September. The Prime Minister later confirmed that his party would approach ACT, United Future and the Māori Party if confidence and supply agreements were required to form a government following the election. In his second cabinet reshuffle on 24 April, English appointed Gerry Brownlee as his new Foreign Affairs Minister; he also promoted Nikki Kaye to the portfolio of Education Minister, and moved Mark Mitchell into the cabinet to become Defence Minister. The reshuffle was perceived as an election preparation. On 13 February 2017, English welcomed Australian Prime Minister Malcolm Turnbull to Wellington. The two leaders reaffirmed their shared trade agenda, and discussed changes to the Australian citizenship pathway which will affect permanent residents originating from New Zealand. On 19 June, it was reported that Todd Barclay, who succeeded English as MP for Clutha-Southland, had clandestinely recorded one of his employee's conversations the previous year, and that John Key's leaders' budget was used to pay a confidential settlement after the employee resigned. English admitted that he had been aware of the illegal recording and the settlement, and thus implicated in the scandal. 
During the 2017 National campaign launch, English introduced a $379 million social investment package including digital learning academies for high school students, more resources for mathematics, and boosting support for teaching second languages in schools, and maintaining National Standards in the school curriculum. Prime Minister English also sought to defend National's financial management and economic track record and claimed that the opposition Labour Party would raise taxes. Early opinion polling had forecast a poor showing in the election for the Labour Party, but in early August 37-year-old Jacinda Ardern took over as Labour leader and seemingly energised younger voters. At the 2017 general election, National won the largest share of the party vote (44.4%) and the largest number of seats (56) in the House Representatives. However, National lacked enough seats to govern alone due to two of the party's support partners, the Māori Party and United Future, losing their parliamentary seats. In response, English stated that the party would be entering into talks to form a coalition with New Zealand First. Following talks with the two largest parties, New Zealand First entered a coalition arrangement with the Labour Party. English was succeeded as prime minister by Jacinda Ardern on 26 October. Opposition (2017–2018) Leader of the Opposition English was re-elected as National Party leader on 24 October 2017. At the time of his re-election, English announced his intention to stay on as leader until the next general election. On 13 February 2018, however, he stood down as National Party leader due to personal reasons, and instructed the party to put into motion the processes to elect a new leader. He also retired from Parliament. English's resignation followed weeks of speculation that he would step aside for a new leader. On 27 February, he was succeeded as party leader by Simon Bridges as the result of the leadership election held that day. Post-premiership In 2018, English joined the board of Australian conglomerate, Wesfarmers. English serves in Chairmanships of Mount Cook Alpine Salmon, Impact Lab Ltd and Manawanui Support Ltd. He is also a director of The Instillery, Centre for Independent Studies and The Todd Corporation Limited, and is a member of the Impact Advisory Group of Macquarie Infrastructure and Real Assets. Political and social views English is regarded as more socially conservative than his predecessor, John Key. He has stated his opposition to voluntary euthanasia and physician-assisted suicide, same-sex civil unions, and the decriminalisation of prostitution. As Prime Minister he opposed any "liberalisation" of abortion law. In 2004, English voted against a bill to establish civil unions for both same-sex and opposite-sex couples. In 2005, he voted for the Marriage (Gender Clarification) Amendment Bill, which would have amended the Marriage Act to define marriage as only between a man and a woman. English voted against the Marriage (Definition of Marriage) Amendment Bill, a bill that legalised same-sex marriage in New Zealand. However, in December 2016 he stated, "I'd probably vote differently now on the gay marriage issue. I don't think that gay marriage is a threat to anyone else's marriage". In 2009, English voted against the Misuse of Drugs (Medicinal Cannabis) Amendment Bill, a bill aimed at amending the Misuse of Drugs Act so that cannabis could be used for medical purposes. Personal life English met his future wife, Mary Scanlon, at university. 
She was studying medicine at the time, and became a general practitioner. Both her parents were immigrants, her father being Samoan and her mother Italian, born on the island of Stromboli. They have six children: a daughter and five sons. English is a practising Roman Catholic, but has stated that he considers his religious beliefs personal and thus separate from politics. In June 2002, English took part in TV3's Fight For Life, a celebrity boxing fundraiser to raise money for the Yellow Ribbon anti-youth-suicide campaign, influenced by the death of a teenage nephew in 1997. He lost a split decision to former university colleague Ted Clarke. Honours In the 2018 Queen's Birthday Honours, English was appointed a Knight Companion of the New Zealand Order of Merit, for services of over 27 years to the State. See also List of New Zealand governments Politics of New Zealand References External links Profile at National Party Profile on Parliament.nz Releases and speeches at Beehive.govt.nz |- |- |- |- |- |- |- |- |- |- |- |- |- |- |- |- |- |- 1961 births 21st-century New Zealand politicians Candidates in the 2017 New Zealand general election Deputy Prime Ministers of New Zealand Leaders of the Opposition (New Zealand) Living people Members of the Cabinet of New Zealand Members of the New Zealand House of Representatives New Zealand farmers New Zealand finance ministers New Zealand list MPs New Zealand MPs for South Island electorates New Zealand National Party MPs New Zealand National Party leaders New Zealand Roman Catholics New Zealand people of Irish descent People educated at St. Patrick's College, Silverstream People from Dipton, New Zealand People from Lumsden, New Zealand Prime Ministers of New Zealand University of Otago alumni Victoria University of Wellington alumni Knights Companion of the New Zealand Order of Merit New Zealand politicians awarded knighthoods
What position did Simon English hold in the 2008 general election?
He became deputy prime minister and minister of finance.
3,602
multifieldqa_en
4k
Passage 1: Lower South Bay, New York Lower South Bay, commonly called South Bay, is a hamlet on the southwest corner of Oneida Lake, Oneida County, New York, United States. It is opposite North Bay, and is surrounded by many islands to the west, north and east, including Geersbeck Island, Hall Island, Glosky Island, Schroeppel Island, Denmans Island and Long Island (not to be confused with Long Island, New York City). Lower South Bay also lies near the town of Cicero, approximately two miles west. Passage 2: Jones Beach Island Jones Beach Island is one of the outer barrier islands off the southern coast of Long Island in the U.S. state of New York. Etymology It is named for Major Thomas Jones, who first came to Long Island in 1692, where he proceeded to build the island's first brick house near Massapequa. Jones built a whaling station on Jones Island near the present site of Jones Beach State Park in 1700.Jones Beach Island is sometimes referred to as Oak Beach Island and is the former home of the infamous Oak Beach Inn. Because of the ephemeral nature of the various inlets, the name Fire Island is sometimes used to refer collectively to the various barrier islands off the south shore of Long Island, but usually refers specifically to the island across the Fire Island Inlet to the east. Geography Jones Beach Island is separated from Long Island by South Oyster Bay and Great South Bay, from Long Beach Barrier Island by Jones Inlet to the west, and from Fire Island by Fire Island Inlet to the east. It straddles the line between Nassau and Suffolk counties.From west to east, Jones Beach Island contains the following communities and parks: Jones Beach State Park John F. Kennedy Memorial Wildlife Sanctuary, a Town of Oyster Bay wildlife preserve Tobay Beach, a Town of Oyster Bay beach Gilgo, a census-designated place West Gilgo Beach, a private gated community Gilgo Beach, a community and Town of Babylon beach Cedar Beach, a Town of Babylon beach Overlook Beach, a residents-only Town of Babylon beach Gilgo State Park, an undeveloped beach Oak Beach, a census-designated place Captree, a census-designated place Captree State ParkThe southern side of the island is known for its beaches that face the open Atlantic Ocean. The best known of the public beaches on the island, Jones Beach State Park on the western tip of the island, is a summer recreational destination for the New York City area.Jones Beach Island is accessible from Long Island on its western end by the Meadowbrook State Parkway to Merrick (with the Loop Parkway providing a spur to Long Beach), and the Wantagh State Parkway to Wantagh. The Robert Moses Causeway traverses its eastern end, linking to Babylon via the State Boat Channel Bridge and Great South Bay Bridge, as well as to Fire Island by the Fire Island Inlet Bridge. The Ocean Parkway connects all three causeways and runs the length of the island. Passage 3: Oak Beach–Captree, New York Oak Beach–Captree, frequently just called Oak Beach, was a census-designated place (CDP) in the town of Babylon in Suffolk County, New York, United States. The population was 286 at the 2010 census.Prior to the 2010 census, the area was part of a larger CDP called Gilgo-Oak Beach-Captree, New York. The Oak Beach–Captree CDP consists of some small beach communities on a barrier island along the southern edge of Long Island, including Oak Beach, Oak Island, and Captree Island. As of 2020, Oak Beach–Captree was split into two separate CDPs called Oak Beach and Captree. 
Geography Oak Beach–Captree was located at 40°38′21″N 73°17′35″W. According to the United States Census Bureau, the CDP had a total area of 3.7 square miles (9.5 km2), of which 2.7 square miles (7.1 km2) is land and 0.93 square miles (2.4 km2), or 25.03%, is water. Demographics The census numbers are presumably for full-time inhabitants; many of these houses are second homes and not primary residences, although the proportion of seasonal residents is decreasing. The land for these communities is not privately owned, but leased from the Town of Babylon through the year 2065. However, the residences on the property are owned. If the leases are not renewed at some point in the future, the owners will have to move the houses elsewhere, similar to what happened at High Hill Beach when Jones Beach State Park was created. Passage 4: South Oyster Bay South Oyster Bay or East Bay is a lagoon and natural harbor along the western portion of the south shore of Long Island in New York in the United States. The harbor is formed by Jones Beach Island, a barrier island on the southern side of Long Island. It is approximately 3 mi (5 km) wide between the two islands, and approximately 15 mi (24 km) long. It links to Great South Bay on its eastern end and opens to the Atlantic Ocean through inlets on either side of Jones Beach Island.The name refers to its history as one of the finest oyster beds in the world. See also Outer Barrier Jamaica Bay Oyster Bay, New York Great South Bay Patchogue Bay South Shore Estuary Passage 5: Short Beach (New York) Short Beach is the beach on the northern shore of the western end of Jones Beach Island. The beach faces South Oyster Bay instead of the Atlantic Ocean, thereby providing some shelter from storm waves. Since 1851 it has been the home of a coastal lifesaving station operated (at first) by the United States Life-Saving Service and later by the United States Coast Guard. The current facility, Station Short Beach, typically does around 500 search and rescue missions each year —one of the busiest units in the Coast Guard's 1st District. The Jones Beach State Park's West End Boat Basin is also on Short Beach. The Jones Beach West End barracks of the New York State Park Police is around 200 feet south of the Short Beach shoreline. An uninhabited islet, Short Beach Island, is usually just offshore, but occasionally connects to the beach when low tide exposes sandbars to the surface. External links United States Coast Guard Station Jones Beach Passage 6: Oak Beach Inn The Oak Beach Inn, commonly referred to by the abbreviation OBI, was a Long Island nightclub located in Oak Beach, on Jones Beach Island near Captree State Park in the Town of Babylon, Suffolk County, New York. History and controversy In 1969, Robert Matherson bought what was then a waterfront barrier island restaurant and converted it into an enormously popular (and controversial) nightclub. The Oak Beach Inn, located in Oak Beach on Jones Beach Island, was the original, and later just referred to as The OBI. He later opened three more OBI nightclubs and named them according to their geographic location. The OBI North was in Smithtown, New York, the OBI East near the Shinnecock Canal en route to The Hamptons, and the OBI West locations in Island Park, New York (which had two locations: first at 3999 Long Beach Road, and later, briefly, at 50 Broadway). 
All four clubs were located on Long Island and were wildly successful for many years, bringing people in from all over Long Island, New York City, Westchester, southern Connecticut and New Jersey and hosting acts such as Twisted Sister and The Good Rats.In 1979, Matherson sued the town to lease him more land for additional parking, which the town granted. However two years later, new officials disagreed, which caused Matherson to sue again, and when the court favored with Matherson, the town granted him $3 million and the nine acres. In 1993, an unhappy Matherson started a "Move Out of New York Before It's Too Late" campaign complete with a hearse, banners and TV ads. An article in 1993 in The New York Times provided details about his campaign, including information that the New York State Division of Alcoholic Beverage Control raided the club one year earlier in 1992. One of the OBI West locations burned down after only a couple of years of packing in thousands on the weekends. Arson involving organized crime figures referred to in the movie Goodfellas was alleged but never substantiated. Closure Over the years, the OBI was involved in many disputes with the local community over issues such as noise, parking and traffic. Finally, in 1999, Matherson sold the property to developer Ross Cassata, who planned to build condominiums. Matherson then moved to Key West, Florida to open a new club of the same name. When it closed, the inn's two-ton statues of whales and dolphins, which were commonly touched by clubgoers, were moved to Danfords on the Sound in Long Island.However, Cassata then sold the nine acres to Suffolk County for $7.95 million and the original property was torn down in 2003 and was replaced with a town-operated park, with later added plans of adding a bed and breakfast, upscale restaurant, boardwalk, water-sport area and boat ramp. The park now hosts activities, such as car racing, and it too has caused controversy. At the time of demolition, the Suffolk County Legislator commented that the park agreement avoided "an enormous tax increase in Babylon, which would have had to pay a court judgment of as much as $20 million or watch its coastline be forever scarred by high-rise development" and the money came from the county's greenway program. Locations The original OBI was at 1 Oak Beach Road. The building was sold in 1999, torn down in 2003, land turned into a park with a small beach simply called Oak Beach. The OBI East was at 239 E Montauk Highway, Hampton Bays. The property underwent extensive renovations between 2018 and 2022 and was reopened as the Canoe Place Inn and Cottages with 20 rooms, 5 cottages, restaurant, bar and a 350 banquet room.The OBI North was at State Road 25A, near Jericho Turnpike, Smithtown. The building burned down in 1980, and the land is now part of Willow Ridge at Smithtown HOA. The OBI West was at 3999 Long Beach Road, Island Park. The building was torn down between 1994 and 2004, the land is now a parking lot for school buses. It briefly was located at 50 Broadway (sometimes listed as 50 Austin Blvd.), Island Park as one of a series of famous nightclubs such as the Shell House, The Action House, The Rock Pile, and Speaks. Robert Matherson later opened Oak Beach Inn at 227 Duval Street, Key West, Florida. Robert Matherson died in 2007. In popular culture Robert "Rosebud" Butt is credited with inventing the Long Island Iced Tea, while working as a bartender at the original OBI in the 1970s. 
In 2010, the Babylon-based rock band Two Cent Sam released the "OBI Song" and a DIY video celebrating the Oak Beach Inn's history and impact on Long Islanders and the void in Long Island night life after the OBI's destruction. Passage 7: Great South Bay The Great South Bay is a lagoon situated between Long Island and Fire Island, in the State of New York. It is about 45 miles (72 km) long and has an average depth of 4 feet 3 inches (1.3 m) and is 20 feet (6.1 m) at its deepest. It is protected from the Atlantic Ocean by Fire Island, a barrier island, as well as the eastern end of Jones Beach Island and Captree Island. Robert Moses Causeway adjoins the Great South Bay Bridge, which leads to Robert Moses State Park. The bay is accessible from the ocean through Fire Island Inlet, which lies between the western tip of Fire Island and the eastern tip of Jones Beach Island. The bay adjoins South Oyster Bay on its western end, and Patchogue and Moriches bays at the east end. History In the early 17th century, European settlers first encountered the native Montaukett Indian Nation. Among the earliest British families were the Smith, Carman and Hewlett families. Long Island's South Shore, adjacent to the bay, now includes the communities of Lindenhurst, Babylon, Islip, Oakdale, Sayville, Bayport, Blue Point, Patchogue, Bellport, Shirley, and Mastic Beach. Environmental concerns In the late nineteenth century Great South Bay provided many of the clams consumed throughout the region and even the country. The first oysters to be exported from the US to Europe came from Great South Bay. By the latter 20th century, a significant percentage of the habitat was lost. Hurricane Sandy, the largest storm to affect the region since 1938, made landfall with devastating impact to Fire Island sea shores, including multiple breaches, the largest of which formed just south of Bellport. This was formerly known as Old Inlet. Residents were concerned it would have effects on tidal increases and potential flooding, when in actuality it has allowed the bay to relieve some of its captive water, which has changed the salinity and nitrogen levels in the bay. After roughly 75 years, the bay begun flushing itself out which may improve the water condition within the bay. Regulations set forth by the US Government National Wildlife Preserve, which has a seven-mile stretch of land (The Otis Pike Fire Island High Dune Wilderness) prohibit any unauthorized parties from performing any kind of man made changes, thus the inlet has remained open. There have been a number of ongoing public meeting discussing the future of the Inlet. All the other breaches were closed by the Army Corps of Engineers. In 2012, The Save the Great South Bay (STGSB) not-for-profit organization was formed in order to work towards better conservation of the water and its beachfronts. Save The Great South Bay has increased concerns about boat sewerage pumpouts in The Great South Bay as a serious ecological concern. See also South Shore Estuary Passage 8: Great South Bay Bridge The Great South Bay Bridge is a bridge on the southwest side of Suffolk County, New York, on Long Island. It connects the Robert Moses Causeway from Long Island's mainland over the Great South Bay, connecting to Captree and Jones Beach islands. 
It serves as access via the Robert Moses Causeway to both of the downstream crossings, the State Boat Channel Bridge and the Fire Island Inlet Bridge, also leading visitors and on-lookers to either the Fire Island Lighthouse or the Robert Moses State Park. History The bridge was originally a single span, that opened in 1954 and was called the Captree Bridge. Today it carries southbound traffic. In 1964, a second parallel span opened to traffic and carried northbound traffic. This brought much needed relief to traffic heading back from Jones Beach, Robert Moses, and Captree parks. The bridges are through trusses and are painted a traditional "bridge green". In 1997, a major rebuild of the deck of the older span began and was completed in 2000. Safety compliant railings were installed on the older span. In 2013–2014, the northbound span received upgraded railings. Major improvements NYSDOT is considering is a cycle/pedestrian path shared with the northbound lanes. NYSDOT has not released any official plans. See also State Boat Channel Bridge Passage 9: Oak Beach, New York Oak Beach is a small community and census-designated place located near the eastern end of Jones Beach Island, a barrier island between the Atlantic Ocean and the Great South Bay of Long Island. The community is part of the village of Babylon in Suffolk County, New York, United States. The eastern part, the Oak Island Beach Association, is gated, whereas the western part is not. The Oak Beach CDP was first listed prior to the 2020 census. Prior to that the community was part of the Oak Beach–Captree census-designated place. History and amenities Oak Beach has been inhabited since at least the first decade of the twentieth century, when a U.S. Coast Guard lifesaving station was located there, although it could not be reached overland at that time. Prior to that, marsh bird hunters had kept shacks in the area. Ferry access from Babylon enabled cottages to be built and made more accessible by car after construction of Ocean Parkway; it was largely a summer community until the completion of the Robert Moses Causeway in 1951, which allowed much faster travel from the main part of Long Island. It has gradually evolved since then to become a location where most residents live year-round.Although now entirely residential, Oak Beach was once the location of the popular but controversial Oak Beach Inn, which was closed in 1999 and torn down in 2003, along with a small general store ("The Store") and bait/tackle/surf shop that closed a decade earlier. There is now a public park at the site. The "park" is unusual, in that it lacks any amenities other than a fishing dock and a single portable toilet. While the park encompasses over nine acres of land, there are but two trees and nearly 300 parking spots. In the warmer months the park - or rather the parking lot - collects an informal early Sunday morning motor rally, attracting local motorcycle and car enthusiasts. Geography Oak Beach is in southwestern Suffolk County, in the southeast part of the village of Babylon. The census-designated place includes the community of Oak Beach on Jones Beach Island, as well as the community of Oak Island, directly to the north. The CDP is bordered to the east by Captree State Park and to the west by Gilgo State Park. To the south is the west end of Fire Island. Ownership The land is not owned by the residents but is on long-term lease from the Village of Babylon. In the early 1990s, New York State litigated against extension of the lease. 
After much negotiation, including detailed environmental impact statements, the lease was renewed (currently through 2050), although with a ramp up in costs. In 2012, the Village of Babylon agreed to extend the current leases through 2065. Shannan Gilbert On May 1, 2010, Shannan Gilbert disappeared after fleeing a client's house in the Oak Beach Association. In December 2010, while searching for Gilbert, the police found 10 dead bodies along the adjacent highway, some bodies in December 2010, others in April 2011. Consequently, the community attracted much public attention. In December 2011, the body of the original missing sex worker Shannan Gilbert was found in the marsh east of the community. On November 29, 2011, the police announced their belief that one person is responsible for all 10 deaths (whom the press refers to at various times as: "the Long Island serial killer", "LISK", "the Gilgo Beach Killer", or "the Craigslist Ripper"), and that they did not believe the case of Gilbert, who went missing before the first set of bodies was found, was related. "It is clear that the area in and around Gilgo Beach has been used to discard human remains for some period of time," said Suffolk County District Attorney Thomas Spota.On December 10, 2015, Suffolk County Police announced that the FBI had officially joined the investigation. A spokesperson for the FBI confirmed the announcement. The FBI had previously assisted in the search for victims, but was never officially part of the investigation until this announcement. The investigation is ongoing. Passage 10: Gilgo-Oak Beach-Captree, New York Gilgo-Oak Beach-Captree, frequently just called Oak Beach, was a census-designated place (CDP) in Suffolk County, New York within the Town of Babylon. The population was 333 at the 2000 census. Following the 2010 census, the area was delineated as two CDPs: Gilgo and Oak Beach–Captree. The original CDP contained several small beach communities on a barrier island along the southern edge of Long Island. In order from west to east, these included West Gilgo Beach (on the Nassau/Suffolk county border), Gilgo Beach, Cedar Beach (no residences), Oak Beach (including the Oak Island Beach Association), Oak Island and Captree Island. They are connected to the mainland by Ocean Parkway from the west and Robert Moses State Parkway from the north. Prior to the 2020 census, the Oak Beach-Captree CDP was further split into the Oak Beach and Captree CDPs. Geography Oak Beach is located at 40°38′28″N 73°16′42″W (40.641100, -73.278195).According to the United States Census Bureau, the CDP had a total area of 3.7 square miles (9.6 km2), of which 2.7 square miles (7.0 km2) was land and 0.9 square miles (2.3 km2), or 25.21%, is water. Demographics As of the census of 2000, there were 333 people, 161 households, and 94 families residing in the CDP. The population density was 122.1 inhabitants per square mile (47.1/km2). There were 305 housing units at an average density of 111.9 per square mile (43.2/km2). The racial makeup of the CDP was 97.00% White, 0.30% African American and 2.70% Asian. Hispanic or Latino of any race were 1.80% of the population. There were 161 households, out of which 14.9% had children under the age of 18 living with them, 50.9% were married couples living together, 6.2% had a female householder with no husband present, and 41.0% were non-families. 34.8% of all households were made up of individuals, and 11.8% had someone living alone who was 65 years of age or older. 
The average household size was 2.07 and the average family size was 2.64. In the CDP, the population was spread out, with 13.5% under the age of 18, 1.5% from 18 to 24, 28.8% from 25 to 44, 34.8% from 45 to 64, and 21.3% who were 65 years of age or older. The median age was 48 years. For every 100 females, there were 109.4 males. For every 100 females age 18 and over, there were 100.0 males. The median income for a household in the CDP was $66,250, and the median income for a family was $105,870. Males had a median income of $61,250 versus $37,083 for females. The per capita income for the CDP was $55,813. None of the families and 0.9% of the population were living below the poverty line, including none of those under eighteen and none of those over 64. The census numbers are presumably for full-time inhabitants; many of these houses are second homes and not primary residences, although the proportion of seasonal residents is decreasing. The land for these communities is not privately owned, but leased from the State of New York through the year 2050. However, the residences on the property are owned. If the leases are not renewed at some point in the future, the owners will have to move the houses elsewhere, similar to what happened at High Hill Beach when Jones Beach was created.
Question: Oak Beach, New York and Great South Bay are both situated between what same island?
Answer: Long Island
Length: 3,766
Dataset: hotpotqa
Context range: 4k
Passage 1: Alberic III of Dammartin Alberic III of Dammartin (Aubry de Dammartin) (c. 1138 – 19 September 1200) was a French count and son of Alberic II, Count of Dammartin, and Clémence de Bar, daughter of Reginald I, Count of Bar. He married Mathilde, heiress to the county of Clermont and daughter of Renaud II, Count of Clermont. They had: Renaud I, Count of Dammartin (c. 1165–1227), who married 1) Marie de Châtillon and 2) Ide de Lorraine, with whom he had Matilda II, Countess of Boulogne, Queen of Portugal; Alix de Dammartin (1170–1237), who married Jean, Châtelain de Trie; Simon of Dammartin (1180 – 21 September 1239), who married Marie, Countess of Ponthieu, and was father of Joan, Countess of Ponthieu, Queen of Castile and Leon; Julia of Dammartin, who married Hugh de Gournay; and Agnes of Dammartin, who married William de Fiennes. Notes Passage 2: Gregory I, Count of Tusculum Gregory I was the Count of Tusculum sometime between 954 and 1012. Consul et dux 961, vir illustrissimus 980, praefectus navalis 999. He was the son of Alberic II (son of Alberic I of Spoleto and Marozia), and Alda of Vienne (daughter of Hugh, King of Italy and his second wife, Alda (or Hilda)). His half-brother was Pope John XII. He held the cities of Galeria, Arce, and Preneste and the title count palatine, the palace referred to being that of the Lateran. He was the first to carry the title "Count of Tusculum" and he passed it to all his descendants. They also received the titles of excellentissimus vir (most excellent man) and apostolic rector of Sant'Andrea, which Gregory received in 980. In 981, Gregory bore the title Romanorum consul, dux et senator: "Consul, duke, and senator of the Romans." As well as being an intimate and ally of the popes, especially Sylvester II, Gregory also served as praefectus navalis of Holy Roman Emperors Otto I and Otto II. However, on 6 February 1001, he was named "Head of the Republic" by the Romans for leading the revolt against Otto III and expelling the Crescentii. In 1002, the latter returned to power and he had to renounce his title. His death is attested before 11 June 1012, when his successor, Theophylact, was elected Pope. Marriage and issue By his wife Maria (died 1013) he had three sons and a daughter: Theophylact, who became Pope Benedict VIII; Alberic III, who succeeded him in Tusculum and in his titles; Romanus, who became Pope John XIX; and Marozia III, who married Thrasimund III of Spoleto. Together, the houses of Tusculum and Spoleto were the dominant secular powers in the central Italian peninsula, the one representative of the imperial power and the other, Gregory's, of papal. Sources Passage 3: Alberic II of Spoleto Alberic II (912–954) was ruler of Rome from 932 to 954, after deposing his mother Marozia and his stepfather, King Hugh of Italy. He was of the house of the counts of Tusculum, the son of Marozia by her first husband, Duke Alberic I of Spoleto. His half-brother was Pope John XI. At the wedding of his mother to King Hugh of Italy, Alberic and his new stepfather quarreled violently after Hugh slapped Alberic for clumsiness. Infuriated by this and perhaps motivated by rumors that Hugh intended to have him blinded, Alberic left the festivities and incited a Roman mob to revolt against Hugh. In December 932 Hugh fled the city, Marozia was cast into prison, and Alberic took control of Rome. Marriage and issue In 936 Alberic married his stepsister Alda, the daughter of King Hugh of Italy, and had one son by her, Count Gregory I of Tusculum.
According to Benedict of Soracte, he also had one illegitimate son, Octavianus, by an unknown mistress. On his deathbed Alberic had Roman nobility and clergy swear they would elect Octavianus as pope. Sources Williams, George L. (1998). Papal Genealogy: The Families and Descendants of the Popes. McFarland & Company, Inc. Lexikon des Mittelalters. Passage 4: Marozia Marozia, born Maria and also known as Mariuccia or Mariozza (c. 890 – 937), was a Roman noblewoman who was the alleged mistress of Pope Sergius III and was given the unprecedented titles senatrix ("senatoress") and patricia of Rome by Pope John X. Edward Gibbon wrote of her that the "influence of two sister prostitutes, Marozia and Theodora was founded on their wealth and beauty, their political and amorous intrigues: the most strenuous of their lovers were rewarded with the Roman tiara, and their reign may have suggested to darker ages the fable of a female pope. The bastard son, two grandsons, two great grandsons, and one great great grandson of Marozia—a rare genealogy—were seated in the Chair of St. Peter." Pope John XIII was her nephew, the offspring of her younger sister Theodora. From this description, the term "pornocracy" has become associated with the effective rule in Rome of Theodora and her daughter Marozia through male surrogates. Early life Marozia was born about 890. She was the daughter of the Roman consul Theophylact, Count of Tusculum, and of Theodora, the real power in Rome, whom bishop Liutprand of Cremona characterized as a "shameless whore... [who] exercised power on the Roman citizenry like a man." At the age of fifteen, Marozia became the mistress of Theophylact's cousin Pope Sergius III, whom she knew when he was bishop of Portus. The two had a son, John (the later Pope John XI). That, at least, is the story found in two contemporary sources, the Liber Pontificalis and the Antapodosis sive Res per Europam gestae (958–62), by Liutprand of Cremona (c. 920–72). But a third contemporary source, the annalist Flodoard (c. 894–966), says John XI was brother of Alberic II, the latter being the offspring of Marozia and her husband Alberic I. Hence John too may have been the son of Marozia and Alberic I. Marozia married Alberic I, duke of Spoleto, in 909, and their son Alberic II was born in 911 or 912. By the time Alberic I was killed at Orte in 924, the Roman landowners had won complete victory over the traditional bureaucracy represented by the papal curia. Rome was virtually under secular control, the historic nadir of the papacy. Guy of Tuscany In order to counter the influence of Pope John X (whom the hostile chronicler Liutprand of Cremona alleges was another of her lovers), Marozia subsequently married his opponent Guy of Tuscany. Together they attacked Rome, arrested Pope John X in the Lateran, and jailed him in the Castel Sant'Angelo. Either Guy had him smothered with a pillow in 928 or he simply died, perhaps from neglect or ill treatment. Marozia seized power in Rome in a coup d'état. The following popes, Leo VI and Stephen VII, were both her puppets. In 931 she managed to impose her twenty-one years old son as pontiff, under the name of John XI. Hugh of Arles, and death Guy died in 929, and Marozia negotiated a marriage with his half-brother Hugh of Arles, the King of Italy. While in Rome Hugh quarreled with Marozia's son Alberic II, who organized an uprising during the wedding ceremonies in 932. Hugh escaped, but Marozia was captured. Marozia died after spending some 5 years in prison. 
Her descendants remained active in papal politics, starting with Alberic II's son Octavian, who became Pope John XII in 955. Popes Benedict VIII, John XIX, and Benedict IX, and antipope Benedict X of the House of Tusculani, were also descended from Marozia. By Guy of Tuscany she had a daughter named Berta Theodora, who never married. Family tree Sources Chamberlin, E. R. (1969). The Bad Popes. New York: Dial Press. ISBN 9789030041801. OCLC 647415773. Williams, George (1998). Papal Genealogy: The Families and Descendants of the Popes. di Carpegna Falconieri, Tommaso (2008), Marozia, in Dizionario biografico degli italiani, 70, pp. 681–685 Footnotes Passage 5: Renaud I, Count of Dammartin Renaud de Dammartin (Reginald of Boulogne) (c. 1165 – 1227) was Count of Boulogne from 1190, Count of Dammartin from 1200 to 1214 and Count of Aumale from 1204 to 1214. He was the son of Alberic III of Dammartin and Mathilde of Clermont. Brought up at the French court, he was a childhood friend of Philip Augustus. At his father's insistence he fought for the Plantagenets. Received back into Philip's favour, he married Marie de Châtillon, daughter of Guy II de Châtillon and Adèle of Dreux, a royal cousin. In 1191, Renaud's father, Alberic, had Ida, Countess of Boulogne, kidnapped and married to Renaud. The County of Boulogne thereby became vassal to the French king, rather than the count of Flanders. While this marriage made Renaud a power, it also made him enemies in the Dreux family and that of the count of Guînes, who had been betrothed to Ida. In 1203, Renaud and his wife gave a merchant's charter to Boulogne. This was probably done for financial consideration. Philip made Renaud Count of Aumale the following year, but Renaud began to detach himself. Following the acquisition of Normandy in April 1204, King Philip granted Renaud the county of Mortain and the honor of Warenne, which was centered on the fortresses of Mortemer and Bellencombre. Both Mortain and Warenne had been held by William I of Boulogne and it would appear that King Philip recognized the Boulogne claim to them. In 1211, he refused to appear before Philip in a legal matter, a suit with Philippe de Dreux, bishop of Beauvais. Philip II seized his lands and on 4 May 1212 at Lambeth, Dammartin made an agreement with King John who had also lost possessions to Philip. Renaud brought other continental nobles, including the Count of Flanders, into a coalition with John against Philip. In return he was given several fiefs in England and an annuity. Each promised not to make a separate peace with France. With the Emperor Otto IV and Ferdinand of Flanders, he took part in the attack on France in 1214 culminating in the Battle of Bouvines. Commanding the Brabançons, he was on the losing side, but was one of the last to surrender, and refused submission to Philip Augustus. His lands were taken away, and given to Philip Hurepel. Renaud was kept imprisoned at Péronne for the rest of his life, which ended in suicide. His daughter Matilda II was married to Philip Hurepel.
What is known is that he married Clémence of Bar, daughter of Reginald I "One-Eyed", Count of Bar, one of the leaders of the Second Crusade, and Gisèle de Vaudémont, daughter of Gerard I, Count of Vaudémont. Alberic and Clémence had one son: Alberic III, Count of Dammartin. Alberic II was succeeded by his son Alberic III as Count of Dammartin upon his death. The discussion in Aubry, Count of Dammartin, provides some insight into how Alberic III came to claim the countship. Further complicating the genealogy, Clémence, widowed, married Renaud II, Count of Clermont-en-Beauvaisis, her second husband and his second wife. Renaud and his first wife, Adelaide, Countess of Vermandois, were the parents of Mathilde, wife of Alberic III. Sources Mathieu, J. N., Recherches sur les premiers Comtes de Dammartin, Mémoires publiés par la Fédération des sociétés historiques et archéologiques de Paris et de l'Ile-de-France, 1996 Passage 7: Alberik II Alberic II was a bishop of Utrecht from 838 to 844. Alberic was the brother of his predecessor Frederick of Utrecht. Nothing is known about his administration. He was buried in the Saint Salvator Church in Utrecht. Passage 8: Pope Agapetus II Pope Agapetus II (died 8 November 955) was the bishop of Rome and ruler of the Papal States from 10 May 946 to his death. A nominee of the princeps of Rome, Alberic II of Spoleto, his pontificate occurred during the period known as the Saeculum obscurum. Pontificate Agapetus was born to a Roman father (a descendant of Consul Anicius Faustus Albinus Basilius) and a Greek mother. He was elected pope on 10 May 946 after the death of Marinus II. The existence of an independent republic of Rome, ruled by Alberic II of Spoleto, meant that Agapetus was prevented from exercising any temporal or secular power in Rome and the Papal States. The struggle between Berengar II and Otto I for the Kingdom of Italy allowed Alberic to exercise complete control over Rome and Agapetus, meaning the pope was largely limited to managing internal church affairs. Even Agapetus’ invitation to Otto to intervene in Italian affairs in 951 was done at the instigation of Alberic, who was concerned at Berengar's growing power. However, when Otto's envoys, the bishops of Mainz and Chur, were sent to the pope to discuss Otto's reception in Rome and other more important questions, they were turned away by Alberic. Agapetus was forced to intervene in the dispute over the occupancy of the See of Reims. He ordered a synod to be held at Ingelheim in June 948 to resolve the rights of the rival claimants, Hugh of Vermandois and Artald of Reims. He sent his legate Marinus of Bomarzo to act on his behalf, while Agapetus wrote to a number of bishops, asking them to be present at the council. Through his legate the pope indicated his support for King Louis IV of France, and gave his support for reinstalling Artald as bishop of Reims. This council was followed up by another one at Trier, where Agapetus was again represented by Marinus of Bomarzo. In 949, Agapetus held a synod in Rome, which confirmed the rulings of the two councils. It condemned the former bishop Hugh and it excommunicated his father, Count Herbert II of Vermandois, for his opposition to King Louis IV. After receiving requests from both Louis IV of France and Otto I of Germany, Agapetus granted privileges to monasteries and nunneries within their respective kingdoms.
He also was sympathetic towards Otto's plans to restructure the bishoprics within Germany, which were eventually aborted due to pressure exerted by William of Mainz. Around 948, Agapetus granted the Archbishop of Hamburg the right of consecrating bishops in Denmark and other northern European countries instead of the pope. The pope was also allegedly asked by a Danish king named Frode, now considered legendary, to send missionaries to his kingdom. Agapetus was also asked to intervene in a dispute between Herhold, archbishop of Salzburg, and Gerard, bishop of Lauriacum, who both claimed the title of metropolitan of all Pannonia. Agapetus dispatched a letter to the two claimants, in which he stated that the diocese of Lauriacum had been the metropolitan church of all Pannonia before the invasion of the Huns. However, following the ravages inflicted by them, the metropolitan had transferred his see to another city, and since that time Salzburg had been raised to an archbishopric. Consequently, both lawfully occupied their respective sees, and both were to retain their rank and diocese. Agapetus ruled that jurisdiction over western Pannonia would rest with Herhold, while the eastern part, along with the regions occupied by the Avars and the Moravians, would fall under Gerard. In Italy, Agapetus wrote to the dukes of Beneventum and Capua, demanding that monasteries be returned to the monks whom they had displaced. He also deposed the bishops of Termoli and Trivento who were accused of simony. Hoping to rejuvenate the religious life of the clerics in Italy, Agapetus, with the blessing of Alberic, asked for the abbot of Gorze Abbey to send some of his monks down and join the monastic community attached to the church of Saint Paul Outside the Walls. Agapetus died on 8 November 955, and was succeeded by Alberic's son, Octavian, who took the papal name of John XII. He was buried in the Lateran basilica, behind the apse, and close to the tombs of Leo V and Paschal II. Agapetus was noted for his caution and for the sanctity with which he led his life. Passage 9: Pope Leo VII Pope Leo VII (Latin: Leo VII; died 13 July 939) was the bishop of Rome and nominal ruler of the Papal States from 3 January 936 to his death. Election Leo VII's election to the papacy in 936, after the death of Pope John XI, was secured by Alberic II of Spoleto, the ruler of Rome at the time. Alberic wanted to choose the pope so that the papacy would continue to yield to his authority. Leo, thought to be a Benedictine monk, was the priest of the church of San Sisto Vecchio in Rome. He had little ambition towards the papacy, but consented under pressure. Pontificate As pope, Leo VII reigned for only three years. Most of his bulls were grants of privilege to monasteries, especially including the Abbey of Cluny. Leo called for Odo of Cluny to mediate between Alberic and King Hugh of Italy. Odo was successful in negotiating a truce after arranging a marriage between Hugh's daughter Alda and Alberic. Leo VII also appointed Archbishop Frederick of Mainz as a reformer in Germany. Leo allowed Frederick to drive out Jews that refused to be baptized, but he did not endorse the forced baptism of Jews. Leo VII died on 13 July 939, and was interred at St. Peter's Basilica. He was succeeded by Stephen VIII.
Passage 10: Simon, Count of Ponthieu Simon of Dammartin (1180 – 21 September 1239) was a son of Alberic III of Dammartin (Aubry de Dammartin) and his wife Mathildis of Clermont, heiress to the county of Clermont and daughter of Renaud II, Count of Clermont. Biography Simon was the brother of Renaud I, Count of Dammartin, who had abducted the heiress of Boulogne and forced her to marry him. It is thought that in order to strengthen the alliance with the Dammartins, King Philip Augustus of France allowed Simon to marry Marie, Countess of Ponthieu, who was a niece of the king, in 1208. Renaud and Simon of Dammartin would eventually ally themselves with John, King of England. In 1214 the brothers stood against Philip Augustus in the Battle of Bouvines. The French won the battle, and Renaud was imprisoned, while Simon was exiled. Marie's father, William IV, Count of Ponthieu, had remained loyal to Philip Augustus. When William died in 1221, Philip Augustus denied Marie her inheritance and gave Ponthieu in custody to his cousin Robert III, Count of Dreux. After the death of Philip Augustus, Marie was able to negotiate an agreement with his successor Louis VIII in 1225. Ponthieu was held by the king, and Simon would only be allowed to enter this or any other fief if he obtained royal permission. In 1231 Simon agreed to the terms and added that he would not enter into marriage negotiations for his daughters without consent of the king. Family Simon married Marie, Countess of Ponthieu, the daughter of William IV, Count of Ponthieu and Alys, Countess of the Vexin. Marie became Countess of Ponthieu in 1225. Simon and his wife Marie had four daughters: Joan, Countess of Ponthieu (1220–1278), who married 1) Ferdinand III of Castile (she was the mother of Eleanor of Castile, the wife of Edward I of England) and 2) Jean de Nesle, Seigneur de Falvy et de La Hérelle; Mathilda of Dammartin (-1279), who married John of Châtellerault; Philippe of Dammartin (-1280), who married 1) Raoul II of Lusignan, 2) Raoul II, Lord of Coucy, and 3) Otto II, Count of Guelders; and Maria of Dammartin, who married John II, Count of Roucy.
Question: Who gave the mother of Alberic II of Spoleto the title "patricia" of Rome?
Answer: Pope John X
Length: 3,291
Dataset: hotpotqa
Context range: 4k
Passage 1: Victor Diamond Mine The Victor Mine was the first diamond mine located in Ontario and De Beers' second diamond mine in Canada (after the Snap Lake Diamond Mine). It is located in the Northern Ontario Ring of Fire, in the James Bay Lowlands 90 kilometres (56 mi) west of Attawapiskat in the remote northern part of the province. In June 2005, the Attawapiskat First Nation voted in favour (85.5%) of ratifying the Impact Benefit Agreement (IBA). Construction of the mine began in February 2006, creating 3,200 positions; mining and operations were expected to create around 400 permanent positions. The Victor Mine is an open-pit mine, with a processing plant, workshops, and an airstrip located on site. By 2013–2014 royalties collected from De Beers Victor Mine amounted to $226. At that time De Beers was continuing to pay off its "$1 billion investment to build the mine and from now until it closes, the company expects to pay tens of millions of dollars in royalties." The mine completed mining and processing in 2019 and has moved to a shut-down phase including demolition of infrastructure and rehabilitation of the site. History De Beers started looking for kimberlite pipes within Canada in the 1960s. "The Victor Mine was developed within a cluster of 16 kimberlite pipes that were discovered in the James Bay Lowlands near Attawapiskat in 1987." In 1995 the pipes of the James Bay Lowland area were re-examined and interest was renewed in the Victor Mine Project. A cost-feasibility study of mining Victor diamonds was completed in 2002. In 2005, the Project gained approval after an environmental assessment by the Federal and Provincial governments, and construction began soon after. In 2007, the Moose Cree First Nation signed in favour of the Victor mine and the first successful production of diamonds began. On June 20, 2008, Victor Mine entered the production phase. De Beers celebrated its opening on July 26 and reached an agreement with the Government of Ontario to allow up to 10% of the mine's production to be available to the cutting and polishing industry in Ontario. In October 2009, the Victor Mine was voted “Mine of the year” by readers of the international trade publication called Mining Magazine. Geology The area is composed of 18 kimberlite pipes of the Attawapiskat kimberlite field, 16 of which are diamondiferous. The Victor Mine sits on top of the Victor pipe and mines from Victor Main and Victor Southwest, which lie close enough to the surface to be worked as an open pit. The Victor Kimberlite is a composition of pyroclastic crater facies and hypabyssal facies, and is considered to have a highly variable diamond grade. Mining It is an open-pit mine; the equipment used there includes 100-tonne trucks, large front-end loaders, bulldozers and other necessary support equipment used in the mining operation. The production rate is 2.7 million tonnes of ore a year, which at the mine's diamond grade comes to about 600,000 carats a year. Operations There is year-round access via air travel and only seasonal access over land depending on whether the weather permits travel. On the property are warehouses for storage, a processing plant, workshops, offices, fuel storage facilities, pit-dewatering machinery and an airstrip (the Victor Mine Aerodrome) for travel needs. The site also has recreational and dorm buildings for the permanent staff.
The life of the mine is expected to be twelve years and the total project life seventeen years; the processing plant is designed to treat 2.5 million tonnes of kimberlite per year (roughly 7,100 tonnes a day). Tom Ormsby, director of external and corporate affairs for De Beers Canada, claims that the great colour (whiteness), natural shapes, clearness and quality of the Victor diamonds rank them with the highest stones in the world. "Victor is forecasted to have a 17-year cradle-to-grave life. That includes construction, an estimated 12 years of operation and then winding down to closure and rehabilitation of the site." Performance The mine had produced at a high level of performance, leading to "[f]urther exploration of the site" with the "hope that De Beers will uncover another source of diamonds within close proximity of the existing operation." 826,000 carats were mined at Victor Mine in 2010 and "$93 million was spent on goods and services and $49 million (53 per cent) was supplied by Aboriginal businesses." "No corporate, federal, provincial taxes or government royalties other than personal income taxes were paid in 2010 as the company was in a loss position for tax purposes." However, De Beers Canada "injected approximately $474,900,000 into the Canadian economy in 2010" through both mines, Snap Lake and Victor Mines. In 2011, De Beers paid total wages and benefits of about $55.5 million to Victor employees. According to the Ontario Mining Association, in 2011 "$101 million was spent on goods and services" by De Beers "with about $57 million, or 57%, being provided by Aboriginal businesses." Exploration Tom Ormsby claimed that "The high quality of the Victor diamonds and the vastness of the Canadian shield points to great potential for another diamond mine being developed in northeastern Ontario." The "Canadian Shield has great potential to host diamonds" and potential in Canada "appears to be at least twice as good as what southern Africa has held for potential for diamonds." "There are approximately eight years remaining on the forecast life of mine for Victor. In efforts to keep things going and extend this time frame, advanced exploration is currently underway at Victor on 15 previously identified diamond-bearing kimberlite pipes." Environmental and human concerns So far, De Beers Canada employees and its contract partners have safely worked more than four million hours without a Lost Time Injury. Other human concerns are the mine's impact on the First Nations people and the fact that the company's pledge to help promote community growth will affect certain communities more than others. Concerns were brought up regarding the mine's impact on the surrounding area. Since it is an open pit mine it would disturb the natural environment. The impact area is 5,000 hectares of land. The first concern was raised in 2005, when environmental groups called on the Ontario government to perform its own environmental impact assessment aside from the Federal one, as it was believed the Federal assessment did not fully assess the situation, including long-term harm to the wildlife, wilderness and the water systems found there. However, the Project did receive an ISO 14001 certification. The environmental assessment process was later criticized for its restricted scope, namely for focusing primarily on the Attawapiskat First Nation, and largely excluding other potentially affected First Nations. Attawapiskat First Nation De Beers Victor Diamond Mine is on Attawapiskat First Nation traditional land.
An Impact-Benefit Agreement (IBA) was signed with community leaders in 2005, with Danny Metatawabin acting as coordinator for the agreement between De Beers and Attawapiskat. Community members later protested the agreement through demonstrations and roadblocks, claiming that the community's share of the "bounty from the mine isn't getting back to the community." De Beers has negotiated a lease area. Although it is acknowledged that the mine is on Attawapiskat traditional land, the royalties from Victor Mine flow to the Government of Ontario, not Attawapiskat First Nation. As of 2015, De Beers was paying up to $2 million per year to Attawapiskat. Part of that payment goes to a trust fund controlled by the chief and council, and the rest is used for community development and to pay Attawapiskat members who manage the band’s impact benefit agreement with De Beers, says Attawapiskat member Charlie Hookimaw. The trust fund now totals $13 million. In 2014, the community received about $1 million; $480,000 went to business relations and $545,868 was spent on community development, Hookimaw says. The mine has 500 full-time employees, with 100 from Attawapiskat First Nation. De Beers also employs Attawapiskat First Nation members in winter road construction. The "mine employs 100 people from Attawapiskat at any one time. It generates about $400 million in annual revenue for the company." Sub-contractors from Attawapiskat First Nation also work for the mine. "A federal review of the relationship between De Beers' Victor mine and Attawapiskat showed that government support for training and capacity did not start soon enough to deal with the huge lack of skills in the First Nation." "Training is carried out on a year-round basis at the Victor Mine site as well as at the De Beers Canada Training Facility in Attawapiskat." See also De Beers List of diamond mines Lake Timiskaming kimberlite field Northern Ontario Passage 2: Dry Fork Mine The Dry Fork mine is a coal mine located 8 miles north of Gillette, Wyoming, in the United States, in the coal-rich Powder River Basin. The mine is an open pit mine that utilizes the truck-and-shovel mining method to mine a low-sulfur, sub-bituminous coal that is used for domestic energy generation and shipped to customers via railroad. In 2011, the mine began supplying coal to the newly constructed Dry Fork power station adjacent to the mine. The mine is currently owned and operated by Western Fuels Association. As of 2009, Dry Fork had reserves of 330 mm tons of sub-bituminous coal and a maximum permitted production capacity of 15mm tons per year. Typical annual production has been in the 5.2mm ton range for the last several years, though. In 2008, the mine produced just over 5.2 million short tons of coal, making it the 37th-largest producer of coal in the United States. The average quality of the coal shipped from Dry Fork is 8,050-8,200 BTU/lb, 0.20-0.42% Sulfur, 3.8-5.1% Ash, and 1.50% Sodium (of the ash). Train loading operations at the mine are done with a batch weigh bin system that is coupled to a "weigh-in-motion" track scale system. Silo capacity at the mine's rail loop, which can accommodate a single unit train, is 10,800 tons. History The Dry Fork mine shipped its first coal to members of the Western Fuels Association in 1990 and is run by Western Fuels-Wyoming, an associate of Western Fuels. Since opening, Dry Fork has shipped 69.5mm tons of coal.
Production Passage 3: Tundra Mine The Tundra Mine is a gold mine that operated in the Northwest Territories of Canada between 1962 and 1968, producing 104,476 troy ounces (3,249.6 kg) of gold from 187,714 tons of ore. Indian and Northern Affairs Canada has a project to remediate the Tundra Mine site under their Northern Contaminants Program, funded by the Canadian Federal Contaminated Sites Action Plan. External links Tundra Mine: Indian and Northern Affairs Canada Passage 4: Franklin-Creighton Mine The Franklin-Creighton Mine was a Georgia Gold Rush gold mine located off what is now Yellow Creek Road in the town of Ball Ground in Cherokee County, Georgia. The mine, located along the Etowah River, was initially known as the Franklin Mine because it was started by a widow, Mrs. Mary G. Franklin, who obtained a 40-acre (160,000 m2) lot in the Gold Lottery of 1832. Around 1883, the mine became known as the Creighton Mine or the Franklin-Creighton Mine. This mine was one of the most productive and continued to operate many years after other area mines had ceased operations. Some estimate that it was yielding $1000 per day in 1893 and others place its total production after 1880 at as much as $1,000,000. The mine was shut down in 1913 as a result of a collapsed shaft, which caused the mine to flood. As of 2022, only three major structures exist: the stamping mill's concrete foundation (which has been rebuilt into a pavilion for the nearby housing development site), the Franklin residence and doctor's office, and the "Shingle House," the mine's former post office and general store. Sources A Brief History of Cherokee County (accessed December 4, 2006) Georgia Historical Marker – Cherokee County Gold (accessed December 4, 2006) Passage 5: Tundra Mine/Salamita Mine Aerodrome Tundra Mine/Salamita Mine Aerodrome (TC LID: CTM7) is a registered aerodrome that served the Tundra and Salmita Mines in the Northwest Territories, Canada. Passage 6: Raspadskaya coal mine The Raspadskaya Coal Mine is a coal mine located in Mezhdurechensk, Kemerovo Oblast, Russia. It is the largest coal mine and the largest underground mine in Russia. The mine was opened in 1973 and its construction was completed in 1977. In addition to the main underground mine, the mining complex also includes the MUK-96 underground mine, the Raspadskaya Koksovaya underground mine, and the Razrez Raspadsky open-pit mine, as well as the Raspadskaya preparation plant. The mine is the largest coal mine in Russia. Raspadskaya's total resources were estimated at 1,461 million tons and total coal reserves at 782 million tons (JORC standards, according to an IMC Consulting report as of June 2006, of which 22 million tons had been produced by 31 March 2008). Based on the volume produced in 2007, the reserves-to-production ratio amounts to about 55 years of production. The complex produces 10% of Russia's coking coal. The mine is owned and operated by Raspadskaya OAO, a Russian publicly listed coal company. In March 2001, a methane explosion killed four miners and injured six. The mine was shut down for two weeks in 2008 due to safety violations, and a worker was killed after part of the mine collapsed in January 2010. On 8 May 2010, an explosion occurred, killing 66 workers. In 2022, a remote sensing satellite found that the mine was releasing 87 metric tons, or 95 short tons, of methane each hour. Scientists who study methane leakage said the size was unprecedented. By contrast, the worst rate that occurred at the Aliso Canyon gas leak in California was 60 metric tons an hour.
Passage 7: Salmita Mine The Salmita Mine was a gold producer in the Northwest Territories, Canada, from 1983 to 1987. The deposit was first discovered in 1945 and underground exploration was carried out in 1951–1952. It was reactivated for exploration by Giant Yellowknife Mines Limited in 1975 and entered production in 1983. They used the old camp and milling plant of the abandoned Tundra Mine, located a few kilometres to the south. The mine produced 179,906 troy ounces (5,595.7 kg) of gold from the milling of 238,177 tons of ore. The area is now owned by Seabridge Gold. Passage 8: Colomac Mine The Colomac Mine was a privately owned and operated open pit gold mine located 200 km northwest of Yellowknife in the Northwest Territories in Canada. The Colomac mine operated from 1990 to 1992 and from 1994 to 1997. It was operated by Neptune Resources Limited, which had little success in making a profit during its operation. In 1994, the mine reopened under Royal Oak Mines Inc. Both Neptune Resources and Royal Oak Mines were owned and operated by Peggy Witte. Due to low gold prices and the high cost of mining, Royal Oak Mines was forced into bankruptcy. The Federal Government of Canada became the owner of the mine, along with the related environmental issues. A major cleanup effort was completed to prevent the mine from polluting the environment. On January 26, 2012, Nighthawk Gold Corporation completed an agreement to acquire 100% of the mineral claims and leases of the former producing Colomac Gold Mine and surrounding mineral leases (Colomac Property), from the then Aboriginal Affairs and Northern Development Canada (AANDC), now Crown-Indigenous Relations and Northern Affairs Canada (CIRNAC). The Colomac Property lies within the central portion of Nighthawk Gold’s 930 square kilometre property. Nighthawk Gold has since been responsibly exploring and advancing the Colomac Gold Project with the goal of restarting gold mining operations in the future, assuming positive economics and the receipt of operating permits. Production The Colomac Mine processed a total of almost 12,300 megagrams of ore, and produced 16.7 megagrams (535,708 troy ounces) of gold, with an approximate value of $916 million. This figure is based on 2012 gold prices, averaging close to USD $55,000 per kg. In April 2007, Indian and Northern Affairs Canada engaged the Professional Services company Deloitte & Touche LLP to become their solicitor, in order to find another independently-owned and -operated company to acquire the idle mine as well as the resources on the land it occupied. To generate interest, they featured the mine as holding 6.6 teragrams of untapped resources, a gold processing mill and related equipment, a maintenance building, a dorm-room-style housing complex, power and fuel storage facilities, and mobile equipment (rock trucks, excavators and loaders). It was also featured on the popular reality television show Ice Road Truckers. Cleanup After being shut down in 1997 and abandoned shortly afterwards by Royal Oak Mines, the mine was transferred into the Canadian Government's hands, along with responsibility for it, in mid-December 1999. In accordance with water licensing laws and regulations in Canada, Royal Oak Mines had posted a $1.5 million security deposit, and in 1999 they were charged with cyanide dumping by the Federal Government. The government of Canada had estimated the cost of the cleanup at $70 million due to high levels of cyanide and ammonia content, as well as acid mine drainage.
For the people of Indian Lake, the tailings pond owned by the mine was at one stage threatening to overflow unless immediate action was taken to prevent a disastrous environmental impact. A public hearing was called to cancel the mine's license and to begin a cleanup. In 1999, the Department of Indian Affairs and Northern Development (DIAND) awarded a one-year, $2 million contract to a consortium of aboriginal businesses from Det'on Cho Corporation, the Dogrib Rae Band and the North Slave Métis Alliance to undertake final reclamation activities at the Colomac Mine. The consortium conducted studies into contamination and took responsibility for on-going environmental monitoring and maintenance of the site. After the contract was awarded, Royal Oak Mine was finally charged under the Water Act and the Fisheries Act for the pollution it had caused; this was much too late, since the company was already in receivership. According to MineWatch Canada in a 2001 publication: "Now, the water license has not been changed, the money needed to clean-up the site is not forthcoming, and the Dogribs are faced with a potential catastrophe if the tailings pond overflows. Says Dogrib leader Ted Blondin: "I think there is a fiduciary responsibility that the federal government has to looking after the Dogrib interests, and these are the arguments that we will use towards ensuring that the quality of water and the work that has to be done for the cleanup is done." During the initial cleanup phase, many new and effective remediation procedures were developed and put into place, including the use of farmed micro-organisms to remove hydrocarbons from soil contaminated by poor management of the fuel tank farm located on site. On 25 February 2010, a $19 million, final two-year remediation contract was awarded to two aboriginal firms, Tlicho Engineering/Environmental Services Ltd and Aboriginal Engineering Ltd, which will also create local jobs in the area. According to Aboriginal Affairs and Northern Development Canada, this two-year remediation will cover: "final remediation of the site, including: major demolition activities (primary and secondary crushing facilities, mill complex, maintenance shop and camp); hydrocarbon remediation (restoration of Steeve's Lake shoreline, free product recovery and soil treatment); site restoration (Truck Lake channel construction, stream crossing restoration) and capping of the non-hazardous landfill sites as well as continued provision of site services and maintenance. The contract, which follows a competitive process, will last until April of 2012 when the companies will conduct a full and final demobilization of the site." In May 2010, officials suspended the remediation project due to an accident which occurred at the mine in April 2010, when a foreman working for Aboriginal Engineering Ltd suffered leg injuries after a 2.5 cm cable snapped. Human Resources and Skills Canada announced in May 2010 that they would not allow this remediation to continue until Aboriginal Engineering Ltd implemented the standards for health and safety set out by the Federal Government in relation to this type of task. Passage 9: Murowa diamond mine The Murowa diamond mine is a diamond mine located in Mazvihwa, south-central Zimbabwe, about 40 kilometres from the asbestos mining town of Zvishavane in the Midlands province.
The mine is majority owned and operated by the Rio Tinto Group, which also owns the Argyle diamond mine in Australia and part of the Diavik Diamond Mine in Canada. The mine is a combination of open pit and underground construction; current estimates put construction costs at $61 million USD and mine reserves are 19 million tonnes of ore, with an ore grade of 0.9 carats (180 mg) per tonne. Geology of the Deposit Murowa consists of three north-trending kimberlite pipes, intrusive into the Chivi suite granites of the Zimbabwe Craton. The kimberlites have been dated at 500 Ma. History The Murowa site's possibilities were first realized in 1997 when three diamond-bearing kimberlite pipes were discovered; over a period of three years of study, the two larger pipes have been determined to be economically feasible as mines. Construction of mine facilities was completed in late 2004. Preparation for mining included the forced relocation of 926 people living on the mine site to six farms purchased by a government relocation program. Limited mining operations began in Murowa in 2004, with full capacity expected to be reached sometime in 2005, although permitting problems have slowed progress toward this milestone. Full-scale production is expected to process 200,000 tonnes of ore annually, although it is possible to push production to as much as one million tonnes annually through further capital investment. The mine is a combination of open pit and underground construction; current estimates put construction costs at $61 million USD. Current estimates of mine reserves are 19 million tonnes of ore, with an ore grade of 0.9 carats (180 mg) per tonne. Rio Tinto estimates that over the life of the mine, prices for the Murowa's production will fetch an average price of $65 USD per carat (325 $/g). Passage 10: Negus Mine Negus Mine was a gold producer at Yellowknife, Northwest Territories, Canada, from 1939 to 1952. It produced 255,807 troy ounces (7,956.5 kg) of gold from 490,808 tons of ore milled. The underground workings were acquired by adjacent Con Mine in 1953 and were used for ventilation purposes until Con Mine closed in 2003.
Question: Were the Tundra Mine and Negus Mine located in the same country?
Answer: yes
Length: 3,708
Dataset: hotpotqa
Context range: 4k
Passage 1: Jack Young (speedway rider) Jack Ellis Young (31 January 1925 in Adelaide, South Australia – 28 August 1987 in Adelaide) was a motorcycle speedway rider who won the Speedway World Championship in 1951 and 1952. He also won the London Riders' Championship in 1953 and 1954 and was a nine-time South Australian Champion between 1948 and 1964. By winning the 1951 and 1952 World Championships, Young became the first Australian to win two World Championships in any form of motorsport. Career Australia Jack Young started racing bikes with younger brother Frank on the Sand Pits at Findon in Adelaide, before starting his speedway career at the Kilburn Speedway on 9 May 1947 riding a 1926 Harley-Davidson Peashooter borrowed from his brother. There he rode alongside older brother Wally "Joey" Young (b. 1916 – d. 1990), and younger brother Frank. Jack and Frank both represented Australia in test matches against England. Quickly proving himself to be one of the best riders in Adelaide, Jack placed an impressive second in the SA title in 1947 (after only having raced at a couple of meetings), and would win his first South Australian Championship in 1948. He would go on to win the SA Championship again in 1954, 1955, 1956, 1958, 1959, 1960, 1963 and 1964, all at Rowley Park Speedway. Young would win the Queensland State Championship in 1953 at the Brisbane Exhibition Ground, and the Victorian State Championship in 1957 at Tracey's Speedway in Melbourne. Despite his two World Championships, nine South Australian Championships and the Queensland and Victorian titles, Jack Young would never win or even place in the Australian Individual Speedway Championship, which during his time was held almost exclusively in New South Wales (at the Sydney Showground or Sydney Sports Ground), or in Queensland at the Exhibition Ground. Young declined several invitations to ride in the Australian championship, often preferring to take a break from speedway to enjoy the Australian summer and go fishing. He did finish third in an unofficial "Australian Championship" staged at the Harringay Stadium in London, England in 1950. The promoters of the speedway had a clearing in their schedule and decided to fill the space by inviting the top Australian riders in the British Leagues at the time to ride in an Australian Championship (the field included Aussie-born New Zealander Ronnie Moore). Brisbane rider Graham Warren won the meeting from NSW rider Aub Lawson and Young. Jack Young announced his retirement from speedway in December 1963 on the night he won his ninth and last SA Championship (counted as the 1963/64 Championship). Young and the rider who would succeed him as South Australia's best rider, John Boulger, jointly hold the record for SA title wins with nine each. A lover of fishing, at his home in Adelaide Young was known to use his two World Championship trophies as a place to store his sinkers. Just a year after his death, Jack Young was inducted into the Sport Australia Hall of Fame for his services to speedway. In 2008, Young was posthumously inducted into the Australian Speedway Hall of Fame. In November 2014, Jack Young was inducted into the Motorcycling South Australia Hall of Fame. International After winning his first South Australian championship in 1948 at Kilburn, as well as impressive displays for Australia in home Tests against England, Jack Young had the attention of British promoters. He was signed by the Edinburgh Monarchs in 1949 after they paid his fare to come over for a trial.
He scored maximum points on his debut, winning all six of his rides. In 1949, 1950 and 1951, Young won the Scottish Riders Championship (now the Scottish Open) at Old Meadowbank in Edinburgh. In 1951, Jack Young made history by becoming the first second division rider to become World Champion when he won the title at the Wembley Stadium in London. He defeated England's Split Waterman and fellow Australian Jack Biggs in a three way run-off for the title after each had finished the meeting on 12 points.In 1952 Young moved up a division by joining the West Ham Hammers for a then record transfer fee of UK£3,750. He also retained his World title in front of 93,000 fans at Wembley, thus becoming the first dual World Champion and the first rider to win the title two years in succession. He stayed with the Hammers until the end of the 1955 season and is remembered by many West Ham riders and fans alike as the best rider to ever race for the team. Young stayed home in Adelaide for the next two seasons riding mainly at his home track of Rowley Park, but in 1958 he returned to the UK to ride for the Coventry Bees. After again returning home to Adelaide in 1959, he again rode for the Bees in 1960 and 1961. Jack Young's last World Final appearance was as a reserve rider for the 1961 Championship at the Malmö Stadion in Malmö, Sweden (the first World Championship Final not held at Wembley). Neither Young, nor the other reserve rider, Swede Leif Larsson, got to ride in the final. Jack Young also represented Australia in test matches both at home and overseas and had the honour of captaining his country on many occasions. He first represented Australia in the 7th test against England on 17 February 1950 at the Kilburn Speedway in Adelaide and proved his class by top scoring on the night with 17 points. During the early part of his career when riding for the Edinburgh Monarchs, Young also represented Scotland in some matches. Career Highlights World Champion – 1951, 1952 South Australian Champion – 1948, 1954, 1955, 1956, 1958, 1959, 1960, 1963, 1964 Scottish Riders Champion – 1949, 1950, 1951 Adelaide Golden Helmet winner – 1949 (4 wins at Kilburn Speedway) and 1950 (2 wins at Rowley Park Speedway) Tom Farndon Memorial Trophy – 1951, 1961 Queensland State Champion – 1953 London Riders' Champion – 1953, 1954 National Trophy (with West Ham Hammers) – 1955 Victorian State Champion – 1957 12 times in succession British Match Race Champion over a two-year period, unbeaten in 33 successive meetings in Britain Holds the record for the highest points won in a season in Britain. Inducted into the Sport Australia Hall of Fame – 1988 Inducted into the Australian Speedway Hall of Fame – 2008 Inducted into the Motorcycling South Australia Hall of Fame – 2014 World Final Appearances 1950 – London, Wembley Stadium – 8th – 7pts 1951 – London, Wembley Stadium – Winner – 12+3pts 1952 – London, Wembley Stadium – Winner – 14pts 1953 – London, Wembley Stadium – 5th – 10pts 1954 – London, Wembley Stadium – 4th – 11pts 1955 – London, Wembley Stadium – 7th – 10pts 1960 – London, Wembley Stadium – 10th – 6pts 1961 – Malmö, Malmö Stadion – Reserve – Did not Ride Death Jack died of a lung disorder in Adelaide's Modbury Hospital on 28 August 1987 at the age of sixty two. Years of riding through dust clouds on British cinder tracks, as well as being a heavy cigarette smoker had left Young with Emphysema. 
He was survived by his wife Joan whom he had married on 12 May 1945 in the All Saints Church of England in the Adelaide suburb of Hindmarsh. Jack and Joan Young (born Joan Mary Carroll) had one son and two daughters. Jack Young was the idol of a young rider from Christchurch, New Zealand who rode against him in Australia during the early 1960s, with the two forming a friendship that would last until Jack's passing in 1987. That rider, Ivan Mauger, who was actually based at Rowley Park at the time, would go on to win a record six Speedway World Championships (1968, 1969, 1970, 1972, 1977, 1979), three Long Track World Championships (1971, 1972, 1976), four Speedway World Team Cups (1968, 1971, 1972, 1979), and two Speedway World Pairs Championships (1969, 1970). Mauger credits advice he received from Young at the 1960 Australian Long Track Championship in the South Australian coastal town of Port Pirie for putting him on the path to becoming a World Champion. Jack Young Solo Cup The Jack Young Solo Cup (formerly known as the Jack Young Memorial Cup) is held in his honour every year at the Gillman Speedway in Adelaide after being previously held from 1990 to 1997 at Gillman's predecessor North Arm Speedway. The first cup was won by Swedish rider Jimmy Nilsen at the conclusion of an Australia vs the Rest of the World test match. The second running of the race again saw a win by a Swedish rider, 1984 and 1988 Ice Racing World Champion Erik Stenlund. The race was again run at the conclusion of Test, this time between Australia and Sweden. The international flavour continued in 1992 when the Cup was won by England's Steve Schofield. The first Australian winner was Mildura rider Jason Lyons who won the Cup in 1993. Ten times Australian Solo Champion Leigh Adams from Mildura holds the record with five wins in 1994 and 1997 (North Arm), and 2001, 2002 and 2003 (Gillman). The first South Australian rider to win the cup was Shane Bowes who won in 1996. 1995 winner Tomasz Gollob from Poland (who was based at North Arm for the 1994/95 Australian season) is the only rider to win the cup who has emulated Young's feat of winning the Individual Speedway World Championship. Gollob won the 2010 Speedway Grand Prix series to become the World Champion, while Leigh Adams was the 1992 World Under-21 Champion and Eric Stenlund was a dual Ice Racing World Champion. With the closure of North Arm in 1997, and the new Gillman Speedway not ready for championship meetings until 2001, the Jack Young Solo Cup was not held from 1998 to 2000. Leigh Adams won the Cup the last time it was held at North Arm in 1997 as well as the first time it was run at Gillman in 2001. The 2001 meeting, held on 26 January (Australia Day), was also the official opening of the new Gillman Speedway. After being a single, six lap race for many years, the Jack Young Solo Cup is currently run in a championship format with riders earning points in the heats before the top scorers go into a semi final and then the final. The current holder of the Jack Young Solo Cup is Tyron Proctor who won his third JYSC in four years on 28 November 2015.* Note: The winner of the "Scottish Open Championship", of which Young was a three time winner, also receives the "Jack Young Memorial Scottish Open Trophy" in honour of the former Edinburgh Monarchs star rider. Adelaide's Rory Schlein is the only rider to have won both Jack Young Memorial trophies. 
Jack Young Solo Cup Winners Passage 2: British Speedway Championship The British Speedway Championship is an annual motorcycle speedway competition open to British national speedway riders. The winner of the event becomes the British Speedway Champion. History Inaugurated in 1961 as a qualifying round of the Speedway World Championship, it was open to riders from Britain and the British dominions. It was initially dominated by riders from New Zealand such as Barry Briggs and Ivan Mauger, because of the British Final forming part of the World Speedway championship qualifying rounds. Briggs and Mauger were multiple world champions. It was not until 1975 that the final was restricted to British riders. Countries such as Australia and New Zealand then held their own World Individual Speedway championship qualifying rounds. In the first dozen finals, it was only won twice by a British-born rider, both times by Peter Craven. Australians Rory Schlein and Jason Crump rode under an ACU (British) licence. British Champions Medals classification See also British Speedway Under 18 Championship British Speedway Under 21 Championship Speedway in the United Kingdom Passage 3: Kurt Hansen (speedway rider) Kurt Hansen (born 2 October 1964) is a former motorcycle speedway rider from Denmark. Career He competed in two finals of the Speedway Under-21 World Championship (known as the European Championship at the time). The first was as a reserve in the 1984 Individual Speedway Junior European Championship and the second as an outright qualifier in the 1985 Individual Speedway Junior European Championship, where he finished in 9th place. He only rode in the British leagues for two seasons (1984 and 1985) for the Halifax Dukes. He represented the Denmark national under-21 speedway team. Passage 4: 1952 Individual Speedway World Championship The 1952 Individual Speedway World Championship was the seventh edition of the official World Championship to determine the world champion rider. Australian rider Jack Young became the first rider to win a second title (and the first to win two in a row) when he won his second straight World Championship after scoring 14 points. Second was Welshman Freddie Williams on 13 points, with England's Bob Oakley third on 12 points. Qualification Nordic Final 20 June 1952 Växjö First 8 to Continental Final Continental Final 22 June 1952 Falköping First 8 to Championship Round Championship Round Venues 10 events in Great Britain. Scores Top 16 qualify for World final, 17th & 18th reserves for World final World final 18 September 1952 London, Wembley Stadium Classification Podium Jack Young (Australia) Freddie Williams (Wales) Bob Oakley (England) Passage 5: 1989 Individual Speedway World Championship The 1989 Individual Speedway World Championship was the 44th edition of the official World Championship to determine the world champion rider. It was the second time the championship was held in West Germany after previously being held in Norden in 1983. The World Final was held at the Olympic Stadium in Munich. Hans Nielsen made up for his 1988 run-off defeat to fellow Dane Erik Gundersen by scoring a 15-point maximum to take his third World Championship. Nielsen joined fellow Danes Ole Olsen and Erik Gundersen as a three-time Speedway World Champion. Simon Wigg from England finished second, with the slick 400 metres (440 yards) track suiting his long track style. Wigg defeated fellow Englishman Jeremy Doncaster in a run-off for second and third places.
In what would prove to be his last World Final before his career-ending crash in the World Team Cup Final at the Odsal Stadium in England just two weeks later, Erik Gundersen finished in fourth place. His chances of an outright second-place finish (after having finished second behind Nielsen in Heat 4) ended when his bike's engine seized while leading Heat 9, causing him not to finish the race. In a sad twist, it was also a seized engine in Heat 1 of the World Team Cup Final that would cause Gundersen's career-ending crash. Australian rider Troy Butler had a lucky passage to the World Final. After being seeded to the Commonwealth Final, he finished eighth to qualify for the Overseas Final. He then finished tenth in the Overseas Final to be the first reserve for the Intercontinental Final. He then got a start in the Intercontinental Final at Bradford when Overseas champion Sam Ermolenko injured his back in a horrific long track motorcycle racing crash and was forced to withdraw (the American would be out for over 6 months). Butler would finish twelfth in the IC Final to become a reserve for the World Final, where he once again came in as an injury replacement when Dane Jan O. Pedersen was forced to pull out. The 1986 Australian Champion ultimately finished twelfth in Munich with 4 points (two second places) from his 5 rides. First Round (Overseas Series) New Zealand Qualification First 2 from New Zealand final to Commonwealth final (Mitch Shirra seeded to Commonwealth Final) Final Western Springs Stadium, 28 January Australian Qualification Winner of Australian final to Commonwealth final (Stephen Davies & Troy Butler seeded to Commonwealth Final) Final Newcastle Motordrome, 15 January Swedish Qualification Swedish Final May 16, 17 & 18 SWE Nässjö, Nyköping & Karlstad First 5 to Nordic Final plus 1 reserve Danish Final May 19 Vojens, Speedway Center First 6 to Nordic final plus 1 reserve British Final May 21 Coventry, Brandon Stadium First 10 to Commonwealth final plus 1 reserve
Calendar Domestic Qualifications Deutscher Motor Sport Bund nominated five riders and two track reserves in February 2009. Avto-Moto Zveza Slovenije nominated three riders in March 2009. Czech Republic Autoklub of the Czech Republic nominated six riders in October 2008: Lukáš Dryml, Aleš Dryml, Jr., Luboš Tomíček, Jr., Adrian Rymel, Matěj Kůs and Filip Šitera. A final rider, to start in the SGP Qualification, was to be nominated in 2009. Poland The top three riders from the 2008 Golden Helmet Final qualified for Grand Prix Qualification (Damian Baliński, Jarosław Hampel and Adrian Miedziński). Four riders would qualify from the Domestic Final. The last rider and one reserve would be nominated by the Main Commission of Speedway Sport. Two Polish 2009 Speedway Grand Prix permanent riders (Rune Holta and Grzegorz Walasek) started in the Domestic Final. Tomasz Gollob (#3) and Sebastian Ułamek (#14) did not start. Finał krajowych eliminacji do GP IMŚ (Final of Domestic Qualification to Individual World Championship Grand Prix) 7 April 2009 (18:00) Gdańsk Referee: Andrzej Terlecki Best Time: 63.26 - Piotr Protasiewicz (heat 3) Qualify: 4 and 1 + 1R by Main Commission of Speedway Sport Qualifying rounds Semi-finals Passage 8: Brian Andersen Brian Askel Andersen (born 13 March 1971) is a Danish former international motorcycle speedway rider. Career Andersen reached the final of the Under-21 World Championship in 1990 and then won the event the following year to become the 1991 Junior World Champion. The success brought him to the attention of the British leagues, and Coventry Bees signed him for the 1992 British League season. He drove up his average over the following seasons for Coventry and established himself as one of their leading riders. In 1995, he won the Individual Speedway Danish Championship. In 1996, he finished second in the 1996 Intercontinental Final, which qualified him for his first Speedway Grand Prix series. He rode in the Grand Prix between 1997 and 2001, and won two bronze medals in the Speedway World Team Cup. He won the Danish Championship for the second time in 1999, which was also his last season for Coventry before he moved to join Oxford Cheetahs for the 2000 Elite League speedway season. In 2001, he was part of the Oxford Cheetahs' title-winning team. Family His brother Jan Andersen was a speedway rider. His son Mikkel Andersen is also a speedway rider and the 2022 FIM Speedway Youth World Championship (SGP3) world champion. Major results World individual Championship 1997 Speedway Grand Prix - 6th (80 pts) including winning the 1997 Speedway Grand Prix of Great Britain 1998 Speedway Grand Prix - 16th (31 pts) 1999 Speedway Grand Prix - 22nd (12 pts) 2000 Speedway Grand Prix - 23rd (15 pts) 2001 Speedway Grand Prix - 18th (23 pts) World team Championships 1996 Speedway World Team Cup - bronze medal 1998 Speedway World Team Cup - bronze medal 2000 Speedway World Team Cup - =5th 2001 Speedway World Cup - 4th See also Denmark national speedway team List of Speedway Grand Prix riders Passage 9: 1936 Individual Speedway World Championship The 1936 Individual Speedway World Championship was the first-ever Speedway World Championship and was won by Lionel Van Praag of Australia. The forerunner to the World Championship was generally regarded to be the Star Riders' Championship. The final was held at London's Wembley Stadium in front of 74,000. It was the first of a record 26 times that Wembley would host the World Final, with the last being in 1981.
Summary The World Championship would consist of a semi-final round, where points would be added to the final to determine the winner. One of the favourites, Jack Parker, had a broken hand and was unable to compete in the final. Joe Abbott was also unable to line up for the final due to injury, despite having qualified. They were replaced by Norman Parker and Bill Pitcher. Despite being unbeaten in the Final, Australian Bluey Wilkinson only finished third, as the Championship was decided by bonus points accumulated in previous rounds plus the score from the final. Van Praag defeated England's Eric Langton in a runoff to be declared the inaugural Speedway World Champion. As they lined up at the tapes for the runoff, Langton broke them, which would ordinarily have led to disqualification. However, Van Praag stated he did not want to win the title by default and insisted that a race should take place. At the restart Langton made it to the first bend in front and led until the final bend on the last lap, when Van Praag darted through the smallest of gaps to win by less than a wheel length. Afterwards, controversial allegations abounded that the two riders had 'fixed' the match race, deciding between them that the first rider to the first bend would win the race and the Championship, and that they would split the prize money; Langton led into the first bend but was overtaken by Van Praag. Van Praag reportedly paid Langton £50 "conscience money" after the race for going back on the agreement. Qualifying The top 16 riders over 7 rounds would qualify for the World final. Ron Johnson and Bill Pitcher qualified as first reserves. Podium Lionel Van Praag (Australia) Eric Langton (Great Britain) Bluey Wilkinson (Australia) World final 10 September 1936 London, Wembley Stadium Passage 10: 2010 Individual Speedway Polish Championship The 2010 Individual Speedway Polish Championship (Polish: Indywidualne Mistrzostwa Polski, IMP) was the 2010 version of the Individual Speedway Polish Championship organized by the Polish Motor Union (PZM). The Championship was won by Janusz Kołodziej, who beat Krzysztof Kasprzak in the run-off. Third was Rafał Dobrucki. Kołodziej, who won the 2009 (host in 2010) and 2010 Golden Helmet and the 2010 Speedway World Cup, was awarded a nomination to the 2011 Speedway Grand Prix. The defending Champion, Tomasz Gollob, who was the 2010 Speedway Grand Prix leader, withdrew from the IMP Final. Format 64 riders started in the four quarter-finals, and 27 riders qualified for the semi-finals (the top 6 from the Lublin quarter-final and the top 7 from each of the Opole, Piła and Poznań meetings). These 27 riders and 5 seeded riders started in the two semi-finals. The five seeded riders were the Grand Prix permanent riders (Tomasz Gollob, Rune Holta and Jarosław Hampel) and the top 3 from the 2009 Polish Championship Final (Gollob, Krzysztof Kasprzak and Janusz Kołodziej). The top 8 riders from each semi-final qualified for the final in Zielona Góra. The hosting of the final is traditionally awarded to the defending Team Polish Champion, Falubaz Zielona Góra. Quarter-finals Semi-finals The final 7 August 2010 18 September 2010 Zielona Góra Referee:
The winner of the London Riders' Championship in 1953 scored how many points in the 1952 Individual Speedway World Championship?
14
4,029
hotpotqa
4k
Passage 1: SHV connector The SHV (safe high voltage) connector is a type of RF connector used for terminating a coaxial cable. The connector uses a bayonet mount similar to those of the BNC and MHV connectors, but is easily distinguished due to its very thick and protruding insulator. This insulation geometry makes SHV connectors safer for handling high voltage than MHV connectors, by preventing accidental contact with the live conductor in an unmated connector or plug. The connector is also designed such that when it is being disconnected from a plug, the high voltage contact is broken before the ground contact, to prevent accidental shocks. The connector is also designed to prevent users from forcing a high voltage connector into a low voltage plug or vice versa (as can happen with MHV and BNC connectors), by reversing the gender compared to BNC. Details of the connector, comprising dimensions of the mating parts, voltage rating, minimum insulation requirements and more, are specified by the IEC document 60498. SHV connectors are used in laboratory settings for voltages and currents beyond the capacity of BNC and MHV connectors. Standard SHV connectors are rated for 5000 volts DC and 5 amperes, although higher-voltage versions (to 20 kV) are also available. Passage 2: U-229 The U-229 is a cable connector currently used by the U.S. military for audio connections to field radios, typically for connecting a handset. There are five-pin and six-pin versions, the six-pin version using the extra pin to power accessories. This type of connector is also used by the National Security Agency to load cryptographic keys into encryption equipment from a fill device. It is specified by the detail specification MIL-DTL-55116D. External links U-229 pin-outs and information MIL-DTL-55116 military specification Compilation of military radio related standards Passage 3: Multicable In stage lighting, a multicable (otherwise known as multi-core cable or mult) is a type of heavy-duty electrical cable used in theaters to power lights. The basic construction involves a bundle of individual conductors surrounded by a single outer jacket. Whereas single cables only have three conductors, multicable has ten or more. They are configured to run in six- or eight-circuit varieties. Typically, both ends of multicable have a specific connector known as a Socapex Connector. Technicians then combine the cables with break-outs and break-ins, which are essentially octopus-like adapters with one Socapex end and six to eight Edison, twist-lok, or stage pin style connectors. Use Multicable is used when technicians need to mount lights where no permanent circuiting options exist. Typically, mounting pipes designed for lighting use have enclosed raceways with permanent power outlets, running to a remote dimmer unit somewhere in the theater. When such options are not available, technicians have to run cable to these positions instead. Originally, this had to be done with cable bundles, running single extension cords long distances and tying or taping them into groups, or just running cable in a disorganized mess. Instead, several circuits can be run in a single cable, using multicables. These are used most often in theaters without on-stage raceways or in systems with portable dimming racks, which are not wired into the building. Multicable is a quick, organized way of getting a large number of circuits away from the dimmers to the lights.
Advantages Compared with six individual extension cords, multicable is a neater and more organized alternative. There are fewer connectors, and the multis are labeled one to six on their adapters so technicians always know which circuit they're working on. Disadvantages Running multicable from source to light is particularly strenuous. Compared to running six cables one by one, multicable is much heavier. Also, because of the large amount of copper in the cable, they have very bad memory, an effect where the cable will try to curl and twist in an attempt to return to its original coiled state. This also makes packing multicable back into coils tricky. Passage 4: Camlock (electrical) A camlock or cam-lock is an interchangeable electrical connector, often used in temporary electrical power production and distribution, predominantly in North America. Originally the trade name Cam-Lok, it is now a generic term. Each camlock connector carries a single phase, pole, or conductor; multiple camlock connectors will be used to make a complete electrical supply or circuit. The most common form is the 16 series, rated at 400 amperes with 105 °C terminations. Also in common use is the 15 series (mini-cam), rated at 150 amperes. A larger version, denoted the 17 series, is made with ratings up to 760 A. A ball nose version and a longer nose standard version exist—the latter is the most common. The original early version of the connector was hot-vulcanized to the cable body; later versions use dimensional pressure to exclude foreign material from the connector pin area. The tail of the connector insulator body is trimmable to fit the cable outer diameter. Another version is the Posi-Lok, which features keyed, shrouded connectors, and panels with sequencing interlocks. Camlock is generally used where temporary connections of 3-phase and/or more than 50 A are needed. Applications include connecting large temporary generators or load banks to distribution panels or building disconnects. Common scenarios include testing, emergencies, temporary special events, and traveling stage shows with large lighting and sound equipment. They are usually found only in professional environments, where connections are performed by qualified personnel. Color codes Standards and industry conventions for phase and voltage exist, but may vary in practice, particularly when international companies and traveling productions are involved. North America The National Electrical Code (NEC) only specifies colors for ground and neutral: Green for the equipment grounding (safety) conductor (NEC Article 250.119), and white or grey for the neutral (grounded) conductor (NEC Article 200.6). These colors may not be used for any other purpose, nor may their purpose use a different color. No other colors are specified by the NEC for general power distribution. Nonetheless, the following conventions exist: United Kingdom The UK system has two established camlock colour codes. The old and new colour codes are not compatible: Black was originally used to indicate neutral, and is now a phase colour; blue was used to denote a phase, and is now used to denote neutral. As the use of camlocks in the UK has been declining, it is very unlikely that any matching the new colour codes will be found. Gallery Camlock power distribution Passage 5: Cable gland A cable gland (more often known in the U.S. as a cord grip, cable strain relief, cable connector or cable fitting) is a device designed to attach and secure the end of an electrical cable to the equipment.
A cable gland provides strain-relief and connects by a means suitable for the type and description of cable for which it is designed—including provision for making electrical connection to the armour or braid and lead or aluminium sheath of the cable, if any. Cable glands may also be used for sealing cables passing through bulkheads or gland plates. Cable glands are mostly used for cables with diameters between 1 mm and 75 mm. Cable glands are commonly defined as mechanical cable entry devices. They are used throughout a number of industries in conjunction with cable and wiring used in electrical instrumentation and automation systems. Cable glands may be used on all types of electrical power, control, instrumentation, data and telecommunications cables. They are used as a sealing and termination device to ensure that the characteristics of the enclosure which the cable enters can be maintained adequately. Cable glands are made of various plastics, and steel, brass or aluminum for industrial usage. Glands intended to resist dripping water or water pressure will include synthetic rubber or other types of elastomer seals. Certain types of cable glands may also serve to prevent entry of flammable gas into equipment enclosures, for electrical equipment in hazardous areas. Although cable glands are often called "connectors", a technical distinction can be made in the terminology, which differentiates them from quick-disconnect, conducting electrical connectors. For routing pre-terminated cables (cables with connectors), split cable glands can be used. These cable glands consist of three parts (two gland halves and a split sealing grommet) which are screwed with a hexagonal locknut (like normal cable glands). Thus, pre-assembled cables can be routed without removing the plugs. Split cable glands can reach an ingress protection of up to IP66/IP68 and NEMA 4X. Alternatively, split cable entry systems can be used (normally consisting of a hard frame and several sealing grommets) to route a large number of pre-terminated cables through one wall cut-out. There are at least three types of thread standards used: Panzergewinde (PG standard) Metric thread National Pipe Thread (inch system) See also Electrical connector Pipe thread Steel conduit thread Feedthrough Passage 6: SR connector An SR connector, or CP connector (from Russian: Соединитель Радиочастотный, radio frequency connector) is a type of Russian-made RF connector for coaxial cables. Based on the American BNC connector, the SR connector differs slightly in dimensions due to discrepancies in imperial to metric conversion, though with some force they can still be mated. There are however types of SR connectors that do not resemble their American counterpart. Most SR connectors are variants of SR-50 (50 Ω) or SR-75 (75 Ω) versions, with the SR-75 typically having a thinner center pin. They often resemble C connectors in shape, and have threaded inserts similar to N connectors. Further numerical suffixes denote specific kinds of connectors; for instance, the CP 75-164 is a much larger high-power connector, designed for upwards of 3000 W, with a similar appearance to an N or UHF type. The various letters after the number refer to the dielectric material used. Below is a breakdown of the various suffixes used in the order they would appear: See also BNC connector RF connector Passage 7: Very-high-density cable interconnect A very-high-density cable interconnect (VHDCI) is a 68-pin connector that was introduced in the SPI-2 document of SCSI-3.
The VHDCI connector is a very small connector that allows placement of four wide SCSI connectors on the back of a single PCI card slot. Physically, it looks like a miniature Centronics type connector. It uses the regular 68-contact pin assignment. The male connector (plug) is used on the cable and the female connector ("receptacle") on the device. Other uses Apart from the standardized use with the SCSI interface, several vendors have also used VHDCI connectors for other types of interfaces: Nvidia: for an external PCI Express 8-lane interconnect, and used in Quadro Plex VCS and in Quadro NVS 420 as a display port connector ATI Technologies: on the FireMV 2400 to convey two DVI and two VGA signals on a single connector, and ganging two of these connectors side by side in order to allow the FireMV 2400 to be a low-profile quad display card. The Radeon X1950 XTX Crossfire Edition also used a pair of the connectors to grant more inter-card bandwidth than the PCI Express bus allowed at the time for Crossfire. AMD: Some VisionTek variants of the Radeon HD 7750 use a VHDCI connector alongside a Mini DisplayPort to allow a five-display Eyefinity array (breaking out to 4 HDMI plus 1 Mini DisplayPort) on a low-profile card. VisionTek also released a similar Radeon HD 5570, though it lacked a Mini DisplayPort. Juniper Networks: for their 12- and 48-port 100BASE-TX PICs (physical interface cards). The cable connects to the VHDCI connector on the PIC at one end and, via an RJ-21 connector at the other end, to an RJ-45 patch panel. Cisco: 3750 StackWise stacking cables National Instruments: on their high-speed digital I/O cards. AudioScience uses VHDCI to carry multiple analog balanced audio and digital AES/EBU audio streams, and clock and GPIO signals. See also SCSI connector Passage 8: C connector The C connector is a type of RF connector used for terminating coaxial cable. The interface specifications for the C and many other connectors are referenced in MIL-STD-348. The connector uses two-stud bayonet-type locks. The C connector was invented by Amphenol engineer Carl Concelman. It is weatherproof without being overly bulky. The mating arrangement is similar to that of the BNC connector. It can be used up to 11 GHz, and is rated for up to 1500 volts. See also USB-C (also called Type C connector) Passage 9: GR connector The GR connector, officially the General Radio Type 874, was a type of RF connector used for connecting coaxial cable. It was designed by Eduard Karplus, Harold M. Wilson and William R. Thurston at General Radio Corporation. It was widely used on General Radio's electronic test equipment and some Tektronix instruments from the 1950s to the 1970s. The connector had several desirable properties: good control of the electrical impedance across a wide range of frequencies, therefore low reflection; reliable mating; hermaphroditism, so there were no "male" or "female" connectors; any GR connector could mate with any other GR connector. This last characteristic was achieved by having both the inner and outer conductors made from four leaves, two of which were displaced slightly outwards and two of which were displaced slightly inwards. By rotating one connector by 90 degrees, its inner leaves would mate with the other connector's outer leaves and vice versa. When frequently mated, the inner leaves were susceptible to breakage due to stubbing, flexing and fatigue cracking as the connector was pressed together and alignment was perfected.
In 1961, an optional locking mechanism consisting of an outer hex nut encasing a captured threaded barrel was added to the 874 line. It can be seen in the photograph of a GR-900 to GR-874 adapter. The locking assembly is not captive and can be backed off the RF connector. The threaded barrel is supplied on each connector. The threaded barrel was withdrawn into the nut on one connector and extended on the other to allow the barrel to engage the nut of both mating connectors. This style of locking mechanism was continued in the GR-874's thematic successors: the GR-900 precision 14 mm connector, which retains a crenelated hermaphroditic mechanical anti-spin feature to protect the sexless RF interface from rotating and galling when the locking mechanism is tightened, and the fully sexless APC-7 7 mm connector. Adapters to other connector series were available. Eventually, the limited frequency range of a 14 mm connector and its high manufacturing cost overcame its ease of assembly, and the GR-874 was generally supplanted by the 7 mm type N connector and its variants, the BNC connector and the TNC connector, and the later higher-frequency 3.5 mm SMA connectors. General Radio, then still a major source of RF test equipment, designed the incompatible GR-900 as a 14 mm successor to the GR-874, filling the industry's need for a higher-performance sexless connector for fully reversible lab standards and related test equipment. The GR-900 was in turn succeeded in this essential niche role by the completely sexless APC-7 connector. Passage 10: BNC connector The BNC connector (initialism of "Bayonet Neill–Concelman") is a miniature quick connect/disconnect radio frequency connector used for coaxial cable. It is designed to maintain the same characteristic impedance of the cable, with 50 ohm and 75 ohm types being made. It is usually applied for video and radio frequency connections up to about 2 GHz and up to 500 volts. The connector has a twist-to-lock design with two lugs in the female portion of the connector engaging a slot in the shell of the male portion. The type was introduced on military radio equipment in the 1940s and has since become widely applied in radio systems, and is a common type of video connector. Similar radio-frequency connectors differ in dimensions and attachment features, and may allow for higher voltages, higher frequencies, or three-wire connections. Description The BNC connector features two bayonet lugs on the female connector; mating is fully achieved with a quarter turn of the coupling nut. It uses an outer conductor with slots and some plastic dielectric on each gender connector. This dielectric causes increasing losses at higher frequencies. Above 4 GHz, the slots may radiate signals, so the connector is usable, but not necessarily stable, up to about 11 GHz. BNC connectors are made to match the characteristic impedance of cable at either 50 ohms or 75 ohms (with other impedances such as 93 ohms for ARCNET available though less common). They are usually applied for frequencies below 4 GHz and voltages below 500 volts. The interface specifications for the BNC and many other connectors are referenced in MIL-STD-348. Use The BNC was originally designed for military use and has gained wide acceptance in video and RF applications to 2 GHz. BNC connectors are used with miniature-to-subminiature coaxial cable in radio, television, and other radio-frequency electronic equipment.
They were commonly used for early computer networks, including ARCnet, the IBM PC Network, and the 10BASE2 variant of Ethernet. The BNC connector is used for signal connections such as: analog and serial digital interface video signals, radio antennas, aerospace electronics (avionics), nuclear instrumentation, and test equipment. The BNC connector is used for analog composite video and digital video interconnects on commercial video devices. Consumer electronics devices with RCA connector jacks can be used with BNC-only commercial video equipment by inserting an adapter. BNC connectors were commonly used on 10BASE2 thin Ethernet network cables and network cards. BNC connections can also be found in recording studios. Digital recording equipment uses the connection for synchronization of various components via the transmission of word clock timing signals. Typically the male connector is fitted to a cable, and the female to a panel on equipment. Cable connectors are often designed to be fitted by crimping using a special power or manual tool. Wire strippers which strip outer jacket, shield braid, and inner dielectric to the correct lengths in one operation are used. Origin The connector was named the BNC (for Bayonet Neill–Concelman) after its bayonet mount locking mechanism and its inventors, Paul Neill and Carl Concelman. Neill worked at Bell Labs and also invented the N connector; Concelman worked at Amphenol and also invented the C connector. Types and compatibility Types BNC connectors are most commonly made in 50 and 75 ohm versions, matched for use with cables of the same characteristic impedance. The 75 ohm types can sometimes be recognized by the reduced or absent dielectric in the mating ends but this is by no means reliable. There was a proposal in the early 1970s for the dielectric material to be coloured red in 75 ohm connectors, and while this is occasionally implemented, it did not become standard. The 75 ohm connector is dimensionally slightly different from the 50 ohm variant, but the two nevertheless can be made to mate. The 50 ohm connectors are typically specified for use at frequencies up to 4 GHz and the 75 ohm version up to 2 GHz. Video (particularly HD video signals) and DS3 Telco central office applications primarily use 75 ohm BNC connectors, whereas 50 ohm connectors are used for data and RF. Many VHF receivers used 75 ohm antenna inputs, so they often used 75 ohm BNC connectors. Reverse-polarity BNC (RP-BNC) is a variation of the BNC specification which reverses the polarity of the interface. In a connector of this type, the female contact normally found in a jack is usually in the plug, while the male contact normally found in a plug is in the jack. This ensures that reverse polarity interface connectors do not mate with standard interface connectors. The SHV connector is a high-voltage BNC variant that uses this reverse polarity configuration. Smaller versions of the BNC connector, called Mini BNC and High Density BNC (HD BNC), are manufactured by Amphenol. While retaining the electrical characteristics of the original specification, they have smaller footprints giving a higher packing density on circuit boards and equipment backplanes. These connectors have true 75 ohm impedance making them suitable for HD video applications. Compatibility The different versions are designed to mate with each other, and a 75 ohm and a 50 ohm BNC connector which both comply with the 2007 IEC standard, IEC 61169-8, will mate non-destructively.
At least one manufacturer claims very high reliability for the connectors' compatibility. At frequencies below 10 MHz the impedance mismatch between a 50 ohm connector or cable and a 75 ohm one has negligible effects. BNC connectors were thus originally made only in 50 ohm versions, for use with any impedance of cable. Above this frequency, however, the mismatch becomes progressively more significant and can lead to signal reflections. BNC inserter/remover tool A BNC inserter/remover tool, also called a BNC tool, BNC extraction tool, BNC wrench, or BNC apple corer, is used to insert or remove BNC connectors in high density or hard-to-reach locations, such as densely wired patch panels in broadcast facilities like central apparatus rooms. BNC tools are usually lightweight, made of stainless steel, and have screwdriver-type plastic handle grips for applying torque. Their shafts are usually double the length of a standard connector. They help to safely, efficiently and quickly connect and disconnect BNC connectors in jack fields. BNC tools facilitate access and minimize the risk of accidentally disconnecting nearby connectors. Similar connectors Similar connectors using the bayonet connection principle exist, and a threaded connector is also available. United States military standard MIL-PRF-39012 entitled Connectors, Coaxial, Radio Frequency, General Specification for (formerly MIL-C-39012) covers the general requirements and tests for radio frequency connectors used with flexible cables and certain other types of coaxial transmission lines in military, aerospace, and spaceflight applications. SR connectors In the USSR, BNC connectors were copied as SR connectors. As a result of recalculating from imperial to metric measurements, their dimensions differ slightly from those of BNC. They are however generally interchangeable with them, sometimes with force applied. TNC (Threaded Neill–Concelman) A threaded version of the BNC connector, known as the TNC connector (for Threaded Neill-Concelman), is also available. It has superior performance to the BNC connector at microwave frequencies. Twin BNC or twinax Twin BNC (also known as twinax) connectors use the same bayonet latching shell as an ordinary BNC connector but contain two independent contact points (one male and one female), allowing the connection of a 78 ohm or 95 ohm shielded differential pair such as RG-108A. They can operate up to 100 MHz and 100 volts. They cannot mate with ordinary BNC connectors. An abbreviation for twinax connectors has been BNO (Sühner). Triaxial Triaxial (also known as triax) connectors are a variant on BNC that carry a signal and guard as well as ground conductor. These are used in sensitive electronic measurement systems. Early triaxial connectors were designed with just an extra inner conductor, but later triaxial connectors also include a three-lug arrangement to rule out an accidental forced mating with a BNC connector. Adaptors exist to allow some interconnection possibilities between triaxial and BNC connectors. The triaxial may also be known as a Trompeter connection. High-voltage connectors For higher voltages (above 500 V), MHV and SHV connectors are typically used. MHV connectors are easily mistaken for BNC type, and can be made to mate with them by brute force. The SHV connector was developed as a safer alternative; it will not mate with ordinary BNC connectors, and the inner conductor is much harder to accidentally contact.
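To put the 50 ohm / 75 ohm compatibility note above into rough numbers, the size of the mismatch can be estimated with the standard transmission-line reflection formulas (a worked example added here for illustration; the figures below follow from the stated impedances and are not taken from the passage itself):

\Gamma = \frac{Z_{75} - Z_{50}}{Z_{75} + Z_{50}} = \frac{75 - 50}{75 + 50} = 0.2, \qquad \mathrm{VSWR} = \frac{1 + |\Gamma|}{1 - |\Gamma|} = 1.5, \qquad \mathrm{RL} = -20 \log_{10} |\Gamma| \approx 14\ \mathrm{dB}

In other words, roughly |\Gamma|^2 = 4% of the incident power is reflected at such a junction, which is negligible for low-frequency work but can visibly degrade fast video edges and other high-frequency signals, consistent with the passage's caveat.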
Miniature connectors BNC connectors are commonly used in electronics, but in some applications they are being replaced by LEMO 00 miniature connectors, which allow for significantly higher densities. In the video broadcast industry, the DIN 1.0/2.3 and the HD-BNC connector are used for higher-density products. See also SMA connector SMB connector SMC connector UHF connector
SR connector is based on the connector that is often used for what type of cable?
coaxial
3,945
hotpotqa
4k
Passage 1: Austrobaileyales Austrobaileyales is an order of flowering plants consisting of about 100 species of woody plants growing as trees, shrubs and lianas. The best-known species is Illicium verum, commonly known as star anise. The order belongs to the group of basal angiosperms, the ANA grade (Amborellales, Nymphaeales, and Austrobaileyales), which diverged earlier from the remaining flowering plants. Austrobaileyales is sister to all remaining extant angiosperms outside the ANA grade.The order includes just three families of flowering plants, the Austrobaileyaceae, a monotypic family containing the sole genus, Austrobaileya scandens, a woody liana, the Schisandraceae, a family of trees, shrubs, or lianas containing essential oils, and the Trimeniaceae, essential oil-bearing trees and lianas. In different classifications Until the early 21st century, the order was only rarely recognised by systems of classification (an exception is the Reveal system). The APG system, of 1998, did not recognize such an order. The APG II system, of 2003, does accept this order and places it among the basal angiosperms, that is: it does not belong to any further clade. APG II uses this circumscription: order Austrobaileyales family Austrobaileyaceae, one species of woody vines from Australia family Schisandraceae [+ family Illiciaceae], several dozen species of woody plants, found in tropical to temperate regions of East and Southeast Asia and the Caribbean. The best known of those is Star anise. family Trimeniaceae, half-a-dozen species of woody plants found in subtropical to tropical Southeast Asia, eastern Australia and the Pacific IslandsNote: "+ ..."=optional segregate family, that may be split off from the preceding family. The Cronquist system, of 1981, also placed the plants in families Illiciaceae and Schisandraceae together, but as separate families, united at the rank of order, in the order Illiciales. Passage 2: Tibouchina Tibouchina is a neotropical flowering plant genus in the family Melastomataceae. Species of this genus are subshrubs, shrubs or small trees and typically have purple flowers. They are native to Mexico, the Caribbean, and South America where they are found as far south as northern Argentina. Members of this genus are known as glory bushes, glory trees or princess flowers. The name Tibouchina is adapted from a Guianan indigenous name for a member of this genus. A systematic study in 2013 showed that as then circumscribed the genus was paraphyletic, and in 2019 the genus was split into a more narrowly circumscribed Tibouchina, two re-established genera Pleroma and Chaetogastra, and a new genus, Andesanthus. Description Tibouchina species are subshrubs, shrubs or small trees. Their leaves are opposite, usually with petioles, and often covered with scales. The inflorescence is a panicle or some modification of a panicle with reduced branching. The individual flowers have five free petals, purple or lilac in color; the color does not change as the flowers age. There are ten stamens, either all the same or dimorphic, with five larger and five smaller ones. The connective tissue below the anthers of the stamens is prolonged and modified at the base of the stamens into ventrally bilobed appendages. When mature, the seeds are contained in a dry, semiwoody capsule and are cochleate (spiralled). Taxonomy The genus Tibouchina was established by Aublet in 1775 in his Flora of French Guiana with the description of a single species, T. aspera, which is thus the type species. 
In 1885, in his treatment for Flora brasiliensis, Alfred Cogniaux used a broad concept of the genus, transferring into it many of the species at that time placed in Chaetogastra, Diplostegium, Lasiandra, Pleroma and Purpurella, among others. This broad concept was generally adopted subsequently, and around 470 taxa were at one time or another assigned to Tibouchina. Phylogeny A phylogenetic analysis in 2013 based on molecular data (2 plastid and 1 nuclear regions) determined that the traditional circumscription of Tibouchina was paraphyletic. Four major clades were resolved within the genus which were supported by morphological, molecular and geographic evidence. Based on the traditional code of nomenclature, the clade that the type species falls in retains the name of the genus; therefore, the clade containing Tibouchina aspera remains Tibouchina.A further molecular phylogenetic study in 2019 used the same molecular markers but included more species. It reached the same conclusion: the original broadly circumscribed Tibouchina consisted of four monophyletic clades. The authors proposed a split into four genera: a more narrowly circumscribed Tibouchina, two re-established genera Pleroma and Chaetogastra, and a new genus, Andesanthus. The relationship between Chaetogastra and the genus Brachyotum differed between a maximum likelihood analysis and a Bayesian inference analysis: the former found Brachyotum embedded within Chaetogastra, the latter found the two to be sisters. The part of their maximum likelihood cladogram which includes former Tibouchina species is as follows, using their genus names and with shading added to show the original broadly circumscribed Tibouchina s.l.: As re-circumscribed, Tibouchina is monophyletic and contains species belonging to the traditional sections T. section Tibouchina and T. section Barbigerae. Diagnostic characteristics include the presence of scale-like trichomes on the hypanthium and leaves and a long pedoconnective on lilac anthers, and the absence of glandular trichomes. Species are found in savanna habitats. Species As of May 2022, Plants of the World Online accepts the following species within Tibouchina: Selected former species Species placed in Tibouchina in its former broad sense include: Distribution and invasive potential All the species of Tibouchina are native to the Americas as far north as Mexico south to northern Argentina, with many found in Brazil, and others in Belize, Bolivia, Brazil, Colombia, Costa Rica, French Guiana, Guyana, Honduras, Nicaragua, Panama, Peru, Suriname, and Venezuela. Members of Tibouchina tend to be found in lowland savannas and on the lower slopes of the Andes. All Tibouchina species as well as those formerly placed in the genus are considered noxious weeds in Hawaii, because of their high potential for being invasive species. Many species, such as T. araguaiensis, T. papyrus, T. mathaei and T. nigricans, have narrow distributions, being known from only a handful of locations, while a few other species, including T. aspera, T. barbigera and T. bipenicillata, have broader distributions. Passage 3: Cyrtandra (plant) Cyrtandra (Neo-Latin, from Greek κυρτός, kyrtós, "curved", and ἀνήρ, anḗr, "male", in reference to their prominently curved stamens) is a genus of flowering plants containing about 600 species, with more being discovered often, and is thus the largest genus in the family Gesneriaceae. 
These plants are native to Southeast Asia, Australia, and the Pacific Islands, with the centre of diversity in Southeast Asia and the Malesian region. The genus is common, but many species within it are very rare, localized, and endangered endemic plants. The species can be difficult to identify because they are highly polymorphic and because they readily hybridize with each other. The plants may be small herbs, vines, shrubs, epiphytes, or trees. The genus is characterized in part by having two stamens, and most species have white flowers, with a few red-, orange-, yellow-, and pink-flowered species known. Almost all species live in rainforest habitats.It is an example of a supertramp genus.Hawaiian Cyrtandra are known as ha‘iwale. Species Selected species include: Cyrtandra aurantiicarpa Cyrtandra biserrata – Molokai cyrtandra Cyrtandra calyptribracteata Cyrtandra cleopatrae Cyrtandra confertiflora – lava cyrtandra Cyrtandra cordifolia – the Latin name means cyrtandra with heart-shaped leaves Cyrtandra crenata – Kahana Valley cyrtandra Cyrtandra cyaneoides – mapele Cyrtandra dentata – mountain cyrtandra Cyrtandra elatostemoides Cyrtandra elegans Cyrtandra ferripilosa – red-hair cyrtandra Cyrtandra filipes – gulch cyrtandra Cyrtandra garnotiana – hahala Cyrtandra giffardii – forest cyrtandra Cyrtandra gracilis – Palolo Valley cyrtandra Cyrtandra grandiflora – largeflower cyrtandra Cyrtandra grayana – Pacific cyrtandra Cyrtandra grayi – Gray's cyrtandra Cyrtandra halawensis – toothleaf cyrtandra Cyrtandra hashimotoi – Maui cyrtandra Cyrtandra hawaiensis – Hawaii cyrtandra Cyrtandra heinrichii – lava cyrtandra Cyrtandra hematos – singleflower cyrtandra Cyrtandra hirtigera Cyrtandra hypochrysoides Cyrtandra kalihii – Koolau Range cyrtandra Cyrtandra kamooloaensis – Kamo'oloa cyrtandra Cyrtandra kauaiensis – ulunahele Cyrtandra kealiae Cyrtandra kealiae ssp. kealiae (syn. C. limahuliensis) Cyrtandra kealiae ssp. urceolata Cyrtandra kohalae – Kohala Mountain cyrtandra Cyrtandra laxiflora – Oahu cyrtandra Cyrtandra lessoniana – Lesson's cyrtandra Cyrtandra macraei – upland cyrtandra Cyrtandra menziesii – ha'i wale Cyrtandra munroi – Lanaihale cyrtandra Cyrtandra nitens Cyrtandra oenobarba – shaggystem cyrtandra Cyrtandra olona – Kauai cyrtandra Cyrtandra oxybapha – Pohakea Gulch cyrtandra Cyrtandra paliku – cliffside cyrtandra Cyrtandra paludosa – kanaweo ke'oke'o Cyrtandra platyphylla – 'ilihia Cyrtandra polyantha – Niu Valley cyrtandra Cyrtandra pruinosa – frosted cyrtandra Cyrtandra pulgarensis Cyrtandra samoensis Cyrtandra sessilis – windyridge cyrtandra Cyrtandra subumbellata – parasol cyrtandra Cyrtandra tahuatensis Cyrtandra tintinnabula – Laupahoehoe cyrtandra Cyrtandra umbellifera Cyrtandra viridiflora – greenleaf cyrtandra Cyrtandra waiolani – fuzzyflower cyrtandra Cyrtandra wawrae – rockface cyrtandra Passage 4: Pothos (genus) Pothos is a genus of flowering plants in the family Araceae (tribe Potheae). It is native to China, the Indian Subcontinent, Australia, New Guinea, Southeast Asia, and various islands of the Pacific and Indian Oceans.The common houseplant Epipremnum aureum, also known as "pothos", was once classified under the genus Pothos. Neo P1 is a genetically engineered pothos designed to remove volatile organic compounds from ambient air. Species Pothos armatus C.E.C.Fisch. - Kerala Pothos atropurpurascens M.Hotta - Borneo Pothos barberianus Schott- Borneo, Malaysia, Sumatra Pothos beccarianus Engl. - Borneo Pothos brassii B.L.Burtt - Queensland Pothos brevistylus Engl. 
- Borneo Pothos brevivaginatus Alderw. - Sumatra Pothos chinensis (Raf.) Merr. - China, Tibet, Taiwan, Japan, Ryukyu Islands, Indochina, Himalayas, India, Nepal, Bhutan Pothos clavatus Engl. - New Guinea Pothos crassipedunculatus Sivad. & N.Mohanan - southern India Pothos curtisii Hook.f. - Thailand, Malaysia, Sumatra Pothos cuspidatus Alderw. - western New Guinea Pothos cylindricus C.Presl - Sabah, Sulawesi, Philippines Pothos dolichophyllus Merr. - Philippines Pothos dzui P.C.Boyce - Vietnam Pothos englerianus (Engl.) Alderw. - Sumatra Pothos falcifolius Engl. & K.Krause - Maluku, New Guinea Pothos gigantipes Buchet ex P.C.Boyce - Vietnam, Cambodia Pothos gracillimus Engl. & K.Krause - Papua New Guinea Pothos grandis Buchet ex P.C.Boyce & V.D.Nguyen - Vietnam Pothos hellwigii Engl. - New Guinea, Solomon Islands, Bismarck Archipelago Pothos hookeri Schott - Sri Lanka Pothos inaequilaterus (C.Presl) Engl. - Philippines Pothos insignis Engl. - Borneo, Palawan Pothos junghuhnii de Vriese - Borneo, Java, Sumatra Pothos keralensis A.G. Pandurangan & V.J. Nair - Kerala Pothos kerrii Buchet ex P.C.Boyce - Guangxi, Laos, Vietnam Pothos kingii Hook.f. - Thailand, Peninsular Malaysia Pothos lancifolius Hook.f. - Vietnam, Peninsular Malaysia Pothos laurifolius P.C.Boyce & A.Hay - Brunei Pothos leptostachyus Schott - Thailand, Peninsular Malaysia, Borneo, Sumatra Pothos longipes Schott - Queensland, New South Wales Pothos longivaginatus Alderw. - Borneo Pothos luzonensis (C.Presl) Schott - Luzon, Samar Pothos macrocephalus Scort. ex Hook.f. - Nicobar Islands, Thailand, Peninsular Malaysia, Sumatra Pothos mirabilis Merr. - Sabah, Kalimantan Timur Pothos motleyanus Schott - Kalimantan Pothos oliganthus P.C.Boyce & A.Hay - Sarawak Pothos ovatifolius Engl. - Peninsular Malaysia, Borneo, Sumatra, Philippines Pothos oxyphyllus Miq. - Borneo, Sumatra, Java Pothos papuanus Becc. ex Engl. - New Guinea, Solomon Islands Pothos parvispadix Nicolson - Sri Lanka Pothos philippinensis Engl. - Philippines Pothos pilulifer Buchet ex P.C.Boyce - Yunnan, Guangxi, Vietnam Pothos polystachyus Engl. & K.Krause - Papua New Guinea Pothos remotiflorus Hook. - Sri Lanka Pothos repens (Lour.) Druce - Guangdong, Guangxi, Hainan, Yunnan, Laos, Vietnam Pothos salicifolius Ridl. ex Burkill & Holttum Pothos scandens L. - Indian subcontinent, Indo-China, Malesia Pothos tener (Roxb.) Wall. - Maluku, Sulawesi, New Guinea, Solomon Islands, Bismarck Archipelago, Vanuatu Pothos thomsonianus Schott - southern India Pothos touranensis Gagnep. - Vietnam Pothos versteegii Engl. - New Guinea Pothos volans P.C.Boyce & A.Hay - Brunei, Sarawak Pothos zippelii Schott - Maluku, New Guinea, Solomon Islands, Bismarck Archipelago Passage 5: Chiranthodendron Chiranthodendron is a flowering plant genus in the family Malvaceae. It comprises a single species of tree, Chiranthodendron pentadactylon. Names The tree is called the devil's, monkey's or Mexican hand tree or the hand-flower in English, the árbol de las manitas (tree of little hands) in Spanish, and mācpalxōchitl (palm flower) in Nahuatl, all on account of its distinctive red flowers, which resemble open human hands. The scientific name means "five-fingered hand-flower tree". Description This species is native to Guatemala and southern Mexico. On the wet slopes of these areas, trees may reach 10.5–27.5 m (34–90 ft) in height. The unusual appearance of the 'hands' has stimulated cultivation in gardens around the world, primarily in North America where it grows well near its native range. 
The leaves are large and shallowly lobed, with a brown indumentum on the underside. The distinctive flowers appear in late spring and early summer; the five stamens are long, curved upward, and bright red, giving the distinct impression of a clawed hand. Its fruit is a 7.5–10 cm (3.0–3.9 in) long oblong, five-lobed capsule which contains black seeds.It was originally described from a single cultivated specimen grown in Toluca in the Toluca Valley, well outside the native range. The Aztecs revered the tree. Intergeneric hybrid It is closely related to Fremontodendron, sufficiently to produce an intergeneric hybrid ×Chiranthofremontia lenzii Henrickson, which has yellow flowers and a reduced form of the claw. Uses The Aztecs and others have used solutions containing the tree's flowers as a remedy for lower abdominal pain and for heart problems. Such solutions also reduce edema and serum cholesterol levels and, because they contain the glycosides quercetin and luteolin, act as diuretics. Passage 6: Zeltnera Zeltnera is a genus of flowering plants in the gentian family. It was erected in 2004 when the genus Centaurium (the centauries) was split. Genetic analysis revealed that Centaurium was polyphyletic, made up of plants that could be grouped into four clades. Each became a genus. Centaurium remained, but it is now limited to the Eurasian species. The Mexican species now belong to genus Gyrandra, and the Mediterranean and Australian plants are in genus Schenkia. The new name Zeltnera was given to this genus, which contains most of the North American centauries. There are about 25 species.Plants of this genus are annual, biennial, or short-lived perennial herbs. They are taprooted or have fibrous root systems. They produce one or more branching stems which are often ridged and sometimes winged. The leaves are gathered around the lower stem or arranged along the length of the stem. They vary in shape, from linear to lance-shaped to oval, and are green or yellowish. The inflorescence is variable in arrangement. The flower has a tubular throat that opens into a flat corolla with four or five lobes. It may be any shade of pink or white, and the throat is usually paler, to white or yellowish, or occasionally patterned with green. The fruit is a small capsule containing up to 700 minute seeds. Zeltnera and Centaurium species differ mostly in the morphology of the style and stigma, as well as the shape of the fruit capsule.Zeltnera can be subdivided into three groups, a division which is supported by DNA evidence but is most obvious in terms of geography. They are casually named the "Californian group", the "Texan group", and the "Mexican group". The first group is distributed from British Columbia south through the West Coast of the United States and into Baja California. The "Texan" plants occur from Arizona to Oklahoma in the US and throughout northern Mexico. The "Mexican group" occurs in Mexico, Central America, and parts of South America. 
The range may extend north into Arizona.Genus Zeltnera was named for the Swiss botanists Louis and Nicole Zeltner, who have researched Centaurium and other gentians.Species include: Zeltnera abramsii Zeltnera arizonica - Arizona centaury Zeltnera beyrichii - quinineweed Zeltnera breviflora Zeltnera calycosa - Arizona centaury, shortflower centaury, rosita, Buckley centaury Zeltnera davyi - Davy's centaury Zeltnera exaltata - desert centaury Zeltnera gentryi Zeltnera glandulifera - sticky centaury Zeltnera madrensis Zeltnera martinii Zeltnera maryanna - gypsum centaury Zeltnera muehlenbergii - Muhlenberg's centaury Zeltnera multicaulis - manystem centaury Zeltnera namophila - springloving centaury Zeltnera nesomii Zeltnera nevadensis Zeltnera nudicaulis - Santa Catalina Mountain centaury Zeltnera pusilla Zeltnera quitensis - Britton's centaury Zeltnera setacea Zeltnera stricta Zeltnera texensis - Lady Bird's centaury Zeltnera trichantha - alkali centaury Zeltnera venusta - charming centaury, canchalagua Zeltnera wigginsii Passage 7: Psychotria Psychotria is a genus of flowering plants in the family Rubiaceae. It contains 1,582 species and is therefore one of the largest genera of flowering plants. The genus has a pantropical distribution and members of the genus are small understorey trees in tropical forests. Some species are endangered or facing extinction due to deforestation, especially species of central Africa and the Pacific. Many species, including Psychotria viridis, produce the psychedelic chemical dimethyltryptamine (DMT). Selected species Formerly placed here Psychotria elata = Palicourea elata Psychotria poeppigiana = Palicourea tomentosa Image gallery See also List of the largest genera of flowering plants Passage 8: Tibouchina heteromalla Pleroma heteromallum, synonyms including Tibouchina grandifolia and Tibouchina heteromalla, known by the common name silverleafed princess flower in English, is a species of evergreen flowering plant in the family Melastomataceae. It is native to French Guiana, Bolivia and Brazil. Description Pleroma heteromallum reaches an average height of 4–6 feet (1.2–1.8 m), with a maximum of about 8–10 feet (2.4–3.0 m) in its native habitat. The branching stem is woody and the large, silvery green leaves are simple, ovate, velvety in texture, and oppositely arranged. The inflorescence is a panicle of several purple flowers with five petals. The plant has 4–6 inches (10–15 cm) long leaves, with prominent veins that are puffed up in the middle and old leaves will often turn an orange color just prior to dropping off. Cultivation The plant is cultivated as an ornamental for its showy foliage and purple flowers. It is sensitive to cold but can tolerate a light frost. Gallery Passage 9: Aureusvirus Aureusvirus is a genus of viruses, in the family Tombusviridae. Plants serve as natural hosts. There are six species in this genus. Taxonomy The genus contains the following species: Cucumber leaf spot virus Elderberry aureusvirus 1 Johnsongrass chlorotic stripe mosaic virus Maize white line mosaic virus Pothos latent virus Yam spherical virus Structure Viruses in Aureusvirus are non-enveloped, with icosahedral and Spherical geometries, and T=3 symmetry. The diameter is around 30 nm. Genomes are linear, around 4.4kb in length. Life cycle Viral replication is cytoplasmic, and is lysogenic. Entry into the host cell is achieved by penetration into the host cell. Replication follows the positive stranded RNA virus replication model. 
Positive stranded RNA virus transcription, using the premature termination model of subgenomic RNA transcription, is the method of transcription. Translation takes place by leaky scanning and suppression of termination. The virus exits the host cell by tubule-guided viral movement. Plants serve as the natural host. Transmission routes are mechanical, seed-borne, and contact. Passage 10: Pothoideae Pothoideae is a subfamily of flowering plants in the family Araceae. The species in the subfamily are true aroids. Tribes and genera The subfamily consists of two tribes: Anthurieae, containing Anthurium Schott, and Pothoeae, containing Pothos L., Pedicellarum M.Hotta (monotypic) and Pothoidium Schott (monotypic).
Are Pothos and Tibouchina Aubl. both flowering plant genera?
yes
3,135
hotpotqa
4k
Passage 1: 1961 Night Series Cup The 1961 VFL Night Premiership Cup was the Victorian Football League end-of-season cup competition played in August and September of the 1961 VFL Premiership Season. This was the sixth season of the VFL Night Series. Run as a knock-out tournament, it was contested by the eight VFL teams that failed to make the 1961 VFL finals series. Games were played at the Lake Oval, Albert Park, then the home ground of South Melbourne, as it was the only ground equipped to host night games. Geelong won its first night series cup, defeating North Melbourne in the final by 12 points. Games Round 1 Semifinals Final See also List of Australian Football League night premiers 1961 VFL season External links 1961 VFL Night Premiership - detailed review including quarter-by-quarter scores, best players and goalkickers for each match Passage 2: 1971 Heinz Cup The 1971 VFL H.J. Heinz Night Premiership was the Victorian Football League end-of-season cup competition played in September of the 1971 VFL Premiership Season. Run as a knock-out tournament, it was contested by the eight VFL teams that failed to make the 1971 VFL finals series. Games were played at the Lake Oval, Albert Park, then the home ground of South Melbourne, as it was the only ground equipped to host night games. It was the 16th and last VFL Night Series competition, with the series disbanded the following year due to waning interest and the introduction of the final five in the premiership competition. Melbourne won its first night series cup, defeating Fitzroy in the final by 16 points. Games Round 1 Semi-finals Final See also List of Australian Football League night premiers 1971 VFL season Passage 3: 1965 Golden Fleece Cup The 1965 VFL Golden Fleece Night Premiership was the Victorian Football League end-of-season cup competition played in September of the 1965 VFL Premiership Season. Run as a knock-out tournament, it was contested by the eight VFL teams that failed to make the 1965 VFL finals series. It was the tenth VFL Night Series competition. Games were played at the Lake Oval, Albert Park, then the home ground of South Melbourne, as it was the only ground equipped to host night games. This was the first time the Night Series cup had a naming-rights sponsor in Golden Fleece petroleum products. North Melbourne won its first night series cup, defeating Carlton in the final by 40 points. Games Round 1 Semi-finals Final See also List of Australian Football League night premiers 1965 VFL season External links 1965 VFL Night Premiership - detailed review including quarter-by-quarter scores, best players and goalkickers for each match Passage 4: 1956 Night Series Cup The 1956 VFL Night Premiership Cup was the Victorian Football League end-of-season cup competition played between the eight teams that didn't make the finals of that season. It was held from 23 August to 17 September 1956, with games being played at the Lake Oval, home ground of South Melbourne, as it was the only ground equipped to host night games. Eight teams played in the first competition, with the final seeing an attendance of 32,450 as South Melbourne defeated Carlton by 6 points, 13.16 (94) to 13.10 (88). Games Quarterfinals The opening match of the 1956 edition saw South Melbourne record the first victory of the new competition by 20 points over St Kilda.
But the match was remembered for the brawl that happened in the third quarter, which featured all of the players, umpires and some spectators in scenes not seen since the 1945 VFL Grand Final between Carlton and South Melbourne. The following match, played five days later, saw a surprise victory by North Melbourne, who defeated Essendon by two points after leading at every change. Richmond and Carlton also recorded narrow wins over Hawthorn and Fitzroy in the remaining two quarter-finals, which were played on 30 August and 4 September. Semifinals The semi-finals began on 6 September with South Melbourne taking on North Melbourne. In another tight tussle between the teams, it was not until late in the final quarter, with the aid of the wind, that South Melbourne took the win, a goal from Monks sealing the five-point victory. In the second semi-final, an attendance of 25,500 saw Carlton win the match by 12 points in what The Age called fast, rugged and exciting football, with Carlton leading at each break to record the victory. Final The final of the 1956 edition was played on 17 September 1956 between South Melbourne and Carlton. With an attendance of 32,450 watching the match, South Melbourne's win was set up in the second quarter, when they outscored Carlton 4.2 to 0.3 to open up a thirty-one-point lead. Carlton tried to come back into the match throughout the final quarter and had a chance to take the lead, but a dropped pass from Hands gave South Melbourne the win by six points, making them the first winners of the Night Series Cup. See also List of Australian Football League night premiers 1956 VFL season Passage 5: 1958 Night Series Cup The 1958 VFL Night Premiership Cup was the Victorian Football League end of season cup competition played in August and September of the 1958 VFL Premiership Season. This was the third season of the VFL Night Series. In the previous year's competition, each of the day finalists was duly defeated upon entry, and their addition to the competition resulted in a drawn-out and complicated fixture of matches. The VFL thus elected to return to the original format for this year's competition as previously used in the 1956 Night Series Cup. Run as a knock-out tournament, it was contested by the eight VFL teams that failed to make the 1958 VFL finals series. Games were played at the Lake Oval, Albert Park, then the home ground of South Melbourne, as it was the only ground equipped to host night games. St Kilda went on to win the night series cup, defeating Carlton in the final by 8 points. Games Round 1 Semifinals Final See also List of Australian Football League night premiers 1958 VFL season External links 1958 VFL Night Premiership - detailed review including quarter-by-quarter scores, best players and goalkickers for each match Passage 6: 1960 Night Series Cup The 1960 VFL Night Premiership Cup was the Victorian Football League end of season cup competition played in September of the 1960 VFL Premiership Season. This was the fifth season of the VFL Night Series. Run as a knock-out tournament, it was contested by the eight VFL teams that failed to make the 1960 VFL finals series. Games were played at the Lake Oval, Albert Park, then the home ground of South Melbourne, as it was the only ground equipped to host night games. South Melbourne went on to win the night series cup for the third time, defeating Hawthorn in the final by 13 points.
Games Round 1 Semifinals Final See also List of Australian Football League night premiers 1960 VFL season External links 1960 VFL Night Premiership - detailed review including quarter-by-quarter scores, best players and goalkickers for each match Passage 7: Albert Park, Victoria Albert Park is an inner suburb of Melbourne, Victoria, Australia, 4 km (2.5 mi) south of Melbourne's Central Business District. The suburb is named after Albert Park, a large lakeside urban park located within the City of Port Phillip local government area. Albert Park recorded a population of 6,044 at the 2021 census. The suburb of Albert Park extends from the St Vincent Gardens to Beaconsfield Parade and Mills Street. It was settled residentially as an extension of Emerald Hill (South Melbourne). It is characterised by wide streets, heritage buildings, terraced houses, open air cafes, parks and significant stands of mature exotic trees, including Canary Island Date Palm and London Planes. The Albert Park Circuit has been home to the Australian Grand Prix since 1996, with the exception of 2020–2021 due to the COVID-19 lockdowns. History Indigenous Australians first inhabited the area that is now Albert Park around 40,000 years ago. The area was a series of swamps and lagoons. The main park after which the suburb was named was declared a public park and named in 1864 to honour Queen Victoria's consort, Prince Albert. Albert Park was used as a garbage dump, a military camp and for recreation before the artificial lake was built. In 1854 a land-subdivision survey was done from Park Street, South Melbourne, to the northern edge of the parkland (Albert Road). St Vincent Gardens were laid out, and the surrounding streets became home to the city's most successful citizens. Street names commemorated Trafalgar and Crimean War personalities. Heritage Victoria notes that Albert Park's St Vincent Gardens "is historically important as the premier 'square' development in Victoria based on similar models in London. It is significant as the largest development of its type in Victoria and for its unusual development as gardens rather than the more usual small park" and "was first laid out in 1854 or 55, probably by Andrew Clarke, the Surveyor-General of Victoria. The current layout is the work of Clement Hodgkinson, the noted surveyor, engineer and topographer, who adapted the design in 1857 to allow for its intersection by the St Kilda railway line. The precinct, which in its original configuration extended from Park Street in the north to Bridport Street in the south and from Howe Crescent in the east to Nelson Road and Cardigan Street in the west, was designed to emulate similar 'square' developments in London, although on a grander scale. The main streets were named after British naval heroes. The development of the special character of St Vincent Place has been characterised, since the first land sales in the 1860s, by a variety of housing stock, which has included quality row and detached houses and by the gardens which, although they have been continuously developed, remain faithful to the initial landscape concept." St Vincent's is a garden of significant mature tree specimens. It is registered with the National Trust and is locally significant for the social focus the gardens provide to the neighbourhood. Activities in the park range from relaxing walks and siestas to organised sports competitions. The Albert Park Lawn Bowls Club was established in 1873 and the Tennis Club in 1883, on the site of an earlier croquet ground.
Geography Albert Park features part of the massive Albert Park and Lake (formerly South Park in the 19th century until it was also renamed after Prince Albert) and is located nearby. It is a significant state park managed by Parks Victoria. It is also known as the site of the Albert Park Circuit. Commercial centres Commercial centres include Bridport Street, with its cafes and shops and Victoria Avenue, known for its cafes, delicatessens and boutiques. Beach areas Albert Park has a long beach frontage, with several distinctive features, including many grand buildings (such as the Victoria Hotel, a grand hotel and former coffee palace, now café bar, built in 1887) and Victorian terrace homes; Kerferd Kiosk, an iconic Edwardian bathing pavilion and Kerferd Pier, which terminates Kerferd Road and is a jetty onto Port Phillip, used for fishing by many and sharks have occasionally been found around it. Albert Park and Lake The lake is popular with strollers, runners and cyclists. Dozens of small yachts sail around the lake on sunny days. Only the north eastern part of the park and lake is actually in the suburb, the rest is in the neighbouring suburbs of South Melbourne, Melbourne, Middle Park and St Kilda. Demographics At the 2016 census, Albert Park had a population of 6,215. 66.2% of people were born in Australia. The next most common countries of birth were England 5.4%, Greece 4.0% and New Zealand 2.5%. 74.2% of people only spoke English at home. Other languages spoken at home included Greek at 8.0%. The most common responses for religion were No Religion 39.3% and Catholic 18.4%. Housing Albert Park is composed mainly of Victorian terrace and semi-detached housing. Many residential areas are in heritage overlays to protect their character. Boyd Street, a leafy backstreet near Middle Park, is a fine example of this. Transport Beaconsfield Parade is the main beachside thoroughfare, between St Kilda and Port Melbourne, which runs along the Port Phillip foreshore. Richardson Street and Canterbury Road follows a similar inland route south to St Kilda. The main road arterial is Kerferd Road, a wide boulevard lined with elm trees and a central reservation, which connects from South Melbourne's Albert Road. Pickles Street, Victoria Avenue and Mills Street are the main roads running west and east toward South Melbourne. Several tram routes service Albert Park; Route 1 along Victoria Avenue, Route 12 along Mills Street and Route 96 on a reservation parallel to Canterbury Road. Until 1987, Albert Park was serviced by the St Kilda railway line, with Albert Park railway station being located at Bridport Street. The line has since been converted to serve trams, and forms a large part of the Route 96 tram line. CDC Melbourne's Route 606 runs through the suburb. There are segregated cycle facilities along the beach and Canterbury Roads, with marked bicycle lanes elsewhere. Sport The suburb has been home to the Formula One Australian Grand Prix since 1996. The Albert Park Circuit runs on public roads. The choice of Albert Park as a Grand Prix venue was controversial, with protests by the Save Albert Park group. In preparing the Reserve for the race existing trees were cut down and replaced during landscaping, roads were upgraded, and facilities were replaced. Both major political parties support the event. The Melbourne Supercars Championship is also held on the same circuit. 
Albert Park is the home of soccer club South Melbourne FC who play out of Lakeside Stadium; aptly named due to its positioning next to Albert Park Lake. Lakeside Stadium (known then as Bob Jane Stadium) was redeveloped in 2010 to include an international standard athletics track, as well as new grandstands and administrative facilities, and is also the home of the Victorian Institute of Sport. The stadium was built on the site of the old Lake Oval, which was an historic Australian rules football venue for the South Melbourne Football Club. The Melbourne Sports & Aquatic Centre (MSAC) is a large swimming centre, which hosted squash, swimming, diving events and table tennis during the 2006 Commonwealth Games. The MSAC is also the home of the Melbourne Tigers that play in the South East Australian Basketball League. In December 2006 polo returned to Albert Park Reserve after an absence of 100 years. Albert Park is home to a parkrun event. The event at Albert Park is held at 8am every Saturday and starts in the Coot Picnic area, opposite the MSAC. Notable residents Hilda and Laurel Armstrong – 'The Vegemite Girls', sisters who coined the name of the iconic Australian food spread in 1923 Mae Busch (1891–1946) – actress, co-star in the films of famous Hollywood comedy duo Laurel and Hardy Roy Cazaly (1893–1963) – Australian rules football legend [birthplace] Noel Jack Counihan (1913–1986) – artist and revolutionary, made social realist art in response to the political and social issues of his times John Danks (1828–1902) – businessman, manufacturer, councillor, benefactor; Danks Street named after him Private Edward "Eddie" Leonski [US Army] (1917–1942) – infamous serial killer; during World War II was stationed in Melbourne and murdered three women. Was hanged for the crimes on 9 November 1942. His first victim, Ivy McLeod, was found beaten and strangled in a doorway in Albert Park, killed by Leonski after he drank whisky all morning and afternoon at the Bleak House Hotel (aka Beach House Hotel) Walter Lindrum (1898–1960) – world-famous billiards player, regarded as the greatest ever to play the game Likely Herman "Like" McBrien (1892–1956) – leading Australian Rules football administrator and politician Ernest McIntyre (1921–2003) – Australian rules footballer [birthplace] Allan McLean (1840–1911) – pastoralist, station agent, politician; 19th Premier of Victoria in 1899; elected to the first Commonwealth Parliament in 1901 Albert Monk (1900–1975) – union and labour leader; during World War Two was concurrent president of the ACTU & Trades Hall Council & ALP Victorian branch; seminally influenced the growth of the ACTU as Australia's peak trade union organization. His house was in Kerferd Road King O'Malley (1858–1953) – politician, influential in the establishment of the Commonwealth Bank and the selection of Canberra as the national capital Alex Lahey, singer-songwriter and multi-instrumentalist who was born and raised in Albert Park See also City of South Melbourne – Albert Park was previously within this former local government area. Passage 8: 1957 Night Series Cup The 1957 VFL Night Premiership Cup was the Victorian Football League end of season cup competition played in August, September and October of the 1957 VFL Premiership Season. This was the second edition that the VFL Night Series had existed with the competition expanding to feature all twelve teams. 
The games were being played at the Lake Oval, Albert Park, then the home ground of South Melbourne, as it was the only ground equipped to host night games. In the final, South Melbourne took out their second night series cup, defeating Geelong by 51 points (15.13.103 to 8.4.52). This would later be the only edition until 1977 to feature all twelve teams at the time. Games Round 1 Round 2 Semifinals Final See also List of Australian Football League night premiers 1957 VFL season Passage 9: 1966 Golden Fleece Cup The 1966 VFL Golden Fleece Night Premiership was the Victorian Football League end of season cup competition played in September of the 1966 VFL Premiership Season. Run as a knock-out tournament, it was contested by the eight VFL teams that failed to make the 1966 VFL finals series. It was the eleventh VFL Night Series competition. Games were played at the Lake Oval, Albert Park, then the home ground of South Melbourne, as it was the only ground equipped to host night games. North Melbourne won its second night series cup in a row defeating Hawthorn in the final by 53 points. Three rule changes, all of which were eventually permanently adopted in the VFL, were trialled during this series:, A free kick was awarded against a player if the ball was kicked out of bounds on the full. Adopted in 1969. A 50yd square was drawn in the centre of the ground, and no more than four players from each team were allowed within the square during a centre bounce; this was to reduce congestion at centre bounces. Adopted in a modified form in 1973. The number of boundary umpires was increased from two to four. Adopted in 2008. Games Round 1 Semi-finals Final See also List of Australian Football League night premiers 1966 VFL season Passage 10: 1967 Golden Fleece Cup The 1967 VFL Golden Fleece Night Premiership was the Victorian Football League end of season cup competition played in August and September of the 1967 VFL Premiership Season. Run as a knock-out tournament, it was contested by the eight VFL teams that failed to make the 1967 VFL finals series. It was the twelfth VFL Night Series competition. Games were played at the Lake Oval, Albert Park, then the home ground of South Melbourne, as it was the only ground equipped to host night games. Footscray won its third night series cup defeating South Melbourne in the final by 45 points. Games Round 1 Semi-finals Final See also List of Australian Football League night premiers 1967 VFL season External links 1967 VFL Night Premiership - detailed review including quarter-by-quarter scores, best players and goalkickers for each match
The 1956 Night Series Cup games were played in which inner suburb of Melbourne, located 4 km south of Melbourne's central business district?
Albert Park
3,326
hotpotqa
4k
Passage 1: Avoca Beach, New South Wales Avoca Beach is a coastal suburb of the Central Coast region of New South Wales, Australia, about 95 kilometres (59 mi) north of Sydney. Avoca Beach is primarily a residential suburb, Avoca Beach is also a popular tourist destination. Avoca Beach is known for its surfing and state (regional) surf competitions. Avoca Beach village has a variety of restaurants and cafes as well as a post office, newsagent, pharmacy and mini-mart. Avoca Beach also has a historic cinema, a hotel, bowling club, motel and caravan park. It is located within the Central Coast Council local government area. This suburb is unrelated to the NSW Southern Highlands suburb of Avoca, New South Wales, except in name only. Geography Avoca Beach is located on the Tasman Sea 17 kilometres (11 mi) east-southeast of the Gosford central business district, and about halfway between Newcastle and Sydney, being about 95 kilometres (59 mi) from each. It is bordered to the north by the Bulbararing Lagoon, to the west by Saltwater Creek and to the east by the ocean. History The area was originally inhabited by the Darkinjung & Awabakal Aboriginal people. "Avoca" is an Irish name meaning "great estuary" or "where the river meets the sea", and is also the name of a town in County Wicklow, Ireland.On 4 January 1830, 640 acres (259 ha) of land in the area were promised to Irish army officer John Moore. However, the official deeds were not issued until 30 September 1839, due to the difficulty in surveying the land. He built a house opposite Bulbararing Lake (now known as Avoca Lake) and planted vines, cereals and fruit trees. He left the area in 1857 for the Victorian goldfields. In the late 19th century, Tom Davis leased the area in order to exploit local timber, which was transported by tram to a mill at Terrigal via what is now Tramway Road in North Avoca. In the 1950s, commercial buildings began to be built and populated, including bakery, service station, butchery, mini mart, caravan park and the Avoca Beach Picture Theatre. Residential development in Avoca Beach began during the 20th century, and the area subsequently became a popular holiday retreat with wealthy residents of Sydney's North Shore.In February 2010, following the proposal to scuttle the frigate HMAS Adelaide off the beach as a dive wreck in late March, a resident action group was formed to protest against this. The group claims that the wreck will negatively affect surf conditions, tides, and littoral sand drift, and is concerned over the thoroughness of inspection and removal of dangerous materials and chemicals from the former warship, with the chance that marine life and people could be poisoned. An appeal to the Administrative Appeals Tribunal three days before the planned scuttling date of 27 March led to a postponement of the plan until the residents' claims were investigated. The decision from the Tribunal, in favor of the project going ahead after further cleanup work, was handed down on 15 September 2010, and despite further attempts to delay, Adelaide was scuttled on 13 April 2011. Demographics At the ABS 2016 census, Avoca Beach had a population of 4,584 people. 76.4% of people were born in Australia. The next most common country of birth was England at 8.9%. 90.9% of people only spoke English at home. 
The most common responses for religion in Avoca Beach were No Religion 34.0%, Catholic 22.5% and Anglican 19.9%.Avoca Beach residents had a median age of 41, compared with the median of 42 for the Central Coast local government area. Median individual incomes in Avoca Beach were above average for the region — $764 per week compared with $600 per week. The 2016 Census reported 1,527 occupied private dwellings, of which 83.3% were separate houses, and the median monthly housing loan repayment of $2,167 was well above the regional average of $1,750. In 2020, Avoca Beach's median house price was $1,150,000 versus $940,000 for the Central Coast region. Education Avoca Beach has a state primary school, which first opened in 1935. The suburb is within Kincumber High School's catchment area. Politics At federal level, Avoca Beach is within the Division of Robertson. In the Federal election of May 2022 it was won by Gordon Reid of the Australian Labor Party, previously held for nine years by Lucy Wicks of the Liberal Party of Australia. In the New South Wales Legislative Assembly, Avoca Beach is within the electorate of Terrigal, currently held by Adam Crouch of the Liberal Party. Polling place statistics are presented below from the Avoca Beach polling place in the elections leading up to and including the 2019 federal and state elections as indicated. Gallery Passage 2: Hamlyn Terrace, New South Wales Hamlyn Terrace is a suburb of the Central Coast region of New South Wales, Australia. It is part of the Central Coast Council local government area. The suburb was formerly part of Warnervale and is part of the Warnervale development precinct. The suburb is split between two governmental wards in the Central Coast Council governmental area, the northern part is in the Budgiewoi ward and the rest is in the Wyong ward. Passage 3: Woy Woy Bay, New South Wales Woy Woy Bay is a suburb located in the Central Coast region of New South Wales, Australia, as part of the Central Coast Council local government area. Most of the suburb's area belongs to the Brisbane Water National Park, although a small community on Woy Woy Bay (part of Brisbane Water) containing a community hall, public reserve and wharf is also located within the suburb. Woy Woy Bay is commonly used by boaters on the weekend because of the open expanses of the bay. The main thoroughfare is Taylor Street. Notable residents John Della Bosca (1956–), former politician: New South Wales Legislative Council 1999–2010. Belinda Neal (1963–), former politician: Australian House of Representatives (2007–2010); New South Wales Legislative Council (1994–1998) and Gosford City Council Alderman (1992–1994). Passage 4: Phegans Bay, New South Wales Phegans Bay () is a suburb within the local government area of the Central Coast Council on the Central Coast of New South Wales, Australia. Phegans Bay is located 6 kilometres (4 mi) west of Woy Woy between Brisbane Water National Park and Woy Woy Inlet. Passage 5: Umina Beach, New South Wales Umina Beach ( you-MY-nə) is a suburb within the Central Coast Council local government area on the Central Coast of New South Wales, Australia. Umina Beach is situated 85 kilometres (53 mi) north of Sydney and 111 kilometres (69 mi) south of Newcastle. Umina Beach is locally known on the Central Coast as being on 'The Peninsula' (or 'Woy Woy Peninsula'). A natural peninsula that includes the towns of Umina Beach, Woy Woy, Blackwall, Booker Bay and Ettalong Beach. 
The main street, West Street, is the retail centre of The Peninsula with key national brands represented through Coles, Woolworths, Aldi and Bunnings. Moving from north to south, Umina Beach begins where Woy Woy and Blackwall end: at Veron Road and Gallipoli Avenue. Umina Beach is the most populated suburb on the Central Coast. Geography Umina Beach has one unbroken sand shoreline that has been divided in name only: Umina Beach (south western section) and Ocean Beach (north eastern section). Both beaches have their own Surf Life Saving Club (refer to Sports Clubs section). The only other type of shoreline is located at Umina Point (Mt Ettalong), a Hawkesbury Sandstone headland that adjoins the south western end of Umina Beach. Umina Beach is geographically located on the north side of Broken Bay at the mouth of the Hawkesbury River. The formation of Umina Beach and 'The Peninsula' is due to sand deposition that has been influenced by (and not limited to) climatic conditions, soil-binding flora, Hawkesbury Sandstone formations (e.g. Box Head, Barrenjoey and Umina Point), wave patterns and tidal amplitude from the Tasman Sea, Hawkesbury River and Brisbane Water. History The word "Umina" was derived from the Australian Aboriginal word meaning "place of sleep". The Woy Woy and Umina district was home to the Guringai Australian Aboriginal tribe. This tribe stretched from the north side of Port Jackson, north through Pittwater, Broken Bay and Brisbane Water, to the southern end of Lake Macquarie. European entry to the region was first recorded in March 1788 when Governor Arthur Phillip landed with a party at Ettalong Beach. In June 1789, a more thorough investigation of Brisbane Water was conducted. A rest stop was made at Ettalong Beach before the group passed through 'The Rip' (a dangerous passage leading into Brisbane Water). On return, the party camped at Ettalong Beach before sailing to Dangar Island in the Hawkesbury River. The first land subdivision occurred in 1914, which led to the current commercial and residential centre. Umina Beach celebrated its 100th anniversary in 2014. Schools Umina Beach is served by two public schools, Umina Public School (primary school) and Brisbane Water Secondary College (high school). Opened on 3 February 1956, Umina Public School has a population of approximately 800 students and 50 staff. It currently has 29 classes from kindergarten to year 6. Business Umina Beach town centre has been represented by the Peninsula Chamber of Commerce since the late 1980s. It is affiliated with the NSW Business Chamber. The town centre is serviced by Woolworths, Coles, Bunnings Hardware, Aldi Supermarkets and McDonald's, along with a number of local shops, takeaway restaurants and cafes, including the multi-award-winning Bremen Patisserie on West Street. The town is also serviced by a number of medical and specialist practices, the Central Coast Council Library, and two service stations. Transport Links Umina Beach is well serviced by regular bus services (Busways) with connections to Woy Woy Rail Station and Gosford. The town centre is easily accessed with an efficient grid system of connecting roads with primary access from Ocean Beach Road, West Street and Barrenjoey Road. Substantial car parking facilities adjacent to the town centre contribute to its success as a retail hub. Media Community Papers: Peninsula Community Access News, fortnightly free distribution within the 2256 and 2257 postcode areas.
Central Coast Express Advocate, published by News Limited's Cumberland Courier Newspapers, is distributed free every Wednesday & Friday. Central Coast Sun Weekly was last published on 30 April 2009. Radio Broadcasting: Umina Beach is locally serviced by the national public broadcaster, the Australian Broadcasting Corporation, via ABC Local Radio 2BL/T 92.5 FM. Commercial licences also cover Umina Beach, and analogue FM and AM signals can be received from Sydney and Newcastle. As a result, Umina Beach is located within the most saturated radio market in Australia. As of August 2010, there was no launch date known for Digital Radio services for the Central Coast. Sports fields Umina Oval, located at the southern end of Melbourne Avenue, is the home ground for four pitch team sports: Soccer, Rugby league, Cricket and Tennis. McEvoy Oval, located at the western end of McEvoy Avenue, is used for Track and Field Athletics, Touch Football and Cricket. Sports clubs Club Umina RSL Bowls Club is located in Melbourne Avenue, Umina Beach, within the Club Umina complex. Membership is available to ex- and existing Servicemen of the Australian Defence Force and its allies who are financial members of both Club Umina and either Full or Associate Members of Merrylands RSL Club Sub-Branch. Ocean Beach Malibu Club. Ocean Beach Surf Life Saving Club is located at the southern end of Trafalgar Avenue, Umina Beach. Ocean Beach Surfers Association. Umina Beach "Bunnies" Rugby League Football Club is based at Umina Oval and plays on Col Gooley Field. The club is affiliated with the Central Coast Division of Country Rugby League (refer also to Country Rugby League) within the New South Wales Rugby League. The Bunnies team sheet has included Australian Kangaroos experience: Mark Geyer in 1993 and Cliff Lyons as Captain-Coach in 2001. Umina Boardriders. Umina "Bunnies" Junior Rugby League Football Club is based at Umina Oval and plays on the Col Gooley Field. The club is affiliated with the Central Coast Division Junior Rugby League, which is part of the Central Coast Division of Country Rugby League (refer also to Country Rugby League) within the New South Wales Rugby League. Umina Beach "Bunnies" Netball Club is emotionally linked to Umina Junior Rugby League Football Club but does not have a physical presence in Umina Beach. The club conducts committee meetings at Woy Woy Leagues Club, Blackwall Road, Woy Woy, has a postal address in Ettalong Beach and plays in the Woy Woy Peninsula Netball Association's competition, located in Lagoon Street, Ettalong Beach. Umina Beach Surf Life Saving Club is located at the southern end of Ocean Beach Road, Umina Beach. Umina "Devils" Cricket Club is based at Umina Oval and has two cricket fields. The main cricket field, Field 1, is located on the eastern side of the oval, on Col Gooley Field, and has multiple grass pitches. Field 2 is located on the western side of the oval and has one artificial cricket pitch. The club caters for both senior and junior players from 5 years of age. The Umina Cup is an annual fathers-versus-sons soccer game at the Umina Beach holiday park. The game has been played four times and has been won by the sons all four times. The best-known ground is the sand field, where three of the four games have been played. Umina United "Eagles" Soccer Club is based at Umina Oval. The club caters for both senior and junior players from 5 years of age. The club is affiliated with Central Coast Football. Woy Woy Peninsula Little Athletics Centre is based at McEvoy Oval.
The club is affiliated with Central Coast Little Athletics. The club caters for junior athletes from 6 years of age. Umina Beach is known as the home of "Upball". Community Groups Umina Community Group – The Umina Community Group is committed to improving Umina Beach by providing a unified voice to lobby the Central Coast Council and government bodies. Umina Beach is a growing community with many visitors, which means improvements to local infrastructure and services in Umina Beach and Ocean Beach are essential to allow Umina Beach to thrive as a modern coastal community. Umina Beach Men's Shed – The Men's Shed is a place where members of the community with all different life experiences come together at their own pace, share skills, swap ideas, solve problems and get involved in projects for the benefit of the community. Notable residents Belinda Emmett (1974–2006), actress and singer, grew up in Umina Beach Mark Geyer (born 1967), rugby league player and radio host James Harrison OAM (born 1936), Anti-D Vaccine blood donor Dane Searls (1988–2011), BMX rider Eric Worrell MBE (1924–1987), zoologist and writer Passage 6: Shelly Beach, New South Wales Shelly Beach is a coastal suburb of the Central Coast region of New South Wales, Australia, located east of Tuggerah Lake and bordering the Pacific Ocean south of The Entrance. It is part of the Central Coast Council local government area. It is 66 km south of Newcastle & 93 km north of Sydney. Shelly Beach is considered one of the most popular surfing beaches on the Central Coast. Shelly Beach Golf Club (previously Tuggerah Lakes Golf Club) is an 18-hole golf course located at the eastern end of Shelly Beach Road, overlooking Shelly Beach. It was formally established in 1930, originally being located at Killarney Vale until 1954, when it moved to its present location. Within the suburb are the Shelly Beach Surf Club and the Shelly Beach Fossils soccer club. Transportation Red Bus Services operates routes through Shelly Beach (11, 12, 21, 22, 23). Bus 21 operates from The Entrance North to Gosford regularly. Notable people Nikki Garrett – golfer Banjo Paterson – poet Accommodation Shelly Beach Cabins Bluewater Resort Sun Valley Tourist Park Passage 7: Magenta, New South Wales Magenta is a coastal location of the Central Coast region of New South Wales, Australia. It is part of the Central Coast Council local government area, and contains a significant portion of the Wyrrabalong National Park. Magenta is a relatively new area to be developed for residential use, with the suburb gazetted in 1991. Previously it was the location of rutile mining and of the garbage tip for The Entrance, New South Wales. The location is traversed south–north by Wilfred Barrett Drive, linking The Entrance and Toukley, named after a Wyong Shire President. The road was designated part of the Central Coast Highway in 2006. Magenta Shores The Magenta Shores Golf Resort consists of an 18-hole golf course, private housing and a resort. Passage 8: Gorokan, New South Wales Gorokan is a suburb of the Central Coast region of New South Wales, Australia. It is part of the Central Coast Council local government area. The word "Gorokan" means "The Morning Dawn" in the language of the Awabakal (an Aboriginal tribe). There are two schools in the area, Gorokan Public School and Gorokan High School. Electricity was first brought to the area as a part of a £42,000 programme for electricity reticulation under the Brisbane Water County Council.
Located on the shores of Lake Tuggerah, Gorokan has long been a holiday destination, with the Quoy family first buying land in the area in 1923, where they built and rented a series of holiday homes. Other families such as the Gedlings (by way of Gordon and Margaret Gedling) built a holiday house on the Gorokan waterfront in 1956. Passage 9: Kulnura, New South Wales Kulnura is a suburb of the Central Coast region of New South Wales, Australia, located north of Mangrove Mountain along George Downes Drive. It is within the Central Coast Council local government area. Kulnura's name is an Aboriginal word meaning "in sight of the sea" or "up in the clouds"; it was named in 1914 by a meeting of the early pioneers Messrs Archibold, Collins, Gatley, Penn, Young, Williams and Gibson. Its population of approximately 600 people relies mostly on the town's fruit and cattle industries for income; however, many commute to the regional centres of Gosford and Wyong, and even to Sydney, for work. It is also home to the biggest catchment area on the Central Coast, Mangrove Creek Dam, which has a capacity of 190 000 ML, and has a free tourist kiosk on the site. "Mangrove Mountain Bottlers Pty Ltd" bottles still water at its plant at Kulnura, as well as for private-label mass retailers such as 7–11. Another one of Kulnura's attractions is the Paintball Place, which holds paintball skirmishes for people over 16. District Kulnura extends from the intersection of Bloodtree Road and George Downes Drive in the south to the Great Northern Road in the north, taking in the Mangrove Creek reservoir in the west, down Bumble Hill Road in the east and to Red Hill Road in the southeast. The Kulnura General Store-café, petrol station and a bottle shop are located just opposite Kulnura Hall at the intersection of Greta Road with George Downes Drive. Kulnura's One Stop convenience store is also a popular resting place for passing motorcyclists and tourists. Kulnura Public School is situated at 9 Williams Road. Festival Every year Kulnura is host to the Bloodtree Festival, which is a celebration of life in the Mangrove Mountain area. It has been running for the past five years at Kulnura Oval, adjacent to Kulnura Hall. Passage 10: Microwave Jenny Microwave Jenny is an Australian pop/folk/jazz duet that consists of Tessa Nuku on vocals and Brendon Boney on guitar and vocals. Both have Indigenous backgrounds, with Brendon having been born and raised in Wagga Wagga, and Tessa having been born in Ulladulla, New South Wales, before moving to and growing up in Umina Beach, New South Wales. Brendon was the singing voice for the character Willie in the film Bran Nue Dae. The phrase 'Microwave Jenny' comes from the 1997 film The Castle. The duo are married and have a daughter. Brendon has a solo alternative hip-hop project called The Magpie Swoop. Live performance The band plays live in multiple formats, either as an acoustic duet, trio, quartet or full band. The duo feature heavily on the Australian music festival circuit, with notable live performances including Bluesfest, Woodford Folk Festival, Peats Ridge Festival and Nannup Music Festival. They also supported Thirsty Merc on their 2011 national tour of Australia. Influences Microwave Jenny have cited James Taylor, Janis Ian, Bill Withers and Van Morrison as musical influences. Awards and nominations They were recipients of the 2009 Peter Garrett Breakthrough Grant. Brendon won an APRA Professional Development Award in 2011 for his songwriting and composing.
In 2009, Microwave Jenny was nominated for a Deadly Award for Most Promising New Artist but did not win. Discography They have released two EPs Summer 1. "Mellow" – 3:59 2. "Mr Man in the Moon" – 3:47 3. "I'll Never Learn" – 5:02 4. "Summer" – 3:35 Crazy, Crazy Things 1. "Stuck on the Moon" – 3:06 2. "Homemade Lemonade" – 3:52 3. "Lyin" – 3:18 4. Locked in the Closet" – 3:54
What is the name of the suburb within the Central Coast Council local government area where Tessa of Microwave Jenny grew up?
Umina Beach, New South Wales
3,539
hotpotqa
4k
Passage 1: Louisville Thunder Louisville Thunder was an indoor soccer club based in Louisville, Kentucky that was one of the founding clubs competing in the American Indoor Soccer Association. Peter Mahlock served as President and General Manager and Keith Tozer was the head coach. During the first season Tozer moved from just coaching to logging shifts as a player/coach. In their debut season of 1984–1985, goalkeeper Rick Schweizer won the 'Goalkeeper of the Year' award, and made it on to the All-Star team. The Louisville Thunder played its home games at the Broadbent Arena. However, in 1987 after winning the AISA league championship over the Canton Invaders, the team disbanded due to ownership problems. The team did produce several league all-stars during its existence including Rick Schweizer, Zoran Savic, Art Hughes and Chris Hellenkamp. Coaches Keith Tozer (1984–87) Individual player honors 1984–1985 Rick Schweizer – Goalkeeper of the Year 1984–1985 All-Star TEAM Rick Schweizer & Art Hughes 1985–1986 Zoran Savic – Top Points Scorer (81) 1985–1986 All-Star TEAM Zoran Savic & Art Hughes 1986–1987 All-Star TEAM Zoran Savic, Art Hughes & Chris Hellenkamp Year-by-year See also Sports in Louisville, Kentucky Passage 2: Louisville Icehawks The Louisville Icehawks were a professional ice hockey team competing in the East Coast Hockey League. The team, based in Louisville, Kentucky, played from 1990 to 1994. Their home venue was Broadbent Arena at the Kentucky Exposition Center. The mascot was called Tommy Hawk, a play on tomahawk, and resembled The San Diego Chicken, but with coloration and costume matching the team's. Tommy Hawk was "banned" from the inside portion of the arena for a period of time, due to an altercation with a visiting player who was in the penalty box. In the 1995–96 season, the team was renamed and moved to Florida to become the Jacksonville Lizard Kings. For a period of time the Louisville Icehawk's parent team/NHL Affiliate were the Pittsburgh Penguins. Trevor Buchanan was a player for the Icehawks that spent a great deal of time in the penalty box, thus spawning his own fan club. Playoffs 1990–91: Defeated Knoxville 3–0 in quarterfinals; lost to Greensboro 4–0 in semifinals. 1991–92: Defeated Toledo 4–1 in first round; received quarterfinals bye; defeated Cincinnati 3–1 in semifinals; lost to Hampton Roads 4–0 in finals. 1992–93: Did not qualify. 1993–94: Defeated Knoxville 2–1 in first round; lost to Birmingham 3–0 in quarterfinals. See also Sports in Louisville, Kentucky Passage 3: Kentucky Athletic Hall of Fame The Kentucky Athletic Hall of Fame is a sports hall of fame for the U.S. state of Kentucky established in 1963. Individuals are inducted annually at a banquet in Louisville and receive a bronze plaque inside Louisville's Freedom Hall. The Kentucky Athletic Hall of Fame other wise known as the Kentucky Sports Hall of fame, is a non-profit organization funded by the Kentucky Lottery and owned and operated by the Louisville Sports Commission. Notable Inductees Honorees have included Louisville native Muhammad Ali. A three-time world champion and six-time Golden Glove recipient, he won a gold medal in the light heavyweight division at the 1960 Summer Olympics (at age eighteen) and turned professional later that year. Also included is American football player and coach Bo McMillin (who played for Centre College in Danville, Kentucky); and basketball player and coach Pat Riley, who played in college for the Kentucky Wildcats men's basketball team. 
While at the University of Kentucky, Riley averaged a double-double over his entire career there. He is also a ten-time NBA champion, winning one ring as a player with the Los Angeles Lakers and the rest as a coach and an owner in the NBA. Coach Riley was inducted into the Hall of Fame in 2005. Bob Baffert is an American racehorse trainer who trained the 2015 Triple Crown winner American Pharoah and the 2018 Triple Crown winner Justify. Baffert's horses have won a record seven Kentucky Derbies, seven Preakness Stakes, three Belmont Stakes, and three Kentucky Oaks. He is among the more recent inductees. Another recent inductee, Dwane Casey, inducted in the 2021 class, is the head coach of the Detroit Pistons of the National Basketball Association. He is a former NCAA basketball player and coach, having played and coached at the college level for over a decade before moving on to the NBA. The 2013 class included golf professional Jerry Carroll, Donna Bender, a student-athlete and athletic director at Sacred Heart Academy, University of Louisville basketball player Pervis Ellison, Kentucky horse racer Calvin Borel, Pittsburgh Steelers football player Dwayne D. Woodruff, and tennis player Julie Ditty. Inducted in the 2015 class were tennis player Mel Purcell (he captured the 1980 NCAA doubles title with Rodney Harmon and was named an All-American), women's basketball coach Paul Sanderford, basketball player Sharon Garland, college basketball manager and King of the Bluegrass Men's Basketball Tournament founder and director Lloyd Gardner, Major League Baseball umpire Randy Marsh, track and field athlete Boyd Smith, and Lexington's Keeneland Race Course. Scott Davenport, the current men's basketball coach at Bellarmine University, was also inducted. The 2016 class included American football player Shaun Alexander, basketball player Darel Carrier, college basketball coach Scott Davenport, basketball player Kyra Elzy, high school basketball coach Philip Haywood, Kentucky Wesleyan basketball play-by-play announcer Joel Utley, and the Lakeside Swim Club. Kyra Elzy is a Kentucky native and currently holds the position of head coach of the University of Kentucky's women's basketball team. She also played basketball for the University of Tennessee and served as an assistant coach for the team after her playing career. Selection committee The 2021 Selection Committee has the following members: Jeff Bidwell, WPSD-TV Drew Deener, ESPN Radio Jody Demling, iHeart Media Mike Fields, retired Lexington Herald Leader staff writer Jason Frakes, Courier Journal Kendrick Haskins, WAVE 3 Reina Kempt, Courier Journal Zack Klemme, The Daily Independent Mark Mathis, Owensboro Messenger-Inquirer Marques Maybin, ESPN Radio Brian Milam, WKYT-TV Steve Moss, WKYT-TV Kevin Patton, The Gleaner Kent Spencer, WHAS-TV Mark Story, Lexington Herald-Leader Inductees The Hall of Fame has been honoring athletes for the past 58 years. These are some of the athletes inducted in the past 6 years. 2021 John Asher - Kentucky Derby ambassador. He is known as the voice and face of horseracing in Kentucky. Dwane Casey - American professional basketball coach who attended Union County High School in Morganfield, Kentucky, and played four years at the University of Kentucky, winning a national championship in 1977-78. He began his coaching career at Western Kentucky University before becoming the first African American assistant coach at the University of Kentucky and then moving to the NBA.
Romeo Crennel - American football coach. Before becoming a defensive coordinator, he was a star at Western Kentucky, where he was a four-year starter and a team captain as a senior in 1969. He then embarked on a coaching career that spanned six decades and included five Super Bowl rings as an assistant. Rachel Komisarz Baugh - American swimmer, Olympic gold medalist, and former world record-holder. She swam at the University of Kentucky and became a seven-time All-American swimmer and three-time SEC Champion by the end of her four years at the University. Keith Madison - Head coach of the Kentucky Wildcats baseball team from 1979 to 2003. He remains the winningest baseball coach in program history with 735 wins. Elmore Smith - Former American professional basketball player. Played at Kentucky State University and went on to play in the NBA for the Buffalo Braves. 2020 Pete Browning - American professional baseball player who was a pioneer of the major league game; his career included several seasons with the Louisville Colonels. Anna May Hutchison - Louisville native, she played in the All-American Girls Professional Baseball League. Clarence "Cave" Wilson - Basketball player. Wilson led the Horse Cave, KY, Colored School to 65 consecutive basketball victories in the 1940s. He was a forward and a point guard for the Harlem Globetrotters (1949-1964). 2019 Derek Anderson - Former American professional basketball player. In 1996, Anderson helped the University of Kentucky win the NCAA Men's Basketball Championship as part of a team that featured nine future NBA players under coach Rick Pitino and was known as the “untouchables”. Deion Branch - Former American football player in the NFL. He played college football as a wide receiver at Louisville under coach John L. Smith. Branch was named the Most Valuable Player of Super Bowl XXXIX. William Exum was the head of the Kentucky State University Physical Education Department and later head of the Athletics Department. He coached the KSU men's cross country team to an NCAA Division II championship in 1964. He was also the manager of the United States Track and Field teams at the 1972 and 1976 Olympics. Ralph Hacker spent 34 years on the UK Radio Network. He served as the men's basketball analyst for many years with broadcaster Cawood Ledford. Willis Augustus Lee was a Kentucky native and a skilled sport shooter who won seven medals in the 1920 Olympics shooting events, including five gold medals. He was tied with teammate Lloyd Spooner for the most medals anyone had ever received at a single Olympics. Their record stood for 60 years. Nate Northington - He was the first African-American to play in a college football SEC game, with the Kentucky Wildcats. 2018 Bob Baffert Sam Ball Bob Beatty Bernie Bickerstaff Ken Ramsey Nicky Hayden 2017 Mike Battaglia Howard Beth Rodger Bird Rob Bromley Swag Hartel Kenny Klein Dennis Lampley Marion Miley 2016 Shaun Alexander Darel Carrier Scott Davenport Kyra Elzy Philip Haywood Joel Utley Lakeside Swim Club *Selection on hiatus 1965 **Selection on hiatus 1967-1974 **Selection on hiatus 1976-1984 Passage 4: Freedom Hall Freedom Hall is a multi-purpose arena in Louisville, Kentucky, on the grounds of the Kentucky Exposition Center, which is owned by the Commonwealth of Kentucky. It is best known for its use as a basketball arena, previously serving as the home of the University of Louisville Cardinals and, since November 2020, as the home of the Bellarmine University Knights.
It has hosted Kiss, Chicago, AC/DC, WWE events, Mötley Crüe, Elvis Presley, The Doors, Janis Joplin, Creed, Led Zeppelin, Van Halen and many more. As well as the Louisville Cardinals men's basketball team from 1956 to 2010, the arena’s tenants included the Kentucky Colonels of the American Basketball Association from 1970 until the ABA-NBA merger in June 1976, and the Louisville Cardinals women's team from its inception in 1975 to 2010. The Kentucky Stickhorses of the North American Lacrosse League used Freedom Hall from 2011 until the team folded in 2013. From 2015 to 2019 it has hosted the VEX Robotics Competition World Championship Finals yearly in mid-April. The arena lost its status as Kentuckiana's main indoor sporting and concert venue when the downtown KFC Yum! Center opened in 2010. It is still used regularly, however, hosting concerts, horse shows, conventions, and basketball games. History Freedom Hall was completed in 1956 in the newly opened Kentucky Fair and Exposition Center located 5 miles (8.0 km) south of Downtown Louisville. It received its name as the result of a statewide essay contest sponsored by the State Fair Board and the American Legion. Charlotte Owens, a senior at DuPont Manual High School, submitted the winning entry over 6,500 others. Designed for the nation's premier equestrian competition, the Kentucky State Fair World's Championship Horse Show, the floor length and permanent seating were designed specifically for the almost 300-foot (91 m)-long show ring (in comparison, a regulation hockey rink is 200 feet (61 m) long, and a basketball court is only 94 feet). The North American International Livestock Exposition also is held there each November. Muhammad Ali fought his first professional fight at Freedom Hall when he won a six-round decision over Tunney Hunsaker. Freedom Hall was also one of the major stops on the Motortown (later MOTOWN) traveling music revue during the early and mid-1960s. Grateful Dead played Freedom Hall 4 times including 6/18/74, 4/9/89, 6/15/93, and 6/16/93. 6/18/74 was officially released as Road Trips Volume 2 Number 3. Judgment Day (2000) was also held at the Freedom Hall. A collegiate wrestling tournament was held at the arena in 2019. Freedom Hall has hosted campaign rallies for two U.S. presidents: John F. Kennedy and Donald Trump. Tenant history The Kentucky Colonels fielded successful teams during their tenure at Freedom Hall, winning the American Basketball Association (ABA) Championship in the 1974–75 season and reaching the ABA Finals two other times. The 1970-71 team played in the ABA Championship Finals, losing to the Utah Stars in 7 games. The 1972-73 team advanced to the Finals again, losing to the Indiana Pacers in 7 games. The Colonels were disbanded when the ABA merged with the National Basketball Association in 1976. Hall of Fame players Dan Issel and Artis Gilmore played for the Colonels during their successful run. Hall of Fame Coach Hubie Brown coached the Colonels Championship team.In 1984 the facility was refurbished, including lowering the floor to allow maximum capacity to increase from 16,664 to 18,865 for basketball. It was the full-time home of Cardinal men's basketball from the 1957–58 season to 2010, with the team winning 82% of home games in 50+ seasons. The University of Louisville was ranked in the Top 5 in attendance for the past 25 years, with 16 of the last 19 years averaging more than 100% of capacity. 
In addition to being the home of the Cardinals, Freedom Hall has hosted NCAA tournament games ten times, including six Final Fours between 1958 and 1969. The arena has also hosted 11 conference tournaments, nine Metro Conference Tournaments and two Conference USA tournaments—2001 and 2003. It has also hosted the Kentucky Boys' High School State Basketball Tournament (also known as the Sweet 16) 23 times, including every year from 1965 to 1978. In 1984, the floor of the arena was lowered about 10 feet (3.0 m) to increase the capacity of the arena from 16,613 to its current figure. In the 1996–97 season Freedom Hall averaged an attendance of 19,590 surpassing arena capacity. Freedom Hall hosts the Championship tractor pull every February during the National Farm Machinery Show. From 2001 to 2008, the arena football team Louisville Fire of the af2 played in Freedom Hall before ceasing operations. On the lower level is the Kentucky Athletic Hall of Fame where an engraved bronze plaque honors each inductee.The University of Louisville men's basketball team played their final game at Freedom Hall in front of a record crowd of 20,138 on March 6, 2010, against Syracuse University, the #1 ranked team in the nation. Louisville won in an upset 78–68. The arena began to gain new tenants in 2012 with the addition of the Kentucky Stickhorses, and in 2013, with the addition of the Kentucky Xtreme. However, the Kentucky Stickhorses folded in 2014 after the lack of wins and the lack of attendance. The Kentucky Xtreme were suspended mid-season with other teams playing the remainder of their season. In 2020, the Bellarmine University Knights selected Freedom Hall as their home for men's and women's basketball. Gallery See also List of events at Freedom Hall KFC Yum! Center Sports in Louisville, Kentucky List of attractions and events in the Louisville metropolitan area Passage 5: Cleveland City Hall Cleveland City Hall is the seat of government for the City of Cleveland, Ohio, and the home of Cleveland City Council and the office of the Mayor of Cleveland. It opened in 1916 and is located at 601 Lakeside Avenue in the Civic Center area of Downtown Cleveland. The building was the first of its kind designed by Cleveland architect J. Milton Dyer for governmental purposes for a major U.S. city. At the time of its construction, City Hall was to continue the city planning of Daniel Burnham's 1903 Group Plan. City Hall stands as a historic landmark that was added to the Cleveland Landmarks Commission.The rotunda in the building has been the site of numerous weddings, rallies, protests, and galas. The body of U.S. Representative Louis Stokes lay in state in the rotunda for the public to pay their respects after his death in 2015. Construction The original design had been finalized by 1907 and features Neoclassical elements, but it would take nearly 10 more years before that design would be realized. By the time of its construction, Dyer had undertaken several important building commissions in the Cleveland area and was known for his ornate but refined style of architecture. The building cost $3 million in 1916 (equivalent to $81 million in 2022) and took nearly five years to complete with construction commencing in 1912. The building is located on the bluff that overlooks the North Coast Harbor district that abuts Lake Erie and the Port of Cleveland. The Cleveland City Council Chambers underwent major renovations in 1951 and 1977. 
However, the outside of the building has remained largely unchanged since 1916, save for normal repairs, refittings and the usual upkeep of the superstructure. City Hall stands next to Willard Park and The Mall and is across the street from Public Hall. Occupants The following city agencies are in building: Mayor's Office Subdivisions Building and Housing Civil Service Commission Community Development Community Relations Finance Law Personnel and Human Resources Public SafetyThe city of Cleveland has numerous other agencies and departments spread throughout downtown buildings, these include, Carl B. Stokes Public Works Building which is headquarters to Cleveland Division of Water, the Tower at Erieview, Cleveland Public Powerhouse and Public Hall, among others. As with other major U.S. cities as the city expanded and diversified, the City Hall building could no longer house all of the needed departments. See also Downtown Cleveland Civic Center (Cleveland) Passage 6: Louisville RiverFrogs The Louisville RiverFrogs were a professional ice hockey team competing in the East Coast Hockey League (ECHL), which was a mid-level professional American hockey league with teams from all over the United States as well as one franchise from Canada. The team was based in Louisville, Kentucky and played from 1995 to 1998. Their home venue was Broadbent Arena (nicknamed "The Swamp" for their duration; capacity 6,600) at the Kentucky Exposition Center. At the conclusion of the 1997–1998 season, the franchise was sold and moved to Florida to become the Miami Matadors for a year before moving to Ohio as the Cincinnati Cyclones in 2001. The Cyclones are still currently playing in the ECHL. They started out playing their games in Cincinnati Gardens, but they now play at the Heritage Bank Center. The Cyclones are also the minor league affiliate to the Rochester Americans of the American Hockey League (AHL), as well as the Buffalo Sabres of the National Hockey League (NHL). The team's mascot was Rowdy River Frog. The RiverFrogs games were locally known for the amount of non-hockey-related entertainment at shows, including a giant frog blimp, hot tubs, and concession booths. See also Sports in Louisville, Kentucky Passage 7: Louisville Fire The Louisville Fire was an arena football team that played its home games at the Brown-Forman Field in Freedom Hall in Louisville, Kentucky. They were a 2001 expansion team of the af2. Their owner/operator was former Pro Bowl lineman and Louisville native Will Wolford. The team was somewhat successful. After a rocky first few seasons they finally found success in 2004 and then made it all the way to the Arena Cup in the 2005 season. On December 19, 2001, Jeff Brohm was named the head coach of the Louisville Fire arena football team. The Fire started the 0–7 before they defeated the Carolina Rhinos 31–28 to improve to 1–7. The Fire would finish the season 2–14.In 2003, English was hired to replace Brohm as the head coach of the Louisville Fire af2 team. He was fired after just two games with a record of 2–2.In July 2007, it was announced that the team planned on selling portions of the team to local ownership (aka the NFL's Green Bay Packers) in an attempt to boost season ticket sales and then buy the shares back in time before the team joined the AFL.In November 2008, the Louisville Fire ceased operations. 
Award winners 2004 – Takua Furutani – International Player of the Year 2005 – Matthew Sauk – Offensive Player of the Year 2005 – Danny Kight – Kicker of the Year 2006 – Brett Dietz – Rookie of the Year 2006 – Rob Mager – Offensive Player of the Year 2008 – Elizabeth "Liz" Horrall – Miss Louisville Fire Football Season-by-season Coaching staff See also Sports in Louisville, Kentucky Louisville Cardinals Passage 8: Broadbent Arena Broadbent Arena is a 6,600 seat multi-purpose arena in Louisville, Kentucky. It was home to the Louisville Icehawks and Louisville RiverFrogs ECHL teams. The arena, along with Cardinal Stadium and Freedom Hall, is located on the grounds of the Kentucky Exposition Center in Louisville. The arena is used for equestrian events, and other fairground type activities. As of January 2021, the arena is being used as a major distribution site for COVID-19 vaccines. See also Sports in Louisville, Kentucky Passage 9: Freedom Hall Civic Center Freedom Hall Civic Center is a multi-purpose arena in Johnson City, Tennessee.Starting in 2014, it became the basketball venue for East Tennessee State University. History Freedom Hall Civic Center opened in 1974 on the Liberty Bell Complex next to Science Hill High School. The arena was built by the city of Johnson City to be used as an entertainment venue and additional space for the middle and high schools located on the property. Over the years the venue has been used for sporting events, theatrical productions, concerts, ice shows, and a rodeo venue. Former and current entertainment include concerts from Van Halen, Bon Jovi, Eric Clapton, Def Leppard, Poison (band), Ozzy Osbourne, Mötley Crüe, KISS, Bruce Springsteen, Third Day, For King & Country, Chicago, AC/DC, Lynyrd Skynyrd (original band), Metallica, Elvis Presley, Elton John, and Aerosmith. Entertainment from professional wrestling organizations Jim Crockett Promotions, WWE, Smoky Mountain Wrestling, and from the Ringling Bros. and Barnum & Bailey Circus. See also List of NCAA Division I basketball arenas Passage 10: Johnson City 2001 Johnson City 2001 is a complete concert album by Widespread Panic. The three disc set is the fifth release from the Widespread Panic archives. The performance was recorded live at Freedom Hall Civic Center in Johnson City, Tennessee on November 20, 2001. The multi-track recording featured all original band members including the late guitarist, Michael Houser. Track listing Disc 1 "L.A." (Widespread Panic) - 4:45 "Wondering" (Widespread Panic) - 7:41 "It Ain't No Use" (Joseph "Zigaboo" Modeliste / Art Neville / Leo Nocentelli / George Porter Jr.) 
- 10:27 "Impossible" (Widespread Panic) - 5:04 "Worry" (Widespread Panic) - 7:16 "New Blue" (Widespread Panic) - 4:41 "Holden Oversoul" (Widespread Panic) - 8:03 "Stop-Go" (Widespread Panic) - 10:20 "Makes Sense To Me" (Daniel Hutchens) - 4:27 Disc 2 "Weak Brain, Narrow Mind" (Willie Dixon) - 9:13 "One Arm Steve" (Widespread Panic) - 4:04 "Old Neighborhood" (Widespread Panic) - 6:22 "Trouble" (Cat Stevens) - 2:57 "Love Tractor" (Widespread Panic) - 7:18 "Pigeons" (Widespread Panic) - 11:37 "Airplane" (Widespread Panic) - 14:06 "Drums" (Widespread Panic) - 16:27 Disc 3 "Drums and Bass" (Widespread Panic) - 7:41 "Astronomy Domine" (Syd Barrett) - 2:45 "Good Morning Little School Girl (Sonny Boy Williamson I)"- 8:16 "The Waker" (Widespread Panic) - 4:50 "She Caught the Katy (Taj Mahal / James Rachell)" - 4:21 "Gimme" (Widespread Panic) - 5:24 "Chunk of Coal" (Billy Joe Shaver) - 4:23 Personnel Widespread Panic John Bell - Vocals, Guitar Michael Houser - Guitar, Vocals Dave Schools - Bass, Vocals Todd Nance - Drums John "Jojo" Hermann - Keyboards, Vocals Domingo "Sunny" Ortiz - Percussion Production Mixed by Chris Rabold and Drew Vandenberg at Chase Park Transduction in Athens, GA. Recorded by Brad Blettenberg Mastered by Drew Vandenberg at Chase Park Transduction Studios in Athens, GA. Setlist by Garrie Vereen Additional audience recordings provided by Charles Fox
Broadbent Arena and Freedom Hall are landmarks of which U.S. city?
Louisville
Passage 1: Nüyou Nüyou (Chinese: 女友; lit. 'Female Friend') is a bilingual (English and Chinese) monthly fashion and beauty magazine targeting women. The magazine is based in Singapore. History and profile Nüyou was started in 1976. The magazine is part of SPH Magazines and is published on a monthly basis. It covers articles about fashion, beauty tips and celebrities and targets women between the ages of 25 and 30. The magazine is published in English and Chinese. In April 2013 Terence Lee became the editor-in-chief of Nüyou, replacing Grace Lee in the post. In 2009 Nüyou was redesigned. The magazine had a Malaysian edition which belonged to Blu Inc Media Sdn Bhd. That company ceased operations during the Malaysian movement control order imposed in response to the COVID-19 pandemic, which interfered with distribution of the magazine. Passage 2: List of Allure cover models Allure is a women's beauty magazine published by Condé Nast Publications. A famous woman, typically an actress, singer, or model, is featured on the cover of each month's issue. Following are the names of each cover subject from the first issue of Allure in March 1991 to the most recent issue. Allure 1990s 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000s 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010s 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020s 2020 2021 2022 Allure Russia The Russian edition of Allure magazine was published from September 2012 to December 2016. 2012 2013 2014 2015 2016 Passage 3: Alfredo Hoyos (doctor) Alfredo Hoyos, M.D., is a Colombian plastic surgeon who created high-definition liposuction and other advanced body contouring techniques in plastic surgery. He specializes in plastic surgery, aesthetic plastic surgery, maxillofacial surgery, and hand surgery. He is also a medical illustrator, painter and sculptor. Dr. Hoyos is featured as a speaker for companies behind some of the most innovative technologies in the field, not only in Colombia, his native country, but also worldwide. He is also featured in the Aesthetics & Beauty Magazine. He is the chief and creator of the TOTAL DEFINER brand, an integrated program that shows surgeons, across the different scenarios of the medical field, how to optimize their practice and their team, billed as "THE MOST COMPLETE GUIDE FOR PLASTIC SURGEONS". Early life and education Born on March 26, 1972, in Bogotá, Colombia, Dr. Alfredo Hoyos received his bachelor's education at School Mayor Del Rosario, Colombia, and his M.D. at Del Rosario University, Colombia. He completed visiting fellowships in aesthetic surgery at New York University, New York City. He did the same for maxillofacial surgery at Mount Sinai Hospital, New York, and for facial aesthetic surgery at Manhattan Eye, Ear and Throat Hospital, New York. Plastic surgery career Dr. Alfredo Hoyos combines his plastic surgery techniques with different technologies, such as ultrasound-assisted VASER, Bodytite, Microaire, Wells Johnson, and Renuvion, to get the best results in liposculpture. He invented High Definition Liposculpture (HDL), known today as H4D LIPO, Lipo HD and Dynamic Definition Lipoplasty (4D), as well as other advanced techniques that focus on body contouring, such as the EVE Technique (Tummy Tuck and Mini Tummy Tuck High Def). He has also refined techniques in fat transfer (Buttock and Pectoral Lipograft) and breast augmentation. Dr. Hoyos, alongside his colleagues, has performed several High Def Lipo procedures with consistent results.
He continues to refine these techniques while researching and developing new applications of HD in additional anatomical regions and the field of cosmetic surgery. He has performed plastic surgeries for many celebrities, including Belinda Peregrín. Dr. Hoyos is the author of books and several scientific articles that discuss innovations and new technologies in body contouring. He is the author of High Definition Body Sculpting: Art and Advanced Lipoplasty Techniques, in which he reveals details of his techniques and new technologies in body contouring. He travels to share his knowledge with cosmetic surgeons and also serves as a consultant for media and non-scientific magazines. Dr. Hoyos has extensive experience as a lecturer and trainer for Vaser, Bodytite, Microaire and other technologies. He has trained hundreds of plastic surgery physicians in liposculpture procedures. His trainees include: Dr. Anthony Lockwood of Charleswood Clinic, Canada, Dr. Aguirre of Aguirre Specialist Care, Denver, Dr. Cynthia M. Lopez, Dr. Pazmino, Dr. Sam Sukkar of Houston, Texas, Dr. Shapiro, Dr. Emmanuel de la Cruz, Dr. Hossam Tahseen, Dr. Augusto Pupio, and Dr. Felipe Massignan. Certifications Colombian Society of Plastic Surgery (SCCP). International Confederation of Plastic Reconstructive and Aesthetic Surgery (IPRAS). He is a member of the American Society for Aesthetic Plastic Surgery (ASAPS) He is a member of the International Society of Aesthetic Plastic Surgery (ISAPS) He is a member of the American Society of Plastic Surgeons (ASPS) He is a member of the International Federation for Adipose Therapeutics and Science (IFATS) See also High-definition liposuction Liposuction Plastic surgery Awards The Brazilian Plastic Surgery Association 2019 Honorable Mention for his numerous and valuable contributions to the worldwide plastic surgery field Ted Lockwood Award (Excellence in Body Contouring) 2019 by ASAPS SAPS Award 2018 for his great contributions to the worldwide plastic surgery field ISCG Award 2014 for his outstanding contribution to cosmetic surgery New Economy Award 2014 for his High Definition Liposculpture Technique Cutting Edge Award 2014 for his innovative video about the High Definition Lipo procedure RIYADH, SAUDI ARABIA 2014 Mention for his outstanding contribution to the plastic surgery field Passage 4: Allure (magazine) Allure is an American women's magazine focused on beauty, published monthly by Condé Nast in New York City. It was founded in 1991 by Linda Wells. Michelle Lee replaced Wells in 2015. A signature of the magazine is its annual Best of Beauty awards, accolades given in the October issue to beauty products deemed the best by Allure's staff. History In 1990, S.I. Newhouse Jr., chairman of Condé Nast, and then editorial director Alexander Liberman approached Linda Wells to develop a concept they had for a beauty magazine. At the time, Wells was the beauty editor and the food editor at The New York Times Magazine. The magazine's prototype was shredded shortly before the scheduled launch date and, after overhauling everything (including the logo), Allure made its debut in March 1991, designed by Lucy Sisman. The magazine's original format was oversize, but this prevented it from fitting into slots at grocery-store checkouts and required advertisers to resize their ads or create new ones.
After four issues, Allure changed to a standard-size glossy format.On August 29, 2022, Conde Nast announced the December 2022 issue will be the last print issue of the magazine before transitioning to digital-only. Allure employees unionized in 2022. Conde states, “It’s our mission to meet the audience where they are and with this in mind, after our December print issue, we are making Allure an exclusively digital brand.” Impact Allure focuses on beauty, fashion, and women's health. Allure was the first women's magazine to write about the health risks associated with silicone breast implants, and has reported on other controversial health issues. After Lee took the helm in late 2015, the brand was celebrated for promoting diversity and inclusivity. In 2017, Adweek named Allure Magazine of the Year and awarded Lee as Editor of the Year.The magazine's circulation, initially 250,000 in 1991, is over 1 million as of 2011. Many writers have contributed to Allure. Among them are Arthur Miller, John Updike, Jhumpa Lahiri, Michael Chabon, Kathryn Harrison, Frank McCourt, Isabel Allende, and Francine du Plessix Gray. Elizabeth Gilbert’s essay “The Road to Rapture,” published in Allure in 2003, was the precursor to her 2006 memoir, Eat, Pray, Love (Viking Adult). Photographers who have shot for Allure include Michael Thompson, Mario Testino, Patrick Demarchelier, Norman Jean Roy, Tina Barney, Marilyn Minter, Carter Smith, Steven Klein, Steven Meisel, and Helmut Newton. Cover subjects have included Demi Lovato, Jennifer Aniston, Jennifer Lopez, Helen Mirren, Zendaya, Julia Roberts, Angelina Jolie, Reese Witherspoon, Mary-Kate and Ashley Olsen, Victoria Beckham, Beyoncé, Fergie, Britney Spears, Lupita Nyong'o, Jessica Simpson, Kate Hudson, Christina Aguilera, Ariana Grande, Rihanna, and Gwen Stefani. (See List of Allure cover models). Best of Beauty Awards Allure began its Best of Beauty awards program in the mid-1990s, at the initiative of Wells, to help readers choose among the vast array of makeup, skincare, and hair-care products on the market. In 2019, the magazine introduced the Allure Best of Beauty Clean Seal award to products that met the publication's "clean" standards.Allure has two sets of awards, one judged by the magazine's editors and the other by readers. A "winners' seal" logo, developed by Allure, appears on many of the winning products. To ensure that its judgments are neutral, Allure's ad department isn't involved in the selections. In 2010, the magazine developed an iPhone app that highlights the winning products and tells users where they can buy them based on their location. Controversy The magazine faced online criticism when it showed Marissa Neitling with an Afro haircut.Singer Halsey has announced she will no longer do press after Allure failed to use her preferred pronouns in its August cover story and promoted the interview by allegedly taking quotes out of context. Awards (for Allure) Magazine of the Year from Adweek (2017) Bronze Clio Award for Allure Unbound augmented reality app (2017) The National Magazine Award for Design (1994) The Editorial Excellence Award from Folio (2001) The Circulation Excellence Award from Circulation Management (2001) "Ring Leader", an essay by Natalie Kusz from the February 1996 issue of Allure, was selected for The Best American Essays 1997 (Houghton Mifflin). The magazine has been on Adweek’s Hot List in 1993, 1994, 1995, 2003, and 2007. 
Allure has received 29 awards from the American Academy of Dermatology, nine journalism awards from The Fragrance Foundation, and the Excellence in Media Award from the Skin Cancer Foundation. Awards (for Linda Wells) The Achiever Award from Cosmetic Executive Women (2001) The Matrix Award for magazine leadership from New York Women in Communications, Inc. (2009) Awards (for Michelle Lee) Editor of the Year from Adweek (2017) Digiday's Glossy 50 (2017) A100 Most Influential Asians from Gold House (2018) Creative 100 from Create & Cultivate (2017) In the media Wells, along with Allure editors Michael Carl and Kelly Atterton, have appeared as judges on the Bravo TV series Shear Genius. Allure editors have appeared as experts on television programs such as the Today show and 60 Minutes, and Allure stories frequently receive national attention. Hilary Duff played an Allure intern in Cheaper by the Dozen 2. See also List of Allure cover models List of Allure Editor Passage 5: List of W Korea cover models W Korea is a women's beauty magazine published by Doosan Magazine under license from Condé Nast Publications. A famous person, usually an actress, singer, or model, is featured on the cover of each month's issue. Following are the names of each cover subject from the most recent issue to the first issue of W Korea under editorship of Lee Hye Joo in March 2005. 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 Passage 6: Anders Manga Anders Manga is an American recording artist best known for his self named Darkwave work and under the name Bloody Hammers. His first release was under the name "Coffin Moth" in 1997, a Deathrock music project. In 2012, Anders' song "Glamour" was featured in Season 4 Episode 04 of The Vampire Diaries In 2019, he released "The Summoning, a new album by Bloody Hammers, his heavy rock project with wife Devallia. In late 2013 he signed Bloody Hammers to Napalm Records and have since released four albums with the label. In 2015 he started another project inspired by horror soundtracks from the seventies and eighties called Terrortron which was featured on the Rue Morgue (magazine) compilation titled "They Came From Rue Morgue". Personal life He is currently married to band member Devallia since 2005. Musical Influences Anders was eleven years old when he first began teaching himself Bass guitar, after borrowing a bass from a neighborhood friend. Anders identifies Alice Cooper as having been his main musical influence as a child, and has said that he was the reason why he wanted to pursue music. Other artists such as Gary Numan, Nick Cave, Roky Erickson, The Sisters of Mercy and Black Sabbath also had a major influence on his musical tastes. 
Current Band Members Devallia (Organ) & Anders Manga (Vocals, Bass, Guitar) Discography Albums Coffin Moth Coffin Moth (1997) (Self Released) Solo releases Murder in the Convent (1999) (Vampture) One Up for the Dying (2004) (Vampture) Left of an All-Time Low (2005) (Vampture) Welcome to the Horror Show (2006) (Vampture) Blood Lush (2007) (Vampture) X's & the Eyes (2008) (Vampture) Perfectly Stranger (2018) (Sacrificial Records) Andromeda (2020) Bloody Hammers Bloody Hammers (2012) (SoulSeller Records) Spiritual Relics (2013) (SoulSeller Records) Under Satan's Sun (2014) (Napalm Records) Lovely Sort of Death (2016) (Napalm Records) The Horrific Case of Bloody Hammers (2016) (Napalm Records) The Summoning (2019) (Napalm Records) Songs of Unspeakable Terror (2021) (Napalm Records) Terrortron Hexed (2015) (Sacrificial Records) Necrophiliac Among the Living Dead (2016) (Sacrificial Records) Orgy of the Vampires (2017) (Sacrificial Records) Other releases The Traumatics - The Traumatics (1993) (Irregular Records) The Traumatics - Republic (1995) (Irregular Records) The Dogwoods - The Dogwoods (1997) (Irregular Records) A Tribute to Alice Cooper (1998) (Wasted Records) Interbreeding VIII: Elements of Violence (2006) (BLC Productions) Sweet Leaf - A Tribute to Black Sabbath (2015) (Cleopatra Records) Passage 7: Claudia (magazine) Claudia is a monthly women's magazine published in Warsaw, Poland. The magazine has been in circulation since 1993. History and profile Claudia was established in 1993. Gruner + Jahr was the founding company. The magazine was published by Burda Publishing Polska SP. Z O.O. on a monthly basis. It is published by Grupa Kobieta. The headquarters of the monthly is in Warsaw. It targets Polish women around 20-45 years old who live in small cities.Claudia achieved record circulation numbers in Poland at the beginning of the 2000s. In 2001 it was the twenty-third best-selling women's magazine worldwide with a circulation of 799,000 copies. In 2002 it was the most popular magazine in Poland.The magazine sold 307,729 copies in 2010 and 261,716 copies in 2011. Its circulation rose to 276,752 copies in 2012. See also List of magazines in Poland Passage 8: Beautycounter Beautycounter is an American direct to consumer company that sells skin care and cosmetic products. As of 2018, the company had 150 products with over 65,000 independent consultants, and with national retailers. History Beautycounter was founded by Gregg Renfrew in 2013. Renfrew had previously worked with merchandising executives such as Martha Stewart and Susie Hilfiger. Beautycounter released nine products in March 2013, including facial cleansers, eye creams, and shampoo. The company launched as a direct retail brand, selling through its website, independent consultants, and retailers including J.Crew, Target and Sephora.Beautycounter was one of Allure magazine's Best of Beauty award recipients for their lip sheer in twig (2014) and dew skin tinted moisturizer (2015). Beautycounter became a founding member of the nonprofit Environmental Working Group's verification program, which aims to make it easier for consumers to identify consumer goods that do not contain toxic ingredients. The company compiled a "never list" of reportedly harmful chemicals omitted from their products.In 2016, Beautycounter launched its first mascara line. Later that year, Beautycounter's Lengthening Mascara was one of Allure's Best of Beauty products in the natural category. 
In June 2016, Beautycounter acquired the worldwide assets of Nude Skincare, Inc. and Nude Brands, Ltd., Ali Hewson's natural beauty line, from LVMH. As part of the acquisition, Hewson's husband Bono became an investor in Counter Brands, LLC., Beautycounter's parent company, and Hewson became a board member.In 2018, the company opened its first brick and mortar store in Manhattan. A second location opened in 2019, in Denver, Colorado. In March, the company was named to Fast Company's Most Innovative Companies list, for its efforts to promote nontoxic ingredients in beauty products. In June, the company was also named to CNBC's 2020 Disruptor 50 list, as a next generation billion dollar business. In December, the company opened a hybrid retail store and livestream content studio in Los Angeles.In February 2020, the company released a documentary Transparency: The Truth About Mica, as part of its efforts to promote ethical mining. The documentary films an in-person audit of the company's mica supply chain, to ensure responsible sourcing.In April 2021, American private equity firm The Carlyle Group acquired a majority stake in Beautycounter, which valued parent Counter Brands, LLC. at $1B.In January 2022, Marc Rey was named as the company's new CEO, and founder Renfrew became executive chair. Legislation In 2014, Renfrew hired public health and environmental advocate Lindsay Dahl to lead company advocacy and lobbying efforts to reduce harmful chemicals used in the cosmetic industry. Renfrew and Beautycounter hosted a congressional briefing in Washington, D.C., in fall 2015, regarding the potential dangers of under-regulated beauty products. In May 2016, Renfrew went to Washington, D.C., with a group of 100 women representing all 50 U.S. states to discuss the importance of regulation in the beauty industry with senators, representatives, and legislative staff. Renfrew also testified in a congressional hearing on cosmetic reform in December 2019.In 2021, Beautycounter led two days of virtual lobbying with members of Congress on federal standards regarding clean beauty. Passage 9: Betty Irabor Betty Irabor is a Nigerian columnist, philanthropist, writer, publisher and founder of Genevieve magazine. She previously had a column in Black & Beauty magazine in the United Kingdom. She also has a foundation that promotes breast cancer awareness, early detection and treatment. Career Irabor studied English in university and then ventured into publishing. She worked as a journalist at Concord Newspapers, took freelance writing jobs at the Vanguard, The Guardian, This Day as well as Black & Beauty magazine and others abroad. She later ventured into telecommunications. In 2003 she founded the glossy magazine Genevieve Magazine, which has been described as "Nigeria's leading inspirational and lifestyle magazine". It is headquartered in Lekki, with a staff of fourteen. Ten issues are published each year. The magazine website focuses on celebrity news. Irabor is the editor-in-chief and chief executive officer.In 2018 her memoir Dust to Dew was published. In it she chronicles her struggles with depression.She is also a philanthropist, public speaker and champion for breast cancer awareness with her nonprofit known as the Genevieve PinkBall Foundation.She is also the host and presenter of Life's Lessons with Betty Irabor. She is a speaker and ambassador who shares promotional and modeling shots. Personal life Irabor was born on March 25, 1957, and raised in Nigeria. 
She's married to Soni Irabor and they have two children. Their son made a short film that was selected by the Zanzibar International Film Festival. Award and honors Irabor was honored by the Association of Professional Women Bankers as The Most Accomplished Female Publisher in Nigeria 2011. Passage 10: Elle Girl Elle Girl was the largest older-teen fashion and beauty magazine brand in the world with twelve editions. Launched in August 2001, it was the younger sister version of Elle magazine, and similarly focused on beauty, health, entertainment and trendsetting bold fashion—its slogan: "Dare to be different". The magazine was published monthly and was based in New York City. Closure Its staffers were informed in early April 2006 that Elle Girl (USA)'s final issue would be its June/July 2006 Summer Issue, while they were recently in the middle of working on the August 2006 issue, which is traditionally the largest issue of the year-covering fall fashion and back-to-school topics. The company intended to continue updating the Elle Girl website, and create new media in conjunction with Alloy.com, as well as publishing bi-annual special issues. Hachette Filipacchi CEO Jack Kliger, who was also responsible for closing three other Hachette magazines—George, Mirabella, and Travel Holiday, commented on Elle Girl's future on the Internet and explained, "When teen girls are not on the Web, they are on their cells. The company will keep the website and work on Elle Girl ringtones, wallpaper mobile pages and projects in the mobile blogging area." ELLEgirl.com relaunched in early 2008 after parting ways with Alloy. The new version included a blog, more simple navigation, and a strengthened association with ELLE.com under Executive Editor Keith Pollock. Hearst Magazines bought the website in 2011. As of May 2014, the ELLEgirl website redirects to the main Elle website. International editions The UK edition of Elle Girl magazine closed for business shortly before the American version. As of August 2005, international editions continued to be published in South Korea, the Netherlands, Canada, Taiwan, Japan, Russia, France, Germany, and China. Elle Girl USA offers most foreign editions 60% of their content, yet it was not announced whether all of the foreign editions would also fold. Sarra Manning, author of YA novels Guitar Girl and Let's Get Lost, was on the launch team of Elle Girl UK and edited the magazine for a short period.
Which is a beauty magazine, Allure or Claudia?
Allure
Passage 1: Qingsheng Railway Station Qingsheng railway station (Chinese: 庆盛站) is a station located in Qingsheng Village (庆盛村), Dongchong Town, Nansha District, Guangzhou. It is one of the stations on the Guangzhou–Shenzhen–Hong Kong Express Rail Link between Guangzhou South railway station in Panyu District, Guangzhou and Futian railway station in Futian District, Shenzhen. Guangzhou Metro An elevated metro station on Line 4 of the Guangzhou Metro started operation on 28 December 2017. Services China Railway Guangzhou–Shenzhen–Hong Kong Express Rail Link Guangzhou Metro Line 4 (Guangzhou Metro) Passage 2: Pingxiang, Guangxi Pingxiang (凭祥市) is a county-level city under the administration of the prefecture-level city of Chongzuo, in the southwest of the Guangxi Zhuang Autonomous Region, China. Situation The city covers an area of 650 km2 (250 sq mi). It is bordered in the north by Longzhou County and in the east by Ningming County, both in Chongzuo, and in the south and west by Vietnam's Lạng Sơn Province. National Route 322 comes through the city centre, as does the railway which continues on to Hanoi; a high-speed expressway, now also international, passes nearby. Zhennan Pass, site of the Battle of Bang Bo during the Sino-French War, is now named the "Friendship Pass" and is considered the gateway to Vietnam. There are also plans to build a high-speed railway from Nanning to the Vietnamese border. Administration Demographics Pingxiang has a population of approximately 106,400 (83.5% of the people belong to the Zhuang ethnic group, 2010). Ethnic groups include Zhuang, Han, Yao, Miao, Jing, and others. Towns (Chinese: 镇, zhen) Pingxiang (Chinese: 凭祥) Shangshi (Chinese: 上石) Xiashi (Chinese: 夏石) Youyi (Chinese: 友谊) Transportation Rail Hunan–Guangxi Railway Climate See also Lang Son Passage 3: G7211 Nanning–Youyiguan Expressway The Nanning–Youyiguan Expressway (Chinese: 南宁—友谊关高速公路), commonly referred to as the Nanyou Expressway (Chinese: 南友高速公路), is a 225.06-kilometre-long (139.85 mi) expressway in the Chinese autonomous region of Guangxi that connects the city of Nanning, the capital of Guangxi, and Friendship Pass, known in Chinese as Youyiguan, a border crossing between China and Vietnam. The Friendship Pass is located in the county-level city of Pingxiang, under the administration of the city of Chongzuo. At the border, the expressway connects with the North–South Expressway in Vietnam. The expressway is designated G7211, and opened on 28 December 2005. The expressway is a spur of the G72 Quanzhou–Nanning Expressway. The Nanning–Youyiguan Expressway branches off from its primary expressway, G72, just before the western terminus of G72 in Nanning. The entire route is also part of Asian Highway 1. Along with the G7511 Qinzhou–Dongxing Expressway and G8011 Kaiyuan–Hekou Expressway, it is one of the three expressways that connect China with Vietnam. Route Nanning City Center The Nanning–Youyiguan Expressway begins east of the city centre of Nanning, at the San'an Interchange in Qingxiu District. At this interchange, it connects to the G72 Quanzhou–Nanning Expressway, its parent expressway, as well as the G7201 Nanning Ring Expressway, the G75 Lanzhou–Haikou Expressway, and the G80 Guangzhou–Kunming Expressway. It proceeds south, in a concurrency with the G7201 Nanning Ring Expressway and G75 Lanzhou–Haikou Expressway, to the Liangqing South interchange in Liangqing District, passing through Yongning District on the way.
At this three way interchange, the Nanning–Youyiguan Expressway and Nanning Ring Expressway continue westward while the G75 Lanzhou–Haikou Expressway branches off to the south. The concurrency with the G7201 Nanning Ring Expressway continues until the Gaoling Interchange in Jiangnan District, south of the city centre of Nanning, where the Nanning–Youyiguan Expressway proceeds southward, away from the city centre, while the G7201 Nanning Ring Expressway continues westward. This section of expressway forms the southeastern part of a ring expressway around the city centre of Nanning. The orbital expressway is designated G7201 Nanning Ring Expressway. Hence, the G7201 Nanning Ring Expressway is concurrent with the Nanning–Youyiguan Expressway for this entire length, and in addition, the two expressways share exit numbers during the concurrency. Nanning Airport Expressway The 18.443-kilometre-long section of the Nanning–Youyiguan expressway (11.460 mi) south of the Gaoling interchange is known as the Nanning Airport Expressway. This section first opened in October 2000, and is so named because it connects the city centre with Nanning's international airport, Nanning Wuxu International Airport. As this section was built prior to the other sections, it uses different exit numbers than those employed on the rest of the expressway (in which kilometre zero is at the San'an interchange). Instead, the exit numbers reset, with the Gaoling interchange serving as kilometre zero for this section.This section of expressway is toll-free. Just south of the Gaoling interchange, all motorists must stop at a toll booth. Immediately after the toll booth, the Nahong exit provides access to and from the city centre of Nanning from the south. The Nanning Airport Expressway section of the Nanning–Youyiguan expressway traverses through Jiangnan District in a southwesterly manner, ending at the Wuxu exit, where it connects to Airport Road and Nanning Wuxu International Airport. Wuxu to Youyiguan The final section of expressway, from the Wuxu exit to the Vietnam border, is 180.063 kilometres (111.886 mi) in length. This section of expressway is tolled from the Wuxu exit to the Pingxiang exit. West of the Wuxu exit, motorists must stop at a toll booth. The expressway enters the prefecture-level city of Chongzuo immediately after the Suxu exit. At the Yuanjing exit, the Nanning–Youyiguan Expressway connects with the S60 Qinzhou–Chongzuo Expressway to the east. This exit opened along with the opening of the Qinzhou–Chongzhou expressway on 31 December 2012.The expressway passes just south of the city centre of Chongzuo, before continuing west to the county-level city of Pingxiang. The section of expressway between the Wuxu and Chongzuo exits has a speed limit of 100 km/h (62 mph). The speed limit between the Chongzuo and Ningming exits, just before Pingxiang, is 80 km/h (50 mph), and between Ningming and Youyiguan is 60 km/h (37 mph). The section between Ningming County and Youyiguan parallels much of China National Highway 322 and the Hunan–Guangxi Railway. Entering Pingxiang, motorists must stop at a toll booth, marking the end of the tolled stretch of expressway. Immediately after the toll-booth is an at-grade intersection with China National Highway 322 and Jinxiang Avenue, the only at-grade intersection on the expressway. The expressway makes a sharp turn southward toward the Vietnam border and its southwestern terminus at Youyiguan. 
At Youyiguan, individuals must pass through the border checkpoint before entering Vietnam. On the Vietnamese side of Youyiguan is the northern terminus of the North–South Expressway. Exit List Passage 4: Guandong, Guangxi Guandong (Chinese: 官垌) is a Chinese town located in northeastern Pubei, Qinzhou, Guangxi, which is famous for Guandong Fish. Passage 5: Tonghe Station Tonghe Station (Chinese: 同和站; pinyin: Tónghé Zhàn; Jyutping: tung4wo4 zaam6) is a metro station on Line 3 of the Guangzhou Metro. The underground station is located at the intersection of Guangzhou Avenue (广州大道) and Tongsha Road (同沙路) in the Baiyun District of Guangzhou. It started operation on 30 October 2010. Passage 6: Kaiping Kaiping (Chinese: 开平), alternately romanized in Cantonese as Hoiping, is a county-level city in Guangdong Province, China. It is located in the western section of the Pearl River Delta and administered as part of the prefecture-level city of Jiangmen. The surrounding area, especially Sze Yup (Chinese: 四邑), is the ancestral homeland of many overseas Chinese, particularly in the United States. Kaiping has a population of 688,242 as of 2017 and an area of 1,659 square kilometres (641 sq mi). The locals speak a variant of the Sze Yup dialect. History During the Northern Song dynasty (960–1127), Kaiping was under the administration of Xin'an county (信安縣). Under the Qing (1649), Hoiping County made up part of the commandery of Shiuhing (Zhaoqing). It was promoted to county-level city status in 1993. Administration Administratively, Kaiping is administered as part of the prefecture-level city of Jiangmen. Geography Kaiping's city centre is located on the Tanjiang River, 140 kilometres (87 mi) away from Guangzhou, on the edge of Kaiping county, west of the Pearl River Delta. Kaiping consists of broken terrain, mostly either rocky or swampy, with only a third of the land arable. The county is shaped like a giant question mark (see map, in pink) and includes rural areas as well as three port cities: Changsha, Xinchang, and Dihai. Notable people Wing-tsit Chan: Chinese American scholar Ed Chau: member of the California State Assembly George Chow: member of the Legislative Assembly of British Columbia Yun Gee: Chinese American artist Víctor Joy Way: Chinese Peruvian politician Betty Kwan Chinn: Chinese American philanthropist Lee Quo-wei (1918–2013): former Hong Kong banker Liang Xiang: former Governor of Hainan Betty Ong: American flight attendant aboard American Airlines Flight 11 Jean Quan: former mayor of Oakland, California Bing Thom: Chinese Canadian architect and urban designer Szeto Wah (1931–2011): Hong Kong politician Szutu, Chiu: scholar whose publications included work on intergenerational Chinese New Zealanders and a biography of architect Ron Fong Delbert E. Wong: Los Angeles County jurist who was the first judge in the continental United States of Chinese descent Ken Hom: Chinese-American chef, BBC TV presenter, and author Sights Kaiping Diaolou Kaiping Diaolous (碉楼) are fortified multi-storey towers constructed in the village countryside of mainly the Kaiping area. They were built from the early Qing Dynasty to the early 20th century, reaching a peak in the 1920s and 1930s, with the financial aid of overseas Chinese, when there were more than three thousand of these structures.
Today, 1,833 diaolou are still standing, with the most in the towns of Shuikou (水口镇), Tangkou (塘口镇), Baihe (百合镇), Chikan (赤坎镇), and Xiangang (蚬冈镇), in that order (see map in article by Batto).In the late 19th and early 20th century, Kaiping was a region of major emigration abroad, and a melting pot of ideas and trends brought back by overseas Chinese, Huaqiao, made good. As a consequence, many watchtowers incorporated architectural features from China and the West. These were examples of the Qiaoxiang (僑鄉) architecture. The diaolou were built by villagers during a time of chaos and served two purposes: housing and protecting against forays by bandits.In 2007, the Kaiping diaolou and villages were added to the list of UNESCO World Heritage Sites and consist of four separate restored village areas: Zilicun village (自力村) in Tangkou, Sanmenli village (三门里) in Chikan, Jinjiangli village (锦江里) in Xiangang, and Majianglong village cluster (马降龙村落群) in Baihe township.The Kaiping diaolou was the location for parts of the filming of 2010 movie Let the Bullets Fly (让子弹飞). Examples of diaolous include: Yinglonglou (迎龙楼), oldest extant diaolou in Kaiping, in the village of Sanmenli (Chikan township) built by the Guan (关族) lineage during the Jiajing era of the Ming dynasty (1522-1566), is a massive three-storey fortress with one-meter thick walls, in contrast with the high tower diaolou built much later with the aid of Huaqiao. Jinjiangli Diaolou Cluster (锦江里雕楼群), situated behind Jinjiangli Village (Xiangang Township), includes three exquisite diaolous: Ruishi Lou, Shengfeng Lou, and Jinjiang Lou. Ruishi Diaolou, constructed in 1921, has nine floors and is the tallest diaolou in Kaiping. It features a Byzantine style roof and a Roman dome. The Majianglong diaolou cluster (马降龙雕楼群) is spread across five villages (Baihe township) in a bamboo forest: Yong'an and Nan'an Villages of the Huang (黄) family; Hedong, Qinglin, and Longjiang Villages of the Guan (关) family. Zilicun Diaolou Cluster (自力村雕楼群), located in Zilicun Village (Tangkou township), includes nine diaolous, the largest number among the four Kaiping villages designated by UNESCO. They feature the fusion of Chinese and various Western architectural styles and rise up surrealistically over the rice paddy fields. Fangshi Denglou - Built in 1920 after contributions from villagers, this denglou is five storeys high. It is referred to as the "Light Tower" because it had an enormous searchlight as bright as the beam of a lighthouse. Li Garden, in Beiyi Xiang, was constructed in 1936 by Mr. Xie Weili, a Chinese emigrant to the United States. Bianchouzhu Lou (The Leaning Tower), located in Nanxing Village was constructed in 1903. It has seven floors. Nan Lou (南楼), or the "Southern Diaolou", located on the riverbank in Chikan township, which was known for seven local soldiers by the surname Situ (司徒) who died defending Chikan from the Japanese. Chikan Chikan (赤坎) is officially designated as a National Historic and Cultural Town of China (中国历史文化名镇). The old town of Chikan has many historical sites that are about one hundred years old. For example, it has over 600 late-Qing and early-Republic historic Tong laus or Qilous (唐樓/ 騎樓) continuous, spanning over a length of 3 kilometers, including the riverside stretch along Dixi Lu (堤西路), sometimes referred to as 'European Styled Street'. Part of old Chikan town has been designated Chikan Studio City (赤坎影视城) for filming of historical scenes. 
Chikan township also has two restored diaolous: Yinglonglou, built by the Guan (关族) lineage in the Ming dynasty, and Nanlou, memorialized by the martyrdom of seven Situ clan (司徒族) members in the early 20th century. Historically, Chikan has been shaped by these two competing clans. One example is the existence of two libraries: the Situ's library, opened in 1926, and, not to be outdone, the Guan's library, opened in 1931; both libraries funded by overseas Chinese and incorporated architecture features from overseas. It is a famous and well-known location for braised pork in noodles to locals. Chikan is to become a tourist destination and the closing of local stores, dining posts, and streets are scheduled for the summer of 2017. Miscellaneous Kaiping has been twinned with Mesa, Arizona, United States, since October 18, 1993. Kaiping was a major source of emigrants at the turn of the 20th century. As a result, a large number of early Chinese Canadian and Chinese American communities had people who originated from Kaiping and its neighboring counties of Taishan, Enping and Xinhui, which is known collectively as Sze Yup. It is said that there are more Kaipingnese people living abroad today than there are Kaipingnese in Kaiping. In a 2016 report, Deloitte estimated that there are 750,000 Kaiping-born overseas Chinese.In 1973, various people originated from Kaiping started the Hoi Ping Chamber of Commerce Secondary School in Hong Kong. Climate Notes Passage 7: Hunan–Guangxi Railway The Hunan–Guangxi railway or Xianggui railway (simplified Chinese: 湘桂铁路; traditional Chinese: 湘桂鐵路; pinyin: xiāngguì tiělù), is a mostly electrified railroad in southern China that connects Hunan province and the Guangxi Zhuang Autonomous Region. The shortform name of the line, Xianggui, is named after the Chinese short names of Hunan, Xiang and Guangxi, Gui. The line runs 1,013 km (629 mi) from Hengyang in Hunan to Friendship Pass on Guangxi's border with Vietnam. Major cities along route include Hengyang, Yongzhou, Guilin, Liuzhou, Nanning, Pingxiang, and Friendship Pass. History The original single-track Xianggui Line was built in sections from 1937 to 1939 and 1950–1955. In December 2008, construction began on a capacity-expansion project to a new pair of 723.7 km (450 mi) electrified tracks from Hengyang to Nanning, which would create a three-track line of 497.9 km (309 mi) between Hengyang and Liuzhou and a four-track line of 225.8 km (140 mi) between Liuzhou and Nanning. The expansion project was completed in December 2013. The new double-track from Hengyang to Liuzhou is called the Hengyang–Liuzhou intercity railway and the new double-track from Liuzhou to Nanning is called the Liuzhou–Nanning intercity railway. As of May 2014, the southernmost section of the Xianggui Line, from Nanning to Pingxiang on the Vietnamese border, is undergoing capacity expansion to accommodate high-speed trains. Passenger service The Hunan–Guangxi railway is used by most trains traveling from Beijing, Shanghai, and other points in eastern China to Guangxi (Guilin, Nanning) and to the Vietnamese border. This includes the Beijing–Nanning–Hanoi through train. At the end of 2013, high-speed passenger service was introduced on the Hunan–Guangxi railway as well. A direct G-series trains from Beijing makes it to Guilin in about 10.5 hours. D-series trains continue from Guilin to Nanning, taking less than 3 hours for the trip. 
Rail connections Hengyang: Beijing–Guangzhou railway Yongzhou: Luoyang–Zhanjiang railway Liuzhou: Jiaozuo–Liuzhou railway, Guizhou–Guangxi railway Litang Township: Litang–Zhanjiang railway, Litang–Qinzhou railway Nanning: Nanning–Kunming railway; branch line to Qinzhou and Beihai Pingxiang: Hanoi–Đồng Đăng Railway See also List of railways in China Passage 8: Ruyifang Station Ruyifang Station (Chinese: 如意坊站; pinyin: Rúyìfāng Zhàn; Jyutping: jyu4ji3fong1 zaam6) is a station on Line 6 of the Guangzhou Metro. It is located underground in the Liwan District of Guangzhou. It started operation on 28 December 2013. Construction incident During the 03:00 hour of 5 October 2007, while the Ruyifang station was being excavated, water from an unidentified source flooded a 300 m2 (3,200 sq ft) area of the construction site, submerging it to a depth of 5 to 6 m (16 to 20 ft). No injuries were reported, and by 06:30 on 6 October, the previously submerged portion was re-sealed. Station layout Exits Passage 9: Diaolou Diaolou (simplified Chinese: 碉楼; traditional Chinese: 碉樓) are fortified multi-storey watchtowers in rural villages, generally made of reinforced concrete. These towers are located mainly in the Kaiping (开平) county of Jiangmen prefecture in Guangdong province, China. In 2007, UNESCO designated the Kaiping Diaolou and Villages (Chinese: 开平碉楼与村落) a World Heritage Site, which covers four separate Kaiping village areas: Sanmenli (三门里), Zilicun (自力村), Jinjiangli (锦江里), and the Majianglong village cluster (马降龙村落群). These areas demonstrate a unique fusion of 19th and 20th-century Chinese and Western architectural styles. History Diaolou structures were built from the time of the Ming Dynasty to the early 20th century, reaching a peak during the Warlord Era in the 1920s and 1930s, with the financial aid of overseas Chinese, when there were more than three thousand of these structures. Today, approximately 1,800 diaolou remain standing, mostly abandoned, in the village countryside of Kaiping, and approximately 500 in neighboring Taishan. They can also occasionally be found in several other areas of Guangdong, such as Shenzhen and Dongguan. The earliest standing diaolou in Kaiping is Yinglong Lou (迎龙楼) in the village of Sanmenli (Chikan township), built by the Guan lineage during the reign of the Jiajing Emperor of the Ming dynasty (1522–1566). It was a massive three-storey rectangular fortress with one-meter thick walls, bearing little resemblance to the high tower diaolous built four centuries later. Yinglong Lou was renovated in 1919 and is 11.4 meters high. In the late 19th century and early 20th century, because of poverty and social instability, Kaiping was a region of major emigration abroad, one of the "pre-eminent sending areas" of overseas Chinese. Diaolous built during the chaotic early 20th century were most numerous around the centers of emigration. Monies from emigrants wanting to ensure the security of their families, villages, or clan lineages were used to fund the diaolou. Although the diaolous were built mainly as protection against forays by bandits, many of them also served as living quarters. Some of them were built by a single family, some by several families together or by entire village communities. Kaiping also became a melting pot of ideas and trends brought back by overseas Chinese.
As a result, the villagers built their diaolou to incorporate architectural features from China and from the West.It wasn't until after 1949 when an administrative system that extended down to the small villages was created that the diaolou lost their defensive purpose and were then abandoned or converted. Still, they stand as a tribute to overseas Chinese culture and the perseverance of the peasants of Kaiping.In 2007, UNESCO named the Kaiping Diaolou and Villages (开平碉楼与村落) a World Heritage Site. UNESCO wrote, "...the Diaolou ... display a complex and flamboyant fusion of Chinese and Western structural and decorative forms. They reflect the significant role of émigré Kaiping people in the development of several countries in South Asia, Australasia, and North America, during the late 19th and early 20th centuries, and the close links between overseas Kaiping and their ancestral homes. The property inscribed here consists of four groups of Diaolou, totaling some 1,800 tower houses in their village settings." The four restored groups of Kaiping diaolou are in: Zilicun village (自力村) of Tangkou township (塘口镇), Sanmenli village (三门里) of Chikan township (赤坎镇), Majianglong cluster (马降龙) of Baihe township (百合镇), and Jinjiangli village (锦江里) of Xiangang township (蚬冈镇). The Kaiping diaolou was the location for parts of the filming of 2010 movie Let the Bullets Fly (让子弹飞). Examples Yinglong Lou (迎龙楼), located in the village of Sanmenli (Chikan township), was built by the Guan (关族) lineage during the Jiajing era of the Ming dynasty (1522–1566). As the oldest preserved diaolou in Kaiping, it retains the primitive model of a watchtower with traditional square structure and is not influenced by western architectural styles. Jinjiangli Diaolou Cluster (锦江里碉楼群), situated behind Jinjiangli Village (Xiangang Township) of the Huang (黄) family, includes three exquisite diaolous: Ruishi Lou, Shengfeng Lou, and Jinjiang Lou. Ruishi Diaolou, constructed in 1921, has nine floors and is the tallest diaolou in Kaiping. It features a Byzantine style roof and a Roman dome. Majianglong Diaolou cluster (马降龙碉楼群) is spread across five villages (Baihe township) in a bamboo forest: Yong'an and Nan'an Villages of the Huang (黄) family; Hedong, Qinglin, and Longjiang Villages of the Guan (关) family. Tianlu Lou (Tower of Heavenly Success), located in Yong'an Village, was built in 1922 and is seven storey tall plus a roof top floor. Zilicun Diaolou Cluster (自力村碉楼群), located in Zilicun Village (Tangkou township), includes nine diaolous, the largest number among the four Kaiping villages designated by UNESCO. They feature the fusion of Chinese and various Western architectural styles and rise up surrealistically over the rice paddy fields. Fangshi Denglou – Built in 1920 after contributions from villagers, this denglou is five stories high. It is referred to as the "Light Tower" because of an enormous searchlight with a brightness much like that of a lighthouse. Li Garden, in Beiyi Xiang, was constructed in 1936 by Mr. Xie Weili, a Chinese emigrant to the United States. Bianchouzhu Lou (The Leaning Tower), located in Nanxing Village (南兴村) in Xiangang township, was constructed in 1903. It has seven floors and overlooks a pond. Gallery See also Cantonese architecture Chikan, Kaiping Kaiping Himalayan Towers Passage 10: Huaxia Art Centre Huaxia Art Centre is a facility for art and culture located on the outskirts of the Overseas Chinese Town in the Nanshan District, Shenzhen City, Guangdong Province, China. 
The 13,500-square-metre (145,000 sq ft) centre was completed in 1990 and opened in 1991. It has since hosted a variety of large national and international conferences, exhibitions, and artistic and cultural events. From February to June 1997, it hosted the Provisional Legislative Council of Hong Kong. The centre underwent renovations from May 2004 to March 2005 to replace seats and add film studios.
Are both Kaiping and Pingxiang, Guangxi located in Guangdong Province?
no
Passage 1: Donnie Elbert Donnie Elbert (May 25, 1936 – January 26, 1989) was an American soul singer and songwriter, who had a prolific career from the mid-1950s to the late 1970s. His U.S. hits included "Where Did Our Love Go?" (1971), and his reputation as a Northern soul artist in the UK was secured by "A Little Piece of Leather", a performance highlighting his powerful falsetto voice. Career Elbert was born in New Orleans, Louisiana, but when aged three his family relocated to Buffalo, New York. He learned to play guitar and piano as a child, and in 1955 formed a doo-wop group, the Vibraharps, with friend Danny Cannon. Elbert acted as the group's guitarist, songwriter, arranger, and background vocalist, making his recording debut on their single "Walk Beside Me". He left the group in 1957 for a solo career, and recorded a demonstration record that earned him a recording contract with the King label's DeLuxe subsidiary. His solo debut "What Can I Do?" reached #12 in the U.S. R&B chart, and he followed it up with the less successful "Believe It or Not" and "Have I Sinned?", which became a regional hit in Pittsburgh.He continued to release singles on DeLuxe, but with little commercial success, and also played New York's Apollo Theater and toured the Chitlin' Circuit of African-American owned nightclubs. After completing an album, The Sensational Donnie Elbert Sings, he left DeLuxe in 1959, joining first Red Top Records, where in 1960 he recorded "Someday (You'll Want Me to Want You)", and then Vee-Jay Records, where he had another regional hit with "Will You Ever Be Mine?", which reportedly sold 250,000 copies in the Philadelphia area but failed to take off nationwide. His career was also interrupted by a spell in the US Army, from which he was discharged in 1961. He then recorded singles for several labels, including Parkway, Cub and Checker, but with little success. However, although the 1965 Gateway label release of "A Little Piece of Leather" failed to chart in the US, the record became a #27 pop hit when released on the London label in the UK several years later in 1972, and remains a Northern soul favorite.Elbert relocated to the UK in 1966, where he married. There, he recorded "In Between The Heartaches" for the Polydor label in 1968, a cover version of the Supremes' hit "Where Did Our Love Go?" and an album of Otis Redding cover versions, Tribute To A King. His 1969 Deram release "Without You" had a rocksteady rhythm, and went to the top of the Jamaican charts. He returned to the US the same year and had his first US chart hit in over a decade with the Rare Bullet release, "Can't Get Over Losing You", which reached #26 on the Billboard R&B chart. The track and its b-side, "Got To Get Myself Together", both written by Elbert, were released several times on different labels in subsequent years. After the success of that record, Elbert moved labels for a re-make of the Supremes' 1964 hit, "Where Did Our Love Go?" on All Platinum. It became his biggest hit, reaching #15 on the Billboard pop chart, #6 on the R&B chart, and (in 1972) #8 in the UK. Its follow-up, "Sweet Baby" reached #30 on the R&B chart in early 1972. Elbert then signed with Avco-Embassy, where he entered the recording studio with the successful production team of Hugo & Luigi. His cover of the Four Tops' "I Can't Help Myself" reached #14 on the Billboard R&B chart, but climbed as high as #2 on the alternative Cashbox R&B chart. 
Elbert baulked at the label's insistence that he record material associated with Motown and departed with only a few tracks left to record for an album. Even so, the album was released after Avco sold it on to a budget label, Trip. He returned to All Platinum and had a run of minor R&B hits, but left after a disagreement over the claimed authorship of Shirley & Company's R&B chart-topper "Shame Shame Shame", which was credited to label owner Sylvia Robinson. Elbert was also involved in a copyright wrangle over Darrell Banks' major R&B and pop hit in 1966, "Open The Door To Your Heart". He had originally written the song as "Baby Walk Right In" (still its alternative legal title) and given it to Banks, but received no writing credit on the original record. Eventually, the matter was resolved by BMI with a disgruntled Elbert awarded joint authorship with Banks. "Open The Door" has since been given award-winning status by BMI and is one of over 100 songs written or co-written by Elbert. For 1975's "You Keep Me Crying (With Your Lying)", Elbert formed his own label and "I Got to Get Myself Together", appeared on an imprint bearing his surname, but it was among his final recordings.By the mid-1980s, Elbert had retired from performing and became director of A&R for Polygram's Canadian division. He suffered a massive stroke and died in 1989, at the age of 52. Discography Chart singles Albums The Sensational Donnie Elbert Sings (King, 1959) Tribute to a King (1968) Where Did Our Love Go? (All Platinum, 1971) U.S. #153, R&B #45 Have I Sinned? (Deluxe, 1971) Stop in the Name of Love (Trip, 1972) A Little Bit of Leather (1972) Roots of Donnie Elbert (Ember, 1973) Dancin' the Night Away (All Platinum, 1977) See also List of disco artists (A-E) Passage 2: Benny Rubinstein Benny Rubinstein (בני רובינשטיין) is an Israeli former footballer and current real estate developer. He played soccer for Maccabi Netanya and Hapoel Netanya. At the 1969 Maccabiah Games, Rubinstein played soccer for Israel, winning a gold medal. Biography Rubinstein was born in Netanya, Israel. His wife is Sarah Rubinstein. Benny's son, Aviram also played football for Maccabi Netanya.He played soccer for Maccabi Netanya and Hapoel Netanya. At the 1969 Maccabiah Games, Rubinstein played soccer for Israel, winning a gold medal.Rubinstein then worked as a real estate agent, and now works in real estate development. Honours Israeli Premier League (1): 1970-71 Passage 3: Nancy Baron Nancy Baron is an American rock singer who was active in New York City in the early 1960s, known for the singles "Where Did My Jimmy Go?" and "I've Got A Feeling". Early life Born into a family of singers and writers, Baron was introduced to many musical genres by her family at an early age. Noting her singing talents, her parents brought their young child to auditions for musical theater productions in New York City. The singer joined Glee clubs at school and formed her own female singing groups at school. At the age of 11, she heard her first "Rock and Roll" song. This affected her taste in music and desire to emulate the style; it was the first time she heard a Rock group with a female lead singer. This was significant since she realized that she could be a lead singer. Recording career At the age of 15, her parents sent her for vocal coaching in Manhattan, N.Y. After a while her coach sent her to record a demonstration record in a sound studio near Broadway. 
Upon hearing her sing, the sound engineer contacted his friend who was a producer of a small record company in N.Y.C.; he was impressed by her voice and immediately signed her to a contract. The singer's mother co-signed the document since Baron was a fifteen-year-old minor at the time.Baron became one of the many girl group/girl sound singers of the early 1960s. Baron was not a member of a group; her producers would hire "pay for hire" backup groups for her recordings. This "sound" as it is referred to had much to do with Phil Spector, one of its major creators; Spector produced recordings of this genre prolifically. The groups were composed of young adult or teenage girls, each with a lead singer and any number of back up singers.At the time, the troubled label (a small N.Y.C. record company owned by Wally Zober) could not promote Baron's "I've Got A Feeling"/"Oh Yeah" 45 vinyl and so she eventually signed a contract with Jerry Goldstein producer of FGG productions, also located in Manhattan. "Where Did My Jimmy Go"/"Tra la la, I Love You" was the result (Diamond). Later life Baron left the music industry at the age of 19, choosing to enter higher education due to changes in the music industry of those days; she eventually received an advanced degree. Baron's "I've Got a Feeling" was covered by The Secret Sisters on their 2010 self-titled album as well as being released as a single. AllMusic describes Baron's song as "an early-'60s pop/rock obscurity". Passage 4: Jack Carroll (hurler) Jack Carroll (1921–1998) is an Irish hurler who played as a goalkeeper for the Offaly senior hurling team. Carroll made his first appearance for the team during the 1943 championship and was a regular member of the starting fifteen until his retirement after the 1953 championship. During that time he enjoyed little success as Offaly were regarded as one of the minnows of provincial hurling. At club level Carroll was a five-time county club championship medalist with Coolderry. Carroll's father-in-law, "Red" Jack Teehan, his son, Pat Carroll, and his grandson, Brian Carroll, also played hurling with Offaly. Passage 5: Andrew Allen (singer) Andrew Allen (born 6 May 1981) is a Canadian singer-songwriter from Vernon, British Columbia. He is signed to Sony/ATV and has released five top ten singles, and written and recorded many others, including Where Did We Go? with Carly Rae Jepsen. He also records covers and posts them on YouTube. Background Raised in British Columbia's Okanagan Valley, his acoustic pop/rock music is inspired by artists like Jason Mraz and Jack Johnson. Career Andrew Allen scored his first hit in 2009, when I Wanna Be Your Christmas cracked the Top Ten in his native Canada. He was honored as the feature performer for the Sochi 2014 hand off finale on the internationally broadcast Closing Ceremony of the 2010 Paralympic Winter Games held at Whistler, British Columbia. Allen continued building an international profile in 2010, and released his biggest single Loving You Tonight, which sold more than 100,000 copies worldwide, was featured on the Gold Selling NOW 37, hit #6 on the Canadian charts for 22 weeks in a row and #30 on the US Hot AC charts, and got him a record deal with Epic after spending much of that year on the road. 
Because of the song's attention, Allen had the opportunity to perform with some of the world's biggest artists like Bruno Mars, One Republic, The Barenaked Ladies, Train, Matt Nathanson, Joshua Radin, Andy Grammer, The Script, Nick Carter, Kris Allen, Carly Rae Jepsen and many others. Loving You Tonight was also featured on the soundtrack of Abduction starring Taylor Lautner. Collaborations Andrew Allen is also well known in the songwriting community, and has written songs with artists like Meghan Trainor, Rachel Platten, Cody Simpson, Carly Rae Jepsen, Matt Simons, Conrad Sewell as well as writer/producers like Toby Gad, Ryan Stewart, Eric Rosse, Jason Reeves, John Shanks, Nolan Sipes, Mark Pellizzer (Magic), Brian West and Josh Cumbee. Numerous songs he has been a part of writing have been released by various artists, including Last Chance, which was on the Grammy nominated album Atmosphere by Kaskade feat. DJ Project 46, Ad Occhi Chiusi which was on the Double Platinum release by Italian artist Marco Mengoni and Maybe (which Allen also later released himself) released by teen pop sensation Daniel Skye, as well as many others. Singles I Wanna Be Your Christmas (2009) Loving You Tonight (2010) I Want You (2011) Where Did We Go? (2012) Satellite (2012) Play with Fire (2013) Thinking About You (2014) What You Wanted (2016) Favorite Christmas Song (2017) Maybe (2017) Discography The Living Room Sessions (2008) Andrew Allen EP (2009) The Mix Tape (2012) Are We Cool? (2013) All Hearts Come Home (2014) The Writing Room (2020) 12:34 (2022; pre-released on vinyl in 2021) Songwriting credits Last Chance released by Kaskade featuring Project 46 on his Grammy nominated record Atmosphere. Ad Occhi Chiusi released by Marco Mengoni on his Double Platinum record. Reasons released by Project 46. No Ordinary Angel released by Nick Howard from The Voice Germany. Million Dollars released by Nick Howard from The Voice Germany. Maybe released by Daniel Skye. Passage 6: Helena Carroll Helena Winifred Carroll (13 November 1928 – 31 March 2013) was a veteran film, television and stage actress. Early life Born to clothing designer Helena Reilly and Abbey Theatre playwright Paul Vincent Carroll, she was the youngest of three sisters. Her elder sisters were Theresa Elizabeth Perez (1924–2001), a classically trained musician and the producer/founder of the People's Pops Concerts in Phoenix, Arizona, and journalist Kathleen Moira Carroll (1927–2007).Carroll attended Clerkhill Notre Dame High School, a Roman Catholic convent school in Dumbarton. Stage career Carroll received her acting training at the Central School which later became the Webber Douglas Academy of Dramatic Art London, appearing in three plays in London's West End and a film, Midnight Episode, by age 20. She made her Broadway debut in Separate Tables by Terence Rattigan. She moved to the U.S. during the 1950s, touring and performing on Broadway and co-founded, with Dermot McNamara, The Irish Players, a repertory theater company in Manhattan.Helena split her stage work between Dublin, London and New York, appearing on Broadway in, among other productions the original production of Oliver! as Mrs. Sowerberry, as well as Pickwick, Design for Living, Waiting in the Wings, and the Elizabeth Taylor-Richard Burton revival of Private Lives (New York and Los Angeles). Her last stage performance was in 2007 at the age of 78. 
Film and television Carroll played the leading role of Nora, in a television production of her father's play, The White Steed (1959 Play of the Week Series), directed by Joe Gisterak. Gisterak directed a 1980 commissioned opera of her father's play, Beauty is Fled, as part of the "Children's Opera Series", which her sister, Theresa Perez founded. The opera was performed at the Phoenix Symphony Hall. Prompted by producer Al Simon and casting director Caro Jones, Carroll moved to Los Angeles in the late 1960s and appeared in numerous films and television programs, including the lively Aunt Kate in John Huston's Academy Award-nominated film The Dead, based on the short story by James Joyce. Other works in Hollywood included The Friends of Eddie Coyle starring Robert Mitchum, The Jerk, directed by Carl Reiner and starring Steve Martin, The Mambo Kings, the Warren Beatty remake of Love Affair, the 1979 NBC mini-series Backstairs at the White House, and such television programs as Kojak, General Hospital, The Edge of Night, Loving Couples, Laverne and Shirley, Murder She Wrote, and Married... With Children. Death Carroll resided in Los Angeles, and died in Marina del Rey, California from heart failure on 31 March 2013 at the age of 84. She is survived by a half brother, Brian Carroll; a niece, Helena Perez Reilly; and a great-nephew, Paul Vincent Reilly. Filmography Passage 7: Robert Paul Smith Robert Paul Smith (April 16, 1915 – January 30, 1977) was an American author, most famous for his classic evocation of childhood, Where Did You Go? Out. What Did You Do? Nothing. Biography Robert Paul Smith was born in Brooklyn, grew up in Mount Vernon, NY, and graduated from Columbia College in 1936. He worked as a writer for CBS Radio and wrote four novels: So It Doesn't Whistle (1946) (1941, according to Avon Publishing Co., Inc., reprint edition ... Plus Blood in Their Veins copyright 1952); The Journey, (1943); Because of My Love (1946); The Time and the Place (1951). The Tender Trap, a play by Smith and Dobie Gillis creator Max Shulman, opened in 1954 with Robert Preston in the leading role. It was later made into a movie starring Frank Sinatra and Debbie Reynolds. A classic example of the "battle-of-the-sexes" comedy, it revolves around the mutual envy of a bachelor living in New York City and a settled family man living in the New York suburbs. Where Did You Go? Out. What Did You Do? Nothing is a nostalgic evocation of the inner life of childhood. It advocates the value of privacy to children; the importance of unstructured time; the joys of boredom; and the virtues of freedom from adult supervision. He opens by saying "The thing is, I don't understand what kids do with themselves any more." He contrasts the overstructured, overscheduled, oversupervised suburban life of the child in the suburban 1950's with reminiscences of his own childhood. He concludes "I guess what I am saying is that people who don't have nightmares don't have dreams. If you will excuse me, I have an appointment with myself to sit on the front steps and watch some grass growing." Translations from the English (1958) collects a series of articles originally published in Good Housekeeping magazine. The first, "Translations from the Children," may be the earliest known example of the genre of humor that consists of a series of translations from what is said (e.g. "I don't know why. He just hit me") into what is meant (e.g. 
"He hit his brother.") How to Do Nothing With Nobody All Alone By Yourself (1958) is a how-to book, illustrated by Robert Paul Smith's wife Elinor Goulding Smith. It gives step-by-step directions on how to: play mumbly-peg; build a spool tank; make polly-noses; construct an indoor boomerang, etc. It was republished in 2010 by Tin House Books. List of works Essays and humor Where Did You Go? Out. What Did You Do? Nothing (1957)Translations from the English (1958) Crank: A Book of Lamentations, Exhortations, Mixed Memories and Desires, All Hard Or Chewy Centers, No Creams(1962)How to Grow Up in One Piece (1963)Got to Stop Draggin’ that Little Red Wagon Around (1969)Robert Paul Smith’s Lost & Found (1973) For children Jack Mack, illus. Erik Blegvad (1960)When I Am Big, illus. Lillian Hoban (1965)Nothingatall, Nothingatall, Nothingatall, illus. Allan E. Cober (1965)How To Do Nothing With No One All Alone By Yourself, illus Elinor Goulding Smith (1958) Republished by Tin House Books (2010) Novels So It Doesn't Whistle (1941) The Journey (1943) Because of My Love (1946)The Time and the Place (1952)Where He Went: Three Novels (1958) Theatre The Tender Trap, by Max Shulman and Robert Paul Smith (first Broadway performance, 1954; Random House edition, 1955) Verse The Man with the Gold-headed Cane (1943)…and Another Thing (1959) External links An Interview, by Edward R Murrow on YouTube Passage 8: Joseph J. Sullivan (vaudeville) Joseph J. Sullivan was a blackface comedian and acrobat in New York. He composed the song Where Did You Get That Hat? and first performed it in 1888. It was a great success and he performed it many times thereafter. Passage 9: Paul Vincent Carroll Paul Vincent Carroll (10 July 1900 – 20 October 1968) was an Irish dramatist and writer of movie scenarios and television scripts. Carroll was born in Blackrock, County Louth, Ireland and trained as a teacher at St Patrick's College, Dublin and settled in Glasgow in 1921 as a teacher. Several of his plays were produced by the Abbey Theatre in Dublin. He co-founded, with Grace Ballantine and Molly Urquhart, the Curtain Theatre Company in Glasgow. Personal life Carroll and his wife, clothing designer Helena Reilly, had three daughters; the youngest was actress Helena Carroll (1928–2013). He also had a son, Brian Francis, born in 1945.Paul Vincent Carroll died at age 68 in Bromley, Kent England..He died in his sleep from heart failure.He was a close friend of Patrick Kavanagh's in the 1920s. List of works The Watched Pot (unpublished) The Things That are Caesar's (London, 1934) Shadow and Substance (1937, won the Casement Award and the New York Drama Critics' Circle Award) The White Steed (1939, won Drama Critics’ Circle Award) The Strings Are False (1942, published as The Strings My Lord Are False, 1944) Coggerers (1944, later renamed The Conspirators) The Old Foolishness (1944) The Wise Have Not Spoken (1947) Saints and Sinners 1949 She Went by Gently (1953, *Irish Writing* magazine. Republished in 1955 in 44 Irish Short Stories edited by Devin A. Garrity) Passage 10: Yaya Soumahoro Yaya Alfa Soumahoro (born 28 September 1989) is an Ivorian former professional footballer who plays as an attacking midfielder. Having begun his career with Séwé Sports in his native country, he joined Thai club Muangthong United in 2008. His good performances earned him a move to K.A.A. Gent in 2010. He spent five and a half seasons with Gent but was plagued by recurring injuries throughout his time there. 
Following a half-season loan to Sint-Truidense V.V., he returned to Muangthong United, where he did not feature. In 2018, he joined the Egyptian side Wadi Degla SC. Early life Soumahoro grew up in the Ivorian city of Abidjan. He learned to play football in the streets and decided to play for Séwé Sports. Soumahoro lost both parents at an early age and was taken care of by a foster family. Club career Muangthong United In 2008 Soumahoro moved to Thai Premier League side Muangthong United from Séwé Sports. He became a standout player for the team, scoring many goals and charming the supporters with his numerous dribbles. He scored 32 goals in 72 games and helped the club win the Thai Division 1 League in 2008 and the Thai Premier League in 2009. Gent On 1 July 2010, Soumahoro joined Belgian club K.A.A. Gent on a three-year contract. On 22 August, he impressed in a 3–1 league win against Charleroi, scoring and assisting a goal each while also winning a penalty which Shlomi Arbeitman failed to convert. Four days later, he scored a goal to put Gent level on aggregate in a UEFA Europa League qualifying match against Feyenoord. His side went on to win 2–0 and qualify for the UEFA Europa League. In September 2010, Soumahoro sustained a hamstring injury in a league match against Zulte Waregem and was substituted off after 73 minutes. It was announced he would be out of action for four weeks. In October 2010, he signed a one-year contract extension, tying him to the club until 2014. In April 2011, he received a three-match suspension. In March 2012, it was announced Soumahoro would need to undergo surgery, likely ruling him out for the rest of the 2012–13 season. In October 2013, he signed a two-year contract extension with Gent, keeping him at the club until 2016. On 20 September 2015, Soumahoro made his first starting appearance after an injury layoff in a league match against Standard Liège. He had to leave the pitch after twisting his knee. With his contract set to expire at the end of the 2015–16 season, Gent were looking to transfer Soumahoro. He did not take part in the club's winter training camp and instead trained with the reserves while awaiting contract offers from other clubs. On 8 January 2016, Soumahoro rejected a move involving a 2.5-year deal to Cypriot club Anorthosis Famagusta. On 12 January, he joined Gent's league rivals Sint-Truidense V.V. on loan until the end of the season. After Gent In June 2016 Soumahoro returned to former club Muangthong United. Six months later, his contract was terminated after he had not made any appearances due to injury problems. In July 2018, he trialled with Belgian First Division B side K.S.V. Roeselare. He sustained an injury in a friendly match with Crawley Town and was not signed by Roeselare. In October 2018, Soumahoro joined Egyptian Premier League side Wadi Degla SC as a free agent. Honours Muangthong United Thai Division 1 League: 2008 Thai Premier League: 2009 Gent Belgian Pro League: 2014–15 Belgian Super Cup: 2015
Question: Where did Helena Carroll's father study?
Answer: St Patrick's College
Length: 3,964; dataset: 2wikimqa; context range: 4k
Passage 1: Marie of Évreux Marie d'Évreux (1303 – October 31, 1335) was the eldest child of Louis d'Évreux and his wife Margaret of Artois. She was a member of the House of Capet. She was Duchess of Brabant by her marriage to John III, Duke of Brabant. Her paternal grandmother being Marie of Brabant, she was a great-granddaughter of Henry III, Duke of Brabant and so, her husband's second cousin. Marie was the eldest of five children born to her parents. Marie's younger siblings included: Charles d'Évreux; Lord of Étampes, Philip III of Navarre; husband of Joan II of Navarre, and Jeanne d'Évreux; Queen of France by her marriage to Charles IV of France. Marriage In 1311, Marie married John III, Duke of Brabant as his father's gesture of rapprochement with France. They had six children: Joanna, Duchess of Brabant (1322–1406) Margaret of Brabant (February 9, 1323 – 1368), married at Saint-Quentin on June 6, 1347 Louis II of Flanders Marie of Brabant (1325 – March 1, 1399), Lady of Turnhout, married at Tervuren on July 1, 1347 Reginald III of Guelders John (1327–1335/36) Henri (d. October 29, 1349) Godfrey (d. aft. February 3, 1352)Marie's daughter Joanna was the first woman to be Duchess of Brabant in her own right. Marie died October 31, 1335, aged thirty-one or thirty-two. Genealogy Passage 2: Hannah Arnold Hannah Arnold may refer to: Hannah Arnold (née Waterman) (c.1705–1758), mother of Benedict Arnold Hannah Arnold (beauty queen) (born 1996), Filipino-Australian model and beauty pageant titleholder Passage 3: Beatrice of Luxembourg Beatrice of Luxembourg (Hungarian: Luxemburgi Beatrix; 1305 – 11 November 1319), was by birth member of the House of Luxembourg and by marriage Queen of Hungary. She was the youngest child of Henry VII, Holy Roman Emperor and his wife, Margaret of Brabant. Her two siblings were John of Luxembourg and Marie of Luxembourg, Queen of France. Life At the time of his death (1313), Emperor Henry VII initiated the negotiations for a marriage between Beatrice and Charles, Duke of Calabria, son and heir of King Robert of Naples, and also planned to marry again (his wife was already dead in 1311) with Catherine of Habsburg. Beatrice was called by her father to Italy, where she arrived with her paternal grandmother, Beatrice d'Avesnes. The marriage plans with the Duke of Calabria failed, and the Emperor began negotiations for a marriage with Prince Peter of Sicily, eldest son and heir of King Frederick III; however, the current political conflicts between the Holy Roman Empire and the Kingdom of Sicily soon ended this planned betrothal too. When King Charles I of Hungary (whose first wife Maria of Bytom, had died in 1317) decided to marry again, he sent to the Kingdom of Bohemia two representants, Thomas Szécsényi and Simon Kacsics, in addition to an interpreter, a bourgeois from Szoprońskim called Stephen, in order to find a bride. King John called his two sisters to his court; at that moment, Marie resided in St. Marienthal Abbey and Beatrice remained in Italy. Both princesses arrived to Prague on 20 June 1318, and three days later, the Hungarian envoys met both girls at the monastery of Zbraslav, where the Bohemian king gave them the opportunity to choose between them their future queen. After a calculated assessment of both personal and physical attitudes, they chose Beatrice. Soon after, the formal engagement took place, and the young bride parted with the Hungarian entourage to her new home. 
On the border of the Kingdom of Hungary she was officially welcomed by Charles I's messengers. Beatrice and Charles I married at the Octave of Saint Martin (between 12 and 17 November) and she was crowned Queen of Hungary in the ceremony. Beatrice became pregnant in 1319. In November, she went into labour but died while giving birth. The child was stillborn. She was buried at Nagyvárad Cathedral. Passage 4: Matilda of Brabant, Countess of Artois Matilda of Brabant (14 June 1224 – 29 September 1288) was the eldest daughter of Henry II, Duke of Brabant and his first wife Marie of Hohenstaufen. Marriages and children On 14 June 1237, which was her 13th birthday, Matilda married her first husband Robert I of Artois. Robert was the son of Louis VIII of France and Blanche of Castile. They had: Blanche of Artois (1248 – 2 May 1302). Married first Henry I of Navarre and secondly Edmund Crouchback, 1st Earl of Lancaster. Robert II, Count of Artois (1250 – 11 July 1302 at the Battle of the Golden Spurs).On 8 February 1250, Robert I was killed while participating in the Seventh Crusade. On 16 January 1255, Matilda married her second husband Guy III, Count of Saint-Pol. He was a younger son of Hugh I, Count of Blois and Mary, Countess of Blois. They had: Hugh II, Count of Blois (died 1307), Count of Saint Pol and later Count of Blois Guy IV, Count of Saint-Pol (died 1317), Count of Saint Pol Jacques I of Leuze-Châtillon (died 11 July 1302 at the Battle of the Golden Spurs), first of the lords of Leuze, married Catherine de Condé and had issue; his descendants brought Condé, Carency, etc. into the House of Bourbon. Beatrix (died 1304), married John I of Brienne, Count of Eu Jeanne, married Guillaume III de Chauvigny, Lord of Châteauroux Gertrude, married Florent, Lord of Mechelen (French: Malines). Passage 5: Marie of Brabant, Countess of Savoy Marie of Brabant (1277/80–1338), was a Countess Consort of Savoy by marriage to Amadeus V, Count of Savoy. She was the daughter of John I, Duke of Brabant and Margaret of Flanders. Life She was engaged to Amadeus after the death of her father. The marriage was arranged when Savoy joined Brabant in an alliance with France against England. A Papal dispensation was obtained in October 1297. The wedding took place at the Château de Chambéry in 1298. As countess of Savoy, Marie of Brabant appears to have brought with her a certain cultural influence from Brabant, and brought with her several artisans which influenced the court of Savoy, such as her tailor Colin de Brabant. The marriage resulted in close ties between Savoy and Brabant, and gave Brabant closer access to Italy. Marie appears to have had some influence at court, playing a role as diplomat and political adviser.In 1308, her brother-in-law was elected King in Germany. When her sister and brother-in-law travelled to Italy in 1310, they visited Maria at the court of Savoy in Geneva on their way to Rome. In 1323, she became a widow. Her spouse was succeeded by Maria's stepson. The exact date of her death is unknown. Issue Maria of Savoy Catherine of Savoy, d. 1336, married to Leopold I (duke of Austria and Styria) Anna of Savoy, d. 1359, married to Byzantine Emperor Andronikos III Palaiologos Beatrice of Savoy (1310–1331), married in 1327 to Henry VI, Duke of Carinthia, count of Tirol Passage 6: Hubba bint Hulail Hubba bint Hulail (Arabic: حبة بنت هليل) was the grandmother of Hashim ibn 'Abd Manaf, thus the great-great-great-grandmother of the Islamic prophet Muhammad. 
Biography Hubbah was the daughter of Hulail ibn Hubshiyyah ibn Salul ibn Ka’b ibn Amr al-Khuza’i of Banu Khuza'a who was the trustee and guardian of the Ka‘bah (Arabic: كَـعْـبَـة, 'Cube'). She married Qusai ibn Kilab and after her father died, the keys of the Kaaba were committed to her. Qusai, according to Hulail's will, had the trusteeship of the Kaaba after him. Hubbah never gave up ambitious hopes for the line of her favourite son Abd Manaf. Her two favourite grandsons were the twin sons Amr and Abd Shams, of ‘Ātikah bint Murrah. Hubbah hoped that the opportunities missed by Abd Manaf would be made up for in these grandsons, especially Amr, who seemed much more suitable for the role than any of the sons of Abd al-Dar. He was dear to the ‘ayn (Arabic: عـيـن, eye) of his grandmother Hubbah. Family Qusai ibn Kilab had four sons by Hubbah: Abd-al-Dar ibn Qusai dedicated to his house, Abdu’l Qusayy dedicated to himself, Abd-al-Uzza ibn Qusai to his goddess (Al-‘Uzzá) and Abd Manaf ibn Qusai to the idol revered by Hubbah. They also had two daughters, Takhmur and Barrah. Abd Manaf's real name was 'Mughirah', and he also had the nickname 'al-Qamar' (the Moon) because he was handsome. Hubbah was related to Muhammad in more than one way. Firstly, she was the great-great-grandmother of his father Abdullah. She was also the great-grandmother of Umm Habib and Abdul-Uzza, respectively the maternal grandmother and grandfather of Muhammad's mother Aminah. Family tree * indicates that the marriage order is disputed Note that direct lineage is marked in bold. See also Family tree of Muhammad List of notable Hijazis Passage 7: Margaret of France, Queen of England Margaret of France (c. 1279 – 14 February 1318) was Queen of England as the second wife of King Edward I. She was a daughter of Philip III of France and Maria of Brabant. Childhood Margaret was the daughter of King Philip III of France and his second wife, Maria of Brabant. Margaret was only six years old when her father died. She grew up under guidance of her mother, and also of Queen Joan I of Navarre, the wife of her half-brother, King Philip IV. Marriage negotiations The death of his beloved first wife, Eleanor of Castile, in 1290, left King Edward I of England grief-stricken. He was at the time at war with France and Scotland. He and Eleanor had only one surviving son, Edward, and so the king was anxious to remarry to have more sons. In summer of 1291, Edward betrothed his son to Blanche, half-sister to Margaret and Philip IV, in order to achieve peace with France. However, having been told of Blanche's renowned beauty, Edward decided to have his son's bride for his own and sent emissaries to France. Philip IV agreed to have Blanche marry Edward on the conditions that a truce would be concluded between the two countries, and that Edward would give up the province of Gascony. Edward agreed, and sent his brother Edmund Crouchback, Earl of Lancaster, to fetch the new bride. Edward had been deceived, for Blanche was to be married to Rudolph, the eldest son of King Albert I of Germany. Instead, Philip IV offered her younger sister Margaret to marry Edward (then 55). Upon hearing this, Edward declared war on France, refusing to marry Margaret. After five years, a truce was agreed upon under the influence of Pope Boniface VIII. A series of treaties in the first half of 1299 provided terms for a double marriage: Edward I would marry Margaret and his son would marry Isabella, Philip IV's only surviving daughter. 
Additionally, the English monarchy would regain the key territory of Guyenne and receive £15,000 owed to Margaret as well as the return of Eleanor of Castile's lands in Ponthieu and Montreuil as a dower first for Margaret and then Isabella. Queenship Edward was then 60 years old, at least 40 years older than his bride. The wedding took place at Canterbury on 10 September 1299. Margaret was never crowned due to financial constraints, being the first uncrowned queen since the Conquest. This in no way lessened her dignity as the king's wife, however, for she used the royal title in her letters and documents, and appeared publicly wearing a crown even though she had not received one during a formal rite of investiture.Edward soon returned to the Scottish border to continue his campaigns and left Margaret in London, but she had become pregnant quickly after the wedding. After several months, bored and lonely, the young queen decided to join her husband. Nothing could have pleased the king more, for Margaret's actions reminded him of his first wife Eleanor, who had had two of her sixteen children abroad. In less than a year Margaret gave birth to a son, Thomas, who was named after Thomas Becket, since she had prayed to him during her pregnancy. The next year she gave birth to another son, Edmund. Many who fell under the king's wrath were saved from too stern a punishment by the queen's influence over her husband, and the statement, Pardoned solely on the intercession of our dearest consort, queen Margaret of England, appears. In 1305, the young queen acted as a mediator between her step-son and husband, reconciling the heir apparent to his aging father, and calming her husband's wrath. She and her stepson, who was only two years younger than she, also became fond of each other: he once made her a gift of an expensive ruby and gold ring, and she on one occasion rescued many of the prince's friends from the wrath of the king. Margaret favoured the Franciscan order and was a benefactress of a new foundation at Newgate. She employed the minstrel Guy de Psaltery and both she and her husband liked to play chess. The mismatched couple were blissfully happy. When her sister Blanche died in 1305, Edward ordered full court mourning to please his wife. He had realised the wife he had gained was "a pearl of great price" as Margaret was respected for her beauty, virtue, and piety. The same year Margaret gave birth to a girl, Eleanor, named in honour of Edward's first wife, a choice which surprised many, and showed Margaret's unjealous nature. In 1307, when Edward went on summer campaign to Scotland, Margaret accompanied him. Edward died in Burgh by Sands. Widowhood Margaret never remarried after Edward's death in 1307, despite being only 26 when widowed. She was alleged to have stated that, "when Edward died, all men died for me". Margaret was not pleased when Edward II elevated Piers Gaveston to become Earl of Cornwall upon his father's death, since the title had been meant for one of her own sons. She attended the new king's wedding to her half-niece Isabella, and a silver casket was made with both their arms. After Isabella's coronation, Margaret retired to Marlborough Castle (which was by this time a dower house), but she stayed in touch with the new queen and with her half-brother Philip IV by letter during the confusing times leading up to Gaveston's death in 1312. Margaret, too, was a victim of Gaveston's influence over her stepson. 
Edward II gave several of her dower lands to the favourite, including Berkhamsted Castle. In May 1308, an anonymous informer reported that Margaret had provided £40,000 along with Philip IV to support the English barons against Gaveston. Due to this action, Gaveston was briefly exiled and Margaret remained fairly unmolested by the upstart until his death in June 1312. She was present at the birth of Edward III in November 1312. On 14 February 1318 she died in her castle at Marlborough. Dressed in a Franciscan habit, she was buried at Christ Church Greyfriars in London, a church she had generously endowed. Her tomb was destroyed during the Reformation. Issue In all, Margaret gave birth to three children: Thomas of Brotherton, 1st Earl of Norfolk (1 June 1300 – 4 August 1338) Edmund of Woodstock, 1st Earl of Kent (5 August 1301 – 19 March 1330) Eleanor (4 May 1306 – 1311) Died at Amesbury Abbey, buried at Beaulieu Abbey. Genealogical table Passage 8: Henry III, Duke of Brabant Henry III of Brabant (c. 1230 – February 28, 1261, Leuven) was Duke of Brabant between 1248 and his death. He was the son of Henry II of Brabant and Marie of Hohenstaufen. He was also a trouvère. The disputed territory of Lothier, the former Duchy of Lower Lorraine, was assigned to him by the King Alfonso X of Castile, a claimant to the German throne. Alfonso also appointed him imperial vicar to advance his claims on the Holy Roman Empire. In 1251, he married Adelaide of Burgundy (c. 1233 – October 23, 1273), daughter of Hugh IV, Duke of Burgundy and Yolande de Dreux, by whom he had four children: Henry IV, Duke of Brabant (c. 1251 – aft. 1272) Mentally disabled, and made to abdicate in favor of his brother John on 24 May 1267. John I, Duke of Brabant (1253–1294) Married first to Marguerite of France, daughter of King Louis IX of France (Saint Louis) and his wife Margaret of Provence, and later to Margaret of Flanders, daughter of Guy, Count of Flanders and his first wife Mathilda of Béthune. Godfrey of Brabant, Lord of Aarschot (d. July 11, 1302, Kortrijk), killed at the Battle of the Golden Spurs, married 1277 Jeanne Isabeau de Vierzon (d. aft. 1296) Maria of Brabant (1256, Leuven – January 12, 1321, Murel), married at Vincennes on August 27, 1274 to King Philip III of France.On February 26, 1261, Henry III signed his will, which included a clause threatening to banish Jewish people from Brabant unless they ceased the practice of usury, albeit only after his death. He died two days later. His wife Adelaide, acting as regent since Henry IV was incapable of ruling, never enforced this policy laid out in the will, and the Jews were able to stay. See also Dukes of Brabant family tree Passage 9: Marie of Brabant, Queen of France Marie of Brabant (13 May 1254 – 12 January 1322) was Queen of France from 1274 until 1285 as the second wife of King Philip III. Born in Leuven, Brabant, she was a daughter of Henry III, Duke of Brabant, and Adelaide of Burgundy. Queen Marie married the widowed Philip III of France on 21 August 1274. His first wife, Isabella of Aragon, had already given birth to three surviving sons: Louis, Philip and Charles. Philip was under the strong influence of his mother, Margaret of Provence, and his minion, surgeon and chamberlain (Chambellan) Pierre de la Broce. Not being French, Marie stood out at the French court. In 1276, Marie's stepson Louis died under suspicious circumstances. Marie was suspected of ordering him to be poisoned. 
La Broce, who was also suspected, was imprisoned and later executed for the murder. Queen dowager After the death of Philip III in 1285, Marie lost some of her political influence, and dedicated her life to their three children: Louis (May 1276 – 19 May 1319), Blanche (1278 – 19 March 1305) and Margaret (died in 1318). Her stepson Philip IV was crowned king of France on 6 January 1286 in Reims. Together with Joan I of Navarre and Blanche of Artois, she negotiated peace in 1294 between England and France with Edmund Crouchback, the younger brother of Edward I of England. Marie lived through Philip IV's reign and she outlived her children. She died in 1322, aged 67, in the monastery at Les Mureaux, near Meulan, to which she had withdrawn in 1316. Marie was not buried in the royal necropolis of the Basilica of Saint-Denis, but in the Cordeliers Convent, in Paris. Destroyed in a fire in 1580, the church was rebuilt in the following years. See also Marie of Brabant (disambiguation) Notes Sources Bradbury, Jim (2007). The Capetians, Kings of France 987–1328. Hambledon Continuum. Dunbabin, Jean (2011). The French in the Kingdom of Sicily, 1266–1305. Cambridge University Press. Gaude-Ferragu, Murielle (2016). Queenship in Medieval France, 1300-1500. Palgrave Macmillan. Jordan, William Chester (2009). A Tale of Two Monasteries: Westminster and Saint-Denis in the Thirteenth Century. Princeton University Press. Morris, Marc (2008). Edward I and the Forging of Britain. Windmill Books. Stanton, Anne Rudloff (2001). The Queen Mary Psalter: A Study of Affect and Audience. Vol. 91 Part 6. American Philosophical Society. Viard, Jules Marie Édouard (1930). Grandes Chroniques de France. Librairie Ancienne Honoré Champion. Passage 10: Marie of Luxembourg, Queen of France Marie of Luxembourg (1304 – 26 March 1324) was Queen of France and Navarre as the second wife of King Charles IV and I. She was the daughter of Henry VII, Holy Roman Emperor and Margaret of Brabant. Her two siblings were John of Luxembourg and Beatrice of Luxembourg, Queen of Hungary. Life Marie was betrothed in 1308 to Louis of Bavaria, son and heir to Rudolf I, Duke of Bavaria. The engagement was agreed on soon after Marie's father Henry became King of the Romans; Rudolf had been a supporter of her father during the struggle for power. It ended due to the death of Louis around 1311. During the same year, Marie's mother Queen Margaret died whilst travelling with Henry in Genoa. On 21 September 1322, in either Paris or Provins, Marie married Charles IV of France following the annulment of his first marriage to the adulterous Blanche of Burgundy. Blanche had given birth to two children, Philip and Joan, but both of them died young and Charles needed a son and heir to carry on the House of Capet. On 15 May 1323 Marie was consecrated Queen of France at Sainte-Chapelle by Guillaume de Melum, Archbishop of Sens. In the same year she became pregnant but she later miscarried a girl. Whilst pregnant again in March 1324, Marie was travelling to Avignon with King Charles to visit the pope when she fell out of the bottom of the coach. As a result, she went into labour and her child, a boy (Louis), was born prematurely and died several hours later; Queen Marie herself died on 26 March 1324 and was buried at Montargis in the Dominican church. Following her death Charles married Jeanne d'Évreux, but failed to father a son, so the direct House of Capet was succeeded by its branch, the House of Valois. Ancestors
Question: Who is the paternal grandmother of Marie of Brabant, Queen of France?
Answer: Marie of Hohenstaufen
Length: 3,596; dataset: 2wikimqa; context range: 4k
Passage 1: Power Le Poer Trench Power Le Poer Trench (1770–1839) was an Anglican clergyman who served in the Church of Ireland as firstly Bishop of Waterford and Lismore, then Bishop of Elphin and finally Archbishop of Tuam. Life He was the second surviving son of William Trench, 1st Earl of Clancarty; among his nine brothers and nine sisters were his elder brother Richard Trench, 2nd Earl of Clancarty, and a younger sister, Lady Emily La Touche. Born in Sackville Street, Dublin, on 10 June 1770, he was first educated at a preparatory school at Putney, whence he went for a short time to Harrow, and afterwards at the academy of Mr. Ralph at Castlebar, in the immediate neighbourhood of his home. Trench matriculated at Trinity College, Dublin, on 2 July 1787, where his tutor was Matthew Young, afterwards bishop of Clonfert and Kilmacduagh, and graduated B.A. on 13 July 1791. Later in the same year (27 November) Trench was ordained deacon, and, having received priest's orders on 24 June 1792, he was in the same month inducted into the benefice of Creagh, in which his father's residence and the great fair town of Ballinasloe were situated. In the following year (5 November 1793) he was presented to the benefice of Rawdenstown, County Meath. He obtained a faculty to hold the two cures together, and combined with their clerical duties the business of agent on his father's Galway estate. Trench was a man of great bodily strength and a fine horseman, and he retained a fondness for field sports to the end of his days. During the Irish rebellion of 1798 he acted as a captain in the local yeomanry raised by his father to resist the French invading army under Humbert. In 1802 Trench was appointed to the see of Waterford, in succession to Richard Marlay, and was consecrated on 21 November 1802. In 1810 he was translated to the bishopric of Elphin, and, on the death of Archbishop Beresford, was on 4 October 1819 advanced to the archiepiscopal see of Tuam. In May 1834, on the death of James Verschoyle, the united sees of Killala and Achonry were, under the provisions of the Irish Church Temporalities Act, added to the charge of Trench. By the same act, the archdiocese of Tuam was reduced, on Trench's death, to an ordinary bishopric. In the history of the Irish church Trench chiefly deserves to be remembered for his activity in promoting the remarkable evangelical movement in the west of Ireland which was known in Connaught as the Second Reformation, and which, chiefly through the agency of the Irish Society, made a vigorous effort to win converts to Protestantism. From 1818 to his death Trench was president of the Irish Society; and it is evidence of his large-heartedness that the religious controversies in which his leadership of this movement involved him in no way impaired the remarkable personal popularity which he enjoyed among his Roman Catholic neighbours. Holding strong views as to the paramount importance of the 'open bible,' Trench was a strenuous opponent of the mixed system of national education founded by Mr. Stanley (Lord Derby), and was one of the founders of the Church Education Society. Trench was a man of strong and masterful character, and during the twenty years of his archiepiscopate was one of the foremost figures in the Ireland of his day. He died on 26 March 1839. Trench married, on 29 January 1795, his cousin Anne, daughter of Walter Taylor of Castle Taylor, co. Galway. By her, he had two sons, William and Power, and six daughters.
Elizabeth, his third daughter, married Captain Henry Gascoyne in 1830. Another daughter, Anne, married James O'Hara, MP for Galway, in 1823. Passage 2: Richard Trench, 2nd Earl of Clancarty Richard Le Poer Trench, 2nd Earl of Clancarty, 1st Marquess of Heusden (19 May 1767 – 24 November 1837), styled The Honourable from 1797 to 1803 and then Viscount Dunlo to 1805, was an Anglo-Irish peer, a nobleman in the Dutch nobility, and a diplomat. He was an Irish, and later British, Member of Parliament and a supporter of Pitt. Additionally he was appointed Postmaster General of Ireland, and later, of the United Kingdom. Background and education Clancarty was the son of William Trench, 1st Earl of Clancarty, and Anne, daughter of Charles Gardiner. His seat was Garbally Court in Ballinasloe, East County Galway, where he was associated with the Great October Fair. His brother was Power Le Poer Trench (1770–1839), archbishop of Tuam. He was educated at Kimbolton School and St John's College, Cambridge. Political career Trench represented Newtown Limavady in the Irish House of Commons from 1796 to 1798. He then sat for County Galway from 1798 until shortly before the Act of Union, when he was replaced by "Humanity Dick" Martin. He was credited with resolving various border disputes in Holland, Germany and Italy at the Congress of Vienna, 1814–1815, and in his role as Ambassador to the Netherlands. For his service as ambassador to The Hague, he was awarded the hereditary title of Marquess of Heusden in the peerage of The Netherlands on 8 July 1815 by William I of the Netherlands, following the defeat of Napoleon in Brabant, in that same province's southern reaches. Trench was elected one of the 28 representative peers of Ireland on 16 December 1808. His seat in the House of Lords became hereditary when he was created Baron Trench (4 August 1815) and Viscount Clancarty (created 8 December 1823), in the Peerage of the United Kingdom, his older peerages being Irish peerages. He was a Commissioner for the Affairs of India and Custos Rotulorum of County Galway. By the same Royal Decree (K.B.) of 8 July 1815, numbers 13 and 14, that awarded the Marquessate of Heusden, Arthur Wellesley was granted the hereditary Dutch title of Prince of Waterloo, following his recent exploits at Waterloo in the modern-day Kingdom of Belgium. Postmaster General Between 1807 and 1809 Trench was one of the joint Postmasters General of Ireland, and from 1814 to 1816 he was appointed Postmaster General of the United Kingdom, being one of the last joint holders of that office. Family On 6 February 1796 he married Henrietta Margaret Staples, daughter of John Staples and Harriet Conolly. They had the following children: Lady Lucy Le Poer Trench (d. 1839), married Robert Maxwell Lady Louisa Augusta Anne Le Poer Trench (b. 23 December 1796, d. 7 February 1881), married Reverend William Le Poer Trench Lady Harriet Margaret Le Poer Trench (b. 13 October 1799, d. 1885), married Thomas Kavanagh "the MacMurrough", a descendant of Art mac Art MacMurrough-Kavanagh Lady Emily Florinda Le Poer Trench (b. 7 November 1800), married Giovanni Cossiria Lady Frances Power Le Poer Trench (b. 22 January 1802, d. 28 December 1804) William Thomas Le Poer Trench, 3rd Earl of Clancarty (b 21 September 1803, d. 26 April 1872), married Lady Sarah Juliana Butler, daughter of Somerset Richard Butler, 3rd Earl of Carrick Hon. Richard John Le Poer Trench (b. 1805) Commander Hon. Frederick Robert Le Poer Trench (b. 23 July 1808, d.
April 1867), married Catherine Maria Thompson Passage 3: Power Henry Le Poer Trench Power Henry Le Poer Trench (11 May 1841 – 30 April 1899) was a British diplomat. Trench was the son of William Thomas Le Poer Trench, 3rd Earl of Clancarty and Lady Sarah Juliana Butler. Career Trench was Secretary of the British Embassy in Berlin between 1888 and 1893. In Mexico, he was the Envoy Extraordinary and Minister Plenipotentiary between 1893 and 1894. He was the British Minister in Tokyo in 1894–1895. See also List of Ambassadors from the United Kingdom to Japan Anglo-Japanese relations Notes Passage 4: Theodred II (Bishop of Elmham) Theodred II was a medieval Bishop of Elmham. The date of Theodred's consecration is unknown, but the date of his death was sometime between 995 and 997. Passage 5: Nicholas Trench, 9th Earl of Clancarty Nicholas Le Poer Trench, 9th Earl of Clancarty, 8th Marquess of Heusden (born 1 May 1952), is an Anglo-Irish peer, as well as a nobleman in the Dutch nobility. Lord Clancarty serves as an elected Crossbench hereditary peer in the British House of Lords. His earldom is in the Peerage of Ireland. He was educated at Westminster School. He also studied at Ashford Grammar School, Plymouth Polytechnic, the University of Colorado, Denver, USA, and Sheffield University. Family Lord Clancarty was born in Uxbridge, on 1 May 1952, the only son of Power Edward Ford Le Poer Trench, second son of the fifth Earl from his second marriage. He is married to the journalist Victoria Lambert and has one daughter with her. Membership of House of Lords In 1995 he succeeded to the titles on the death of his childless uncle, Brinsley Le Poer Trench, 8th Earl of Clancarty. He took his seat in the House of Lords at this time as Viscount Clancarty, a title in the Peerage of the United Kingdom, because titles in the Peerage of Ireland did not entitle their holders to sit, even before the House of Lords Act 1999 removed the majority of the hereditary peers. Under the terms of that Act, Clancarty lost his automatic right to a seat; he was unsuccessful in the election by the Crossbench hereditary peers of 28 of their number to continue to sit after the Act came into force, finishing 37th in a field of 79 candidates. He was an unsuccessful candidate in four by-elections caused by the deaths of sitting hereditary peers, being runner-up on two occasions. In 2010 he returned to the House after winning the by-election to replace the 4th Viscount Colville of Culross. Besides being a British and an Irish peer, he also belongs to the Dutch nobility as Marquess of Heusden. Besides H.M. King Willem-Alexander of the Netherlands, who is also Marquess of Veere and Vlissingen, Lord Clancarty is the only marquess in Dutch nobility. Career Clancarty is a self-employed artist, freelance writer, and translator. Passage 6: Robert Le Poer Trench Robert Le Poer Trench (c.1811 – 8 February 1895) was a judge and an Attorney-General of Victoria. Trench was the third son of Ven. Charles Le Poer Trench, D.D., of Ballinasloe, County Galway, Archdeacon of Ardagh, and grandson of the first Earl of Clancarty. He entered as a student of the Middle Temple in May 1839, and was called to the Bar in June 1842. Having emigrated to Victoria, he was clerk of petty sessions at Kilmore, Victoria and afterwards at Ballarat. In 1855 he was admitted to the Victorian Bar, and quickly obtained a large practice, especially in mining cases.
Though he never entered parliament he was Attorney-General in the first Graham Berry Government from August to October 1875, and in Berry's second Administration, from May 1877 to March 1878, when he was appointed a Commissioner of Land Tax, and a County Court Judge in April 1880. Mr. Trench, who was appointed Q.C. in 1878, subsequently retired on a pension. Passage 7: William Le Poer Trench Colonel The Hon. William Le Poer Trench CVO, JP (17 June 1837 – 16 September 1920) was an Anglo-Irish politician and British army officer. He was the third son of William Trench, 3rd Earl of Clancarty and Lady Sarah Juliana Butler. He married Harriet Maria Georgina Martins, daughter of Sir William Martins, on 21 April 1864. He fought in the Second Opium War between 1857 and 1858, commanding a ladder company at the capture of Guangzhou and Nankow, and was mentioned in despatches. He gained the rank of Colonel in the service of the Royal Engineers. Between 1872 and 1874, he was Member of Parliament (MP) for County Galway, having unseated the elected MP, John Philip Nolan, on petition; the case was one of the most controversial Irish cases of its time and permanently damaged the reputation of the judge, William Keogh. He held the office of Justice of the Peace for Westminster, London, Buckinghamshire, and Middlesex. He was made a Commander of the Royal Victorian Order in 1912.He was scandalised by the marriage on 10 July 1889 of his 20-year-old son and heir, William LePoer-Trench, to a London showgirl, Isabel Maud Penrice Bilton, who used the stage name of Belle. As a result, he did all in his power to dissolve the marriage. When this was unsuccessful he stopped his son's allowance, and resorted to selling lands in order to diminish his heir's eventual income, but his daughter-in-law's income from the stage was too great for these expedients to have much impact. Passage 8: William Le Poer Trench (Royal Navy officer) Rear-Admiral The Hon. William Le Poer Trench (4 July 1771 – 14 August 1846) was born in Garbally, Galway, Ireland to William Power Keating Trench, 1st Earl of Clancarty and Anne Gardiner. He acted for a considerable period as the agent of the estates of his father's family in Ireland.He was made a Lieutenant in the Royal Navy in 1793; promoted to the rank of Commander in 1799; to that of Post Captain 1802; and to that of Rear Admiral in 1840.In 1819 he was appointed Secretary to the Board of Customs and Port Duties in Ireland. Family He was married twice, first on 8 March 1800 to Sarah Cuppage, daughter of John Loftus Cuppage. Sarah died in June 1834, and on 1 February 1837 William married a second time to Margaret Downing, daughter of Dawson Downing and Anne Boyd. See also O'Byrne, William Richard (1849). "Trench, William Le Poer" . A Naval Biographical Dictionary . John Murray – via Wikisource. Passage 9: Brinsley Le Poer Trench, 8th Earl of Clancarty William Francis Brinsley Le Poer Trench, 8th Earl of Clancarty, 7th Marquess of Heusden (18 September 1911 – 18 May 1995) was a prominent ufologist. He was an Irish peer, as well as a nobleman in the Dutch nobility. Biography He was the fifth son of William Frederick Le Poer Trench, 5th Earl of Clancarty by Mary Gwatkin Ellis. He had four older half-brothers born to the 5th Earl's first wife, Isabel Maud Penrice Bilton, the actress known as Belle Bilton, who died of cancer in 1906. Brinsley was educated at the Pangbourne Nautical College. 
From 1956 to 1959 Clancarty edited the Flying Saucer Review and founded the International Unidentified Object Observer Corps. He also found employment selling advertising space for a gardening magazine housed opposite Waterloo station. In 1967, he founded Contact International and served as its first president. He also served as vice-president of the British UFO Research Association (BUFORA). Clancarty was an honorary life member of the now defunct Ancient Astronauts Society, which supported the ideas put forward by Erich von Däniken in his 1968 book Chariots of the Gods?. In 1975 he succeeded to the earldom on the death of his half-brother, Grenville Sydney Rocheforte, 7th Earl of Clancarty, giving him a seat in the British Parliament. He used his new position to found a UFO Study Group at the House of Lords, introducing Flying Saucer Review to its library and pushing for the declassification of UFO data. Four years later he organised a celebrated debate in the House of Lords on UFOs which attracted many speeches on both sides of the question. In one debate, Lord Strabolgi, for the Government, declared that there was nothing to convince him that any alien spacecraft had ever visited the Earth. Private life Clancarty first married, in 1940, Diana (1919–1999), daughter of Sir William Younger, Bt. This marriage was dissolved in 1947. He married secondly, in 1961, Mrs Wilma Belknap (née Vermilyea) (1915–1995), and that marriage was dissolved in 1969. His third marriage was in 1974, to Mrs Mildred Allewyn Spong (née Bensusan) (1895–1975). She died in 1975, but Clancarty remarried a fourth time, in 1976, to Mrs May Beasley (née Radonicich) (1904–2003). He lived most of his life in South Kensington and died in Bexhill-on-Sea in 1995, leaving his extensive collection of papers to Contact International. He was succeeded in the earldom by his nephew Nicholas Le Poer Trench (b. 1952). Hollow Earth theory In 1974, Trench published Secret of the Ages: UFOs from Inside the Earth, a book which theorised that the centre of the Earth was hollow, with entrances to its interior located at both the north and south polar areas. The interior, he suggested, consisted of large tunnel systems connecting a large cavern world. Trench also believed that the lost continent of Atlantis actually once existed and that these tunnels were probably constructed all over the world by the Atlanteans, for various purposes. Trench believed that there was no actual North Pole, but instead a large area with a warm sea dipping gradually into the interior of the Earth. He said that humans were 'living on the deck of a ship, unaware of the life going on under our feet'. One argument he put forward for this theory was that whilst the Earth is spherical, it is flattened at the poles. Additionally, he questioned how all icebergs could be composed of frozen fresh water if no rivers were flowing from the inside of the Earth to the outside. He had also suggested that a large proportion of unidentified flying objects (UFOs) emanated from the Earth's interior. These objects were likely to have been created by a group of much more technically advanced beings, similar to humans, but a group that likely possessed extrasensory abilities, as well as the ability to manipulate psychic phenomena. Another argument for the Hollow Earth theory was that, he suggested, everything else, including nebulae, comets and planets, is hollow, and these conditions would certainly prove favourable for a hollow Earth as well.
Whilst Trench had in one of his earlier books disregarded the Hollow Earth theory, he admitted to at the time 'being educated along with millions of other people to believe that the Earth had a liquid molten core'. Other claims According to Trench in his book The Sky People, Adam and Eve, Noah and many of the other characters from the Bible originally lived on Mars. Trench believed that Adam and Eve were experimental creations of extraterrestrials. His claim was that the Biblical description of the Garden of Eden was inconsistent with what was on Earth and as Mars contained canals, that the Garden of Eden must have been located on Mars. He further claimed that the north polar ice cap melted on Mars, and this caused the descendants of Adam and Eve to move to Earth.Trench also claimed to know a former U.S. test pilot who said he was one of six persons present at a meeting between President Eisenhower and a group of aliens, which allegedly took place at Edwards Air Force Base on 4 April 1954. Clancarty reported that the test pilot told him "Five different alien craft landed at the base. Three were saucer-shaped and two were cigar shaped... the aliens looked something like humans, but not exactly."He claimed that he could trace his descent from 63,000 BC, when beings from other planets had landed on Earth in spaceships. Bibliography The Sky People (1960) Men Among Mankind (1962) Forgotten Heritage (1964) The Flying Saucer Story (1966) Operation Earth (1969) The Eternal Subject (1973) Secret of the Ages: UFO's from Inside the Earth (1974). Reptiles from the Internal World (1979) China in the Closet: A Romantic Mystery (1981) Egos and Sub-Egos (1983) UFOs: Just Shiny Birds? with Anna Robb (1984). Passage 10: William Trench, 3rd Earl of Clancarty William Thomas Le Poer Trench, 3rd Earl of Clancarty, 2nd Marquess of Heusden (21 September 1803 – 26 April 1872), styled Viscount Dunlo between 1805 and 1837, was an Irish peer, as well a nobleman in the Dutch nobility. He was educated at St John's College, Cambridge.Trench was born in Castleton, County Kildare, Ireland the son of Richard Trench, 2nd Earl of Clancarty and Henrietta Margaret Staples. On 8 September 1832, he married Lady Sarah Juliana Butler. They had six children. Richard Somerset Le Poer Trench, 4th Earl of Clancarty (13 January 1834 – 29 May 1891) married Lady Adeliza Georgiana Hervey Major Hon. Frederick Le Poer Trench (10 February 1835 – 17 December 1913) married (1) Harriet Mary Trench (2) Catherine Simpson Colonel William Le Poer Trench (17 June 1837 – 16 September 1920) married Harriet Maria Georgina Martins Lady Anne Le Poer Trench (1839 – 12 March 1924) married Frederic Sydney Charles Trench Power Henry Le Poer Trench (11 May 1841 – 30 April 1899) Lady Sarah Emily Grace Le Poer Trench (6 December 1843 – 2 August 1875) married John Melville Hatchell.
When did William Le Poer Trench's father die?
26 April 1872
3,432
2wikimqa
4k
Passage 1: Renaldo Rama Renaldo Rama (born 27 January 1990) is an Albanian footballer who plays as a forward. Club career The central midfielder has previously played for A.O. Kastellas and Olympiacos at youth level and German club TuS Koblenz at senior level, as well as Gramozi Ersekë in Albania. He made his debut on the professional league level in the 2. Bundesliga for TuS Koblenz on 20 March 2009 when he came on as a substitute in the 83rd minute in a game against FC Hansa Rostock. On 3 February 2009, he signed a contract with TuS Koblenz, but after one year, he resigned and left the team. The next season, Rama signed a contract with KS Apolonia for two years. He managed to play in 29 games with 9 goals. In the 2013–2014 season, AEK Athens bought him, using his Greek passport (Renaldo Rama finished high school in Greece). He left the club on 3 July 2014. Rama spent the 2014–15 season at Fostiras in the Greek Football League, where he made seventeen appearances and scored twice for the Greek club. Rama then left to join Albanian Superliga club Kukësi on 4 August 2015, signing a one-year contract with the club. Honours AEK Athens: Football League 2: 1 (2014, 6th Group) Passage 2: Ismail Rama Ismail Rama (born 3 November 1935) is an Albanian shooter who competed at the 1972 Summer Olympic Games in the 50 metre rifle prone, where he finished 22nd. Passage 3: M. S. Sathyu Mysore Shrinivas Sathyu (born 6 July 1930) is a film director, stage designer and art director from India. He is best known for directing Garm Hava (1973), which was based on the partition of India. He was awarded the Padma Shri in 1975. Early and personal life Born into a Kannada Brahmin family, Sathyu grew up in Mysore. He pursued his higher education at Mysore and later Bangalore. In 1952, he quit college while working on his Bachelor of Science degree. Sathyu is married to Shama Zaidi, a north Indian Shia Muslim. They have two daughters. Career He freelanced as an animator in 1952–53. After being unemployed for nearly four years, he got his first salaried job as assistant director to filmmaker Chetan Anand. He worked in theatre as a designer and director, including designing sets and lights for productions of Hindustani Theatre, Okhla Theatre of Habib Tanvir, Kannada Bharati and other groups of Delhi. In films, he has worked as an art director, cameraman, screenwriter, producer and director. His first film as an independent art director was Haqeeqat, a film by Chetan Anand, which won him recognition and the 1965 Filmfare Award for Best Art Direction. His filmography includes over 15 documentaries and 8 feature films in Hindi, Urdu and Kannada. His best-known work, Garm Hava (Scorching Winds, 1973), is one of the last cinema productions featuring 1950s Marxist cultural activists including Balraj Sahni and Kaifi Azmi. Garm Hava won several Indian national awards in 1974, including a National Integration Award. It was screened in the competitive section at Cannes and was also the Indian entry at the Oscars. It won the Filmfare award for best screenplay. M. S. Sathyu is currently associated mainly with television and stage. In 2013, Sathyu featured in the popular Google Reunion ad, where he played the role of Yusuf, an elderly Pakistani man who is reunited with his childhood pre-partition friend from India, Baldev (Vishwa Mohan Badola). The commercial went viral on social media. Sathyu is one of the patrons of the Indian People's Theatre Association (IPTA). 
He directed the musical play Gul E Bakavali, written by Sudheer Attavar, which represented the 8th World Theatre Olympics in 2018. He also directed plays such as 'Dara Shikoh', Amrita, Bakri, Kuri, Akhri Shama and many more. In 2014, his debut film, Garm Hava, was re-released after restoration. Awards 1965 : Filmfare Best Art Direction Award: Haqeeqat (for black-and-white film category) 1974 : Cannes Film Festival: Golden Palm : Garm Hava: Nominated. 1974 : National Film Award: Nargis Dutt Award for Best Feature Film on National Integration: Garam Hawa 1975 : Padma Shri 1981-82 : Karnataka State Film Award for First Best Film for "Bara" 1981-82 : Karnataka State Film Award for Best Director for "Bara" 1982 : Filmfare Award for Best Film – Kannada for "Bara" 1982 : Filmfare Award for Best Director – Kannada for "Bara" 1984 : National Film Award: Nargis Dutt Award for Best Feature Film on National Integration: Sookha 1984 : Filmfare Critics Award for Best Movie Hindi : Sookha 1994 : Sangeet Natak Akademi Award: Stagecraft 2014 : Sangeet Natak Akademi Fellowship : Theatre Production Theatre plays Gul E Bakavali musical play written by Sudheer Attavar Dara Shikoh written by Danish Iqbal Mudrarkshas Aakhri Shama Rashmon Bakri ("Kuri" in Kannada) Girija Ke sapne Mote Ram Ke Sathyagrah Emil's Enemies Amrita Films Feature Films Ek Tha Chotu Ek Tha Motu Garm Hawa (Hot Wind) 1973 Chithegu Chinthe 1978 - Screened at 7th IFFI. Kanneshwara Rama (The Legendary Outlaw) Kahan Kahan Se Guzar Gaya (1981) Bara (Famine), based on a short story by U.R. Anantha Murthy (1982) Sookha Hindi version of the Kannada movie Bara (1983) Ghalige (Kannada) Kotta (1999) Ijjodu (Kannada) 2009 Short films and Documentaries Irshad Black Mountain Ghalib Islam in India Television TV serials Pratidhwani 1985 Choli Daaman 1987–88 Kayar (Coir) 1992 Antim Raja (The Last Raja of Coorg) 1986 Tele-films Aangan Ek Hadsa Char Pehlu Thangam Television and YouTube Advertisements Reunion, an advertisement for Google Search Passage 4: Urata Rama Urata Rama (born 20 December 1986) is a Kosovar sports shooter and physical educator, who belongs to the Jeton Ramaj Shooting Club in Vitina and has participated at the Olympic level since 2003. In 2012, she was one of six athletes nominated by the Olympic Committee of Kosovo, but she was rejected for the 2012 Summer Olympics by the International Olympic Committee, which accepted only judoka Majlinda Kelmendi, and then only as a representative of Albania. Rama, whose cousin Lumturie Rama also shoots competitively, competed at the 2015 European Games in Baku in the ISSF 10 meter air rifle, and went on to compete in the women's 10 metre air rifle event at the 2016 Summer Olympics. Passage 5: Ian Barry (director) Ian Barry is an Australian director of film and TV. Select credits Waiting for Lucas (1973) (short) Stone (1974) (editor only) The Chain Reaction (1980) Whose Baby? (1986) (mini-series) Minnamurra (1989) Bodysurfer (1989) (mini-series) Ring of Scorpio (1990) (mini-series) Crimebroker (1993) Inferno (1998) (TV movie) Miss Lettie and Me (2002) (TV movie) Not Quite Hollywood: The Wild, Untold Story of Ozploitation! (2008) (documentary) The Doctor Blake Mysteries (2013) Passage 6: Kanneshwara Rama Kanneshwara Rama (Kannada: ಕನ್ನೇಶ್ವರ ರಾಮ; English: The Legendary Outlaw) is a 1977 Kannada-language political film directed by M. S. Sathyu. The film features an ensemble cast including Anant Nag, Shabana Azmi, Amol Palekar, B. V. Karanth and Shimoga Venkatesh. The film is based on the novel Kannayya Rama written by S. K. Nadig. 
The film is set in the 1920s, during which a rebellious youth, Kanneshwara Rama, opposes the unjust orders given by the village head and becomes outlawed from the village. The film was produced by the Moola Brothers under the production company Sharadha Movie Productions. The screenplay of the film was also written by S. K. Nadig. The cinematography of the film was done by Ishan Arya and Ashok Gunjal, while the editing was handled by S. Chakravarthy. The music for the film was composed by B. V. Karanth, while the lyrics were written by N. Kulkarni. This film features the debut of Shabana Azmi in Kannada cinema. The film is Sathyu's second feature film after the 1973 film Garm Hava. Kanneshwara Rama premiered at the International Film Festival of India. The film was theatrically released on 30 March 1989 and was a critical and box office success, completing a 100-day run in theatres. It was screened in many national and international film festivals, including the Bengaluru International Film Festival in 2017. The film has drawn comparisons to Garm Hava. Plot Present day The film starts with Kanneshwara Rama, a long-sought-after fugitive who has been caught by the police. He is being paraded through the streets of Shimoga before being taken to the state capital for his execution. On the way, Rama sees many people in the crowd who have figured in his life at one point or another and starts thinking about those events. Flashback Back in his old days, Rama is a hot-headed peasant who fumes at the slightest attempt at intimidation. He despises meekness, and that is one of the reasons for his contempt towards his docile wife. Rama defies the village head, resulting in a midnight scuffle in which he ends up killing him. He is caught and sent to jail. In prison, Rama meets Mahatma Gandhi's followers who are political prisoners. Under cover of a nationalistic disturbance, he escapes from the place and joins a group of bandits. The leader of the group is Junja, who zealously guards his gang's hoard of gold, watched over by Malli, his mistress. Junja grows fond of Rama, something that is resented by some members of the gang, except Chennira, who becomes his ally. Junja is mortally wounded in an encounter with the police and names Rama as his successor. Malli quietly decamps with the hoarded treasure in the dark of night. Rama becomes notorious as an outrageously bold dacoit. He helps the poor, providing a dowry for girls of marriageable age and breaking the hold of feudal landlords in the area. He becomes a hero in the eyes of the people, attaining a status akin to Robin Hood. He raids a landlord's safe and accidentally finds refuge in Malli's house. She is now a high-priced prostitute and they become lovers. However, Rama finds an opportunity to steal her jewels and does not hesitate. Rama's daring exploits, his growing popularity, and his successes begin to worry the government. The tension with the police reaches its peak when he rescues a group of nationalists from the police, takes the policemen captive, and humiliates the British Captain. He is both amused and impressed by Gandhi's policy of non-violence, but what catches his attention is their building of a cause and the symbolic flag, an idea that starts to germinate in his mind. Some members of Rama's gang are disloyal to him. He out-maneuvers them in their break-away attempt to rob an armed treasury and forgives the culprits, against Chennira's advice. 
However, Rama begins to wonder whether any group can be loyal to an individual for long. He feels that the guiding principle should be an idea, symbolized by a flag and a base, both of which are necessary. He frees a village under the bondage to a religious order, adopts it, and places his flag on an old fort that guards it. Rama becomes a legend, carving out an independent principality of his own. Rama becomes a legend in his own lifetime. Ballad singers compose songs praising his courage and the police are afraid of him. The British Government is alarmed. The District Collector sends a large force to capture Rama at any cost. The Police Superintendent first tries to cajole Malli into giving him away but she refuses to do their bidding. He then threatens the people in the village and takes some hostages. The police offensive against Rama is intensified. At an encounter, most of his gang is killed, including the trusted Chennira. Rama runs to his villagers for refuge but they are too scared to help him. Enraged, he sets the village on fire. Even Malli is not able to deter him. The Police Superintendent tries to make Malli help him again. At first, she refuses but when the relatives of the hostages plead with her, she agrees. Present day Rama is now alone and helpless. He abandons his weapons at the altar of a temple and visits Malli at night. A trap is set around her house and as soon as Malli sends a signal, the police surround the area. Malli defends her actions by saying that his vindictiveness drove her to it. He says he had only come to give her his treasures so that they could be given to the villagers as compensation. Malli now regrets her betrayal but it is too late. Cast Soundtrack The music was composed by B. V. Karanth. Passage 7: Manuel García Calderón Manuel García Calderón García Rama (born 28 September 1953) is a Spanish football manager, currently in charge of CD Móstoles B. Managerial career Born in Madrid, García Calderón made his managerial debuts with Real Madrid's youth system. In 1996, he was appointed CD Toledo manager in Segunda División, after previous stints at CD Numancia and CD San Fernando; while in charge he only suffered two defeats, and his side finished 9th. In August 1997, after suffering team relegation with SD Huesca, García Calderón was named Getafe CF manager. He was relieved from his duties in April of the following year, after losing his last three games. García Calderón subsequently managed Algeciras CF, AD Alcorcón and CD Móstoles, all in Segunda División B. On 28 June 2006 he was appointed at the helm of CD Illescas, being sacked on 7 November of the following year.On 18 June 2008 García Calderón returned to his former club Getafe, being appointed manager of the reserves. He was relieved from his duties on 9 January 2009, after achieving five consecutive defeats.In 2014 García Calderón was named manager of the newly formed CD Móstoles B. Passage 8: Valdet Rama Valdet Skënder Rama (born 20 November 1987) is an Albanian professional footballer who plays as a midfielder for German club Wuppertaler SV. He also holds German citizenship. Early life Rama is a Kosovo Albanian and fled to Germany at the age of nine years. There he spent his youth in the Ruhr district and went through the ranks of three local clubs before joining former German champions Rot-Weiss Essen in 2004. Club career Early career Rama made his debut on the professional league level in the 2. 
Bundesliga for FC Ingolstadt 04 on 17 August 2008 when he started a game against Greuther Fürth. He scored a goal on his debut. Hannover 96 After Ingolstadt was relegated at the end of the 2008–09 season, his contract became invalid and he was able to join a new club on a free transfer. On 26 May 2009, he announced his move to Bundesliga side Hannover 96, where he signed a three-year contract. Örebro In February 2011, he signed for Swedish club Örebro SK. He made a big impact in his first year with the club, scoring eight goals from his position as a winger. During the second season he often found himself benched and his manager criticized his lack of defensive work. This caused his agent to lash out against the club, claiming that Rama was one of the best players in the league and that he had been humiliated by the manager's comments. He also demanded that Örebro sell him during the summer. Rama however ended up staying with the club until the end of the 2012 Allsvenskan season, after which Örebro was relegated. Valladolid After the 2012 Allsvenskan season ended, on 31 January 2013 Rama moved to Real Valladolid on loan until the end of the 2012–13 La Liga season. He made his debut on 9 March 2013, in a match against Málaga which finished 1–1, coming on as a substitute in the 71st minute in place of Daniel Larsson. His first goal with Valladolid came on 20 January 2014 in a match against Athletic Bilbao, where he scored in the 90th minute of a 4–2 loss. With this goal, Rama became the first Albanian player ever to score in La Liga and in Spanish football as a whole. Rama finished the 2013–14 La Liga season with 26 appearances and 1 goal scored. The last match in which he played was as early as 27 March 2014 against Real Sociedad, and then only as a substitute from the 61st minute. He was subsequently called up for only one more match, on 3 May 2014 against Espanyol, but did not play a single minute. On 11 July 2014, Rama left Valladolid after terminating his contract with the club, which had been valid until 30 June 2015. 1860 Munich On 27 August 2014, Rama started medical tests with 2. Bundesliga side TSV 1860 Munich. Two days later, the transfer was made official, with Rama joining on a two-year contract. He made his competitive debut later on 14 September by starting in the week 5 match against St. Pauli, which was won 1–2 away. In the next match he provided an assist to rescue a point for his side against FC Ingolstadt. Rama's first score-sheet contributions came on 19 October, when he scored his team's only goal in the 4–1 loss at Erzgebirge Aue. He was also on the scoresheet in the DFB-Pokal round 2 tie against SC Freiburg, which gave his side a temporary lead before the opponents bounced back to win 5–2, resulting in 1860 Munich's elimination. He finished his first season with Die Löwen by making 28 league appearances, scoring three times. In the 2015–16 season, Rama's form declined and he scored only once in 16 league appearances. His season was also marred by injuries. Following the end of the season, Rama's contract was not extended and he left as a free agent. He described his spell with the club as "difficult" due to injuries. Yanbian Funde Rama transferred to Chinese Super League side Yanbian Funde on a two-year contract in July 2017. 
He made his debut on 13 August in a 1–1 draw against Changchun Yatai. Kukësi On 31 January 2019, after more than a year without a club, Rama joined Albanian Superliga side Kukësi on a six-month contract with an option to renew for one more year; his monthly wage was reportedly €9,000, excluding bonuses. He won his first trophy with Kukësi on 2 June following the 2–1 win at Elbasan Arena against Tirana in the Albanian Cup final. He participated in the build-up that led to both of his side's goals, earning him praise from the media. SV Meppen On 20 August 2019, SV Meppen announced the signing of Rama on a two-year deal with an option for a third year. Having made three substitute appearances in the 2021–22 season, he agreed to the termination of his contract in January 2022. Wuppertaler SV On 3 January 2022, Rama joined Wuppertaler SV in the fourth-tier Regionalliga West. International career As soon as Rama moved to Spain to play in La Liga, he declared that he was eager to play for Albania and was contacted by the Albanian Football Association in order to plan a call-up for the next matches. On 25 March 2013 he received Albanian citizenship and became fully eligible to play for Albania. He made his international debut on 26 March 2013 in a friendly match against Lithuania, which finished in a 4–1 victory; Rama played as a starter and was substituted off in the 64th minute for Armando Vajushi. On 7 June 2013, he scored his first goal, against Norway, in a match that finished in a 1–1 draw. He finished his first year (2013) with Albania having made a total of 8 appearances, all as a starter, and was substituted off 3 times. In those 8 appearances he also scored 3 goals. In August 2016, Rama opted to play for the newly recognized Kosovo national team. However, in an interview in September 2017, Rama did not exclude the possibility of playing for Albania once again. Career statistics Club As of 3 January 2022 International As of match played 13 June 2015 Scores and results list Albania's goal tally first, score column indicates score after each Rama goal. Passage 9: Rafet Rama Rafet Rama (born 5 December 1971) is a Kosovan politician and lawmaker who ran in the 2016 presidential election, in which he was defeated by Hashim Thaçi. He is a member of the Democratic Party of Kosovo. Passage 10: Milaim Rama Milaim Rama (born 29 February 1976) is a former professional footballer who spent most of his career playing for Thun. In addition to Thun, he also played for FC Augsburg and Schaffhausen. Born in SFR Yugoslavia, he represented the Switzerland national team at international level. International career Rama had the right to represent two countries at international level, Albania or Switzerland; with the latter he made his debut on 20 August 2003 in a friendly match against France, coming on as a substitute in the 46th minute in place of Stéphane Chapuisat and becoming the first Kosovan to debut with Switzerland. His last international match was on 21 June 2004 in the UEFA Euro 2004 group stage, again against France. Personal life Rama was born in Viti, SFR Yugoslavia, to Kosovo Albanian parents from the village of Zhiti near Viti. At the age of 17, he immigrated to Switzerland, and in 2003 he received a Swiss passport. Rama is the father of Kosovo international Alketa Rama.
Where was the director of film Kanneshwara Rama born?
Mysore
3,532
2wikimqa
4k
Passage 1: Cumulus (disambiguation) Cumulus is a type of cloud with the appearance of a lump of cotton wool. Cumulus may also refer to: Computing and technology Cumulus (software), digital asset management software developed by Canto Software Cumulus Corporation, a defunct computer hardware company Cumulus Networks, a computer software company Gliders Reinhard Cumulus, glider US Aviation Cumulus, motorglider Other uses Cumulus Media, a radio broadcasting company Cumulus oophorus, cells which surround a human egg after fertilisation Passage 2: Lump of labour fallacy In economics, the lump of labour fallacy is the misconception that there is a finite amount of work—a lump of labour—to be done within an economy which can be distributed to create more or fewer jobs. It was considered a fallacy in 1891 by economist David Frederick Schloss, who held that the amount of work is not fixed.The term originated to rebut the idea that reducing the number of hours employees are allowed to labour during the working day would lead to a reduction in unemployment. The term is also commonly used to describe the belief that increasing labour productivity, immigration, or automation causes an increase in unemployment. Whereas opponents of immigration argue that immigrants displace a country's workers, this is a fallacy, as the number of jobs in the economy is not fixed and immigration increases the size of the economy and may increase productivity, innovation, and overall economic activity, as well as reduce incentives for off-shoring and business closures, thus creating more jobs.The lump of labor fallacy is also known as the lump of jobs fallacy, fallacy of labour scarcity, fixed pie fallacy, and the zero-sum fallacy—due to its ties to zero-sum games. The term "fixed pie fallacy" is also used more generally to refer to the idea that there is a fixed amount of wealth in the world. This and other zero-sum fallacies can be caused by zero-sum bias. Immigration The lump of labour fallacy has been applied to concerns around immigration and labour. Given a fixed availability of employment, the lump of labour position argues that allowing immigration of working-age people reduces the availability of work for native-born workers ("they are taking our jobs").However, skilled immigrating workers can bring capabilities that are not available in the native workforce, for example in academic research or information technology. Additionally, immigrating workforces also create new jobs by expanding demand, thus creating more jobs, either directly by setting up businesses (therefore requiring local services or workers), or indirectly by raising consumption. As an example, a greater population that eats more groceries will increase demand from shops, which will therefore require additional shop staff. Employment regulations Advocates of restricting working hours regulation may assume that there is a fixed amount of work to be done within the economy. By reducing the amount that those who are already employed are allowed to work, the remaining amount will then accrue to the unemployed. This policy was adopted by the governments of Herbert Hoover in the United States and Lionel Jospin in France, in the 35-hour working week (though in France various exemptions to the law were granted by later centre-right governments).Many economists agree that such proposals are likely to be ineffective, because there are usually substantial administrative costs associated with employing more workers. 
These can include additional costs in recruitment, training, and management that would increase average cost per unit of output. Overall, this would lead to reduced production per worker, and may even result in higher unemployment. Early retirement Early retirement has been used to induce workers to accept termination of employment before retirement age following the employer's diminished labour needs. Government support for the practice has come from the belief that this should lead to a reduction in unemployment. The unsustainability of this practice has now been recognised, and the trend in Europe is now towards postponement of the retirement age. In an editorial in The Economist a thought experiment is proposed in which old people leave the workforce in favour of young people, on whom they become dependent for their living through state benefits. It is then argued that since growth depends on having either more workers or greater productivity, society cannot really become more prosperous by paying an increasing number of its citizens unproductively. The article also points out that even early retirees with private pension funds become a burden on society, as they also depend on equity and bond income generated by workers. Arguments in favor of the concept There have been critiques of the idea that the concept is a fallacy. Arguments include that Schloss' concept is misapplied to working hours and that he was originally critiquing workers intentionally restricting their output, that prominent economists like John Maynard Keynes believed shorter working hours could alleviate unemployment, and that claims of it being a fallacy are used to argue against proposals for shorter working hours without addressing the non-economic arguments for them. See also Indivisibility of labour Labour (economics) Luddite fallacy Parable of the broken window Working time Zero-sum bias Passage 3: The Lump of Coal The Lump of Coal is a Christmas short story written by Lemony Snicket and illustrated by Brett Helquist. Originally published in the December 10–12, 2004 issue of the now-defunct magazine USA Weekend, it was re-released as a stand-alone book in 2008. It is meant to parody traditional children's Christmas stories, à la the 1823 poem 'Twas the Night Before Christmas. Though illustrated and relatively short, the book uses vocabulary above that of most children, including the term objets d'art. Many elements of the story are easily recognizable as Snicket-esque to A Series of Unfortunate Events readers, including a culturally intelligent and talented protagonist who is dismissed by many a mumpsimus. Plot summary It is Christmas time. A living lump of coal falls off a barbecue grill. He wishes for a miracle to happen. The lump of coal is artistic and wants to be an artist. He goes in search of something. First, he finds an art gallery that, he believes, shows art by lumps of coal. But when he comes in, he sadly discovers the art is by humans who use lumps of coal. He then finds a Korean restaurant called Mr. Wong's Korean Restaurant and Secretarial School, but he goes in and discovers that all things used must be 100% Korean (although the owner does not use a Korean name or proper Korean spices). The lump of coal continues down the street and runs into a man dressed like Santa Claus. The lump of coal tells the man about his problem, and the man gets an idea. He suggests putting the lump of coal in the stocking of Jasper, his bratty son. The son finds it and is ecstatic; he has wanted to make art with coal. 
So he makes portraits and he and the lump of coal become rich. They move to Korea and open an authentic Korean restaurant, and have a gallery of their art. See also Lemony Snicket bibliography Passage 4: Energy value of coal The energy value of coal, or fuel content, is the amount of potential energy coal contains that can be converted into heat. This value can be calculated and compared with different grades of coal and other combustible materials, which produce different amounts of heat according to their grade. While chemistry provides ways of calculating the heating value of a certain amount of a substance, there is a difference between this theoretical value and its application to real coal. The grade of a sample of coal does not precisely define its chemical composition, so calculating the coal's actual usefulness as a fuel requires determining its proximate and ultimate analysis (see "Chemical Composition" below). Chemical composition Chemical composition of the coal is defined in terms of its proximate and ultimate (elemental) analyses. The parameters of proximate analysis are moisture, volatile matter, ash, and fixed carbon. Elemental or ultimate analysis encompasses the quantitative determination of carbon, hydrogen, nitrogen, sulfur and oxygen within the coal. Additionally, specific physical and mechanical properties of coal, and particular carbonization properties, may be determined. The calorific value Q of coal [kJ/kg] is the heat liberated by its complete combustion with oxygen. Q is a complex function of the elemental composition of the coal. Q can be determined experimentally using calorimeters. Dulong suggests the following approximate formula for Q when the oxygen content is less than 10%: Q = 337C + 1442(H - O/8) + 93S, where C is the mass percent of carbon, H is the mass percent of hydrogen, O is the mass percent of oxygen, and S is the mass percent of sulfur in the coal. With these constants, Q is given in kilojoules per kilogram. See also Coal assay techniques Energies per unit mass Heat of combustion
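To make the Dulong approximation just quoted concrete, here is a minimal Python sketch; the function name and the example composition are illustrative assumptions rather than values taken from the passage, and the oxygen check simply mirrors the stated below-10% restriction.

```python
def dulong_calorific_value(c_pct, h_pct, o_pct, s_pct):
    """Approximate calorific value Q of coal in kJ/kg via Dulong's formula.

    Arguments are mass percentages of carbon, hydrogen, oxygen and sulfur.
    The approximation quoted above is intended for coals with less than
    10% oxygen content.
    """
    if o_pct >= 10:
        raise ValueError("Dulong's approximation assumes oxygen content below 10%")
    # Q = 337*C + 1442*(H - O/8) + 93*S, with Q in kJ/kg
    return 337 * c_pct + 1442 * (h_pct - o_pct / 8) + 93 * s_pct

# Illustrative (assumed) composition, not figures from the passage:
q = dulong_calorific_value(c_pct=80.0, h_pct=5.0, o_pct=8.0, s_pct=1.0)
print(f"Estimated calorific value: {q:.0f} kJ/kg")  # roughly 32,800 kJ/kg
```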
Passage 5: Jugband Blues "Jugband Blues" is a song by the English psychedelic rock band Pink Floyd, released on their second album, A Saucerful of Secrets, in 1968. Written by Syd Barrett, it was his sole compositional contribution to the album, as well as his last published for the band. Barrett and Pink Floyd's management wanted the song to be released as a single, but were vetoed by the rest of the band and producer Norman Smith. "Jugband Blues" is directed towards anyone within Barrett's proximity. Background and recording "Jugband Blues" was written around the same time as "Vegetable Man". Both songs contain the same cynical humour, but while on "Vegetable Man" Barrett focuses his humour on himself, on "Jugband Blues" it is directed towards those around him. "Jugband Blues" was either wholly or partly recorded on 19 October 1967 at De Lane Lea Studios. The interview with producer Norman Smith, recorded for the DVD documentary Meddle: A Classic Album Under Review (2007), suggests that at least two separate recording sessions took place. The first session was evidently to record the basic Pink Floyd band track, which was possibly cut at EMI's Abbey Road Studios, since Smith clearly states in the interview that he was unable to use Abbey Road for the brass band session, and was obliged to book De Lane Lea Studios in Holborn instead. Smith's description of the De Lane Lea session implies that it was specifically booked to overdub the brass band onto an existing band track, and he makes no mention of the other members of the group, suggesting that only Barrett and the members of the brass band were present for this overdub session. According to Smith, it had been his initial idea to add a brass arrangement to the basic track, which led Barrett to suggest using a Salvation Army band. Smith recalled that after some considerable effort he was able to contract the eight-piece Salvation Army International Staff Band for the session, which was booked from 7pm to 10pm, but Barrett was almost an hour late arriving. Smith then invited Barrett to outline his musical ideas for the ensemble, but Syd told them he wanted them to simply "play whatever they want" regardless of the rest of the group. Dismayed, Smith had to insist on scored parts, and he was obliged to sketch out an arrangement himself; according to his account, Barrett walked out of the studio shortly afterwards and did not return. In the interview Smith also specifically mentions playing an existing version of the track for the brass players, to give them some idea of what they were expected to play. About The Salvation Army, band manager Andrew King said that Barrett "wanted a massive Salvation Army freak-out, but that's the only time I can remember Norman [Smith] putting his foot down." The song features a distinctive three-tiered structure: starting off in 3/4 meter, then moving into 2/4 and finishing off in 4/4. Video The promotional video for the song was filmed in December 1967, for the Central Office of Information in London. The video was supposed to be about Britain, and was meant to be distributed in the US and Canada. The video features Barrett (shown with an acoustic guitar for the first time) and the group miming to the song in a more conventional stage setting, with psychedelic projections in the background. The original audio to the promo is lost, and most versions use the BBC recording from late 1967, consequently causing sync issues most evident as Barrett sings the opening verse. The original film was considered to be lost, until it was re-discovered in the Manchester Arts Lab in 1999. Barrett and Waters first watched the promo video during the second week of December 1967. Reception In a contemporary negative review for A Saucerful of Secrets, Jim Miller of Rolling Stone asserts that 'Jugband Blues' "hardly does any credit to Barrett's credentials as a composer." Legacy Barrett, along with Pink Floyd's managers, Peter Jenner and King, wanted to release the song as a single in the new year, before being vetoed by both the band and Norman Smith. Jenner said that "Jugband Blues", along with two others that Syd wrote around this time ("Scream Thy Last Scream" and "Vegetable Man"), were "amazing songs." When compared to "Bike" and "The Scarecrow", Jenner said "You think, 'Well, OK, those are all right, but these are powerful disturbing art.' I wouldn't want anyone to have to go as mad and disturbed as Syd did to get that, but if you are going to go that disturbed give me something like that. That's great art." 
Jenner had also called "Jugband Blues" "an extraordinary song, the ultimate self-diagnosis on a state of schizophrenia, [and] the portrait of a nervous breakdown."Barrett, by the beginning of the recording sessions for A Saucerful of Secrets, was already shrinking into a delirious state of mind, exacerbated by his feelings of alienation from the rest of the band. The common interpretation of the lyrics is that they reflect his schizophrenia and it has been argued that they could also be read as a criticism of the other band members for forcing him out. King said of the song: "The most alienated, extraordinary lyrics. It's not addressed to the band, it's addressed to the whole world. He was completely cut off." Jenner said "I think every psychiatrist should be made to listen to those songs ["Jugband Blues", "Scream Thy Last Scream" and "Vegetable Man"]. I think they should be part of the curriculum of every medical college along with those Van Gogh paintings like The Crows.""Jugband Blues" is one of two songs (the other being "Set the Controls for the Heart of the Sun") from A Saucerful of Secrets that were later included on the compilation album Echoes: The Best of Pink Floyd. The song was preceded on the compilation by "Wish You Were Here", with lyrics by Roger Waters written in tribute to Barrett. The band Opal released a cover of the song on the Barrett tribute album Beyond the Wildwood in 1987. Personnel Syd Barrett – acoustic guitar, electric guitar, lead vocals Richard Wright – Farfisa organ, tin whistle Roger Waters – bass guitar Nick Mason – drums, castanets, kazoowith: The Salvation Army International Staff BandRay Bowes (cornet), Terry Camsey (cornet), Mac Carter (trombone), Les Condon (E♭ bass), Maurice Cooper (euphonium), Ian Hankey (trombone), George Whittingham (B♭ bass), plus one other uncredited musician. Passage 6: High Coal, West Virginia High Coal or Highcoal is an unincorporated community and coal town located in Boone County, West Virginia, United States. Passage 7: The Lump The Lump is a short animated film released in 1991. It tells the story of an unattractive and unpopular man named George. One day, a lump appears on his head that looks like an attractive face. By pretending the lump is his real face, he gains fame and fortune, but soon he gets into trouble when he enters into the company of several corrupt politicians. A National Film Board of Canada film, The Lump was written and directed by John Weldon. Harvey Atkin contributed the voice. It was nominated for the Genie Award for Best Animated Short at the 13th Genie Awards in 1992, and won the Gordon Bruce Award for Humor at the Ottawa International Animation Festival in that year. Passage 8: Joel the Lump of Coal "Joel the Lump of Coal" is a song by Las Vegas-based rock band The Killers featuring late night talk show host Jimmy Kimmel. It was released on December 1, 2014. The song marks the ninth consecutive year in which the band has released a Christmas song. As with their previous Christmas releases, all proceeds from this song go to AIDS charities as part of the Product Red campaign. The song's announcement and debut occurred on Jimmy Kimmel Live!, where the music video and a montage about the recording process aired. Music video The animated music video first aired on Jimmy Kimmel Live! (December 1, 2014). The style of the video is similar to that of the stop motion animated Rudolph the Red-Nosed Reindeer (1964) and other Rankin/Bass Productions holiday-themed films in digital collage form. 
The song tells the story of Joel, a lump of coal living at the North Pole. Joel is excited when Santa chooses him to be a child's present, but he is disappointed to learn that instead of being a special gift, Santa is taking him to a naughty boy for Christmas. Joel reluctantly accepts his fate, but he soon realizes that he is just the present the naughty boy needs to help him change his ways. At the end, selfless Joel turns himself into a diamond to make the naughty boy happy.The song is written by Jimmy Kimmel, Jonathan Bines, and the Killers (Flowers, Keuning, Vannucci and Stoermer) with additional material by Tony Barbieri. The video is directed by Jonathan Kimmel, produced by Jennifer Sharron, and edited by Jason Bielski. The animation is by Sean Michael Solomon, Julian Petschek, Jonathan Kimmel, Jesse Griffith and Patrick Campbell, with Bernd Reinhardt as Director of Photography and Jim Alario as cameraman. The sound mix was recorded at Henson Studios, with field sound recorded by Brian Angely and Todd JeanPierre. Track listing Digital Download"Joel the Lump of Coal" – 3:58 Charts Passage 9: Ministry of Coal The Ministry of Coal is an Indian government ministry headquartered in New Delhi. The portfolio is held by Cabinet Minister Pralhad Joshi. The Ministry of Coal is charged with exploration of coal and lignite reserves in India, production, supply, distribution and price of coal through the government-owned corporations Coal India Limited and its subsidiaries, as well as Neyveli Lignite Corporation.The Ministry of Coal also manages the Union Government's 49 percent equity participation in Singareni Collieries Company, a public sector undertaking that is a joint venture with Government of Telangana. in which equity is held partly by the State Government of Telangana (51%) and the Government of India. Ministers of Coal List of Ministers of State Organisations Central Public Sector Undertakings Coal India Neyveli Lignite Corporation Statutory Bodies Coal Mines Provident Fund Organisation (CMPFO) Coal Mines Welfare Organisation Commissioner Of Payments COAL CONTROLLER'S ORGANIZATION (CCO) Functions And Responsibilities The Ministry of Coal is responsible for development and exploitation of coal and lignite reserves in India. 
The subjects allocated to the Ministry which include attached and sub-ordinate or other organisations including PSUs concerned with their subjects under the Government of India (Allocation of Business) Rules, 1961, as amended from time to time, are as follows: Exploration and development of coking coal and non-coking coal and lignite deposits in India All matters relating to production, supply, distribution and prices of coal Development and operation of coal washeries other than those for which Department of Steel (ISPAT Vibhag) is responsible Low-Temperature carbonisation of coal and production of synthetic oil from coal Administration of the Coal Mines (Conservation and Development) Act, 1974 (28 of 1974) The Coal Mines Provident Fund Organisation The Coal Mines Welfare Organisation Administration of the Coal Mines Provident Fund and Miscellaneous Provision Act, 1948 (46 of 1948) Administration of the Coal Mines Labour Welfare Fund Act, 1947 (32 of 1947) Rules under the Mines Act, 1952 (32 of 1952) for the levy and collection of duty of excise on coke and coal produced and dispatched from mines and administration of rescue fund Administration of the Coal Bearing Areas (Acquisition and Development) Act, 1957 (20 of 1957) Passage 10: Singles: Individually Wrapped Singles: Individually Wrapped is a greatest hits album by Odds, released in 2000. The album contains singles from all four of the band's studio albums, as well as a rendition of the Christmas song "Kings of Orient" which the band recorded for the 1991 Christmas compilation A Lump of Coal. Track listing "Someone Who's Cool" (3:17) "Truth Untold" (3:55) "It Falls Apart" (3:38) "Love Is the Subject" (4:43) "Jackhammer" (long version) (4:20) "Satisfied" (3:00) "Nothing Beautiful" (3:06) "Eat My Brain" (4:26) "Make You Mad" (4:07) "Wendy Under the Stars" (4:15) "Yes (Means It's Hard to Say No)" (single remix) (3:14) "I Would Be Your Man" (3:26) "King of the Heap" (single remix) (3:57) "Heterosexual Man" (3:32) "Mercy to Go" (5:18) "Kings of Orient (We Three Kings)" (4:26)
Which song came out first, Joel The Lump Of Coal or Jugband Blues?
Jugband Blues
3,517
2wikimqa
4k
Passage 1: John Westley Rev. John Wesley (1636–78) was an English nonconformist minister. He was the grandfather of John Wesley (founder of Methodism). Life John Wesly (his own spelling), Westley, or Wesley was probably born at Bridport, Dorset, although some authorities claim he was born in Devon, the son of the Rev. Bartholomew Westley and Ann Colley, daughter of Sir Henry Colley of Carbery Castle in County Kildare, Ireland. He was educated at Dorchester Grammar School and as a student of New Inn Hall, Oxford, where he matriculated on 23 April 1651, and graduated B.A. on 23 January 1655, and M.A. on 4 July 1657. After his appointment as an evangelist, he preached at Melcombe Regis, Radipole, and other areas in Dorset. Never episcopally ordained, he was approved by Oliver Cromwell's Commission of Triers in 1658 and appointed Vicar of Winterborne Whitechurch.The report of his interview in 1661 with Gilbert Ironside the elder, his diocesan, according to Alexander Gordon writing in the Dictionary of National Biography, shows him to have been an Independent. He was imprisoned for not using the Book of Common Prayer, imprisoned again and ejected in 1662. After the Conventicle Act 1664 he continued to preach in small gatherings at Preston and then Poole, until his death at Preston in 1678. Family He married a daughter of John White, who was related also to Thomas Fuller. White, the "Patriarch of Dorchester", married a sister of Cornelius Burges. Westley's eldest son was Timothy (born 1659). Their second son was Rev. Samuel Wesley, a High Church Anglican vicar and the father of John and Charles Wesley. A younger son, Matthew Wesley, remained a nonconformist, became a London apothecary, and died on 10 June 1737, leaving a son, Matthew, in India; he provided for some of his brother Samuel's daughters. Notes Additional sources Matthews, A. G., "Calamy Revised", Oxford University Press, 1934, page 521. This article incorporates text from a publication now in the public domain: "Wesley, Samuel (1662-1735)". Dictionary of National Biography. London: Smith, Elder & Co. 1885–1900. Passage 2: Guillaume Wittouck Guillaume Wittouck (1749 - 1829) was a Belgian lawyer and High Magistrate. He was the Grandfather of industrialist Paul Wittouck and of Belgian navigator Guillaume Delcourt. Biography Guillaume Wittouck, born in Drogenbos on 30 October 1749 and died in Brussels on 12 June 1829, lawyer at the Brabant Council, became Counselor at the Supreme Court of Brabant in 1791. During the Brabant Revolution, he sided with the Vonckists, who were in favor of new ideas. When Belgium joined France, he became substitute for the commissioner of the Directory at the Civil Court of the Department of the Dyle, then under the consulate, in 1800, judge at the Brussels Court of Appeal, then from 1804 to 1814, under the Empire, counselor at the Court of Appeal of Brussels, then advisor to the Superior Court of Brussels. He married in Brussels (Church of Saint Nicolas) on 29 June 1778, Anne Marie Cools, born in Gooik on 25 January 1754, died in Brussels on 11 April 1824, daughter of Jean Cools and Adrienne Galmaert descendants of the Seven Noble Houses of Brussels.Guillaume Wittouck acquired on 28th Floreal of the year VIII (18 May 1800) the castle of Petit-Bigard in Leeuw-Saint-Pierre with a field of one hundred hectares. Petit-Bigard will remain the home of the elder branch until its sale in 1941. Passage 3: Kaya Alp Kaya Alp (Ottoman Turkish: قایا الپ, lit. 
'Brave Rock') was, according to Ottoman tradition, the son of Kızıl Buğa or Basuk and the father of Suleyman Shah. He was the grandfather of Ertuğrul Ghazi, the father of the founder of the Ottoman Empire, Osman I. He was also famously known for being the successing name of Ertokus Bey's son Kaya Alp. He was a descendant of the ancestor of his tribe, Kayı son of Gun son of Oghuz Khagan, the legendary progenitor of the Oghuz Turks. Passage 4: Rathold Rátót Rathold (I) from the kindred Rátót (Hungarian: Rátót nembeli (I.) Rátót (Ratolt)) was a Hungarian distinguished nobleman from the gens Rátót, who served as ispán (comes) of Somogy County in 1203. He was the eldest son of voivode Leustach Rátót. As his brother Julius I Rátót had no successors, Rathold was the ancestor of the Gyulafi branch of the Rátót clan. Passage 5: Fujiwara no Nagara This is about the 9th-century Japanese statesman. For the 10th-century Japanese poet also known as Nagayoshi, see Fujiwara no Nagatō. Fujiwara no Nagara (藤原長良, 802 – 6 August 856), also known as Fujiwara no Nagayoshi, was a Japanese statesman, courtier and politician of the early Heian period. He was the grandfather of Emperor Yōzei. Life Nagara was born as the eldest son of the sadaijin Fujiwara no Fuyutsugu, a powerful figure in the court of Emperor Saga. He was also a descendant of the early Japanese emperors and was well trusted by Emperor Ninmyō since his time as crown prince, and attended on him frequently. However, after Ninmyō took the throne, Nagara's advancement was overtaken by his younger brother Fujiwara no Yoshifusa. He served as director of the kurōdo-dokoro (蔵人所) and division chief (督) in the imperial guard before finally making sangi and joining the kugyō in 844, ten years after his younger brother. In 850, Nagara's nephew Emperor Montoku took the throne, and Nagara was promoted to shō shi-i no ge (正四位下) and then ju san-mi (従三位), and in 851 to shō san-mi (正三位). In the same year, though, Nagara was overtaken once more as his brother Fujiwara no Yoshimi, more than ten years his junior, was promoted to chūnagon. In 854, when Yoshimi was promoted to dainagon, Nagara was promoted to fill his old position of chūnagon. In 856 he was promoted to ju ni-i (従二位), but died shortly thereafter at the age of 55. Legacy After Nagara's death, his daughter Takaiko became a court lady of Emperor Seiwa. In 877, after her son Prince Sadaakira took the throne as Emperor Yōzei, Nagara was posthumously promoted to shō ichi-i (正一位) and sadaijin, and again in 879 to daijō-daijin. Nagara was overtaken in life by his brothers Yoshifusa and Yoshimi, but he had more children, and his descendants thrived. His third son Fujiwara no Mototsune was adopted by Yoshifusa, and his line branched into various powerful clans, including the five regent houses. Before the Middle Ages, there may have been a tendency to view Mototsune's biological father Nagara rather than his adoptive father Yoshifusa as his parent, making Nagara out as the ancestor of the regent family. This may have impacted the Ōkagami, leading it to depict Nagara as the head of the Hokke instead of Yoshifusa. Personality Nagara had a noble disposition, both tender-hearted and magnanimous. Despite being overtaken by his brothers, he continued to love them deeply. He treated his subordinates with tolerance, and was loved by people of all ranks. When Emperor Ninmyō died, Nagara is said to have mourned him like a parent, even abstaining from food as he prayed for the happiness of the Emperor's spirit. 
When he served Emperor Montoku in his youth, the Emperor treated him as an equal, but Nagara did not abandon formal dress or display an overly familiar attitude. Genealogy Father: Fujiwara no Fuyutsugu Mother: Fujiwara no Mitsuko (藤原美都子), daughter of Fujiwara no Matsukuri (藤原真作) Wife: Nanba no Fuchiko (難波渕子) Eldest son: Fujiwara no Kunitsune (藤原国経, 828–908) Second son: Fujiwara no Tōtsune (藤原遠経, 835–888) Wife: Fujiwara no Otoharu (藤原乙春), daughter of Fujiwara no Fusatsugu (藤原総継) Third son: Fujiwara no Mototsune (藤原基経, 836–891), adopted by Fujiwara no Yoshifusa Fourth son: Fujiwara no Takatsune (藤原高経, ?–893) Fifth son: Fujiwara no Hirotsune (藤原弘経, 838–883) Sixth son: Fujiwara no Kiyotsune (藤原清経, 846–915) Daughter: Fujiwara no Takaiko (藤原高子, 842–910), court lady of Emperor Seiwa, mother of Emperor Yōzei Unknown wife (possibly Nanba no Fuchiko (難波渕子)) Daughter: Fujiwara no Shukushi (藤原淑子, 838–906), wife of Fujiwara no Ujimune, adoptive mother of Emperor Uda, Naishi-no-kami (尚侍) Daughter: Fujiwara no Ariko (藤原有子, ?–866), wife of Taira no Takamune, Naishi-no-suke (典侍) Notes Passage 6: Prithvipati Shah Prithvipati Shah (Nepali: पृथ्वीपति शाह) was the king of the Gorkha Kingdom in the South Asian subcontinent, present-day Nepal. He was the grandfather of Nara Bhupal Shah and reigned from 1673–1716.King Prithvipati Shah ascended to the throne after the demise of his father. He was the longest serving king of the Gorkha Kingdom but his reign saw a lot of struggles. Passage 7: Baldwin I Rátót Baldwin (I) from the kindred Rátót (Hungarian: Rátót nembeli (I.) Balduin; died after 1255) was a Hungarian distinguished nobleman from the gens Rátót, who served as master of the cupbearers three times. His father was Rathold Rátót, ispán (comes) of Somogy County in 1203. His older brother was Dominic I Rátót.He served as master of the cupbearers between 1233 and 1234. After that he functioned as ispán of Moson County in 1235. He was appointed master of the cupbearers for the second time in 1235, a position which he held until 1238. He was ispán of Vas County from 1240 to 1244. After that he functioned as ispán of Nyitra County in 1244. He served as master of the cupbearers for the third time between 1247 and 1254, besides that he held the office of ispán of Bánya from 1247 to 1251. He finished his career as ispán of Vas County in 1255. Passage 8: Baldwin II Rátót Baldwin (II) from the kindred Rátót (Hungarian: Rátót nembeli (II.) Balduin; died after 1283) was a Hungarian distinguished nobleman from the gens Rátót as the son of Baldwin I Rátót, who served as ispán (comes) of Zala County from 1275 to 1276 and in 1276.His older brother was Julius II Rátót. Baldwin's only son, Lawrence was the ancestor of the Rátóti and Gyulaffy de Rátót noble families. Passage 9: Lyon Cohen Lyon Cohen (born Yehuda Leib Cohen; May 11, 1868 – August 17, 1937) was a Polish-born Canadian businessman and a philanthropist. He was the grandfather of singer/poet Leonard Cohen. Biography Cohen was born in Congress Poland, part of the Russian Empire, to a Jewish family on May 11, 1868. He immigrated to Canada with his parents in 1871. He was educated at the McGill Model School and the Catholic Commercial Academy in Montreal. In 1888, he entered the firm of Lee & Cohen in Montreal; later became partner with his father in the firm of L. Cohen & Son; in 1895, he established W. R. Cuthbert & Co; in 1900, he organized the Canadian Improvement Co., a dredging contractor; in 1906, he founded The Freedman Co. 
in Montreal; and in May 1919, he organized and became President of Canadian Export Clothiers, Ltd. The Freedman Company went on to become one of Montreal’s largest clothing companies.In 1897, Cohen and Samuel William Jacobs founded the Canadian Jewish Times, the first English-language Jewish newspaper in Canada. The newspaper promoted the Canadianization of recent East European Jewish immigrants and encouraged their acceptance of Canadian customs as Cohen felt that the old world customs of immigrant Jews were one of the main causes of anti-Semitism. In 1914, the paper was purchased by Hirsch Wolofsky, owner of the Yiddish-language Keneder Adler, who transformed it into the Canadian Jewish Chronicle.He died on August 17, 1937, at the age of 69. Philanthropy Cohen was elected the first president of the Canadian Jewish Congress in 1919 and organized the Jewish Immigrant Aid Services of Canada. Cohen was also a leader of the Young Men’s Hebrew Benevolent Society (later the Baron de Hirsch Institute) and the United Talmud Torahs, a Jewish day school in Montreal. He also served as president of Congregation Shaar Hashomayim and president of the Jewish Colonization Association in Canada. Personal life Cohen married Rachel Friedman of Montreal on February 17, 1891. She was the founder and President of Jewish Endeavour Sewing School. They had three sons and one daughter: Nathan Bernard Cohen, who served as a lieutenant in the World War; he married Lithuanian Jewish immigrant Masha Klonitsky and they had one daughter and one son: Esther Cohen and singer/poet Leonard Cohen. Horace Rives Cohen, who was a captain and quartermaster of his battalion in World War I; Lawrence Zebulun Cohen, student at McGill University, and Sylvia Lillian Cohen. Passage 10: Abd al-Muttalib Shayba ibn Hāshim (Arabic: شَيْبَة إبْن هَاشِم; c. 497–578), better known as ʿAbd al-Muṭṭalib, (Arabic: عَبْد ٱلْمُطَّلِب, lit. 'Servant of Muttalib') was the fourth chief of the Quraysh tribal confederation. He was the grandfather of the Islamic prophet Muhammad. Early life His father was Hashim ibn 'Abd Manaf,: 81  the progenitor of the distinguished Banu Hashim, a clan of the Quraysh tribe of Mecca. They claimed descent from Ismā'īl and Ibrāhīm. His mother was Salma bint Amr, from the Banu Najjar, a clan of the Khazraj tribe in Yathrib (later called Madinah). Hashim died while doing business in Gaza, before Abd al-Muttalib was born.: 81 His real name was "Shaiba" meaning 'the ancient one' or 'white-haired' because of the streak of white through his jet-black hair, and is sometimes also called Shaybah al-Ḥamd ("The white streak of praise").: 81–82  After his father's death he was raised in Yathrib with his mother and her family until about the age of eight, when his uncle Muttalib ibn Abd Manaf went to see him and asked his mother Salmah to entrust Shaybah to his care. Salmah was unwilling to let her son go and Shaiba refused to leave his mother without her consent. Muṭṭalib then pointed out that the possibilities Yathrib had to offer were incomparable to Mecca. Salmah was impressed with his arguments, so she agreed to let him go. Upon first arriving in Mecca, the people assumed the unknown child was Muttalib's servant and started calling him 'Abd al-Muttalib ("servant of Muttalib").: 85–86 Chieftain of Hashim clan When Muṭṭalib died, Shaiba succeeded him as the chief of the Hāshim clan. 
Following his uncle Al-Muṭṭalib, he took over the duties of providing the pilgrims with food and water, and carried on the practices of his forefathers with his people. He attained such eminence as none of his forefathers enjoyed; his people loved him and his reputation was great among them.: 61  'Umar ibn Al-Khaṭṭāb's grandfather Nufayl ibn Abdul Uzza arbitrated in a dispute between 'Abdul-Muṭṭalib and Ḥarb ibn Umayyah, Abu Sufyan's father, over the custodianship of the Kaaba. Nufayl gave his verdict in favour of 'Abdul-Muṭṭalib. Addressing Ḥarb ibn Umayyah, he said: Why do you pick a quarrel with a person who is taller than you in stature; more imposing than you in appearance; more refined than you in intellect; whose progeny outnumbers yours and whose generosity outshines yours in lustre? Do not, however, construe this into any disparagement of your good qualities which I highly appreciate. You are as gentle as a lamb, you are renowned throughout Arabia for the stentorian tones of your voice, and you are an asset to your tribe. Discovery of Zam Zam Well 'Abdul-Muṭṭalib said that while sleeping in the sacred enclosure, he had dreamed he was ordered to dig at the worship place of the Quraysh between the two deities Isāf and Nā'ila. There he would find the Zamzam Well, which the Jurhum tribe had filled in when they left Mecca. The Quraysh tried to stop him digging in that spot, but his son Al-Ḥārith stood guard until they gave up their protests. After three days of digging, 'Abdul-Muṭṭalib found traces of an ancient religious well and exclaimed, "Allahuakbar!" Some of the Quraysh disputed his claim to sole rights over water, then one of them suggested that they go to a female shaman who lived afar. It was said that she could summon jinns and that she could help them decide who was the owner of the well. So, 11 people from the 11 tribes went on the expedition. They had to cross the desert to meet the priestess but then they got lost. There was a lack of food and water and people started to lose hope of ever getting out. One of them suggested that they dig their own graves and if they died, the last person standing would bury the others. So all began digging their own graves and just as Abdul-Muṭṭalib started digging, water spewed out from the hole he dug and everyone became overjoyed. It was then and there decided that Abdul-Muttalib was the owner of the Zam Zam well. Thereafter he supplied pilgrims to the Kaaba with Zam Zam water, which soon eclipsed all the other wells in Mecca because it was considered sacred.: 86–89 : 62–65 The Year of the Elephant According to Muslim tradition, the Ethiopian governor of Yemen, Abrahah al-Ashram, envied the Kaaba's reverence among the Arabs and, being a Christian, he built a cathedral on Sana'a and ordered pilgrimage be made there.: 21  The order was ignored and someone desecrated (some saying in the form of defecation: 696 note 35 ) the cathedral. Abrahah decided to avenge this act by demolishing the Kaaba and he advanced with an army towards Mecca.: 22–23 There were thirteen elephants in Abrahah's army: 99 : 26  and the year came to be known as 'Ām al-Fīl (the Year of the Elephant), beginning a trend for reckoning the years in Arabia which was used until 'Umar ibn Al-Khaṭṭāb replaced it with the Islamic Calendar in 638 CE (17 AH), with the first year of the Islamic Calendar being 622 CE. When news of the advance of Abrahah's army came, the Arab tribes of Quraysh, Kinānah, Khuzā'ah and Hudhayl united in defence of the Kaaba. 
A man from the Ḥimyar tribe was sent by Abrahah to advise them that he only wished to demolish the Kaaba and if they resisted, they would be crushed. "Abdul-Muṭṭalib told the Meccans to seek refuge in the nearest high hills while he, with some leading members of Quraysh, remained within the precincts of the Kaaba. Abrahah sent a dispatch inviting 'Abdul-Muṭṭalib to meet him and discuss matters. When 'Abdul-Muṭṭalib left the meeting he was heard saying, "The Owner of this House is its Defender, and I am sure He will save it from the attack of the adversaries and will not dishonour the servants of His House.": 24–26 It is recorded that when Abrahah's forces neared the Kaaba, Allah commanded small birds (abābīl) to destroy Abrahah's army, raining down pebbles on it from their beaks. Abrahah was seriously wounded and retreated towards Yemen but died on the way.: 26–27  This event is referred to in the following Qur'anic chapter: Have you not seen how your Lord dealt with the owners of the Elephant? Did He not make their treacherous plan go astray? And He sent against them birds in flocks, striking them with stones of baked clay, so He rendered them like straw eaten up. Most Islamic sources place the event around the year that Muhammad was born, 570 CE, though other scholars place it one or two decades earlier. A tradition attributed to Ibn Shihab al-Zuhri in the musannaf of ʽAbd al-Razzaq al-Sanʽani places it before the birth of Muhammad's father. Sacrificing his son Abdullah Al-Harith was 'Abdul-Muṭṭalib's only son at the time he dug the Zamzam Well.: 64  When the Quraysh tried to help him in the digging, he vowed that if he were to have ten sons to protect him, he would sacrifice one of them to Allah at the Kaaba. Later, after nine more sons had been born to him, he told them he must keep the vow. The divination arrows fell upon his favourite son Abdullah. The Quraysh protested 'Abdul-Muṭṭalib's intention to sacrifice his son and demanded that he sacrifice something else instead. 'Abdul-Muṭṭalib agreed to consult a "sorceress with a familiar spirit". She told him to cast lots between Abdullah and ten camels. If Abdullah were chosen, he had to add ten more camels, and keep on doing the same until his Lord accepted the camels in Abdullah's place. When the number of camels reached 100, the lot fell on the camels. 'Abdul-Muṭṭalib confirmed this by repeating the test three times. Then the camels were sacrificed, and Abdullah was spared.: 66–68 Family Wives Abd al-Muttalib had six known wives. Sumra bint Jundab of the Hawazin tribe. Lubnā bint Hājar of the Khuza'a tribe. Fatima bint Amr of the Makhzum clan of the Quraysh tribe. Halah bint Wuhayb of the Zuhrah clan of the Quraysh tribe. Natīla bint Janab of the Namir tribe. Mumanna'a bint Amr of the Khuza'a tribe. Children According to Ibn Hisham, ʿAbd al-Muṭṭalib had ten sons and six daughters.: 707–708 note 97  However, Ibn Sa'd lists twelve sons.: 99–101 By Sumra bint Jundab: Al-Ḥārith.: 708  He was the firstborn and he died before his father.: 99  Quthum.: 100  He is not listed by Ibn Hisham.By Fatima bint Amr: Al-Zubayr.: 707  He was a poet and a chief; his father made a will in his favour.: 99  He died before Islam, leaving two sons and daughters.: 101 : 34–35  Abu Talib, born as Abd Manaf,: 99 : 707  father of the future Caliph Ali. He later became chief of the Hashim clan. 
Abdullah, the father of Muhammad.: 99 : 707  Umm Hakim al-Bayda,: 100 : 707  the maternal grandmother of the third Caliph Uthman.: 32  Barra,: 100 : 707  the mother of Abu Salama.: 33  Arwa.: 100 : 707  Atika,: 100 : 707  a wife of Abu Umayya ibn al-Mughira.: 31  Umayma,: 100 : 707  the mother of Zaynab bint Jahsh and Abd Allah ibn Jahsh.: 33 By Lubnā bint Hājar: Abd al-'Uzzā, better known as Abū Lahab.: 100 : 708 By Halah bint Wuhayb: Ḥamza,: 707  the first big leader of Islam. He killed many leaders of the kufar and was considered as the strongest man of the quraysh. He was martyred at Uhud.: 100  Ṣafīyya.: 100 : 707  Al-Muqawwim.: 707  He married Qilaba bint Amr ibn Ju'ana ibn Sa'd al-Sahmia, and had children named Abd Allah, Bakr, Hind, Arwa, and Umm Amr (Qutayla or Amra). Hajl.: 707  He married Umm Murra bint Abi Qays ibn Abd Wud, and had two sons, named Abd Allah, Ubayd Allah, and three daughters named Murra, Rabi'a, and Fakhita.By Natīlah bint Khubāb: al-'Abbas,: 100 : 707  ancestor of the Abbasid caliphs. Ḍirār,: 707  who died before Islam.: 100  Jahl, died before Islam Imran, died before IslamBy Mumanna'a bint 'Amr: Mus'ab, who, according to Ibn Saad, was the one known as al-Ghaydāq.: 100  He is not listed by Ibn Hisham. Al-Ghaydaq, died before Islam. Abd al-Ka'ba, died before Islam.: 100  Al-Mughira,: 100  who had the byname al-Ghaydaq. The family tree and some of his important descendants Death Abdul Muttalib's son 'Abdullāh died four months before Muḥammad's birth, after which Abdul Muttalib took care of his daughter-in-law Āminah. One day Muhammad's mother, Amina, wanted to go to Yathrib, where her husband, Abdullah, died. So, Muhammad, Amina, Abd al-Muttalib and their caretaker, Umm Ayman started their journey to Medina, which is around 500 kilometres away from Makkah. They stayed there for three weeks, then, started their journey back to Mecca. But, when they reached halfway, at Al-Abwa', Amina became very sick and died six years after her husband's death. She was buried over there. From then, Muhammad became an orphan. Abd al-Muttalib became very sad for Muhammad because he loved him so much. Abd al-Muttalib took care of Muhammad. But when Muhammad was eight years old, the very old Abd al-Muttalib became very sick and died at age 81-82 in 578-579 CE. Shaybah ibn Hāshim's grave can be found in the Jannat al-Mu'allā cemetery in Makkah, Saudi Arabia. See also Family tree of Muhammad Family tree of Shaiba ibn Hashim Sahaba
Who is the paternal grandfather of Baldwin I Rátót?
Leustach Rátót
3,948
2wikimqa
4k
Passage 1: Gabrielle Beaumont Gabrielle Beaumont (7 April 1942 – 8 October 2022) was a British film and television director. Her directing credits range from Hill Street Blues to Star Trek: The Next Generation. She became the first woman to direct an episode of Star Trek, with the episode "Booby Trap". Beaumont lobbied to have Joan Collins cast as Alexis Colby in Dynasty. Beaumont was best known for directing, writing and producing the television special Diana: A Tribute to the People's Princess. She directed a film version of Bernard Taylor's The Godsend. Daphne du Maurier was her cousin. Beaumont died at her home in Fornalutx on 8 October 2022, at the age of 80. Selected filmography Sources: Diana: A Tribute to the People's Princess Beastmaster III: The Eye of Braxus The Other Woman Moment of Truth: Cradle of Conspiracy Fatal Inheritance Riders Star Trek: The Next Generation L.A. Law He's My Girl Hill Street Blues Gone Are the Dayes Secrets of a Mother and Daughter Dynasty Death of a Centerfold: The Dorothy Stratten Story M*A*S*H The Waltons The Godsend Passage 2: The Godsend (film) The Godsend is a 1980 British horror film directed by Gabrielle Beaumont, written by Olaf Pooley, and starring Malcolm Stoddard, Cyd Hayman, Angela Pleasence, Patrick Barr, Wilhelmina Green, and Joanne Boorman. It follows a family who adopt an infant girl from a strange woman, only to find that, as they raise her, their other children begin to die in a series of mysterious accidents. It is based on the 1976 novel The Godsend by Bernard Taylor. The film was released in the United States on 11 January 1980 by The Cannon Group, Inc. Plot Alan and Kate Marlowe are out on a walk with their kids, Davy, Lucy, Sam, and baby Matthew. Kate meets a pregnant stranger and she comes home with them. It is apparent that Alan finds something "off" about her right away, as she intensely stares at him, but he does not say anything. Left briefly unattended, she cuts their telephone line. Alan is about to drive her home, but she goes into labor, and Kate helps her deliver a baby girl. The next day, Kate sees the woman is gone, having abandoned the child with them. Despite Alan's reservations, Kate wants to keep the baby, whom they name Bonnie. Later on, they find Matthew dead in a playpen with Bonnie. At a family picnic, Davy and Bonnie wander off, and the family searches for them desperately. Kate finds Bonnie on the bank of a creek with scratches on her hands, while Alan finds that Davy has drowned in the creek. Alan attempts to perform CPR on Davy, but is unsuccessful. Later, Kate and Alan agree that the scratches on Bonnie must have been from Davy saving her. Bonnie starts to break things and Sam gets blamed for them, despite him saying he did not do it. Kate attributes this to Sam's jealousy of Bonnie. One day, the family is playing hide and seek and Alan finds Sam dead in a barn. Later, Alan finds Bonnie's ribbon next to where Sam's body was. The Marlowes begin to receive letters accusing them of killing their children, and Kate falls into a depression. When a reporter comes to their house and upsets Kate, Alan agrees to move the family to London. Bonnie becomes ill with the mumps, and purposely kisses Alan as he takes a nap. He becomes ill with the mumps too, and has a flashback in a dream to the circumstances of the deaths of his sons, with Bonnie nearby in each one. At a playground, Alan watches Bonnie throw an unoccupied swing in the path of a swing Lucy is swinging on.
The chains on the swing twist together, but Lucy does not fall off, and Alan is able to save her before she is hurt. Alan tries to discuss his concerns about Bonnie with Kate, saying she is not normal. Kate strongly disagrees, saying that Bonnie loves Lucy and was only playing. Alan says Bonnie loves Lucy the same way she loved their three boys, and Kate is disgusted at the insinuation. Alan tells Kate his theories about Bonnie being involved in the deaths, but she is still in disbelief. Alan uses an analogy about Bonnie, saying that a cuckoo lays its eggs in another nest, and the fledgling pushing the others out to get the full attention of the parents. Alan wants to send Bonnie away, but Kate refuses, so he kidnaps Lucy. Alan goes to see Kate, who is distraught that Alan will not tell her where Lucy is. Alan gives Kate an ultimatum to choose Bonnie or Lucy. She refuses to do so and he leaves. Later, they find out that Kate has had an accident and is in the hospital. Alan rushes back to London, where he learns that Kate had been pregnant, but miscarried due to the accident. Back at their apartment, Alan finds out from neighbor, Mr. Taverner, that Kate tripped over a doll at the top of a staircase and that Mrs. Taverner has taken Bonnie on a trip. Kate comes to Alan's work to tell him she wants a divorce. He is alarmed to learn that Bonnie is home alone with Lucy. Alan calls Lucy, telling her to go next door to the Taverners. Bonnie has them locked in, and as Kate and Alan get home, Bonnie has used mind control on Lucy to make her jump out of a window to her death. Alan tries to kill Bonnie, but Mr. Taverner pulls him off of her. Kate decides to stay with Bonnie, and Alan leaves her. At a park, Alan sees the strange woman who gave birth to Bonnie, and is now pregnant, and talking to the mother. He runs after them to warn the family, but they are already gone. Cast Release The Cannon Group, Inc. released The Godsend theatrically in the United States on 11 January 1980, premiering it in Los Angeles. It screened in numerous U.S. cities through the following weeks, as well as in Canada. The film screened in the United Kingdom in June 1981 as a double feature alongside Schizoid (1980). Critical response Joe Pollack of the St. Louis Post-Dispatch wrote that, "though not a perfect film, [it] is a pretty good example... The film has moments when it drags, but it has many others that are both fascinating and scary." The Austin American-Statesman's Patrick Taggart panned the film as "nothing but a study in how decent actors—Malcolm Stoddard and Cyd Hayman—are made to throw talent into a bottomless pit of ineptness on all fronts." Bob Curtright of The Wichita Eagle praised the film as "a cut above similar fare. It's low-key and sneaky rather an extravagant and graphic."George Meyer, a film professor and critic, wrote in The Tampa Tribune that, "instead of frightening the viewer with costly gimmicks, Beaumont exploits some basic human fears, most of them involving our protective feelings about children," adding that while the film "makes good use of its limitations, it retains the look and feel of a limited effort. If it weren't for those few squirmy moments, the film's appeal would be even more limited." John Dodd of the Edmonton Journal commended the film's focus on suspense over graphic violence, but felt it would have been better-suited as a television film. Home media Scream Factory released the film on Blu-ray in 2015 as part of a double-feature with The Outing (1987). 
The disc went out-of-print in February 2021. Passage 3: Peter Levin Peter Levin is an American director of film, television and theatre. Career Since 1967, Levin has amassed a large number of credits directing episodic television and television films. Some of his television series credits include Love Is a Many Splendored Thing, James at 15, The Paper Chase, Family, Starsky & Hutch, Lou Grant, Fame, Cagney & Lacey, Law & Order and Judging Amy. Some of his television film credits include Rape and Marriage: The Rideout Case (1980), A Reason to Live (1985), Popeye Doyle (1986), A Killer Among Us (1990) and Queen Sized (2008), among other films. He directed "Heart in Hiding", written by his wife Audrey Davis Levin, for which she received an Emmy for Best Daytime Special in the 1970s. Prior to becoming a director, Levin worked as an actor in several Broadway productions. He costarred with Susan Strasberg in "The Diary of Anne Frank" but had to leave the production when he was drafted into the Army. He trained at Carnegie Mellon University. Eventually becoming a theatre director, he directed productions at the Long Wharf Theatre and the Pacific Resident Theatre Company. He also co-founded the off-off-Broadway theatre the Hardware Poets Playhouse with his wife Audrey Davis Levin and was also an associate artist of The Interact Theatre Company. Passage 4: Hanro Smitsman Hanro Smitsman, born in 1967 in Breda (Netherlands), is a writer and director of film and television. Film and Television Credits Films Brothers (2017) Schemer (2010) Skin (2008) Raak (aka Contact) (2006) Allerzielen (aka All Souls) (2005) (segment "Groeten uit Holland") Engel en Broer (2004) 2000 Terrorists (2004) Dajo (2003) Gloria (2000) Depoep (2001) Television 20 leugens, 4 ouders en een scharrelei (2013) De ontmaskering van de vastgoedfraude (TV mini-series, 2013) Moordvrouw (2012-) Eileen (2 episodes, 2011) Getuige (2011) Vakantie in eigen land (2011) De Reis van meneer van Leeuwen (2010) De Punt (2009) Roes (2 episodes, 2008) Fok jou! (2006) Van Speijk (2006) Awards In 2005, Engel en Broer won the Cinema Prize for Short Film at the Avanca Film Festival. In 2007, Raak (aka Contact) won the Golden Berlin Bear Award at the Berlin International Film Festival, the Spirit Award at the Brooklyn Film Festival, the first place jury prize for "Best Live Action under 15 minutes" at the Palm Springs International Short Film Festival, and the Prix UIP Ghent Award for European Short Films at the Flanders International Film Festival. In 2008, Skin won the Movie Squad Award at the Nederlands Film Festival, and an actor in the film also won the Best Actor Award. It also won the Reflet d’Or for Best Film at the Cinema tous ecrans Festival in Geneva in the same year. Passage 5: Betrayal (1932 film) Betrayal is a 1932 British crime film directed by Reginald Fogwell and starring Stewart Rome, Marjorie Hume and Leslie Perrins. A woman attempts to save her husband from being hanged for a crime he didn't commit. It is based on a play No Crime of Passion by Hubert G. Griffith. Cast Stewart Rome as John Armytage Marjorie Hume as Diana Armytage Leslie Perrins as Clive Wilson Henry Hewitt as Sir Robert Blackburn KC J. Fisher White as John Lawrence KC Frank Atherley as Judge E. H. Williams as Butler Charles Childerstone as Doctor Passage 6: Brian Johnson (special effects artist) Brian Johnson (born 29 June 1939 or 29 June 1940) is a British designer and director of film and television special effects.
Life and career Born Brian Johncock, he changed his surname to Johnson during the 1960s. Joining the team of special effects artist Les Bowie, Johnson started his career behind the scenes for Bowie Films on productions such as On The Buses, and for Hammer Films. He is known for his special effects work on TV series including Thunderbirds (1965–66) and films including Alien (1979), for which he received the 1980 Academy Award for Best Visual Effects (shared with H. R. Giger, Carlo Rambaldi, Dennis Ayling and Nick Allder). Previously, he had built miniature spacecraft models for Stanley Kubrick's 1968 film 2001: A Space Odyssey.Johnson's work on Space: 1999 influenced the effects of the Star Wars films of the 1970s and 1980s. Impressed by his work, George Lucas visited Johnson during the production of the TV series to offer him the role of effects supervisor for the 1977 film. Having already been commissioned for the second series of Space: 1999, Johnson was unable to accept at the time. He worked on the sequel, The Empire Strikes Back (1980), whose special effects were recognised in the form of a 1981 Special Achievement Academy Award (which Johnson shared with Richard Edlund, Dennis Muren and Bruce Nicholson). Awards Johnson has won Academy Awards for both Alien (1979) and The Empire Strikes Back (1980). He was further nominated for an Academy Award for his work on Dragonslayer (1981). In addition, Johnson is the recipient of a Saturn Award for The Empire Strikes Back and a BAFTA Award for James Cameron's Aliens. Filmography Special effects Director Scragg 'n' Bones (2006) Passage 7: Rachel Feldman Rachel Feldman is an American director of film and television and screenwriter of television films. Life and career Born in New York City, New York, Feldman began her career as a child actor performing extensively in commercials and television series.Her credits as a television director include: ((The Rookie)), ((Criminal Minds)), ((Blue Bloods)), and some beloved shows like Doogie Howser, M.D., The Commish, Dr. Quinn, Medicine Woman, Picket Fences, Sisters,Lizzie McGuire, at the start of her career. She has written and directed several features including: Witchcraft III: The Kiss of Death (1991), Post Modern Romance (1993), She's No Angel (2001) starring Tracey Gold, Recipe for a Perfect Christmas (2005) starring Christine Baranski, Love Notes (2007) starring Laura Leighton, Lilly (2023) starring Patricia Clarkson. Films Feature Films Lilly (2023) - Director/Writer Love Notes (2007) - Writer Recipe for a Perfect Christmas ((2005) - Writer She's No Angel (2001) - Writer/Director Witchcraft III: The Kiss of Death (1991) - Director Shorts Here Now (2017) - Writer/Director Happy Sad Happy (2014) - Writer/Director Post Modern Romance (1993) - Writer/Director Wunderkind (1984) - Writer/Director Guistina (1981) - Writer/Director Activism Feldman is active in the fight for gender equality in the film and television industry. Her activism takes form in speaking out about issues such as equal pay, job stability for women, sexual harassment, sexual discrimination and female representation within the industry. Feldman is also an activist for women behind the camera, who can be seen in the Geena Davis produced documentary This Changes Everything. Feldman was the former chair of the DGA Women's Steering Committee (WSC). The focus of the WSC is to support and uplift women in the film and television industry. Personal life and education Feldman grew up in the Bronx and now lives in Los Angeles. 
She attended New York University where she received a Master of Fine Arts degree and has taught classes in directing and screenwriting at the USC School of Cinematic Arts. Feldman is married to artisan contractor and colorist Carl Tillmanns; together they have two children, Nora and Leon. They are both alumni of Sarah Lawrence College, where they first met. Passage 8: Ian Barry (director) Ian Barry is an Australian director of film and TV. Select credits Waiting for Lucas (1973) (short) Stone (1974) (editor only) The Chain Reaction (1980) Whose Baby? (1986) (mini-series) Minnamurra (1989) Bodysurfer (1989) (mini-series) Ring of Scorpio (1990) (mini-series) Crimebroker (1993) Inferno (1998) (TV movie) Miss Lettie and Me (2002) (TV movie) Not Quite Hollywood: The Wild, Untold Story of Ozploitation! (2008) (documentary) The Doctor Blake Mysteries (2013) Passage 9: Howard W. Koch Howard Winchel Koch (April 11, 1916 – February 16, 2001) was an American producer and director of film and television. Life and career Koch was born in New York City, the son of Beatrice (Winchel) and William Jacob Koch. His family was Jewish. He attended DeWitt Clinton High School and the Peddie School in Hightstown, New Jersey. He began his film career as an employee at the Universal Studios office in New York, then made his Hollywood filmmaking debut in 1947 as an assistant director. He worked as a producer for the first time in 1953 and a year later made his directing debut. In 1964, Paramount Pictures appointed him head of film production, a position he held until 1966 when he left to set up his own production company. He had a production pact with Paramount for over 15 years. Among his numerous television productions, Howard W. Koch produced the Academy Awards show on eight occasions. Dedicated to the industry, he served as President of the Academy of Motion Picture Arts and Sciences from 1977 to 1979. In 1990 the Academy honored him with The Jean Hersholt Humanitarian Award and in 1991 he received the Frank Capra Achievement Award from the Directors Guild of America. Together with actor Telly Savalas, Howard Koch owned the thoroughbred racehorse Telly's Pop, winner of several important California races for juveniles including the Norfolk Stakes and Del Mar Futurity. Howard W. Koch suffered from Alzheimer's disease and died at his home in Beverly Hills, California on February 16, 2001. He had two children from a marriage of 64 years to Ruth Pincus, who died in March 2009. In 2004, his son Hawk Koch was elected to the Board of Governors of the Academy of Motion Picture Arts and Sciences. Filmography Director Film (director) Shield for Murder (1954) Big House, U.S.A. (1955) Untamed Youth (1957) Bop Girl Goes Calypso (1957) Jungle Heat (1957) The Girl in Black Stockings (1957) Fort Bowie (1957) Violent Road (1958) Frankenstein 1970 (1958) Born Reckless (1958) Andy Hardy Comes Home (1958) The Last Mile (1959) Badge 373 (1973) Television (director) Maverick (1957) (1 episode) Hawaiian Eye (1959) (2 episodes) Cheyenne (1958) (1 episode) The Untouchables (1959) (4 episodes) The Gun of Zangara (1960) (TV movie taken from The Untouchables (1959 TV series)) Miami Undercover (1961) (38 episodes) Texaco Presents Bob Hope in a Very Special Special: On the Road with Bing (1977) Producer Film (producer): War Paint (1953) Beachhead (1954) Shield for Murder (1954) Big House, U.S.A.
(1955) Rebel in Town (1956) Frankenstein 1970 (1958) Sergeants 3 (1962) The Manchurian Candidate (1962) Come Blow Your Horn (1963) Robin and the 7 Hoods (1964) The Odd Couple (1968) On a Clear Day You Can See Forever (1970) A New Leaf (1971) Plaza Suite (1971) Last of the Red Hot Lovers (1972) Jacqueline Susann's Once Is Not Enough (1975) The Other Side of Midnight (1977) Airplane! (1980) Some Kind of Hero (1982) Airplane II: The Sequel (1982) Ghost (1990)Television (producer) Magnavox Presents Frank Sinatra (1973) Passage 10: Reginald Fogwell Reginald Fogwell (23 November 1893, Dartmouth, Devon -1977) was a British film director, producer and screenwriter. Selected filmography Director The Warning (1928) Cross Roads (1930) The Written Law (1930) Madame Guillotine (1931) Guilt (1931) Betrayal (1932) The Wonderful Story (1932) Murder at the Cabaret (1936)Screenwriter Two Can Play (1926) The Guns of Loos (1928) Glorious Youth (1929) Warned Off (1930) Such Is the Law (1930) Prince of Arcadia (1933) Two Hearts in Waltz Time (1934)
Do the director of the film Betrayal (1932 film) and the director of the film The Godsend (film) share the same nationality?
yes
3,122
2wikimqa
4k
Passage 1: Magic Mountain Magic Mountain or The Magic Mountain may refer to: Books The Magic Mountain, a novel by Thomas Mann Places Magic Mountain (California), a landform that was Nike missile location LA-98R Magic Mountain (British Columbia), a hydrothermal vent field on the Pacific Ocean sea floor Magic Mountain site, a prehistoric archaeological site in Colorado Magic Mountain, Vermont, a natural ski area in Londonderry, Vermont Magic Mountain (Washington), a mountain on the border of North Cascades National Park and Snoqualmie National Forest, Washington, USA Parks and Recreation Magic Mountain (roller coaster), a steel roller coaster in Castelnuovo del Garda, Italy Magic Mountain, Glenelg, a former theme park in Glenelg, Australia Magic Mountain Resort, a small ski area south of Twin Falls, Idaho Magic Mountain, Merimbula, a theme park in Australia Magic Mountain (New Brunswick), a water park in Moncton, New Brunswick Magic Mountain, Nobby Beach, a former theme park on the Gold Coast, Australia Six Flags Magic Mountain, a theme park in Valencia, California Film and TV The Magic Mountain (1982 film), a film directed by Hans W. Geißendörfer The Magic Mountain (2015 film), a film directed by Anca Damian Magic Mountain (TV series), an Australian and Chinese children's programme Music "Magic Mountain" (song), by Eric Burdon & War (1977) Magic Mountain (Hans Koller album) (1997) Magic Mountain (Black Stone Cherry album) (2014) "Magic Mountain", a song by Blonde Redhead from Misery Is a Butterfly (2004) "Magic Mountain", a song by the Drums from Encyclopedia (2014) Passage 2: 1994–95 Orlando Magic season The 1994–95 NBA season was the Magic's 6th season in the National Basketball Association. After building through the draft in previous years, the Magic made themselves even stronger by signing free agents Horace Grant, who won three championships with the Chicago Bulls, and Brian Shaw during the off-season. The Magic got off to a fast start winning 22 of their first 27 games, then later holding a 37–10 record at the All-Star break. Despite losing seven of their final eleven games in April, the Magic easily won the Atlantic Division with a 57–25 record. They also finished with a 39–2 home record, tied for second best in NBA history. Shaquille O'Neal continued to dominate the NBA with 29.3 points, 11.4 rebounds and 2.4 blocks per game, and was named to the All-NBA Second Team, while second-year star Penny Hardaway averaged 20.9 points, 7.2 assists and 1.7 steals per game, while being named to the All-NBA First Team, and Grant gave the Magic one of the most dominant starting lineups in the NBA, averaging 12.8 points and 9.7 rebounds per game, as he was named to the NBA All-Defensive Second Team. In addition, Nick Anderson provided the team with 15.8 points and 1.6 steals per game, while three-point specialist Dennis Scott played a sixth man role, averaging 12.9 points per game off the bench, Donald Royal contributed 9.1 points and 4.0 rebounds per game as the team's starting small forward, and Shaw contributed 6.4 points and 5.2 assists per game off the bench. O'Neal and Hardaway were both selected to play in the 1995 NBA All-Star Game, with head coach Brian Hill coaching the Eastern Conference. 
O'Neal also finished in second place in Most Valuable Player voting, while Hardaway finished in tenth place, and Scott finished in fifth place in Sixth Man of the Year voting. In the Eastern Conference First Round of the playoffs, the Magic overwhelmed the Boston Celtics with a 124–77 victory in Game 1. Despite losing Game 2 at home, 99–92, the Magic would eliminate the Celtics at the Boston Garden to win the series, 3–1. These matches would be the final 2 basketball games ever played at the Garden. Coincidentally, O’Neal played his final game in Boston 16 years later with the 2010–11 Boston Celtics before retiring from the NBA at 39 years old. In the Eastern Conference Semi-finals, the Magic were matched up against the 5th-seeded Chicago Bulls. The Bulls were on an emotional high as Michael Jordan had just returned from his baseball career to play basketball. Jordan was now wearing the number 45 for the Bulls. The Magic won the first game 94–91. Tensions rose when Anderson suggested that Jordan was no longer the same player, telling the media: "No. 45 doesn't explode like No. 23 used to. No. 23, he could just blow by you. He took off like a space shuttle. No. 45, he revs up, but he really doesn't take off." The comment motivated Jordan to return to number 23, and the Bulls evened the series with a 104–94 road win in Game 2. With the series tied at two games apiece, the Magic won Game 5 at home, 103–95. The Magic would eliminate the Bulls in Game 6, winning 108–102 to advance to the conference finals. In the Eastern Conference finals, the Magic beat Reggie Miller and the 2nd-seeded, Central Division champion Indiana Pacers in a tough 7-game series that saw the home team win every game. The Magic were off to their first ever NBA Finals appearance. In the Finals, the Magic faced off against the 6th-seeded and defending NBA champion Houston Rockets. Shaq would be up against Hakeem Olajuwon in a battle of All-Star centers. Game 1 was played in Orlando and the game was lost at the free-throw line. Anderson missed four consecutive free throws with the Magic up by three in the waning seconds of the game, and the Rockets tied the game at the buzzer. The Rockets would then win Game 1 in overtime, 120–118. The Magic would not recover from their Game 1 loss as the Rockets swept the series in four straight. Following the season, Anthony Avent was traded to the expansion Vancouver Grizzlies, and Tree Rollins retired. For the season, the Magic added new blue pinstripe road uniforms, while the black pinstripe jerseys became their alternate. Both uniforms remained in use until 1998. Orlando did not make another appearance in the NBA Finals until 2009. Draft picks Roster Regular season Season standings Record vs.
opponents Game log Regular season Playoffs Player statistics Season Playoffs Awards and honors Shaquille O'Neal – All-NBA 2nd team, Scoring Champion, All-Star Penny Hardaway – All-NBA 1st Team, All-Star Horace Grant – All-Defensive 2nd Team Brian Hill – All-Star East Head Coach Transactions Trades Free agents Player Transactions Citation: Passage 3: The Magic Christian The Magic Christian may refer to: Magic Christian (magician) (born 1945) Magic Christian Music, an album by Badfinger featuring three songs from the 1969 film The Magic Christian (film), a 1969 film The Magic Christian (novel), a 1959 comic novel by Terry Southern See also Christian views on magic Magic Cristian, American musician Phil Cristian Passage 4: Celebrate the Magic Celebrate the Magic was a nighttime show at the Magic Kingdom park of Walt Disney World that premiered on November 13, 2012. It replaced The Magic, the Memories and You display, a similar show that ran at the Magic Kingdom and Disneyland from January 2011 to September 4, 2012. Celebrate the Magic took place on Cinderella Castle and included a contemporary musical score, projection mappings, pyrotechnics and lighting. A three-dimensional computer-generated rendering of Cinderella Castle was released by Disney in August 2012, revealing some of the various designs that would be displayed on the structure. On October 26, 2016, it was announced that the show would be replaced by Once Upon a Time, formerly from Tokyo Disneyland. The last Celebrate the Magic took place on November 3, 2016. Plot Tinker Bell introduces the show as she appears flying over the castle's turrets. The castle is transformed into a paper canvas as Walt Disney appears sketching Mickey Mouse in his iconic Steamboat Willie appearance. Tinker Bell enchants a paintbrush, which then becomes the host of the show. A kaleidoscope featuring images of Mickey, Donald Duck and Goofy is projected, followed soon after by short clips from Cinderella, Pinocchio and The Princess and the Frog. The show then progresses into longer, classic scenes from Disney films, including Alice in Wonderland, Dumbo, Wreck-It Ralph, The Lion King, Tarzan, The Jungle Book, Lady and the Tramp, Tangled, Toy Story, Pirates of the Caribbean and Frozen. The show's climax features a fast-paced montage of characters and scenes from such other Disney films as Snow White and the Seven Dwarfs, Bambi, Sleeping Beauty, Pocahontas, Up, Peter Pan, The Little Mermaid, Finding Nemo, Beauty and the Beast, Aladdin, and Tangled. During the montage Walt Disney appears again, via archival footage, reciting one of his most famous quotes: "I only hope that we never lose sight of one thing – that it was all started by a mouse". The show then proceeds into a synchronized pyrotechnic finale. Seasonal outlook Similar to its predecessor, Celebrate the Magic showcased sequences that were appropriately themed to seasonal parts of the year. The show premiered with the original Christmas segment from The Magic, the Memories and You. The summer months featured films such as Phineas and Ferb, The Little Mermaid and Lilo & Stitch; in addition, segments featuring Disney Princesses and couples for Valentine's Day and Disney Villains for Halloween were shown, and in the winter, Frozen was showcased. The summer edition debuted during the Monstrous Summer All-Nighter event on May 24, 2013 and ran until August 31, 2013. The Halloween edition featuring the Disney villains debuted on September 1, 2013 and ran until October 31, 2013.
A new segment based on Frozen debuted on November 17, 2013 replacing a segment based on Brave. See also Celebrate! Tokyo Disneyland Disneyland Forever Together Forever: A Pixar Nighttime Spectacular Once Upon a Time Passage 5: Above Rubies Above Rubies is a 1932 British comedy film directed by Frank Richardson and starring Zoe Palmer, Robin Irvine and Tom Helmore. It is set in Monte Carlo.It was made at Walton Studios as a quota quickie for release by United Artists. Cast Zoe Palmer as Joan Wellingford Robin Irvine as Philip Tom Helmore as Paul John Deverell as Lord Middlehurst Franklyn Bellamy as Dupont Allan Jeayes as Lamont Madge Snell as Lady Wellingford Passage 6: Magic Keyboard The Magic Keyboard is an Apple trademark used on several of their keyboards, referring to: Magic Keyboard (Mac), a wireless keyboard released by Apple in 2015 Magic Keyboard for iPad, a wireless keyboard with an integrated trackpad for use in iPads with a Smart Connector, released in 2020 The built-in keyboard of the MacBook Pro since 2019 and MacBook Air since 2020. Older Apple notebook keyboards that used the butterfly-switch mechanism do not use this brand name. Passage 7: Got the Magic Got the Magic may refer to: Got the Magic (Celtic Harp Orchestra album), 2003 Got the Magic (Spyro Gyra album), 1999 Passage 8: The Magic Aster The Magic Aster (马兰花; Ma Lan Hua) is a Chinese animated film released June 19, 2009 by Shanghai Animation Film Studio, Xiamen Shangchen Science and Technology company and the Shanghai Chengtai investment management company. Cast The film included a notable cast of celebrities for the voice over of the on-screen characters. Passage 9: The Magic House The Magic House may refer to: The Magic House (film), a 1939 Czech film The Magic House (TV series), a 1994–1996 British children's television puppet show that aired on Scottish Television The Magic House, St. Louis Children's Museum, children's museum in Missouri The Magic House is a magical event in the television series Teletubbies about a puppet who walks around his pink house and sings from one of his windows. Passage 10: A Price Above Rubies A Price Above Rubies is a 1998 British-American drama film written and directed by Boaz Yakin and starring Renée Zellweger. The story centers on a young woman who finds it difficult to conform to the restrictions imposed on her by her community. Reviews of the film itself were mixed, though there were generally positive reviews of Zellweger's performance. The title derives from a Jewish Sabbath tradition. The acrostic Sabbath chant The Woman of Valor (eishet chayil) begins with the verse "... Who can find a woman of valor, her price is far above rubies ... ," which in turn is excerpted from The Book of Proverbs. This chant traditionally is a prelude to the weekly toast (kiddush) which begins the Sabbath meal. Plot summary Sonia is a young Brooklyn woman who has just given birth to her first child. She is married, through an arranged marriage, to Mendel, a devout Hasidic Jew who is too repressed and immersed in his studies to give his wife the attention she craves. He even condemns her for making sounds during sex and considers nudity with sex "indecent". Sonia is distressed and later, after a panic attack, she tries to kiss her sister-in-law Rachel. Rachel persuades her to talk to the Rebbe but Sonia cannot truly articulate what is upsetting her, instead resorting to a metaphor of a fire burning her up. Sonia develops a relationship with Sender, who brings her into his jewellery business. 
Her husband forgets her birthday and Sonia says she longs for something beautiful in her life - even if it is a terrible beauty. Sender is the only release for Sonia's sexuality but she is repelled by his utter lack of morals. He is also abrupt and self-centred in his seductions, never waiting for Sonia to achieve orgasm. Sonia sometimes sees and hears her brother. He appears as a child and judges her actions. On one occasion she buys a non-kosher egg roll whilst in Chinatown and her brother tells her off and an elderly street beggar-woman sees him and offers him candy. She comments on another woman's earrings and this leads Sonia to track down the maker of a ring she had discovered earlier that day. The maker is Puerto Rican artist and jewellery designer Ramon, who works as a salesperson in the jewellery quarter but keeps his artistry a secret from everyone in the business. Later Sonia's husband tells her she cannot continue to work. She is furious. Her husband insists they see a marriage counsellor (their rabbi) but the man decides Sonia is not being a good enough Jew. She says she is tired of being afraid and if she is so offensive to God then 'let him do what he wants to me.' The counsellor says we bring suffering upon ourselves but Sonia protests that her relatives who were murdered in the Holocaust and her brother who died when he was ten did not deserve their suffering. The counsellor says that 'we' do not question the ways of God but Sonia corrects this to 'you' and asserts that she will question whatever she wants to. Sonia stops wearing her wig and starts wearing a headscarf instead. She introduces Ramon and some samples to a jewellery buyer who expresses an interest in his potential as a designer. They argue at Ramon's flat as she becomes bossy over his career, and he tries to get her to model (clothed) with a naked male model so he can complete a sculpture. She runs away. Sonia's marriage breaks down irrevocably. Sonia is locked out of her apartment, and finds that her son has been given to Rachel. She is told she may live in a tiny apartment owned by Sender and kept for 'business purposes'. When she arrives, Sender is eating at a table and it is clear he has set her up as his mistress when she asks what the price is to stay: he says above that of emeralds but below the price of rubies. This is 'freedom'. Sonia hands him back the keys and leaves. None of her friends will take her phone calls and Sonia is homeless. She meets the beggar-woman on the street and is taken to an empty studio and given food. The woman refers to an old legend (one her brother spoke of at the start of the film), to encourage Sonia. Meantime, Mendel takes back his son - for nights only. Rachel protests but he says he would appreciate her caring for his son during the day when he is studying. Sonia now goes to Ramon's place and he lets her stay. She says he was right to be wary of her when they met as she has destroyed every good thing she had. But Ramon disagrees, removes her jewellery, and points out that her necklace is 'a chain'. (It is unclear if the necklace is of religious significance or if he means the need to have financial security through jewellery is a chain or restriction). The two end up kissing. Sonia dreams her brother returns from the lake to say he swam, and she - as her childhood self - says she swam too. When she wakes up in Ramon's bed there is a prominent crucifix on the wall. Sonia goes to speak to the widow of the Rebbe. 
The widow tells her that Sonia's words about being consumed by fire had awoken a fire in the Rebbe and for the first time in 20 years he had said 'I love you.' It is implied that they made love and the Rebbe had a heart attack. The widow is not unhappy with this outcome. She assists Sonia in reclaiming property from Sender's safe. With Ramon's ring back in her keeping she returns it to Ramon. She doesn't want to stay as she does not feel she belongs. Ramon offers her time to think about what she wants. Mendel arrives. Sonia asks after her son and then if Mendel misses her. He shakes his head. He asks the same of her and she shakes her head. They laugh. He apologises for forgetting her birthday but he knows that this was not all it was about. He gives her a ruby as token of his regret and invites her to visit their son. Mendel leaves and Sonia says, 'God bless you'. Cast Renée Zellweger as Sonia Horowitz Christopher Eccleston as Sender Horowitz Julianna Margulies as Rachel Allen Payne as Ramon Garcia Glenn Fitzgerald as Mendel Horowitz Shelton Dane as Yossi Kim Hunter as Rebbetzin John Randolph as Rebbe Moshe Kathleen Chalfant as Beggar Woman Peter Jacobson as Schmuel Edie Falco as Feiga Allen Swift as Mr. Fishbein Production It was shot in Brooklyn during 1997. Entertainment Weekly reported that a group of onlookers, upset over the film's depiction of Judaism, got in the way of shooting one day. The producers faced backlash for casting Zellweger, who did not follow Judaism, in the lead role. Director Boaz Yakin remarked, "Zellweger was the best actor for the part. She is an actor. The Jews that worked on this film knew less about the Hasidic lifestyle than Renee did after reading 10 books about it. So, being a Jew doesn't qualify you to act the part any more than any other thing. It was more important for each actor and actress to find the emotional light of their character and learn to wear it like a second skin." Reception Roger Ebert of Chicago Sun-Times gave the movie three stars. While impressed by Zellweger's "ferociously strong performance", he found the film did not teach us "much about her society", and that the Hasidic community could have been treated in greater depth. Charles Taylor of Salon likewise appreciated Zellweger's performance, while also finding the cultural aspect treated too superficially. He described Sonia's choices as "clichés left over from the Liberated Woman movies of 20 years ago", and the movie generally as "that old middle-of-the-road groaner about the good and bad in every race". Maria Garcia of Film Journal International was more positively inclined to the movie, and called it a "beautifully wrought, skillfully rendered, and brilliantly acted film".
Which film came out earlier, Above Rubies or The Magic Aster?
Above Rubies
3,299
2wikimqa
4k
Passage 1: Daniel Burman Daniel Burman (born 29 August 1973, in Buenos Aires) is an Argentine film director, screenplay writer, and producer. According to film critic Joel Poblete, who writes for Mabuse, a cinema magazine, Daniel Burman is one of the members of the so-called "New Argentine Cinema", which began circa 1998. In fact, film critic Anthony Kaufman, writing for indieWIRE, said Burman's A Chrysanthemum Burst in Cincoesquinas (1998) has been cited as the beginning of the "New Argentine Cinema" wave. Biography Burman is of Polish-Jewish descent, and he was born and raised in Buenos Aires. He holds both Argentine and Polish citizenship, like his films' character, Ariel. He studied law before changing to audiovisual media production. In 1995, he launched his own production company together with Diego Dubcovsky, BD Cine (Burman and Dubcovsky Cine). Burman is also a founding member of the Academy of Argentine Cinema. The films of his loose trilogy, Esperando al Mesías (2000), El Abrazo Partido (2004), and Derecho de Familia (2006), were all written and directed by Burman and star Uruguayan actor Daniel Hendler. They are largely autobiographical, dealing with the life of a young neurotic Jew in contemporary Buenos Aires. He frequently collaborates with other Argentine Jews, notably writer and klezmer musician Marcelo Birmajer, and César Lerner. His comedic touches often bring comparison to Woody Allen, a comparison Burman is quick to reject. He said, "It's not a measurable comparison. But I'm very happy with it. I admire him more than anyone else in the world." Burman's films have been featured in many film festivals around the world. El abrazo partido (2003) took the Grand Jury Prize at the Berlin International Film Festival, as well as best actor for Hendler. Burman was co-producer of the successful 2004 film, The Motorcycle Diaries, as well as Garage Olimpo (1999). Opinions on filmmaking In an interview with Brian Brooks, who writes for indieWIRE.com, an online community of independent filmmakers and aficionados, Burman discussed his approach to filmmaking. He said: "I don't have goals when I make a film, except to create as faithfully as possible the story I wanted to tell, and that the sensations that provoked me to tell the story are also caused when reading the script." "I don't love film in itself; it's not like I was debating the merits of using different types of camera-work, like traveling shots. I love film because it's a story-telling tool," he said in an interview he did for TimesSquare.com. Interconnections between films It is arguable that the loose trilogy of films — Esperando al Mesías (2000), El Abrazo Partido (2003), and Derecho de Familia (2006) — happen in the same "universe". The three share common traits: they are written and directed by Burman and all star Daniel Hendler in the title role as a young Jew. Additionally, several actors and actresses appear twice in the films. Because Hendler's characters share similar traits (they are all named Ariel: Ariel Goldstein, Ariel Makaroff and Ariel Perelman respectively) and because some characters from one film seem to appear in another, the trilogy is usually considered as happening in the same universe. Several continuity changes show that the three Ariels are different people: in the first movie, Ariel's father is a restaurant owner, and his mother dies; in the second film, his father has been long gone, and his mother tends to a small shop; in the third movie, his father dies in the film, and his mother has been long dead.
However, a character named Estela from the first film appears in the second, and is both times played by Melina Petriella. This at least connects the first two movies to the same universe. Additionally, Juan José Flores Quispe appears in the second and third movie as "Ramón". Although his character, unlike Estela, varies from film to film, this suggests that the second and third film also share the same universe and, thus, the trilogy itself is set in the same storyline, with the "Ariel persona" showing either different aspects of the same character or simply being a mere coincidence. Filmography Producer El Crimen del Cacaro Gumaro (2014) a.k.a. "The Popcorn Chronicles" Director ¿En qué estación estamos? (1992, short) Post data de ambas cartas (1993, short) Help o el pedido de auxilio de una mujer viva (1994, short) Niños envueltos (1995, short) Un Crisantemo Estalla en Cinco Esquinas (1998) a.k.a. A Chrysanthemum Burst in Cincoesquinas Esperando al Mesías (2000) a.k.a. Waiting for the Messiah Todas Las Azafatas Van Al Cielo (2002) a.k.a. Every Stewardess Goes to Heaven El Abrazo Partido (2004) a.k.a. Lost Embrace 18-J (2004) Derecho de Familia (2006) a.k.a. Family Law Encarnación (2007) El nido vacío (2008) Brother and Sister (2010) La suerte en tus manos (2012) Mystery of Happiness (2014) The Tenth Man (2016) Television La pista (1997) Un cuento de Navidad (2002) Yosi, the Regretful Spy (2022) Awards Bangkok World Film Festival: Best Film; El Abrazo partido; 2004. Berlin International Film Festival: Silver Berlin Bear; Jury Grand Prix; for El Abrazo partido; 2004. Clarin Entertainment Awards: Clarin Award Best Film Screenplay; for Derecho de familia; 2006. Clarin Entertainment Awards: Won Clarin Award Best Film Screenplay; for El Abrazo partido; 2004. Festróia - Tróia International Film Festival: Golden Dolphin; for Todas las azafatas van al cielo; 2002. Havana Film Festival: Best Unpublished Screenplay; for El abrazo partido; 2002. Havana Film Festival: Won Grand Coral, Third Prize; for Esperando al mesías; 2000. Lleida Latin-American Film Festival: Best Director; Best Film; ICCI Screenplay Award; all for El Abrazo partido; 2004. Lleida Latin-American Film Festival: Best Film; for El Esperando al mesías 2001. Mar del Plata Film Festival: Audience Award; Best Ibero-American Film; SIGNIS Award; all for Derecho de familia; 2006. Santa Fe Film Festival: Luminaria Award; Best Latino Film; for Todas las azafatas van al cielo; 2002. Sochi International Film Festival: FIPRESCI Prize; for Un Crisantemo estalla en cinco esquinas; 1998. Sundance Film Festival: NHK Award; for Every Stewardess Goes to Heaven (Latin America); 2001. Valladolid International Film Festival: FIPRESCI Prize; for Esperando al mesías; For an honest, both realistic and symbolic depiction of human hopes in Buenos Aires nowadays; 2002. Passage 2: S. N. Mathur S.N. Mathur was the Director of the Indian Intelligence Bureau between September 1975 and February 1980. He was also the Director General of Police in Punjab. Passage 3: Dana Blankstein Dana Blankstein-Cohen (born March 3, 1981) is the executive director of the Sam Spiegel Film and Television School. She was appointed by the board of directors in November 2019. Previously she was the CEO of the Israeli Academy of Film and Television. She is a film director, and an Israeli culture entrepreneur. Biography Dana Blankstein was born in Switzerland in 1981 to theatre director Dedi Baron and Professor Alexander Blankstein. She moved to Israel in 1983 and grew up in Tel Aviv. 
Blankstein graduated from the Sam Spiegel Film and Television School, Jerusalem in 2008 with high honors. During her studies she worked as a personal assistant to directors Savi Gabizon on his film Nina's Tragedies and to Renen Schorr on his film The Loners. She also directed and shot 'the making of' film on Gabizon's film Lost and Found. Her debut film Camping competed at the Berlin International Film Festival, 2007. Film and academic career After her studies, Dana founded and directed the film and television department at the Kfar Saba municipality. The department encouraged and promoted productions filmed in the city of Kfar Saba, as well as established cultural projects and educational community activities. Blankstein directed the mini-series "Tel Aviviot" (2012). From 2016 to 2019 she was the director of the Israeli Academy of Film and Television. In November 2019 Dana Blankstein Cohen was appointed the new director of the Sam Spiegel Film and Television School, where she also oversees the Sam Spiegel International Film Lab. In 2022, she spearheaded the launch of the new Series Lab and the film preparatory program for Arabic speakers in east Jerusalem. Filmography Tel Aviviot (mini-series; director, 2012) Growing Pains (graduation film, Sam Spiegel; director and screenwriter, 2008) Camping (debut film, Sam Spiegel; director and screenwriter, 2006) Passage 4: Brian Kennedy (gallery director) Brian Patrick Kennedy (born 5 November 1961) is an Irish-born art museum director who has worked in Ireland and Australia, and now lives and works in the United States. He was the director of the Peabody Essex Museum in Salem for 17 months, resigning December 31, 2020. He was the director of the Toledo Museum of Art in Ohio from 2010 to 2019. He was the director of the Hood Museum of Art from 2005 to 2010, and the National Gallery of Australia (Canberra) from 1997 to 2004. Career Brian Kennedy currently lives and works in the United States after leaving Australia in 2005 to direct the Hood Museum of Art at Dartmouth College. In October 2010 he became the ninth Director of the Toledo Museum of Art. On 1 July 2019, he succeeded Dan Monroe as the executive director and CEO of the Peabody Essex Museum. Early life and career in Ireland Kennedy was born in Dublin and attended Clonkeen College. He received B.A. (1982), M.A. (1985) and PhD (1989) degrees from University College Dublin, where he studied both art history and history. He worked in the Irish Department of Education (1982), the European Commission, Brussels (1983), and in Ireland at the Chester Beatty Library (1983–85), Government Publications Office (1985–86), and Department of Finance (1986–89). He married Mary Fiona Carlin in 1988. He was Assistant Director at the National Gallery of Ireland in Dublin from 1989 to 1997. He was Chair of the Irish Association of Art Historians from 1996 to 1997, and of the Council of Australian Art Museum Directors from 2001 to 2003. In September 1997 he became Director of the National Gallery of Australia.
During his directorship, the NGA gained government support for improving the building and significant private donations and corporate sponsorship. However, the initial design for the building proved controversial generating a public dispute with the original architect on moral rights grounds. As a result, the project was not delivered during Dr Kennedy's tenure, with a significantly altered design completed some years later. Private funding supported two acquisitions of British art, including David Hockney's A Bigger Grand Canyon in 1999, and Lucian Freud's After Cézanne in 2001. Kennedy built on the established collections at the museum by acquiring the Holmgren-Spertus collection of Indonesian textiles; the Kenneth Tyler collection of editioned prints, screens, multiples and unique proofs; and the Australian Print Workshop Archive. He was also notable for campaigning for the construction of a new "front" entrance to the Gallery, facing King Edward Terrace, which was completed in 2010 (see reference to the building project above). Kennedy's cancellation of the "Sensation exhibition" (scheduled at the NGA from 2 June 2000 to 13 August 2000) was controversial, and seen by some as censorship. He claimed that the decision was due to the exhibition being "too close to the market" implying that a national cultural institution cannot exhibit the private collection of a speculative art investor. However, there were other exhibitions at the NGA during his tenure, which could have raised similar concerns. The exhibition featured the privately owned Young British Artists works belonging to Charles Saatchi and attracted large attendances in London and Brooklyn. Its most controversial work was Chris Ofili's The Holy Virgin Mary, a painting which used elephant dung and was accused of being blasphemous. The then-mayor of New York, Rudolph Giuliani, campaigned against the exhibition, claiming it was "Catholic-bashing" and an "aggressive, vicious, disgusting attack on religion." In November 1999, Kennedy cancelled the exhibition and stated that the events in New York had "obscured discussion of the artistic merit of the works of art". He has said that it "was the toughest decision of my professional life, so far."Kennedy was also repeatedly questioned on his management of a range of issues during the Australian Government's Senate Estimates process - particularly on the NGA's occupational health and safety record and concerns about the NGA's twenty-year-old air-conditioning system. The air-conditioning was finally renovated in 2003. Kennedy announced in 2002 that he would not seek extension of his contract beyond 2004, accepting a seven-year term as had his two predecessors.He became a joint Irish-Australian citizen in 2003. Toledo Museum of Art The Toledo Museum of Art is known for its exceptional collections of European and American paintings and sculpture, glass, antiquities, artist books, Japanese prints and netsuke. The museum offers free admission and is recognized for its historical leadership in the field of art education. During his tenure, Kennedy has focused the museum's art education efforts on visual literacy, which he defines as "learning to read, understand and write visual language." Initiatives have included baby and toddler tours, specialized training for all staff, docents, volunteers and the launch of a website, www.vislit.org. In November 2014, the museum hosted the International Visual Literacy Association (IVLA) conference, the first Museum to do so. 
Kennedy has been a frequent speaker on the topic, including 2010 and 2013 TEDx talks on visual and sensory literacy. Kennedy has expressed an interest in expanding the museum's collection of contemporary art and art by indigenous peoples. Works by Frank Stella, Sean Scully, Jaume Plensa, Ravinder Reddy and Mary Sibande have been acquired. In addition, the museum has made major acquisitions of Old Master paintings by Frans Hals and Luca Giordano. During his tenure the Toledo Museum of Art has announced the return of several objects from its collection due to claims the objects were stolen and/or illegally exported prior to being sold to the museum. In 2011 a Meissen sweetmeat stand was returned to Germany, followed by an Etruscan kalpis or water jug to Italy (2013), an Indian sculpture of Ganesha (2014) and an astrological compendium to Germany in 2015. Hood Museum of Art Kennedy became Director of the Hood Museum of Art in July 2005. During his tenure, he implemented a series of large and small-scale exhibitions and oversaw the production of more than 20 publications to bring greater public attention to the museum's remarkable collections of the arts of America, Europe, Africa, Papua New Guinea and the Polar regions. At 70,000 objects, the Hood has one of the largest collections on any American college or university campus. The exhibition Black Womanhood: Images, Icons, and Ideologies of the African Body toured several US venues. Kennedy increased campus curricular use of works of art, with thousands of objects pulled from storage for classes annually. Numerous acquisitions were made with the museum's generous endowments, and he curated several exhibitions, including Wenda Gu: Forest of Stone Steles: Retranslation and Rewriting Tang Dynasty Poetry, Sean Scully: The Art of the Stripe, and Frank Stella: Irregular Polygons. Publications Kennedy has written or edited a number of books on art, including: Alfred Chester Beatty and Ireland 1950-1968: A study in cultural politics, Glendale Press (1988), ISBN 978-0-907606-49-9 Dreams and responsibilities: The state and arts in independent Ireland, Arts Council of Ireland (1990), ISBN 978-0-906627-32-7 Jack B Yeats: Jack Butler Yeats, 1871-1957 (Lives of Irish Artists), Unipub (October 1991), ISBN 978-0-948524-24-0 The Anatomy Lesson: Art and Medicine (with Davis Coakley), National Gallery of Ireland (January 1992), ISBN 978-0-903162-65-4 Ireland: Art into History (with Raymond Gillespie), Roberts Rinehart Publishers (1994), ISBN 978-1-57098-005-3 Irish Painting, Roberts Rinehart Publishers (November 1997), ISBN 978-1-86059-059-7 Sean Scully: The Art of the Stripe, Hood Museum of Art (October 2008), ISBN 978-0-944722-34-3 Frank Stella: Irregular Polygons, 1965-1966, Hood Museum of Art (October 2010), ISBN 978-0-944722-39-8 Honors and achievements Kennedy was awarded the Australian Centenary Medal in 2001 for service to Australian society and its art. He is a trustee and treasurer of the Association of Art Museum Directors, a peer reviewer for the American Association of Museums and a member of the International Association of Art Critics. In 2013 he was appointed inaugural eminent professor at the University of Toledo and received an honorary doctorate from Lourdes University. Most recently, Kennedy received the 2014 Northwest Region, Ohio Art Education Association award for distinguished educator for art education. Passage 5: Peter Levin Peter Levin is an American director of film, television and theatre. 
Career Since 1967, Levin has amassed a large number of credits directing episodic television and television films. Some of his television series credits include Love Is a Many Splendored Thing, James at 15, The Paper Chase, Family, Starsky & Hutch, Lou Grant, Fame, Cagney & Lacey, Law & Order and Judging Amy. Some of his television film credits include Rape and Marriage: The Rideout Case (1980), A Reason to Live (1985), Popeye Doyle (1986), A Killer Among Us (1990) and Queen Sized (2008), among other films. He directed "Heart in Hiding", written by his wife Audrey Davis Levin, for which she received an Emmy for Best Daytime Special in the 1970s. Prior to becoming a director, Levin worked as an actor in several Broadway productions. He costarred with Susan Strasberg in The Diary of Anne Frank but had to leave the production when he was drafted into the Army. He trained at Carnegie Mellon University. Eventually becoming a theatre director, he directed productions at the Long Wharf Theatre and the Pacific Resident Theatre Company. He also co-founded the off-off-Broadway theatre the Hardware Poets Playhouse with his wife Audrey Davis Levin and was also an associate artist of The Interact Theatre Company. Passage 6: Olav Aaraas Olav Aaraas (born 10 July 1950) is a Norwegian historian and museum director. He was born in Fredrikstad. From 1982 to 1993 he was the director of Sogn Folk Museum, from 1993 to 2010 he was the director of Maihaugen and from 2001 he has been the director of the Norwegian Museum of Cultural History. In 2010 he was decorated with the Royal Norwegian Order of St. Olav. Passage 7: Jesse E. Hobson Jesse Edward Hobson (May 2, 1911 – November 5, 1970) was the director of SRI International from 1947 to 1955. Prior to SRI, he was the director of the Armour Research Foundation. Early life and education Hobson was born in Marshall, Indiana. He received bachelor's and master's degrees in electrical engineering from Purdue University and a PhD in electrical engineering from the California Institute of Technology. Hobson was also selected as a nationally outstanding engineer. Hobson married Jessie Eugertha Bell on March 26, 1939, and they had five children. Career Awards and memberships Hobson was named an IEEE Fellow in 1948. Passage 8: Ian Barry (director) Ian Barry is an Australian director of film and TV. Select credits Waiting for Lucas (1973) (short) Stone (1974) (editor only) The Chain Reaction (1980) Whose Baby? (1986) (mini-series) Minnamurra (1989) Bodysurfer (1989) (mini-series) Ring of Scorpio (1990) (mini-series) Crimebroker (1993) Inferno (1998) (TV movie) Miss Lettie and Me (2002) (TV movie) Not Quite Hollywood: The Wild, Untold Story of Ozploitation! (2008) (documentary) The Doctor Blake Mysteries (2013) Passage 9: A Chrysanthemum Bursts in Cincoesquinas Un crisantemo estalla en cinco esquinas (English: A Chrysanthemum Bursts in Cincoesquinas) is a 1998 Argentine, Brazilian, French, and Spanish comedy-drama film written and directed by Daniel Burman, in his feature film debut. It was produced by Diego Dubcovsky. It stars José Luis Alfonzo, Pastora Vega and Martin Kalwill, among others. Film critic Anthony Kaufman, writing for indieWIRE, said Burman's A Chrysanthemum Bursts in Cincoesquinas (1998) has been cited as the beginning of the "New Argentine Cinema" wave. Synopsis The story takes place in South America at the turn of the 20th century. As a child, Erasmo was left with a nurse by his parents, who had to escape a raging civil war. Erasmo is now a grown man. 
He has lost his parents, and now his foster mother is brutally murdered. He seeks to avenge her death, and the culprit is the landowner and head of state, El Zancudo. Erasmo befriends a poor Jew named Saul, who is prepared to help him in his undertaking. Along the way, Erasmo finds allies, adversaries, love, and then Magdalena. Cast José Luis Alfonzo as Erasmo Pastora Vega as La Gallega Martin Kalwill as Saul Valentina Bassi as Magdalena Millie Stegman as La Boletera Walter Reyno as El Zancudo Roly Serrano as Cachao Ricardo Merkin as Doctor Aldo Romero as Lucio María Luisa Argüello as Elsa Sandra Ceballos as Mother Guadalupe Farías Gómez as Albina Antonio Tarragó Ross as Chamamecero Distribution The film was first presented at the Berlin International Film Festival on February 11, 1998. It opened in Argentina on May 7, 1998. It screened at the Muestra de Cine Argentino en Medellín, Colombia. Awards Wins Sochi International Film Festival, Sochi, Russia: FIPRESCI Prize, Daniel Burman. Passage 10: Jason Moore (director) Jason Moore (born October 22, 1970) is an American director of film, theatre and television. Life and career Jason Moore was born in Fayetteville, Arkansas, and studied at Northwestern University. Moore's Broadway career began as a resident director of Les Misérables at the Imperial Theatre during its original run. He is the son of Fayetteville District Judge Rudy Moore. In March 2003, Moore directed the musical Avenue Q, which opened Off-Broadway at the Vineyard Theatre and then moved to Broadway at the John Golden Theatre in July 2003. He was nominated for a 2004 Tony Award for his direction. Moore also directed productions of the musical in Las Vegas and London and the show's national tour. Moore directed the 2005 Broadway revival of Steel Magnolias and Shrek the Musical, starring Brian d'Arcy James and Sutton Foster, which opened on Broadway in 2008. He directed the concert of Jerry Springer — The Opera at Carnegie Hall in January 2008. Moore, Jeff Whitty, Jake Shears, and John "JJ" Garden worked together on a new musical based on Armistead Maupin's Tales of the City. The musical premiered at the American Conservatory Theater, San Francisco, California in May 2011 and ran through July 2011. For television, Moore has directed episodes of Dawson's Creek, One Tree Hill, Everwood, and Brothers & Sisters. As a writer, Moore adapted the play The Floatplane Notebooks with Paul Fitzgerald from the novel by Clyde Edgerton. A staged reading of the play was presented at the New Play Festival at the Charlotte, North Carolina Repertory Theatre in 1996, with a fully staged production in 1998. In 2012, Moore made his film directorial debut with Pitch Perfect, starring Anna Kendrick and Brittany Snow. He also served as an executive producer on the sequel. He directed the film Sisters, starring Tina Fey and Amy Poehler, which was released on December 18, 2015. Moore's next project will be directing a live action Archie movie. Filmography Films Pitch Perfect (2012) Sisters (2015) Shotgun Wedding (2022) Television Soundtrack writer Pitch Perfect 2 (2015) (Also executive producer) The Voice (2015) (1 episode)
Question: What is the place of birth of the director of film A Chrysanthemum Bursts In Cincoesquinas?
Answer: Buenos Aires
Length: 3,859 | Dataset: 2wikimqa | Context range: 4k
Passage 1: Agnes of Jesus Agnes of Jesus, OP (born Agnès Galand and also known as Agnes of Langeac; November 17, 1602 – October 19, 1634) was a French Catholic nun of the Dominican Order. She was prioress of her monastery at Langeac, and is venerated in the Catholic Church, having been beatified by Pope John Paul II on November 20, 1994. Life Agnès Galand was born on November 17, 1602, in Le Puy-en-Velay, France, the third of seven children of Pierre Galand, a cutler by trade, and his wife, Guillemette Massiote. When she was five years old, Galand was entrusted to a religious institute for her education. Even from that early age, she showed a strong sense of spiritual maturity. She consecrated herself to the Virgin Mary at the age of seven. Galand joined the Dominican Monastery of St. Catherine of Siena at Langeac in 1623. At her receiving of the religious habit she took the name Agnes of Jesus. Soon after her own profession, she was assigned to serve as the Mistress of novices for the community. Galand was elected to lead her community as prioress in 1627. She was later deposed from this office, but she accepted her demotion with indifference and grace.She died on October 19, 1634, in Langeac. Spiritual legacy Notable visions Galand was noted even during her lifetime as a mystic. Louis Marie de Montfort records the following anecdote: I shall simply relate an incident which I read in the life of Mother Agnes of Jesus, a Dominican nun of the convent of Langeac in Auvergne. ... One day the Blessed Virgin appeared to Mother Agnes and put a gold chain around her neck to show her how happy she was that Mother Agnes had become the slave of both her and her Son. And St. Cecilia, who accompanied our Lady, said to her, "Happy are the faithful slaves of the Queen of Heaven, for they will enjoy true freedom." In 1631, Galand experienced the most famous of her visions, in which the Blessed Virgin Mary urged her to pray for an unknown priest with the command, "Pray to my Son for the Abbé of Prébrac (near Cugnaux)." Jean-Jacques Olier was the current holder of that office, and while at a retreat led by Vincent de Paul, he experienced a vision in which Galand appeared to him, though he was unacquainted with her. He sought out the nun who had appeared to him in the dream. When he met Galand, she told him: "I have received orders from the Holy Virgin to pray for you. God has destined you to open the first seminaries in France." Olier would go on to found the Society of Saint-Sulpice. Before her death, she related to her community her great desire that they pray for priests. She also had visions of both her guardian angel and Satan. Veneration and beatification A cause for her beatification was introduced on April 19, 1713. She was declared venerable on March 19, 1808, by Pope Pius VII. Pope John Paul II beatified her on November 20, 1994. At her beatification ceremony, John Paul II called Galand "truly blessed", noting her willingness to submit to God's plan for her, "offering her intellect, will, and freedom to the Son of Man, that he might transform them and harmonize them totally with his own!" Her feast day is October 19.Hyacinthe-Marie Cormier, beatified on the same day as Galand, cited the example of Galand's life as his inspiration for joining the Dominican Order. He would go on to be elected the seventy-sixth Master General of the Dominicans in 1904. Passage 2: Agnes of Brandenburg Agnes of Brandenburg (c. 1257 – 29 September 1304) was a Danish Queen consort by marriage to King Eric V of Denmark. 
As a widow, she served as the regent of Denmark for her son, King Eric VI, during his minority from 1286 until 1293. She was duchess regnant of Estonia. Life She was born to John I, Margrave of Brandenburg (d. 1266) and Brigitte of Saxony, the daughter of Albert I, Duke of Saxony. She married King Eric V of Denmark at Schleswig on 11 November 1273. The marriage was probably agreed upon during King Eric's captivity in Brandenburg by Agnes' father from 1261 to 1264. Tradition claims that the King of Denmark was released from captivity on his promise to marry Agnes without a dowry. Denmark and Brandenburg, however, had a long tradition of dynastic marriages between them. Regency In 1286, she became a Queen dowager and the Regent of Denmark during the minority of her son. The details of her regency are not well known, and it is hard to determine which of the decisions were made by her and which were made by the council. Peder Nielsen Hoseøl was also very influential in the regency, and she is likely to have received support from her family. In 1290, she financed a lime painting in St. Bendt's Church in Ringsted, which depicts her in a dominating way. Her son was declared to be of legal majority in 1293, thus ending her formal regency. Later life In 1293 she married Count Gerhard II of Holstein-Plön (d. 1312), with whom she had a son, John III, Count of Holstein-Plön. She often visited Denmark after her second marriage as well, and it continued to be a second home. She died on 29 September 1304, and was buried in Denmark. Passage 3: Agnes of Hohenstaufen Agnes of Hohenstaufen (1176 – 7 or 9 May 1204) was the daughter and heiress of the Hohenstaufen count palatine Conrad of the Rhine. She was Countess of the Palatinate herself from 1195 until her death, as the wife of the Welf count palatine Henry V. Life Agnes' father Conrad of Hohenstaufen was a younger half-brother of Emperor Frederick Barbarossa, who had enfeoffed him with the Electoral Palatinate in 1156. A cautious and thoughtful politician, he aimed for peace and reconciliation in the Empire. Even before 1180, he had betrothed his daughter to Henry V, the eldest son of the rebellious Saxon duke Henry the Lion, in order to defuse the re-emerging conflict between the Hohenstaufen and Welf dynasties. In 1193, however, Barbarossa's son and successor, Emperor Henry VI, wanted to create a political alliance with King Philip II of France and planned to marry his cousin Agnes to Philip II. When the young Welf scion Henry V heard of this plan, he contacted Agnes' parents. Her father avoided definitive statements on her betrothal, as he preferred a marriage with the French king, but also did not want to offend Henry V, whom Agnes revered fanatically. Agnes' mother Irmengard (d. 1197), daughter of Count Berthold I of Henneberg, continued to advocate her daughter's marriage with the Welf prince. A little later she took advantage of the absence of her husband, who stayed at Henry VI's court, to thwart the Emperor's plan. She invited the young Welf to Stahleck Castle, where he and Agnes were married in January or February 1194. Furious, Emperor Henry VI felt betrayed and demanded that Conrad immediately annul the marriage. Conrad, however, dropped his initial resistance to the marriage and, seeing as it had already been blessed in Church, chose to convince his nephew Henry VI of the domestic political benefits of this marriage. 
Conrad's sons had died young and Henry VI could assure the succession in the Electoral Palatinate by enfeoffing Henry the Welf. Additionally, Conrad and Agnes on the occasion of the marital union convinced the emperor to pardon Henry the Lion, who had been deposed and outlawed by Frederick Barbarossa in 1180. The reconciliation between Emperor Henry VI and Duke Henry the Lion was solemnly held in March 1194 at the Imperial Palace of Tilleda. Agnes and her husband Henry V had done their bit to prepare for this major domestic event with their unscheduled marriage at Stahleck Castle. Moreover, Emperor Henry VI had to settle the conflict with the House of Welf, to ensure peace in the Holy Roman Empire while enforcing his claims on the Kingdom of Sicily after the death of King Tancred on 20 February 1194. Issue Agnes and Henry had a son and two daughters: Henry, was Count Palatine of the Rhine from 1212 to 1214 Irmengard (1200–1260), married Herman V, Margrave of Baden-Baden Agnes (1201–1267), married Duke Otto II of Bavaria. Agnes and Otto became the ancestors of the House of Wittelsbach in Bavaria and the Palatinate. Her daughter Elisabeth was the mother of Conradin. Her son Louis was the father of Emperor Louis IV. Legacy During the Romanticism period in the 19th century, the historic picture of Agnes of Hohenstaufen was blissfully idealised. In Christian Dietrich Grabbe's drama entitled Henry VI, published in 1830, she is depicted as a carefree but resolute girl, who even addresses the Imperial Diet to assert her marriage with the man she loves. Fighting for the love and happiness of her reluctant fiancé, she brings about the ultimate reconciliation of the Welf and Hohenstaufen families on the deathbed of her father-in-law, Henry the Lion, who called her "a rose blossoming between to rocks". In fact, it was Agnes' mother Irmengard who had arranged the marriage. The opera Agnes von Hohenstaufen by the Italian composer Gaspare Spontini, based on the libretto by Ernst Raupach, had its premiere on 12 June 1829 at the Royal Opera Berlin. Passage 4: Matilda of Brabant, Countess of Artois Matilda of Brabant (14 June 1224 – 29 September 1288) was the eldest daughter of Henry II, Duke of Brabant and his first wife Marie of Hohenstaufen. Marriages and children On 14 June 1237, which was her 13th birthday, Matilda married her first husband Robert I of Artois. Robert was the son of Louis VIII of France and Blanche of Castile. They had: Blanche of Artois (1248 – 2 May 1302). Married first Henry I of Navarre and secondly Edmund Crouchback, 1st Earl of Lancaster. Robert II, Count of Artois (1250 – 11 July 1302 at the Battle of the Golden Spurs).On 8 February 1250, Robert I was killed while participating in the Seventh Crusade. On 16 January 1255, Matilda married her second husband Guy III, Count of Saint-Pol. He was a younger son of Hugh I, Count of Blois and Mary, Countess of Blois. They had: Hugh II, Count of Blois (died 1307), Count of Saint Pol and later Count of Blois Guy IV, Count of Saint-Pol (died 1317), Count of Saint Pol Jacques I of Leuze-Châtillon (died 11 July 1302 at the Battle of the Golden Spurs), first of the lords of Leuze, married Catherine de Condé and had issue; his descendants brought Condé, Carency, etc. into the House of Bourbon. Beatrix (died 1304), married John I of Brienne, Count of Eu Jeanne, married Guillaume III de Chauvigny, Lord of Châteauroux Gertrude, married Florent, Lord of Mechelen (French: Malines). 
Passage 5: Agnes of the Palatinate Agnes of the Palatinate (1201–1267) was a daughter of Henry V, Count Palatine of the Rhine and his first wife Agnes of Hohenstaufen, daughter of Conrad, Count Palatine of the Rhine. Agnes was Duchess of Bavaria by her marriage to Otto II Wittelsbach, Duke of Bavaria. Family Agnes was the youngest of three children born to her father by both of his marriages. Her father's second wife, also called Agnes, was the daughter of Conrad II, Margrave of Lusatia. Agnes' older sister was Irmgard, wife of Herman V, Margrave of Baden-Baden and her brother was Henry VI, Count Palatine of the Rhine. Marriage Agnes married Otto II at Worms when he came of age in 1222. With this marriage, the Wittelsbach family inherited Palatinate and kept it as a Wittelsbach possession until 1918. Since that time also the lion has become a heraldic symbol in the coat-of-arms for Bavaria and the Palatinate. In 1231 upon the death of Otto's father, Louis I, Duke of Bavaria, Otto and Agnes became Duke and Duchess of Bavaria. After a dispute with Emperor Frederick II was ended, Otto joined the Hohenstaufen party in 1241. Their daughter, Elizabeth, was married to Frederick's son Conrad IV. Because of this, Otto was excommunicated by the pope. Within thirty-one years of marriage, the couple had five children: Louis II, Duke of Bavaria (13 April 1229, Heidelberg – 2 February 1294, Heidelberg). Henry XIII, Duke of Bavaria (19 November 1235, Landshut – 3 February 1290, Burghausen). Elisabeth of Bavaria, Queen of Germany (c. 1227, Landshut – 9 October 1273), married to: 1246 in Vohburg to Conrad IV of Germany; 1259 in Munich to Count Meinhard II of Gorizia-Tyrol, Duke of Carinthia. Sophie (1236, Landshut – 9 August 1289, Castle Hirschberg), married 1258 to Count Gerhard IV of Sulzbach and Hirschberg. Agnes (c. 1240–c. 1306), a nun.Otto died 29 November 1253. Agnes died fourteen years later in 1267. She is buried at Scheyern. Ancestry Passage 6: Henry V, Count Palatine of the Rhine Henry V, the Elder of Brunswick (German: Heinrich der Ältere von Braunschweig; c. 1173 – 28 April 1227), a member of the House of Welf, was Count Palatine of the Rhine from 1195 until 1212. Life Henry was the eldest son of Henry the Lion, Duke of Saxony and Bavaria and Matilda, the eldest daughter of King Henry II of England and Eleanor of Aquitaine. After his father's deposition by the Hohenstaufen emperor Frederick Barbarossa, he grew up in England. When the family returned to Germany in 1189, young Henry distinguished himself by defending the Welf residence of Braunschweig against the forces of the emperor's son King Henry VI. Peace was established the next year, provided that Henry and his younger brother Lothar (d. 1190) were held in hostage by the king. He had to join the German forces led by Henry VI, by then emperor, on the 1191 campaign to the Kingdom of Sicily and participated in the siege of Naples. Taking advantage of the Emperor falling ill, Henry finally deserted, fled to Marseille, and returned to Germany, where he falsely proclaimed Henry VI's death and tried to underline his own abilities as a possible successor. This partly led to the withdrawal of Henry VI and the captivity of Empress Constance. Though he was banned, he became heir to the County Palatine of the Rhine through his 1193 marriage to Agnes, a cousin of Emperor Henry VI and daughter of the Hohenstaufen count palatine Conrad. 
He and the emperor reconciled shortly afterwards, and upon Conrad's death in 1195, Henry was enfeoffed with his County Palatine. A close ally of the emperor, he accompanied him on the conquest of Sicily in 1194/95 and on the Crusade of 1197.After the sudden death of the emperor in 1197, Henry's younger brother Otto IV became one of two rival kings of the Holy Roman Empire. At first he supported him, but switched sides to Philip of Swabia in 1203. Having divided the Welf allodial lands with his brothers Otto and William of Winchester, Henry then ruled over the northern Saxon territories around Stade and Altencelle and also was confirmed as count palatine by King Philip. When the German throne quarrel ended with Philip's assassination in 1208, Henry again sided with Otto IV. In Imperial service, he tried to ward off the territorial claims by the Rhenish Prince-archbishops of Cologne, Trier and Mainz, though to no avail. After he inherited further significant properties in Saxony from his brother William in 1213, Henry ceded the Palatinate to his son Henry the Younger and moved north. After his son's early death the next year, he left his Welf properties to his nephew, William's son Otto the Child, who became the first Duke of Brunswick-Lüneburg in 1235. Henry died in 1227 and is entombed in Brunswick Cathedral. Marriage and children In 1193, Henry married Agnes of Hohenstaufen (1177–1204), daughter of Count Palatine Conrad. They had the following children: Henry VI (1197–1214), married Matilda, daughter of Duke Henry I of Brabant Irmengard (1200–1260), married Margrave Herman V of Baden Agnes (1201–1267), married Otto II of Wittelsbach, Count palatine of the Rhine from 1214, Duke of Bavaria from 1231.Around 1209, he married Agnes of Landsberg (d. 1248), daughter of the Wettin margrave Conrad II of Lusatia. Ancestors Passage 7: Anna George de Mille Anna George de Mille (1878–1947) was an American feminist and Georgism advocate. She was the mother of Agnes George de Mille. Biography Anna de Mille was born in San Francisco in 1878 to Henry George and Annie Corsina Fox George. Throughout her life, she served as a prominent leader of the single-tax movement, in many leadership roles including vice president of the International Union for Land Value Taxation and Free Trade in London, and a director of the Robert Schalkenbach Foundation. In 1932 she partnered with Oscar H. Geiger to establish the Henry George School of Social Science. She served as the president of the board of trustees of said school. De Mille went on several tours promoting the single-tax movement, and was a large donor to the Henry George Collection at the New York Public Library. She served as an officer in the Henry George Foundation of Pittsburgh.In 1950, she published Henry George, Citizen of the World, a biography of her father, which was published by the University of North Carolina Press after being released in The American Journal of Economics and Sociology. De Mille also helped raise money for the restoration of Henry George's birthplace. Passage 8: Agnes of Aquitaine, Queen of Aragon and Navarre Agnes of Aquitaine (end of 1072 – 6 June 1097) was a daughter of William VIII, Duke of Aquitaine, and his third wife, Hildegarde of Burgundy.In 1081, Agnes was betrothed to Peter I of Aragon and Navarre. In 1086, the couple married in Jaca; upon Peter's succession, Agnes became queen of Aragon and Navarre. By him, Agnes had two children, both of whom predeceased their father: Peter (died 1103) and Isabella (died 1104). 
Agnes died in 1097, and her husband remarried to a woman named Bertha. Passage 9: Judith of Hohenstaufen Judith of Hohenstaufen, also known as Judith of Hohenstaufen or Judith of Swabia (c. 1133/1134 – 7 July 1191), a member of the Hohenstaufen dynasty, was Landgravine of Thuringia from 1150 until 1172 by her marriage with the Ludovingian landgrave Louis II. She was baptized as Judith, but was commonly called Jutta or Guta. Sometimes the Latinate form Clementia was used, or Claritia or Claricia. Life Judith was a daughter of Duke Frederick II of Swabia (1090–1147) and his second wife Agnes of Saarbrücken, thereby a younger half-sister of Emperor Frederick Barbarossa (1122–1190). She first appeared in contemporary sources in 1150, upon her marriage with Landgrave Louis II of Thuringia. This wedlock was intended to cement the relationship between the Thuringian Ludovingians and the imperial House of Hohenstaufen, to strengthen Emperor Barbarossa in his fierce conflict with Duke Henry the Lion and the House of Welf. When in 1168 her husband reconciled with Henry the Lion, Judith began the construction of Runneburg Castle in Weißensee. The neighbouring Counts of Beichlingen objected, and protested to Emperor Barbarossa. However, the emperor sided with his half-sister and rejected the protests. Runneburg Castle was situated halfway between Wartburg Castle and Neuenburg Castle and became the residence of the Landgraves of Thuringia. Later during the conflicts between Germany's most powerful dynasties, the strategically located Runneburg Castle became one of the most important castles in the area. Judith survived both her husband and her eldest son Landgrave Louis III. She died on 7 July 1191 and was buried in Reinhardsbrunn monastery next to her husband. Her name is still omnipresent in Weißensee, which shows how highly she was regarded during her lifetime. Grave stone Judith's grave stone was created in the 14th century, well after her death. It must have been installed after the fire of 1292. It was moved from Reinhardsbrunn to the choir of the St. George's Church in Eisenach. The Landgravine is depicted holding a lap dog in her left arm, while her right hand holds a scepter. A wide cantilevered canopy, held up by two angels, is extended over her head. The angels appear to sit on a pillow behind her head. The inscription reads S. SOROR FRIDERICI INPERATORIS ("the sister of Emperor Frederick"). Due to the canopy, this grave stone was larger than those of the other Landgraves of Thuringia (which are also on display in the St. George church in Eisenach). It must have made her grave very visible, even when the grave stone was part of the church floor. The presence of the Emperor's sister in the family tree introduced additional honor, which is why her family background was emphasized in the inscription. Marriage and issue In 1150, Judith married Louis II, Landgrave of Thuringia. They had the following children: Louis III (1151–1190), succeeded her husband as Landgrave of Thuringia Herman I (d. 1217), succeeded his brother as Landgrave of Thuringia Henry Raspe III (c. 1155 – 18 July 1217), Count of Gudensberg Frederick (c. 1155 – 1229), Count of Ziegenhain Judith, married Herman II, Count of Ravensberg Passage 10: Agnes of Waiblingen Agnes of Waiblingen (1072/73 – 24 September 1143), also known as Agnes of Germany, Agnes of Poitou and Agnes of Saarbrücken, was a member of the Salian imperial family. 
Through her first marriage, she was Duchess of Swabia; through her second marriage, she was Margravine of Austria. Family She was the daughter of Henry IV, Holy Roman Emperor, and Bertha of Savoy. First marriage In 1079, aged seven, Agnes was betrothed to Frederick, a member of the Hohenstaufen dynasty; at the same time, Henry IV invested Frederick as the new duke of Swabia. The couple married in 1086, when Agnes was fourteen. They had twelve children, eleven of whom were named in a document found in the abbey of Lorsch: Hedwig-Eilike (1088–1110), married Friedrich, Count of Legenfeld Bertha-Bertrade (1089–1120), married Adalbert, Count of Elchingen Frederick II of Swabia Hildegard Conrad III of Germany Gisihild-Gisela Heinrich (1096–1105) Beatrix (1098–1130), became an abbess Kunigunde-Cuniza (1100–1120/1126), wife of Henry X, Duke of Bavaria (1108–1139) Sophia, married Konrad II, Count of Pfitzingen Fides-Gertrude, married Hermann III, Count Palatine of the Rhine Richildis, married Hugh I, Count of Roucy Second marriage Following Frederick's death in 1105, Agnes married Leopold III (1073–1136), the Margrave of Austria (1095–1136). According to a legend, a veil lost by Agnes and found by Leopold years later while hunting was the instigation for him to found the Klosterneuburg Monastery.Their children were: Adalbert Leopold IV Henry II of Austria Berta, married Heinrich of Regensburg Agnes, "one of the most famous beauties of her time", married Wladyslaw II of Poland Ernst Uta, wife of Liutpold von Plain Otto of Freising, bishop and biographer Conrad, Bishop of Passau, and Archbishop of Salzburg Elisabeth, married Hermann, Count of Winzenburg Judith, m. c. 1133 William V of Montferrat. Their children formed an important Crusading dynasty. Gertrude, married Vladislav II of BohemiaAccording to the Continuation of the Chronicles of Klosterneuburg, there may have been up to seven other children (possibly from multiple births) stillborn or who died in infancy. In 2013, documentation regarding the results of DNA testing of the remains of the family buried in Klosterneuburg Abbey strongly favor that Adalbert was the son of Leopold and Agnes.In 1125, Agnes' brother, Henry V, Holy Roman Emperor, died childless, leaving Agnes and her children as heirs to the Salian dynasty's immense allodial estates, including Waiblingen. In 1127, Agnes' second son, Konrad III, was elected as the rival King of Germany by those opposed to the Saxon party's Lothar III. When Lothar died in 1137, Konrad was elected to the position.
Question: Where was the place of death of Agnes Of Hohenstaufen's husband?
Answer: Brunswick
Length: 3,954 | Dataset: 2wikimqa | Context range: 4k
Passage 1: Nola Fairbanks Nola Fairbanks (born Nola Jo Modine; December 10, 1924 – February 8, 2021) was an American actress. She was also the aunt of actor Matthew Modine. Early life Fairbanks was born Nola Jo Modine in Santa Paula, California, on December 10, 1924, the daughter of Zella Vonola Fairbanks and Alexander Revard Modine. She is the granddaughter of Mormon pioneers Ralph Jacobus Fairbanks (aka R.J. "Dad" Fairbanks) and Celestia Adelaide (Johnson) Fairbanks, from Payson, Utah and Death Valley, California. She is a descendant of Jonathan Fairbanks, whose 17th century wood-frame house still stands in Dedham, Massachusetts.As a child, she joined the Meglin Kiddies Dance Troupe where Shirley Temple was also a student. While her father, Alexander Revard Modine, worked for the Texaco Oil Company, Nola Jo's mother, Zella Vonola Fairbanks Modine, washed clothes to pay for her singing and dancing lessons during the Great Depression. Career Her only movie role was as a "glorified extra" in The Corn Is Green in 1945, starring Bette Davis. Soon after, she joined the Lionel Barrymore production of the musical, Halloween at the Hollywood Bowl, and performed on The Standard Hour in addition to the Hollywood Canteen for servicemen. Next, she went on tour as a soloist with the Sonja Henie Ice Show, completing two national tours. When the tours ended in New York, she stayed on with the show, named Howdy Mr. Ice at the Center Theatre in Rockefeller Center. Her Broadway debut was in 1950 in the chorus of Cole Porter's Out of This World . She soon became an understudy and before long, assumed the lead. Summer stock performances included Miss Liberty with Dick Haymes in the Dallas Theatre as well as Die Fledermaus and finally Bloomer Girl in Toronto, Canada. Next, she joined the Broadway cast of Paint Your Wagon opposite James Barton, when Olga San Juan left the role of Jennifer Rumson. She took the show on tour with Burl Ives in the part of her father, Ben Rumson. In 1952, she starred in the first musical production at the new Jones Beach Theatre in Long Island, New York. Mike Todd was the producer of this production of the Johann Strauss II operetta Eine Nacht in Venedig starring alongside Enzo Stuarti and Thomas Hayward. After a winning performance on The Arthur Godfrey Radio Show, she appeared on his television show. Her final Broadway performance came when she was asked to replace Florence Henderson in the lead role in Fanny, co-starring Ezio Pinza. She revived her career in 1978 with appearances in a short-lived sketch comedy TV series, Madhouse Brigade, produced by her husband. In 1981, he produced an off-Broadway show called Romance Is where Fairbanks performed with an ensemble cast. The show closed after a few performances. Personal life Fairbanks married James Larkin in 1954 and had four children. They divorced in 1990. She died on February 8, 2021, at the age of 96, in Greenwich, Connecticut. Passage 2: Emel Say Emel Say (1927 – 17 February 2011) was a Turkish painter. She was the daughter of painter Zehra Say and the aunt of pianist Fazıl Say. Life Emel Say was born in 1927. Her grandfather was a politician, who left the Committee of Union and Progress and opened a dance hall. Her mother Zehra Say was the first woman in modern Turkey to be married at an official wedding. Emel's mind was always on music. When she was fifteen, she received singing classes from Professor Carl Ebert, who had established the Ankara Conservatory. 
At first, Emel Say wanted to be an opera artist, but she changed her mind when she fell in love with a piece of land in Hatay, southern Turkey. She had to put her interest in music on hold when she married. Her husband Fuat Say, unlike the tolerance that Fuat Say had shown her mother Zehra, did not send Emel to school, nor was this really possible in Hatay at the time. Raising her three sons was the only thing she did until she divorced her husband when she was 30 years old. After her divorce, her interest in music did not come to fruition; her mother got sick, so she had to focus on getting a job and income. She started her work life as the secretary of Fuat Bezmen. She worked for him for around ten years.She worked in the United States for about five years.When her mother, the famous painter Zehra Say, went into the later stages of Alzheimer's disease, she was no longer able to continue painting. Zehra made her daughter Emel promise to finish her painting Maui Adası (Island of Maui), which she had started to paint from a postcard Emel had brought back from a trip to Hawaii. She did not know how she could paint. At first, she cried but then she tried to paint and from that point on did not stop. She completed her mother's painting and it was displayed at an exhibition at the Çiçek Bar. There she was discovered by the sculptor Gürdal Duyar who at first asked what had happened to Maui Adası, and then when Emel told him that her mother had insisted she finish it, she tried at it. Duyar expressed to her that she was a natural talent and had been a painter within all along. If it was not for this reassuring encounter she may not have gone into painting at all in her life. She started painting after the age of 60.Together with Duyar, who became a close friend of hers and other friends and family, she would often work on paintings and listen to music together late into the night. One of these nights, Duyar made a portrait for one of Fazıl's musician friends starting at midnight and finishing towards the morning as recalled by this friend of Fazıl's.Say died in 2011. Art She was a student in the studio of the painter Osman Özal. She, together with the other (former) students, would meet on Wednesdays at the İzmir Art and Sculpture Museum, and work in the studio there. They became known as the "Group Wednesday", and held collaborative exhibitions.She and Duyar had exhibitions at the CEP Gallery in the time period between 1977 and 1990.In 1995, her work, along with the work of Gürdal Duyar, was exhibited in the grand opening exhibition of the Asmalımescit Art Gallery. Technique She made many miniatures. One of the techniques she often used was using two different types of paint in the same painting, acrylic and gouache. Exhibitions Emel Say painting exhibition, Underground Art Gallery (till 20 May 1992) Asmalımescit Art Gallery grand opening exhibition (1995) Emel Say painting exhibition, Çatı Sanatevi (till 7 May 2000) Emel Say painting exhibition, Underground Art Gallery (till 21 May 2004) 9th exhibition, Çiçek Bar (till 18 December 2004) Mixed Exhibition of works by Osman Özals students, Dr. Selahattin Akçiçek Culture and Art Center in Konak, İzmir (till 15 April 2012) but extended? Friends and Family She is proud of how her mother, Zehra Say, even after her marriage, went to school and became an art teacher, which was quite an accomplishment at that time. She is also proud of her father Fuat Say for supporting his wife, her mother. 
Say's uncle's grandson, Fazıl Say, did make a career of his musical talent. When talking about him, she said that he is "A Genius!", "When he was just four years old, his mother had bought a small organ, like a toy... Fazıl started to play the songs on the radio with this organ. How many times could a composer the likes of him come to this Earth!"She became friends with the poet and writer Gülsüm Cengiz around after her time in the United States. He was visiting her and Zehra at their home and they continued in conversation late into the night, and they learned about the 1960 military coup towards the morning after someone was banging on the door and they turned on the radio.She was also close friends with Gürdal Duyar, and they had exhibitions together. Passage 3: Marcus Annius Libo (consul 161) Marcus Annius Libo (died 163) was a Roman senator. He was suffect consul in the nundinium of January-April 161 with Quintus Camurius Numisius Junior as his colleague. Libo was the nephew of emperor Antoninus Pius, and cousin to emperor Marcus Aurelius. Libo came from a Roman family that had settled in Hispania generations before, and had returned to Rome more recently. His father was Marcus Annius Libo, consul in 128, and his mother was a noblewoman whose name has been surmised as Fundania, daughter of Lucius Fundanius Lamia Aelianus, consul in 116. Libo had a sister, Annia Fundania Faustina, wife of Titus Pomponius Proculus Vitrasius Pollio, whose second consulship was in 176. Governor of Syria The only portion of his cursus honorum we know is the portion immediately after Libo stepped down from his consulate. To support his co-emperor Lucius Verus' campaign against the Parthians, Marcus Aurelius appointed Libo governor of the province of Syria. Anthony Birley notes this was a surprising choice. "As Libo had been consul only the previous year, 161," writes Birley, "he must have been in his early thirties, and as a patrician must have lacked military experience." Syria was an important province, and the men picked to govern it were usually senior men with much military and administrative experience. Birley answers his own question, "It seems that Marcus' intention was to have on the spot a man he could rely."As governor, Libo quarreled with the emperor Lucius, taking the attitude that he would only follow the instructions that Marcus gave him. This angered Lucius, so when Libo suddenly died, rumor claimed that Lucius had Libo poisoned.When Libo died, Lucius Verus defied Marcus and married Libo's widow to his Greek freedman called Agaclytus. Accordingly, Marcus Aurelius attended neither the ceremony nor the banquet. Passage 4: Marcus Annius Verus (praetor) Marcus Annius Verus (died 124 AD) was a distinguished Roman politician who lived in the 2nd century, served as a praetor and was the father of the Emperor Marcus Aurelius. Life He was the son of Roman senator Marcus Annius Verus and noblewoman Rupilia Faustina. His brother was the consul Marcus Annius Libo and his sister was Faustina the Elder, wife of Antoninus Pius. He married Domitia Lucilla, the heiress of a wealthy family which owned a tile factory. They had two children, Marcus Aurelius (born in 121, and who was also originally named Marcus Annius Verus), and Annia Cornificia Faustina (born in 123). Annius Verus died young while he held the office of praetor. Both his children were still young. 
The likeliest year of his death is 124.In his Meditations, Marcus Aurelius, who was only about 3 years old when his father died, says of him: "From what I heard of my father and my memory of him, modesty and manliness." Nerva–Antonine family tree Passage 5: Lucius Neratius Priscus Lucius Neratius Priscus was a Roman Senator and leading jurist, serving for a time as the head of the Proculeian school. He was suffect consul in the nundinium of May–June 97 as the colleague of Marcus Annius Verus. Family The origins of the gens Neratia lie in the Italian town of Saepinum in the heart of Samnium; Priscus' father was the homonymous suffect consul of the year 87. He is known to have a younger brother, Lucius Neratius Marcellus, who was adopted by their uncle Marcus Hirrius Fronto Neratius Pansa who was suffect consul in either 73 or 74 and co-opted into the Patrician class; Marcellus became suffect consul two years before Priscus, and ordinary consul in 129.The existence of a son with the identical name and consul in either 122 or 123, inferred from the existence of the possible governor of Pannonia Inferior, was disproved by a 1976 paper written by G. Camodeca, whose findings were embraced by Ronald Syme. Career Most of Priscus' advancement through the cursus honorum has been established. His first known office was as military tribune with Legio XXII Primigenia between c. 79 to c. 80, in Mogontiacum (modern Mainz). Next he held the office of quaestor (c. 83/84), and upon completion of this traditional Republican magistracy Priscus would be enrolled in the Senate. The two other magistracies followed: plebeian tribune (c. 85/86) and praetor (c. 88/89); usually a senator would govern either a public or imperial praetorian province before becoming a consul, but none is known for Priscus. After serving as suffect consul, Priscus was admitted to the collegia of the Septemviri epulonum, one of the four most prestigious ancient Roman priesthoods. He was also entrusted with governing, in succession, the imperial provinces of Germania Inferior (98-101), then Pannonia (102-105).The Digest of Justinian records that the emperor Trajan invoked the help of Priscus and Titius Aristo on a point of law. According to the Historia Augusta, there was a rumor that Trajan considered making Priscus his heir to the empire, before finally deciding on Hadrian to succeed him. Despite being a potential rival for the throne, Priscus was one of the legal experts the emperor Hadrian relied on for advice. Sir Ronald Syme looks to have considered Priscus as being another name used by or for Publius Cornelius Tacitus. Passage 6: Marcus Annius Flavius Libo Marcus Annius Flavius Libo was a Roman Senator who lived in the second half of the 2nd century and first half of the 3rd century. He was consul ordinarius in AD 204 with Lucius Fabius Cilo as his senior colleague. Libo was a Patrician and came from Hispania Baetica. His grandfather was Marcus Annius Libo, who was made suffect consul in 161. His father of the same name was a legatus of Syria and may have been poisoned, possibly by his cousin, Lucius Verus. Libo was related to Lucius Verus through their mutual ancestor, Marcus Annius Verus, who was consul three times, and by marriage to Emperor Antoninus Pius, who married his grandfather's sister. Passage 7: Rupilia The gens Rupilia, occasionally written Rupillia, was a minor plebeian family at ancient Rome. 
Members of this gens are first mentioned in the latter part of the Republic, and Publius Rupilius obtained the consulship in 132 BC. Few others achieved any prominence, but the name occurs once or twice in the consular fasti under the Empire. The name is frequently confounded with the similar Rutilius. Praenomina The main praenomina of the Rupilii were Publius and Lucius, two of the most common names throughout Roman history. Branches and cognomina None of the Rupilii bore cognomina under the Republic, but as with other plebeian families most of them had individual surnames in imperial times. Members This list includes abbreviated praenomina. For an explanation of this practice, see filiation.Publius Rupilius P. f. P. n., a fierce opponent of the Gracchi, became consul in 132 BC, the year after the murder of Tiberius Gracchus, whose followers he persecuted. He brought the First Servile War to a close, then remained in Sicily to reorganize the province, receiving a triumph on his return. He was prosecuted and condemned during the tribunate of Gaius Gracchus in 123, and died soon afterward. Lucius Rupilius P. f. P. n., brother of Publius Rupilius, the consul, sought the aid of Scipio Aemilianus to obtain the consulship, but was not elected. Lucius Rupilius, an actor known to the young Cicero. Aulus Rupilius, a physician employed by Aulus Cluentius Habitus, whose mother, Sassia, bought a slave, Strato, from Rupilius, and had him tortured in the hope of obtaining evidence against her own son. The slave knew nothing of value, and Sassia's scheme came to naught. Publius Rupilius, a man of equestrian rank, was magister of the publicani of Bithynia. Gaius Rupilius, an argentarius, or silversmith, named in an inscription. Lucius Scribonius Libo Rupilius M. f. M. n. Frugi Bonus, consul suffectus, serving from May to August in AD 88. He was the great-grandfather of Marcus Aurelius. His descent from the Rupilii is unclear. Rupilia L. f. M. n. Faustina, the grandmother of Marcus Aurelius, married Marcus Annius Verus. Lucius Rupilius Appianus, one of the septemviri epulones at Brixia in Venetia and Histria. Decimus Rupilius Severus, legate in Lycia and Pamphylia in AD 151, perhaps the same Severus who was consul suffectus at the end of 155. Lucius Rupilius Au[...], legate of Sextius Lateranus, proconsul of Africa in AD 176. Quintus Rupilius Q. f. Honoratus, of Mactar in Africa, raised to the equestrian order by Severus Alexander. Rupilius Pisonianus, curator at Mactar and Mididi between 290 and 293 AD. Rupilius Pisonianus, praefectus vigilum of Rome under Constans Caesar. See also List of Roman gentes Passage 8: Annia Fundania Faustina Annia Fundania Faustina (died 192) was a noble Roman woman who lived in the Roman Empire during the 2nd century AD. She was the paternal cousin of Roman Emperor Marcus Aurelius and his sister Annia Cornificia Faustina. Life Fundania Faustina was the daughter of the Roman consul Marcus Annius Libo and wife Fundania. Her brother was the younger Marcus Annius Libo who served as governor of Syria in 162. Fundania Faustina's maternal grandparents are inferred to be Lucius Fundanius Lamia Aelianus and his unknown wife; however her paternal grandparents are the Roman consul Marcus Annius Verus and Rupilia Faustina. She was born and raised in Rome. Through her paternal grandmother, she was related to the ruling Nerva–Antonine dynasty of the Roman Empire. 
Her paternal aunt was Empress Faustina the Elder (wife of Emperor Antoninus Pius and mother of Empress Faustina the Younger) and her paternal uncle was the praetor Marcus Annius Verus (father of Emperor Marcus Aurelius and paternal grandfather of Empress Lucilla and Emperor Commodus). Fundania Faustina married the Roman politician Titus Pomponius Proculus Vitrasius Pollio. She had two children with him: Titus Fundanius Vitrasius Pollio, who was executed in 182 on the orders of Commodus on the charge of conspiracy against the emperor, and Vitrasia Faustina. Before 180, her husband had died and Fundania Faustina never remarried. During the reign of her unstable paternal cousin Commodus (180-192), she decided to withdraw from public life and chose to live in retirement in Achaea. Before he was assassinated in 192, Commodus ordered Fundania Faustina's death and she was executed later that year. Sources Septimius Severus: The African Emperor, by Anthony Richard Birley, 2nd edition, 1999 From Tiberius to the Antonines: A History of the Roman Empire AD 14-192, by Albino Garzetti, 1974 Mutilation and Transformation: Damnatio Memoriae and Roman Imperial Portraiture, by Eric R. Varner, 2004 Passage 9: Marcus Annius Libo Marcus Annius Libo was a Roman Senator active in the early second century AD. Life Libo came from the upper ranks of the Roman aristocracy. He was the son of Marcus Annius Verus, consul III in 126, and Rupilia Faustina. Annius Verus was Spanish of Roman descent. Rupilia was the daughter of Lucius Scribonius Libo Rupilius Frugi Bonus and Vitellia (daughter of emperor Vitellius). Libo is known to have had three siblings, two sisters and one brother. His elder sister was the Empress Faustina the Elder (mother of the Empress Faustina the Younger) and his younger sister (whose name is missing, but surmised to be Annia) was the wife of Gaius Ummidius Quadratus Sertorius Severus, suffect consul in 118. His brother was Marcus Annius Verus, the father of Marcus Aurelius. He was consul in 128 as the colleague of Lucius Nonius Calpurnius Torquatus Asprenas. Libo was the paternal uncle of the Emperor Marcus Aurelius. Beyond his consulship, almost nothing is known of his senatorial career. During the reign of his brother-in-law, Antoninus Pius, he was one of seven witnesses to a Senatus consultum issued to the city of Cyzicus in 138, which sought approval for establishing a corpus juvenum for the education of young men. Family Libo married a noblewoman whose name has been surmised as Fundania, daughter of Lucius Fundanius Lamia Aelianus, consul in 116, and his wife Rupilia Annia. They are known to have had two children together: Marcus Annius Libo, suffect consul in 161, who is known to have had a son, Marcus Annius Flavius Libo, and Annia Fundania Faustina, wife of Titus Pomponius Proculus Vitrasius Pollio, consul II in 176. Nerva–Antonine family tree Passage 10: Kawamura Sumiyoshi Count Kawamura Sumiyoshi (川村 純義, 18 December 1836 – 12 August 1904) was an admiral in the Imperial Japanese Navy. Kawamura's wife Haru was the aunt of Saigō Takamori. Biography A native of Satsuma, Kawamura studied navigation at the Tokugawa bakufu naval school at Nagasaki, the Nagasaki Naval Training Center. In 1868, he joined his Satsuma clansmen, and fought on the imperial side in the Boshin War of the Meiji Restoration as an army general. He was especially noted for his role in the Battle of Aizu-Wakamatsu. 
Under the new Meiji government, he became an officer in the fledgling Imperial Japanese Navy, and steadily rose through the ranks. He became first Director of the Imperial Japanese Naval Academy in 1870 and taifu (senior vice minister) of Navy in 1872. He was in command of Japanese naval forces during the Taiwan Expedition of 1874. During the Satsuma Rebellion, he was placed in command of all Imperial troops in September 1877 at the final Battle of Shiroyama near Kumamoto, when Saigō Takamori was killed (or committed seppuku). This battle, Saigō's last stand against the Meiji government, was the historical basis for the 2003 film The Last Samurai. In 1878, Kawamura became sangi (councillor) and the second Navy Minister. He remained in that position until 1885 except when he was temporarily replaced by Enomoto Takeaki, and during that period he expanded the influence of people from Satsuma within the navy. In 1884, he was ennobled with the title of hakushaku (count) under the kazoku peerage system. Later serving as court councillor and Privy Councillor, in 1901 he was given responsibility for the upbringing of the newborn Prince Michi (the future Emperor Hirohito) and his younger brother Prince Chichibu (Yasuhito).In 1904, Kawamura was posthumously appointed to the rank of admiral, setting a precedent for such honors. His cause of death remains unknown, setting a mystery.
Question: Who is Marcus Annius Libo's aunt?
Answer: Vibia Sabina
Length: 3,690 | Dataset: 2wikimqa | Context range: 4k
Passage 1: Kekuʻiapoiwa II Kekuʻiapoiwa II was a Hawaiian chiefess and the mother of King Kamehameha I. Biography She was named after her aunt Kekuʻiapoiwa Nui (also known as Kekuʻiapoiwa I), the wife of King Kekaulike of Maui. Her father was High Chief Haʻae, the son of Chiefess Kalanikauleleiaiwi and High Chief Kauaua-a-Mahi of the Mahi family of the Kohala district of Hawaiʻi island, and brother of Alapainui. Her mother was Princess Kekelakekeokalani-a-Keawe (also known as Kekelaokalani), daughter of the same Kalanikauleleiaiwi and Keaweʻīkekahialiʻiokamoku, king of Hawaii. Her mother had been sought after by many who wished to marry into the Keawe line. She was the niece of Alapainui through both her father and mother. She married the High Chief Keōua, to whom she had been betrothed since childhood. Through her double grandmother Kalanikauleleiaiwi, Keōua's own paternal grandmother, she was the double cousin of Keōua. When her uncle was staying at Kohala superintending the collection of his fleet and warriors from the different districts of the island preparatory to the invasion of Maui, in the month of Ikuwa (probably winter), Kamehameha was born, probably in November 1758. He had his birth ceremony at the Moʻokini Heiau, an ancient temple which is preserved in Kohala Historical Sites State Monument. Many stories are told about the birth of Kamehameha. One says that when Kekuʻiapoiwa was pregnant with Kamehameha, she had a craving for the eyeball of a chief. She was given the eyeball of a man-eating shark and the priests prophesied that this meant the child would be a rebel and a killer of chiefs. Alapainui, the old ruler of the island of Hawaiʻi, secretly made plans to have the newborn infant killed. Kekuʻiapoiwa's time came on a stormy night in the Kohala district, when a strange star with a tail of white fire appeared in the western sky. This could have been Halley's Comet, which appeared near the end of 1758. According to one legend, the baby was passed through a hole in the side of Kekuiapoiwa's thatched hut to a local Kohala chief named Naeʻole, who carried the child to safety at Awini on the island's north coast. By the time the infant in Naeʻole's care was five, Alapainui had accepted him back into his household. After Kamehameha, Kekuʻiapoiwa bore a second son, Keliimaikai. A few years later, Keōua died in Hilo, and the family moved with Alapainui to an area near Kawaihae, where she married a chief of the Kona district (and her uncle) Kamanawa. She had one daughter, Piʻipiʻi Kalanikaulihiwakama, from this second husband, who would later become an important military ally of Kamehameha, who was both stepson and cousin through several relationships. Piʻipiʻi became first the wife of Keholoikalani, the father of her son Kanihonui, and later she married Kaikioewa, with whom she had a daughter, Kuwahine. Kamehameha dynasty Passage 2: Dana Blankstein Dana Blankstein-Cohen (born March 3, 1981) is the executive director of the Sam Spiegel Film and Television School. She was appointed by the board of directors in November 2019. Previously she was the CEO of the Israeli Academy of Film and Television. She is a film director and an Israeli culture entrepreneur. Biography Dana Blankstein was born in Switzerland in 1981 to theatre director Dedi Baron and Professor Alexander Blankstein. She moved to Israel in 1983 and grew up in Tel Aviv. Blankstein graduated from the Sam Spiegel Film and Television School, Jerusalem in 2008 with high honors. 
During her studies she worked as a personal assistant to directors Savi Gabizon on his film Nina's Tragedies and to Renen Schorr on his film The Loners. She also directed and shot 'the making of' film on Gavison's film Lost and Found. Her debut film Camping competed at the Berlin International Film Festival, 2007. Film and academic career After her studies, Dana founded and directed the film and television department at the Kfar Saba municipality. The department encouraged and promoted productions filmed in the city of Kfar Saba, as well as the established cultural projects, and educational community activities. Blankstein directed the mini-series "Tel Aviviot" (2012). From 2016-2019 was the director of the Israeli Academy of Film and Television. In November 2019 Dana Blankstein Cohen was appointed the new director of the Sam Spiegel Film and Television School where she also oversees the Sam Spiegel International Film Lab. In 2022, she spearheaded the launch of the new Series Lab and the film preparatory program for Arabic speakers in east Jerusalem. Filmography Tel Aviviot (mini-series; director, 2012) Growing Pains (graduation film, Sam Spiegel; director and screenwriter, 2008) Camping (debut film, Sam Spiegel; director and screenwriter, 2006) Passage 3: Ian Barry (director) Ian Barry is an Australian director of film and TV. Select credits Waiting for Lucas (1973) (short) Stone (1974) (editor only) The Chain Reaction (1980) Whose Baby? (1986) (mini-series) Minnamurra (1989) Bodysurfer (1989) (mini-series) Ring of Scorpio (1990) (mini-series) Crimebroker (1993) Inferno (1998) (TV movie) Miss Lettie and Me (2002) (TV movie) Not Quite Hollywood: The Wild, Untold Story of Ozploitation! (2008) (documentary) The Doctor Blake Mysteries (2013) Passage 4: Dalida (2017 film) Dalida is a 2017 French biographical drama film about the life of singer and actress Dalida. It is written, directed and co-produced by Lisa Azuelos, and stars Sveva Alviti as Dalida. Plot In 1967 Dalida goes to a hotel and unsuccessfully attempts suicide. Rushing to her side during recovery are her ex-husband Lucien Morisse, her ex-lover Jean Sobieski and her brother Orlando (born Bruno). The three men explain different facets of Dalida's personality: Dalida grew up a passionate music lover thanks to her violinist father in Cairo but always felt herself to be ugly because of the large glasses she wore. She was discovered in Paris by Lucien Morisse, a Parisian radio programmer who eventually fell for her and left his wife for her. Dalida became disillusioned with Morisse when he put off marriage and a child to focus on building her career. Nevertheless, she married him, but quickly began an affair with artist Jean Sobieski. She eventually left Sobieski as well, to have an affair with Luigi Tenco, a temperamental musician. Luigi commits suicide after having a breakdown and walking off stage at the 1967 Sanremo Music Festival. Dalida finds his body and it is this her friends and family believe has contributed to her mental breakdown and suicide attempt. With the help of her brother Dalida recovers and begins to record new music and find new loves. Going to Italy to perform, she encounters a young 22-year-old student and the two embark upon a love affair. Discovering she is pregnant Dalida decides not to keep the child as she feels her lover is too young to be a responsible parent and that she does not want to raise a child without a father. She has an abortion and breaks things off with her lover. 
Dalida's brother Orlando begins to manage her career causing a new period of success for her. Lucien Morisse meanwhile commits suicide in their old apartment. Dalida is introduced to media personality Richard Chanfray (Nicolas Duvauchelle) and the two begin a relationship. Dalida feels safe and secure for the first time in her life, but eventually their relationship begins to crumble. Richard accidentally shoots the boyfriend of her housekeeper believing he is an intruder and Dalida is forced to pay off the family to keep him out of jail. After Richard gets jealous of her career, she records an album with him despite the fact that he is a poor singer. Dalida believes she is pregnant only to learn her abortion destroyed her uterus and any chance she may have had of becoming pregnant. At a New Year's Eve party after Richard is unpleasant to her and publicly mocks her eating disorder, Dalida finally kicks him out of her life. Sometime after he commits suicide as well. Her career doing better than ever, Dalida acts in the film Le Sixième Jour to much acclaim and returns to Egypt where she is feted by the people. Nevertheless, she dissolves into a deep depression, becoming a shut-in with her bulimia spiralling out of control. She finally commits suicide leaving behind a note explaining that life is too difficult. Cast Sveva Alviti as Dalida Riccardo Scamarcio as Orlando Jean-Paul Rouve as Lucien Morisse Nicolas Duvauchelle as Richard Chanfray Alessandro Borghi as Luigi Tenco Valentina Carli as Rosy Brenno Placido as Lucio Niels Schneider as Jean Sobieski Hamarz Vasfi as Pietro Gigliotti Davide Lorino as elder Orlando F. Haydee Borelli as Giuseppina Gigliotti Vincent Perez as Eddie Barclay Patrick Timsit as Bruno Coquatrix Michaël Cohen as Arnaud Desjardins Elena Rapisarda as young Dalida Production Principal photography took place from 8 February to 22 April 2016, in France, Italy and Morocco. Reception In a statement to the Agence France-Presse, Catherine Morisse, the daughter of Lucien Morisse, criticised the film for the inaccurate portrayal of her father, adding that she was not consulted during the film's production. Passage 5: Trinidad Tecson Trinidad Perez Tecson (November 18, 1848 – January 28, 1928), known as the "Mother of Biak-na-Bato" and "Mother of Mercy", fought to gain Philippines independence. She was given the title "Mother of Biak-na-Bato" by Gen. Emilio Aguinaldo. She was also cited as the "Mother of the Philippine National Red Cross" for her service to her fellow Katipuneros. Early life Tecson was born in San Miguel de Mayumo, Bulacan, one of sixteen children of Rafael Tecson and Monica Perez. She learned to read and write from schoolmaster Quinto. She practiced fencing with Juan Zeto and was feared throughout the province, called "Tangkad" (tall) by her peers. Orphaned at a very young age, she stopped school and went with her siblings to live with relatives. She married at 19 and had two children, Sinforoso and Desiderio, who both died. Tecson and her husband were engaged in the purchase and sale of cattle, fish, oysters, and lobsters to be sold in Manila. Revolutionary Philippine-American War She joined the revolutionary forces led by Gen. Gregorio del Pilar and participated in the assault on the province of Bulacan and Calumpit. She also served in the Malolos Republic and was designated as the Commissary of War. During the American drive northward, she was in Cabanatuan. 
Bringing with her sick and wounded revolutionaries, Tecson crossed the Zambales mountains to Santa Cruz then to Iba. Life after the war After the war, her second husband died and she continued in business in Nueva Ecija, concentrating on selling meat in the towns of San Antonio and Talavera. She married her third husband, Doroteo Santiago, and after his death, married Francisco Empainado. On January 28, 1928, she died in Philippine General Hospital at age 79. Her remains lie in the Plot of the Veterans of the Revolution in Cementerio del Norte. Passage 6: Lisa Azuelos Lisa Azuelos (born Elise-Anne Bethsabée Azuelos; 6 November 1965 in Neuilly-sur-Seine) is a French director, writer, and producer. She is the daughter of singer Marie Laforêt. Biography Lisa Azuelos is the daughter of French singer and actress Marie Laforêt and of Judas Azuelos, a Moroccan Jew of Sephardic descent. She has a younger brother and a step-sister, Deborah. Her parents separated when she was 2 years old. Her mother kept her and sent her with her brother to a Swiss boarding school, "Les Sept Nains", where children were allegedly maltreated physically and mentally. Afterwards the two siblings were sent to live with someone in a small village in the department of Sarthe. She stayed with her father since the age of twelve. That is the time she discovered his Sephardic heritage.  Lisa Azuelos was introduced to her future husband, film producer Patrick Alessandrin, by Luc Besson. The couple has three children, Carmen, Illan and Thaïs. They divorced after 11 years of marriage. Lisa Azuelos has a film production company, which she named Bethsabée Mucho after her paternal great-grandmother Bethsabée. Filmography Passage 7: Peter Levin Peter Levin is an American director of film, television and theatre. Career Since 1967, Levin has amassed a large number of credits directing episodic television and television films. Some of his television series credits include Love Is a Many Splendored Thing, James at 15, The Paper Chase, Family, Starsky & Hutch, Lou Grant, Fame, Cagney & Lacey, Law & Order and Judging Amy.Some of his television film credits include Rape and Marriage: The Rideout Case (1980), A Reason to Live (1985), Popeye Doyle (1986), A Killer Among Us (1990), Queen Sized (2008) and among other films. He directed "Heart in Hiding", written by his wife Audrey Davis Levin, for which she received an Emmy for Best Day Time Special in the 1970s. Prior to becoming a director, Levin worked as an actor in several Broadway productions. He costarred with Susan Strasberg in "[The Diary of Ann Frank]" but had to leave the production when he was drafted into the Army. He trained at the Carnegie Mellon University. Eventually becoming a theatre director, he directed productions at the Long Wharf Theatre and the Pacific Resident Theatre Company. He also co-founded the off-off-Broadway Theatre [the Hardware Poets Playhouse] with his wife Audrey Davis Levin and was also an associate artist of The Interact Theatre Company. Passage 8: Susan B. Nelson Susan B. Nelson (April 13, 1927 – May 4, 2003) was an American environmental activist who is best known as the mother of the Santa Monica Mountains National Recreation Area. Early life Sue Nelson was born Susan Louise Barr in Syracuse, New York, on April 13, 1927, the child of an accountant and a teacher. Her family moved to Los Angeles where she attended Alexander Hamilton High School and UCLA, graduating in 1948 with a degree in political science. 
She later earned a master's degree from UCLA in urban planning in 1969. Environmental activism Nelson started her conservationist career as a housewife in Mandeville Canyon. She later became an active member in the Sierra Club, the Peace and Freedom Party, and the Green Party. In 1964 she helped to found the Friends of the Santa Monica Mountains, Parks and Seashore, and also became this group's president. She is credited by congressman Anthony Beilenson as being the single greatest driver behind the establishment by Congress in 1978 of the Santa Monica Mountains National Recreation Area, the first truly urban national park. Along with Nelson, two other women (Jill Swift and Margot Feuer) were instrumental in bringing about federal, legal recognition of the SMMNRA. In the years following this federal legislation, Nelson lobbied Congress to provide more funding to expand and improve the parkland. Nelson also worked on a variety of other conservation projects throughout the Los Angeles region in the 1980s and 1990s, including areas such as Malibu Creek State Park, Point Mugu, Hollywood, Temescal Canyon, and Topanga Canyon. She also voiced her vocal opposition, through newspaper opinion pieces and town hall meetings, to development projects such as the Malibu Canyon Freeway, the Pacific Coast Freeway, and the Mulholland Highway. In addition, Nelson sounded a warning bell against the privatization of public parklands. Her persistence led some to call her ruthless, but also warmhearted and feisty. Personal life Nelson married Earl Nelson in 1948. Together they had four children, but the marriage ended in divorce. Nelson's son-in-law was the composer James Horner. She died on May 4, 2003, after she was hit by a car near her home in Echo Park, Los Angeles. Legacy Nelson's archives are held in Special Collections and Archives at the University Library of California State University, Northridge. Passage 9: Fatima bint Mubarak Al Ketbi Sheikha Fatima bint Mubarak Al Ketbi (Arabic: فاطمة بنت مبارك الكتبي) is the third wife of Sheikh Zayed bin Sultan Al Nahyan, the founder and inaugural president of United Arab Emirates. She is referred to as the mother of sheikhs, the mother of the UAE and as The mother of Nation. Early life Sheikha Fatima was born in Al-Hayer, Al Ain Region, as the only daughter to her parents. Her family is Bedouin and religious. Achievements Sheikha Fatima is a supporter of women's rights in the UAE. She is the supreme chairperson of the Family Development Foundation (FDF) and significantly contributed to the foundation of the first women's organization in 1976, the Abu Dhabi Society for the Awakening of Women. She was also instrumental in a nationwide campaign advocating for girls' education and heads the UAE's General Women Union (GWU), which she founded in 1975. She is also the President of the Motherhood and Childhood Supreme Council. At the end of the 1990s, she publicly announced that women should be members of the Federal National Council of the Emirates.Sheikha Fatima also supports efforts concerning adult literacy and provision of free public education for girls. An award named the Sheikha Fatima Award for Excellence has been presented in her honor since 2005 for the outstanding academic performance and commitment to the environment and world citizenship of the female recipients. The reward includes a full-tuition scholarship that extends to schools across the Middle East and in 2010 expanded to India. 
She has consistently supported women in sport and initiated an award called the Sheikha Fatima bint Mubarak Award for Woman Athletes. Sheikha Fatima bint Mubarak also created a women's sports academy called Fatima Bint Mubarak Ladies Academy in Abu Dhabi. The Sheikha Fatima Institute of Nursing and Health Sciences in Lahore, Pakistan, is named after her.On 30 March 2021, Sheikha Fatima launched a National Action Plan on women, peace and security - the first National Action Plan developed in a Gulf Cooperation Council (GCC) country. The plan aims to empower and support women globally by promoting the UN Security Council Resolution 1325. Awards In 1997, five different organizations of the United Nations had awarded Sheikha Fatima for her significant efforts for women's rights. The UNIFEM stated, "she is the champion of women's rights." She was also awarded the Grand Cordon of the Order of November 7th by the Tunisian president Zine El Abidine Ben Ali on 26 June 2009 for her contributions to raise the status of Arab women. She was also given the UNESCO Marie Curie Medal for her efforts in education, literacy and women's rights, being the third international and the first Arab recipient of the award.On March 16, 2005, she received the Athir Class of the National Order of Merit of Algeria. Marriage and children Fatima bint Mubarak Al Ketbi married Sheikh Zayed Al Nahyan when he was the ruler of the Eastern region in 1960. Sheikh Zayed met her in a mosque. They moved to Abu Dhabi when Sheikh Zayed became the ruler in August 1966. She was his most influential and favorite spouse because of her influential personality. She is the mother of Sheikh Mohamed, the current President of the United Arab Emirates and the ruler of Abu Dhabi; Sheikh Hamdan, Sheikh Hazza, Sheikh Tahnoun, Sheikh Mansour, Sheikh Abdullah, Sheikha Shamma and Sheikha Alyazia. They are the most powerful block in the ruling family of Abu Dhabi, the Al Nahyans. Passage 10: Minamoto no Chikako Minamoto no Chikako (源 親子) was the daughter of Kitabatake Morochika, and Imperial consort to Emperor Go-Daigo. She had earlier been Imperial consort to Go-Daigo's father, Emperor Go-Uda. She was the mother of Prince Morinaga.
Passage 1: Dance with a Stranger Dance with a Stranger is a 1985 British film directed by Mike Newell. Telling the story of Ruth Ellis, the last woman to be hanged in Britain (1955), the film won critical acclaim, and aided the careers of two of its leading actors, Miranda Richardson and Rupert Everett. The screenplay was by Shelagh Delaney, author of A Taste of Honey, and was her third major screenplay. The story of Ellis has resonance in Britain because it provided part of the background to the extended national debates that led to the progressive abolition of capital punishment from 1965. The theme song "Would You Dance with a Stranger?" was performed by Mari Wilson and was released as a single. Plot A former nude model and prostitute, Ruth is manageress of a drinking club in London that has racing drivers as its main clients. Ruth lives in a flat above the bar with her illegitimate son Andy. Another child is in the custody of her estranged husband's family. In the club, she meets David, an immature, young man from a well-off family who wants to succeed in motor racing but suffers from lack of money and overuse of alcohol. Ruth falls for his looks and charm, but it is a doomed relationship. Without a job, he cannot afford to marry her, and his family would never accept her. When he makes a drunken scene in the club, she is discharged from her job, which means that she is made homeless. Desmond, a wealthy admirer, secures a flat for her and her son, but she still sees David. When she tells him she is pregnant, he does nothing about it, and she miscarries. Distraught, she goes to a house in Hampstead where she believes David is at a party. He comes out and goes with a girl to a pub. Ruth waits outside the pub, and when he emerges, she shoots him dead with four shots. She is arrested, tried and hanged. Cast Miranda Richardson as Ruth Ellis Rupert Everett as David Blakely Ian Holm as Desmond Cussen Stratford Johns as Morrie Conley Joanne Whalley as Christine Tom Chadbon as Anthony Findlater Jane Bertish as Carole Findlater David Troughton as Cliff Davis Tracy Louise Ward as Girl with Blakeley Matthew Carroll as Andy Lesley Manville as Maryanne David Beale as Man in Little Club Charon Bourke as Ballroom Singer Reception The film made a comfortable profit. Goldcrest Films invested £253,000 in the film and received £361,000, making them a profit of £108,000. Critical response On Rotten Tomatoes, the film has an approval rating of 91%, based on reviews from 11 critics. Accolades Mike Newell won Award of the Youth at the 1985 Cannes Film Festival for Dance with a Stranger. Miranda Richardson won Best Actress at the Evening Standard British Film Awards, and Ian Holm won Boston Society of Film Critics Awards 1985 for this and other performances. Passage 2: Call Me (film) Call Me is a 1988 American erotic thriller film about a woman who strikes up a relationship with a stranger over the phone, and in the process becomes entangled in a murder. The film was directed by Sollace Mitchell, and stars Patricia Charbonneau, Stephen McHattie, and Boyd Gaines. Plot Anna, a young and energetic journalist, receives an obscene call from an unknown caller whom she mistakes for her boyfriend. As a result of this mistake she agrees to meet with the caller at a local bar. There she witnesses a murder in the women's bathroom. She finds herself drawn into a mystery involving both the killer and the mysterious caller who she shares increasingly personal conversations with. 
Cast Patricia Charbonneau as Anna Stephen McHattie as "Jellybean" Boyd Gaines as Bill Sam Freed as Alex Steve Buscemi as "Switchblade" Patti D'Arbanville as Cori David Strathairn as Sam Olek Krupa as Hennyk John Seitz as "Pressure" Pi Douglass as Nikki George Gerdes as Fred Ernest Abuba as Boss Kevin Harris as Dude Gy Mirano as The Waitress Reception The film was reviewed by the television show At the Movies, on May 28, 1988. Roger Ebert called the film a "directorial mess", citing laborious scenes which serve only to set up plot points, some of which are never followed up on. Gene Siskel felt the premise had potential, but it was ruined by the lead character's relentless stupidity, and that the film did not take the sexual elements far enough. The critics gave the film two thumbs down. External links Call Me at IMDb Call Me at AllMovie Passage 3: Dance with a Stranger (band) Dance with a Stranger is a Norwegian rock band from Kristiansund. Biography The band was founded in Bergen 1984 and had great success until they parted in 1994. Since then, they have had a few reunion concerts, as well as releasing compilation CDs. They were, among other things, voted Player of the Year at the Spellemannprisen 1991. The band took a longer break in the period 2002 to 2005. In 2007, they released the double compilation album Everyone Needs a Friend... The Very Best of Dance with a Stranger with three new songs and previously unreleased soundtracks from the 1980s, as well as highlights from the band's many releases. In 2013, bassist Yngve Moe died in an accident. The band still completed their farewell tour in 2014, now joined by Per Mathisen on bass. The band has continued concert activities after this. Discography Dance with a Stranger (1987) To (1989) Atmosphere (1991) Look What You've Done (1994) Unplugged (1994) The Best of Dance with a Stranger (1995) Happy Sounds (1998) Everyone Needs a Friend... The Very Best Of ( 2007) Members Present membersFrode Alnæs – guitar, vocals Øivind "Elg" Elgenes – vocals Per Mathisen – bass (2014) Bjørn Jenssen – drumsFormer memberYngve Moe – bass (1983–1994; died 2013) Sources Pop-lexicon (Norwegian) About Dance with a Stranger at the music guide Groove.no (Norwegian) Website Passage 4: Coney Island Baby (film) Coney Island Baby is a 2003 comedy-drama in which film producer Amy Hobby made her directorial debut. Karl Geary wrote the film and Tanya Ryno was the film's producer. The music was composed by Ryan Shore. The film was shot in Sligo, Ireland, which is known locally as "Coney Island". The film was screened at the Newport International Film Festival. Hobby won the Jury Award for "Best First Time Director". The film made its premiere television broadcast on the Sundance Channel. Plot After spending time in New York City, Billy Hayes returns to his hometown. He wants to get back together with his ex-girlfriend and take her back to America in hopes of opening up a gas station. But everything isn't going Billy's way - the townspeople aren't happy to see him, and his ex-girlfriend is engaged and pregnant. Then, Billy runs into his old friends who are planning a scam. Cast Karl Geary - Billy Hayes Laura Fraser - Bridget Hugh O'Conor - Satchmo Andy Nyman - Franko Patrick Fitzgerald - The Duke Tom Hickey - Mr. Hayes Conor McDermottroe - Gerry David McEvoy - Joe Thor McVeigh - Magician Sinead Dolan - Julia Music The film's original score was composed by Ryan Shore. 
External links Coney Island Baby (2006) at IMDb MSN - Movies: Coney Island Baby Passage 5: Dance with a Stranger (disambiguation) Dance with a Stranger may refer to one of the following: Dance with a Stranger, a 1985 film Jack and Jill (dance), a dance competition format Dance with a Stranger (band), a Norwegian rock band Passage 6: Miley Naa Miley Hum Miley Naa Miley Hum (transl. If we meet or don't) is a 2011 Indian film directed by Tanveer Khan, and marking the debut of Chirag Paswan, son of politician Ram Vilas Paswan. The film stars Kangana Ranaut, Neeru Bajwa and Sagarika Ghatge. The film released on 4 November 2011.The film went unnoticed and was considered a box office disaster. Subsequently, Paswan turned to politics and was elected to the Jamui seat in Bihar in the 2014 Lok Sabha elections. Plot Chirag comes from a wealthy background and assists his father, Siddharth Mehra, in managing and maintaining their land. Chirag's parents have been divorced due to incompatibility arising mainly due to his businesswoman mother, Shalini's hatred of tennis, a sport that Chirag wants to play professionally. Shalini and Siddharth would like to see Chirag married and accordingly Shalini picks London-based Kamiah, while Siddharth picks Bhatinda-based Manjeet Ahluwalia. Chirag, who sneaks off to practice tennis at night, is asked to make a choice but informs them that he is in love with a model named Anishka (Kangana Ranaut). The displeased couple decide to confront and put pressure on a struggling and unknowing klutz-like Anishka to leave their son alone but they fail. In the end, Chirag's parents realize their mistake and together attend Chirag's tennis match and give blessings to Chirag and Anishka. Critical reception Taran Adarsh of gave the film 2.5 stars and claimed that Miley Naa Miley Hum is an absorbing fare with decent merits.Komal Nahta of Koimoi.com gave the film 0.5 stars out of 5 saying that the film lacks merits to work at the box office. Cast Chirag Paswan as Chirag Mehra Kangana Ranaut as Anishka Srivastava Kabir Bedi as Siddharth Mehra Poonam Dhillon as Shalini Mehra Sagarika Ghatge as Kamiah Neeru Bajwa as Manjeet Dalip Tahil Suresh Menon Tanya Abrol Kunal Kumar Shweta Tiwari (Special Appearance in a song) Soundtrack Passage 7: Sex with a Stranger Sex with a Stranger is a 1986 pornographic horror film directed by Chris Monte and written by Cash Markman and Chad Randolph. Plot A group of seven seemingly unconnected people each receive a letter containing half of a thousand dollar bill, an invitation to a mansion, and the promise of money and prizes if they show up. Arriving at the house, the recipients of the envelopes find a note, which informs them that rooms have been prepared for them, and that their host (known only as "J.M.") will arrive soon to explain everything to them. The guests conclude that they have been called together due to a tontine made by relatives, who all died in a hotel fire during their last annual meeting. Trevor and Priscilla have sex in a bedroom, and Joy and Inspector #6 (who was in the midst of donning women's undergarments when Joy walked in on him) do the same elsewhere. Afterward, the inspector is killed when he falls or is shoved down a flight of stairs, and his body disappears shortly after the others find it. Wanting to know who summoned them, and in need of the money they have been promised, the remaining guests decide to stay despite the risk of being murdered. 
Slick and Sugar go off to have sex, and Priscilla is found dead, having been electrocuted while using a sabotaged vibrator. Thinking Priscilla's automatic camera could offer a clue as to what happened to her, Slick and Sugar try to develop the film in it, while Trevor mourns Priscilla's death by downing a glass of wine, which has been spiked with rodenticide. Joy coerces Doctor Rivameter into having sex on the bed containing Priscilla and Trevor's bodies, but they are interrupted mid-coitus by screams coming from another room. Rivameter discovers that Sugar has been drowned in a sink, and as she and Joy conclude that the killer must be Slick, he stumbles into the room with a spike through his head, and a knife in his back. Slick drops dead before he can reveal his killer, but then he instantly recovers, and it is revealed that he and all the other victims were not actually dead. The inspector had merely been knocked out by an accidental fall down the stairs, and the others had faked their deaths to stop themselves from being targeted by the nonexistent killer. Jacob Myers, the man who called everyone to the mansion, enters the room, and introduces himself as the attorney handling the tontine case. Myers states that all that is left of the tontine is the thousand dollar bills he sent to the inheritors to get them there, the rest of the money having been lost on a failed investment in liquid prophylactics. Joy follows Myers to his bedroom, and the others decide to pass the time until daylight by having an orgy. Cast Ebony Ayes as Sugar, a high class prostitute. Greg Derek as Trevor Fairbanks, an actor. Nina Hartley as Priscilla Vogue, a fashion model. Sheena Horne as Joy, a ditz with a fetish for anonymous sex. Scott Irish as Inspector #6, a clothing inspector. Keisha as Rivameter, a Doctor of Philosophy. Randy West as Sylvester "Slick" Rhodes, a shyster. Reception Adam Film World gave the film a three out of five, marking it as "Hot". AVN stated that while it was "a technically-sound production that features a capable cast" it was brought down by a ridiculous and overwrought plot, and mostly lukewarm sex.A one and a half was awarded by Popcorn for Breakfast, which called Sex with a Stranger "painfully derivative" and "a poster child for bad porn" before concluding "As a curiosity, it may have some archival value in that it's about as tasteless as mainstream porn gets in places". A two out of five was given by The Bloody Pit of Horror, which wrote "It's cheap (and shot-on-video, naturally), silly, has a few dumb laughs and there's lots of sex, so mission accomplished, I guess". Passage 8: The Wonderful World of Captain Kuhio The Wonderful World of Captain Kuhio (クヒオ大佐, Kuhio Taisa, lit. "Captain Kuhio") is a 2009 Japanese comedy-crime film, directed by Daihachi Yoshida, based on Kazumasa Yoshida's 2006 biographical novel, Kekkon Sagishi Kuhio Taisa (lit. "Marriage swindler Captain Kuhio"), that focuses on a real-life marriage swindler, who conned over 100 million yen (US$1.2 million) from a number of women between the 1970s and the 1990s.The film was released in Japan on 10 October 2009. 
Cast Masato Sakai - Captain Kuhio Yasuko Matsuyuki - Shinobu Nagano Hikari Mitsushima - Haru Yasuoka Yuko Nakamura - Michiko Sudo Hirofumi Arai - Tatsuya Nagano Kazuya Kojima - Koichi Takahashi Sakura Ando - Rika Kinoshita Masaaki Uchino - Chief Fujiwara Kanji Furutachi - Shigeru Kuroda Reila Aphrodite Sei Ando Awards At the 31st Yokohama Film Festival Best Actor – Masato Sakai Best Supporting Actress – Sakura Ando Passage 9: Dance with Death (film) Dance with Death is an American film starring Barbara Alyn Woods and Maxwell Caulfield. It is a reworking of Stripped to Kill, a previous film from 1987 produced by Roger Corman's Concorde Pictures studio. It is notable for featuring an early acting role for Lisa Kudrow. Plot Kelly is a reporter for a Los Angeles newspaper who finds out that strippers at a club called Bottoms Up are getting brutally murdered. With the prodding of her Hopper, her editor and ex-boyfriend, she goes undercover by winning an amateur night contest to get a job at the club. Once embedded, Kelly gets to know the other employees, particularly the snide owner Art, the hapless DJ Dermot, and mercurial dancer Jodie. She also discovers a regular patron, Shaughnessy, is an undercover detective investigating the murders. He soon discovers her true identity as a reporter, and they team up to investigate. As she continues working at the club, she is made aware of several suspects in the murders: Henry, a shy regular who is fixated on lingerie, Art, who has a connection to one of the dead women, and even Hopper, whom she learns covered a string of similar stripper murders in Atlanta and was interrogated by police. As they share their information, Kelly and Shaughnessy become infatuated with each other. After one night spent together, Kelly looks in on Jodie, who had not reported to work the previous evening, and discovers her murdered. Shaughnessy follows Henry to a park he regularly visits, and after confronting him, causes him to be shot dead by backup police. That night at the club, after a performance, Kelly hears noises from Art's office, and discovers him dead; Hopper seizes her, insisting he killed Art by accident, in a dispute over blackmail involving him and another of the club dancers. She escapes him, and Shaughnessy intercepts her and shoots Hopper dead. She is relieved at first, but as he holds her, she notices the stone from his ring is missing, and remembers that she found a stone in Jodie's hand; she realizes Shaughnessy is the murderer. She tries to escape him, but is followed by him into a next door warehouse. After repeated attempts to kill him which he recovers from, she finally sets a trap with gasoline and sets him on fire. Sometime later at the newspaper office, Kelly begins typing her story on the murders, called "Dance with Death." Cast Maxwell Caulfield as Shaughnessy Barbara Alyn Woods as Kelly Martin Mull as Art Catya Sassoon as Jodie Tracey Burch as Whitney Jill Pierce as Lola Alretha Baker as Sunny Michael McDonald as Henry Drew Snyder as Hopper Lisa Kudrow as Millie Maria Ford as Stripper (uncredited) Production Katt Shea wrote the original story for the 1987 with her husband. She later recalled: I just didn't get paid for it. It was weird. Basically my script from Stripped to Kill was re-worked and re-used by Roger Corman and a very bad movie was the result of that. That’s my opinion and I just don’t think that film was well done. I don’t like that Roger Corman does that. I love Roger, but I just didn’t like that. 
Passage 10: Lisa (1990 film) Lisa is a 1990 American thriller film directed by Gary Sherman and starring Staci Keanan, D. W. Moffett, Cheryl Ladd and Jeffrey Tambor. Its plot follows a teenage girl's infatuation with a stranger that, unknown to her, is a serial killer-stalker. Plot Fourteen-year-old Lisa Holland lives with her mother Katherine, a successful florist, in Venice Beach, California. Lisa is beginning to show a keen interest in boys but is not allowed to date due to her mother’s strict rule about not dating until she is 16. It is revealed that Katherine had Lisa when she was 14 years old. Abandoned by Lisa's father, Katherine was forced to leave home after her parents demanded that she put Lisa up for adoption. These facts have made Katherine very wary about Lisa dating, feeling she would end up like her mother. Lisa’s desire to have a boyfriend is furthered by her best friend Wendy Marks, whose less-strict mother and father have allowed her to start dating. Meanwhile, there is a serial killer running loose in Venice Beach, nicknamed the Candlelight Killer, so called because he rapes his victims by candlelight before killing them. The Candlelight Killer is a suave, good-looking, and successful restaurateur named Richard, who looks more like a sexy model than a serial killer. Richard stalks good-looking women once he finds out where they live. Uniquely, Richard calls his victims over the telephone leaving messages on their answering machines saying he is in their house and is going to kill them. As the women are listening to his message, Richard grabs them from behind and then begins his vicious attacks. One night, Lisa is coming home from the convenience store, and accidentally runs into Richard, leaving the house of another victim. Lisa is mesmerized by his good looks and follows him to his car once he leaves, copying down his license plate number. Through the DMV she is able to get his address and telephone number. Lisa then begins to call up Richard on the phone and engages him in seductive conversation. Richard is intrigued by their conversations, yet is more interested in finding out who she is, mainly because he is the one now being stalked. Lisa and Wendy follow Richard, finding out where he lives and works. Lisa even gets into Richard's car alone at one point only to have to hide in the back seat when he unexpectedly shows up. All this goes on unknown to Katherine, and with each succeeding conversation, in which Lisa reveals more about herself, Richard pushes Lisa towards meeting him for a date. Still at a standoff with her mother when it comes to dating, Wendy suggests that Lisa set up Katherine with Richard, implying that maybe if her mother "gets some", she will ease up and allow Lisa to date. As Easter weekend approaches, Lisa plans to go away with Wendy and her family to Big Bear, California. Katherine and Lisa decide to have a girls' night out dinner before she leaves, and Lisa makes reservations at Richard's restaurant. Lisa calls Richard informing him that she will be at the restaurant that night. Katherine goes to the bathroom ordering Lisa to pay the bill with her credit card. Richard gets a love note from Lisa with the bill, which reveals Katherine's credit card information, which he uses to track her down. When Lisa and Katherine arrive home, the two start bickering over Lisa's dating. Lisa immediately shouts back at Katherine and her stupid rules and that maybe if she got it once in a while, she wouldn't be such a bitch to her mother's dismay. 
Katherine orders Lisa to go to her room and grounds her, taking the phone from her room. Meanwhile, Richard begins to stalk the unsuspecting Katherine. While in Big Bear, Lisa decides to give Richard a call. He reveals to her that he knows her name is Katherine and that he knows where she lives. On the night Lisa is to return from Big Bear, Katherine enters the apartment and hears a message from Richard. Meanwhile, Lisa returns home and enters the apartment. Running into her room, she is attacked by Richard, who has knocked her mother unconscious. Richard brings Lisa into Katherine's bedroom and plans to assault her; Lisa sees the candles and realizes he is the Candlelight Killer. However, Katherine regains consciousness, knocks out Richard from behind and sends him through a window to his death. Relieved to be alive, mother and daughter collapse into each other's arms. Cast Cheryl Ladd as Katherine Holland D. W. Moffett as Richard / The Candlelight Killer Staci Keanan as Lisa Holland Tanya Fenmore as Wendy Marks Jeffrey Tambor as Mr. Marks, Wendy's Father Julie Cobb as Mrs. Marks, Wendy's Mother Edan Gross as Ralph Marks, Wendy's Brother Release Lisa was released to theaters on April 20, 1990, through United Artists. It achieved a domestic gross of $4,347,648, with an opening night of $1,119,895. Home media Lisa received a home video release in December 1990. The movie received a DVD release as part of MGM MOD Wave 16 on June 28, 2012. A Blu-ray edition, featuring a commentary track from director Gary Sherman and an interview with D. W. Moffett supervised by Scorpion Releasing, was released in December 2015 by Kino Lorber. Reception Critical reception for the film was negative; praise tended to center on Ladd's performance, while criticism centered on the script and its tropes. Roger Ebert gave the film 1 1/2 stars, stating that it was "a bludgeon movie with little respect for the audience's intelligence, and simply pounds us over the head with violence whenever there threatens to be a lull." A reviewer for The Ottawa Citizen was also critical, praising Ladd's performance while also criticizing the film as "hysterical and transparent in its attempt to scare audience members into hosing down their hormones."
question: Which film was released more recently, Dance With A Stranger or Miley Naa Miley Hum?
answer: Miley Naa Miley Hum
length: 3,934
dataset: 2wikimqa
context_range: 4k
Passage 1: Lynn Reynolds Lynn Fairfield Reynolds (May 7, 1889 – February 25, 1927) was an American director and screenwriter. Reynolds directed more than 80 films between 1915 and 1928. He also wrote for 58 films between 1914 and 1927. Reynolds was born in Harlan, Iowa and died in Los Angeles, California, from a self-inflicted gunshot wound. Death Returning home in 1927 after being snowbound in the Sierras for three weeks, Reynolds telephoned his wife, actress Kathleen O'Connor, to arrange a dinner party at their Hollywood home with another couple. During the dinner, Reynolds and O'Connor engaged in a heated quarrel in which each accused the other of infidelity. With his guests following in an attempt to calm him down, Reynolds left the table to retrieve a pistol from another room where he shot himself in the head. Selected filmography Passage 2: Thomas Kennedy Thomas or Tom Kennedy may refer to: Politics Thomas Kennedy (Scottish judge) (1673–1754), joint Solicitor General for Scotland 1709–14, Lord Advocate 1714, Member of Parliament for Ayr Burghs 1720–21 Thomas Kennedy, 9th Earl of Cassilis (bef. 1733–1775), Scottish peer, Marquess of Ailsa Thomas Kennedy (1776–1832), politician in Maryland, United States Thomas Francis Kennedy (1788–1879), Scottish Member of Parliament for Ayr Burghs 1818–1834 Thomas Daniel Kennedy (1849?–1877), Connecticut state legislator Thomas Kennedy (Australian politician) (1860–1929), Australian politician Tom Kennedy (British politician) (1874–1954), Scottish Member of Parliament for Kirkcaldy Burghs Thomas Laird Kennedy (1878–1959), politician in Ontario, Canada Thomas Kennedy (unionist) (1887–1963), American miner, president of the UMWA 1960–1963, Lieutenant Governor of Pennsylvania 1935–1939 Thomas Kennedy (Irish politician) (died 1947), Irish Labour Party politician and trade union official Thomas P. Kennedy (1951–2015), American politician, Massachusetts state senator Thomas Blake Kennedy (1874–1957), United States federal judge Entertainment Thomas E. Kennedy (born 1944), American fiction writer, essayist and translator Tom Kennedy (actor) (1885–1965), American actor Tom Kennedy (television host) (1927–2020), American television game show host Tom Kennedy (producer) (c. 1948–2011), American film trailer producer, director and film editor Tom Kennedy (musician) (born 1960), jazz double-bass and electric bass player Tom Kennedy (Neighbours), a character on the Australian soap opera Neighbours, played by Bob Hornery Sports Tom Kennedy (Australian footballer) (1906–1968), Australian rules footballer Tom Kennedy (wheelchair rugby) (born 1957), Australian Paralympic wheelchair rugby player Tom Kennedy (English footballer) (born 1985), English footballer Thomas J. Kennedy (1884–1937), American Olympic marathon runner Thomas Kennedy (basketball) (born 1987), American basketball player Tom Kennedy (quarterback) (1939–2006), American football quarterback Tom Kennedy (wide receiver) (born 1996), American football wide receiver Others Tom Kennedy (journalist) (born 1952), Canadian journalist Thomas Kennedy (unionist) (1887–1963), president of the United Mine workers Thomas Fortescue Kennedy (1774–1846), Royal Navy officer Thomas Kennedy (RAF officer) (1928–2013), British pilot Thomas Kennedy (violin maker) (1784–1870), British luthier Thomas A. Kennedy (born 1955), American CEO and chairman, Raytheon Company Thomas Francis Kennedy (bishop) (1858–1917), bishop of the Catholic Church in the United States See also Thomas L. 
Kennedy Secondary School (established 1953), high school in Mississauga, Ontario, Canada Passage 3: Space Probe Taurus Space Probe Taurus (a.k.a. Space Monster) is a 1965 low budget black-and-white science fiction/action/drama film from American International Pictures, written and directed by Leonard Katzman, and starring Francine York, James E. Brown, Baynes Barrow, and Russ Fender. Plot In the late 20th century, when crewed missions to outer space have become routine, a distress call from the spaceship Faith One requests its immediate destruction. It has been contaminated by an infectious gas, leaving all crew dead except for its commander (Bob Legionaire). The mission is aborted and the spaceship is destroyed.By 2000, a new propulsion technology has been developed. Four astronauts aboard the spaceship Hope One set off to find new planets for colonization. Their mission takes them past a space platform circling Earth. General Mark Tillman (James Macklin) at Earth Control HQ tells a TV reporter (John Willis) that all is going according to the pre-flight plan. The crew of gravity-controlled Hope One consists of the pilot/commanding officer, Colonel Hank Stevens (James Brown), and three scientists: Dr. John Andros (Baynes Barron), Dr. Paul Martin (Russ Bender), and Dr. Lisa Wayne (Francine York). It is quickly revealed that Stevens did not want a woman on the mission, but he is stuck with Dr. Wayne anyway. Not long into their voyage, Hope One comes upon an unknown spacecraft. Earth Control instructs them to investigate and they encounter a grotesque alien. The alien attacks Dr. Andros, forcing Stevens to shoot and kill it. Radiation levels then rise on the alien spacecraft, so Stevens sets a bomb to blow it up. After a fiery meteorite storm leads to an emergency landing in the ocean of an Earth-like escaped moon, Tillman takes time to apologize to Wayne for his sexist remarks, which results in a quick reconciliation and a more-than-friendly kiss. While repairs continue, giant crabs take an interest in the spaceship. The crew decides to test the atmosphere to see if it contains breathable air, which it does. Andros then volunteers to go scout the nearest land mass. A sea monster almost intercepts him, but the scientist reaches shore, while his comrades continue repairs and worry about him. Upon his return, Andros is again attacked by the sea monster and, after making it back safely to the spaceship, perishes after confirming that the planet can support human life and plants can grow. The crew confirms this to Earth, names the planet Andros One, and rockets back safely to Earth. Cast Francine York as Dr. Lisa Wayne James Brown as Col. Hank Stevens Baynes Barron as Dr. John Andros Russ Bender as Dr. Paul Martin John Willis as TV Reporter Bob Legionaire as Faith I Crewman James Macklin as Gen. Mark Tilman Phyllis Selznick as Earth Control Secretary John Lomma as Earth Control Passage 4: Mix in Mix in may refer to: A mix-in is some type of confectionery added to ice cream Mixin is a class in object-oriented programming languages Passage 5: Tom Mix in Arabia Tom Mix in Arabia is a 1922 American silent adventure film directed by Lynn Reynolds and starring Tom Mix, Barbara Bedford and George Hernandez. Cast Tom Mix as Billy Evans Barbara Bedford as Janice Terhune George Hernandez as Arthur Edward Terhune Norman Selby as Pussy Foot Bogs Edward Peil Sr. as Ibrahim Bulamar Ralph Yearsley as Waldemar Terhune Hector V. 
Sarno as Ali Hasson Passage 6: Tom Tom or TOM may refer to: Tom (given name), a diminutive of Thomas or Tomás or an independent Aramaic given name (and a list of people with the name) Characters Tom Anderson, a character in Beavis and Butt-Head Tom Beck, a character in the 1998 American science-fiction disaster movie Deep Impact Tom Buchanan, the main antagonist from the 1925 novel The Great Gatsby Tom Cat, a character from the Tom and Jerry cartoons Tom Lucitor, a character from the American animated series Star vs. the Forces of Evil Tom Natsworthy, from the science fantasy novel Mortal Engines Tom Nook, a character in Animal Crossing video game series Tom Servo, a robot character from the Mystery Science Theater 3000 television series Tom Sloane, a non-adult character from the animated sitcom Daria Talking Tom, the protagonist from the Talking Tom & Friends franchise Tom, a character from the Deltora Quest books by Emily Rodda Tom, a character from the 1993 action/martial arts movie Showdown Tom, a character from the cartoon series Tom and Jerry (Van Beuren) Tom, a character from the anime and manga series One Piece Tom (Paralympic mascot), the official mascot of the 2016 Summer Paralympics Tom, a fictional dinosaur from the children's cartoon Tom Tom, the main protagonist from the British children's live-action series Tree Fu Tom Tom, a character from the children's series Tots TV T.O.M., the robot host/mascot of Adult Swim's Toonami action block Entertainment Tom (1973 film), a blaxploitation film Tom (2002 film), a documentary film directed by Mike Hoolboom Tom (instrument) Tom (American TV series) Tom (Spanish TV series) Tom, a 1970 album by Tom Jones Tom-tom drum Geography Tom (Amur Oblast), in Russia, a left tributary of the Zeya Tom (river), in Russia, a right tributary of the Ob Biology A male cat A male turkey Transport Thomson Airways ICAO code Tottenham Hale station, London, England (National Rail station code) Acronyms Territoire d'outre-mer or overseas territory Text Object Model, a Microsoft Windows programming interface Theory of mind, the ability to attribute mental states to oneself and others and to understand that others have states that are different from one's own Translocase of the outer membrane, a protein for intracellular protein-equilibrium Troops Out Movement, campaigned against British involvement in Northern Ireland Tune-o-matic, a guitar bridge design Target operating model, a description of the desired state of an organizational model in a business at a chosen date Other uses TOM (mascot), three Bengal tigers that have been the mascot of the University of Memphis sports teams Tom (pattern matching language), a programming language Tom, Oklahoma TOM Group, a Chinese media company TOM Online, a Chinese mobile internet company TOM (psychedelic) Tom (gender identity), a gender identity in Thailand See also Tom Tom (disambiguation) Mount Tom (disambiguation) Peeping Tom (disambiguation) Thomas (disambiguation) Tom Thumb (disambiguation) Tomás (disambiguation) Tomm (disambiguation) Tommy (disambiguation) Toms (disambiguation) Passage 7: Leonard Katzman Leonard Katzman (September 2, 1927 – September 5, 1996) was an American film and television producer, writer and director. He was most notable for being the showrunner of the CBS oil soap opera Dallas. Early life and career Leonard Katzman was born in New York City on September 2, 1927, to a Jewish family. 
He began his career in the 1940s, while still in his teens, working as an assistant director for his uncle, Hollywood producer Sam Katzman. He started out on adventure movie serials such as Brenda Starr, Reporter (1945), Superman (1948), Batman and Robin (1949), The Great Adventures of Captain Kidd (1951), Riding with Buffalo Bill (1954), et al. During the 1950s he continued working as assistant director, mostly with his uncle, in feature films such as A Yank in Korea (1951), The Giant Claw (1957), Face of a Fugitive (1959) and Angel Baby (1961). Besides his big screen work, Katzman also served on television shows, including The Adventures of Wild Bill Hickok, The Mickey Rooney Show and Bat Masterson. In 1960, Katzman made his production debut, serving not only as assistant director, but also as associate producer, on all four seasons of adventure drama Route 66 (1960-1964), which he would later regard as his favorite production. His additional early work in television production (and occasional writing and directing) includes shows crime drama Tallahassee 7000 (1961), western drama The Wild Wild West (1965-1969), the second season of crime drama Hawaii Five-O (1969-1970), legal drama Storefront Lawyers (1970-1971), the final five seasons of western drama Gunsmoke (1970-1975) as well as its spinoff series Dirty Sally (1974), legal drama Petrocelli (1974-1976) for which he was nominated an Edgar Allan Poe Award, and the two science fiction dramas The Fantastic Journey (1977) and Logan's Run (1977-1978). In 1965, he wrote, produced and directed the science fiction film Space Probe Taurus (also known as Space Monster). Aside from his work as assistant director, this was his only venture into feature films. Dallas In 1978, Katzman served as producer for the five-part miniseries Dallas, which would evolve into one of television's longest running dramas, lasting until 1991. While the series was created by David Jacobs, Katzman became the de facto show runner during the second season of the show, as Jacobs stepped down to create and later run Dallas spin-off series Knots Landing. Under Katzman's lead, Dallas, whose first episodes had consisted of self-contained stories, evolved into a serial, leading into the '80s trend of prime time soap operas.While Katzman headed Dallas' writing staff from the show's second season, he remained producer, with Philip Capice serving as executive producer. The creative conflicts between Capice and Katzman eventually led to Katzman stepping down from his production duties on the show for season nine, instead being billed as "creative consultant" (during this time he also worked on the short-lived drama series Our Family Honor). However, increased production costs and decreasing ratings caused production company Lorimar—along with series star Larry Hagman (J. R. Ewing)—to ask Katzman to return to the show in his old capacity. Katzman agreed, reportedly under the condition that he would have "total authority" on the show, and as of the tenth season premiere he was promoted to executive producer, and Capice was let go. Katzman remained as executive producer on Dallas until the series finale in May 1991. Besides his production work, he also wrote and directed more episodes of the series than anyone else. 
After Dallas Following "Dallas", Katzman went on to create the short-lived crime drama Dangerous Curves (1992-1993), which aired as a part of CBS' late-night drama block Crimetime After Primetime, and serve as executive producer for the second season of the action drama Walker, Texas Ranger (1994-1995). His last work was the 1996 "Dallas" reunion movie J.R. Returns, which he also wrote and directed. Personal life and death Katzman fathered his first child, Gary Katzman, with Eileen Leener (1929-2019). Katzman did not raise his first child and left the boy's mother when the child was four years old. The child was eventually adopted and took the surname Klein. Through Gary Klein, Katzman is the biological grandfather of Ethan Klein of the Israeli-American YouTube comedy channel h3h3Productions. Leonard Katzman and his wife LaRue Farlow Katzman had three children. His daughter, actress Sherril Lynn Rettino (1956-1995), predeceased her father by one year. She played the recurring character Jackie Dugan on Dallas from 1979-91. His sons Mitchell Wayne Katzman and Frank Katzman, as well as son-in-law John Rettino, all worked on the production of Dallas' later seasons. Both sons were also involved in the production of Dangerous Curves; Walker, Texas Ranger; and J. R. Returns. Katzman died of a heart attack in Malibu, California, on September 5, 1996, three days after his 69th birthday and more than two months prior to the airing of his last production, Dallas: J.R. Returns. He was interred in the Mount Sinai Memorial Park Cemetery in Los Angeles. Filmography Excluding work as assistant director. Awards 1997: Lone Star Film & Television Awards - Special Award Passage 8: Thomas Ford Thomas or Tom Ford may refer to: Thomas Ford (martyr) (died 1582), English martyr Thomas Ford (composer) (c. 1580–1648), English composer, lutenist, and viol player Thomas Ford (minister) (1598–1674), English nonconformist minister Thomas Ford (politician) (1800–1850), governor of Illinois Thomas Ford (rower), British rower Thomas H. Ford (1814–1868), American politician in Ohio Tom Ford (baseball) (1866–1917), baseball pitcher Thomas F. Ford (1873–1958), California politician Thomas Ford (architect) (1891–1971), British architect Thomas Gardner Ford (1918–1995), Member of the Michigan House of Representatives Tom Ford (born 1961), American designer Thomas Mikal Ford (1964–2016), American actor Tom Ford (presenter) (born 1977), British television presenter Tom Ford (snooker player) (born 1983), English snooker player Tom Ford (squash player) (born 1993), British squash player "Tom Ford" (song), a 2013 song by Jay-Z See also Tommy Ford (disambiguation) Passage 9: Thomas Walker Thomas or Tom Walker may refer to: Entertainment Thomas Walker (actor) (1698–1744), English actor and dramatist Thomas Walker (author) (1784–1836), English barrister, police magistrate and writer of a one-man periodical, The Original Thomas Bond Walker (1861–1933), Irish painter Tom Walker (singer) (born 1991), Scottish singer-songwriter Tom Walker (Homeland), a character in the TV series Homeland Tom Walker, British actor and comedian known for his character Jonathan Pie, a fictional British news reporter Tom Walker (comedian), Australian comedian, mime and Twitch streamer Law Thomas Joseph Walker (1877–1945), Judge for the United States Customs Court Thomas Glynn Walker (1899–1993), United States federal judge Thomas Walker (attorney) (born 1964), U.S.
attorney Politics Thomas Walker (died 1748) (1660s–1748), Member of Parliament for Plympton Erle, 1735–1741 Thomas Walker (merchant) (1749–1817), English political radical in Manchester Thomas Eades Walker (1843–1899), British Member of Parliament for East Worcestershire, 1874–1880 Thomas Gordon Walker (1849–1917), British Indian civil servant Thomas Walker (Australian politician) (1858–1932), member of two different state parliaments Thomas Walker (Canadian politician) (died 1812), Canadian lawyer and politician Thomas J. Walker (1927–1998), provincial MLA from Alberta, Canada Thomas Walker (American politician) (1850–1935), Alabama state legislator Sports Tom Walker (cricketer) (1762–1831), English cricketer Thomas Walker (Yorkshire cricketer) (1854–1925), English cricketer Tom Walker (1900s pitcher) (1881–1944), baseball player Tom Walker (1970s pitcher) (born 1948), American baseball player Tommy Walker (footballer, born 1915) (1915–1993), Scottish footballer and manager Tom Walker (footballer) (born 1995), English footballer Other Thomas Walker (academic) (died 1665), English academic at Oxford University Thomas Walker (explorer) (1715–1794), American explorer Thomas Walker (slave trader) (1758–1797), British slave trader Thomas Walker (died 1805), Irish publisher of Walker's Hibernian Magazine Thomas Walker (philanthropist) (1804–1886), Australian politician and banker Thomas Larkins Walker (c.1811–1860), Scottish architect Thomas Walker (journalist) (1822–1898), English editor of The Daily News Thomas A. Walker (1828–1889), English civil engineering contractor T. B. Walker (1840–1928), Minneapolis businessman who founded the Walker Art Center Thomas William Walker (1916–2010), soil scientist Thomas Walker (naval officer) (1919–2003), United States Navy officer Thomas B. Walker Jr. (1923–2016), American investment banker, corporate director and philanthropist Tom Walker (priest) (born 1933), Anglican priest and author Thomas J. Walker, namesake of the Thomas J. Walker House in Knoxville, Tennessee Thomas Walker & Son, manufacturers of nautical instruments, Birmingham, England See also Tommy Walker (disambiguation) Passage 10: Thomas Baker Thomas or Tom Baker may refer to: Politicians Thomas Cheseman or Thomas Baker (c. 1488–1536 or later), Member of Parliament for Rye Thomas Baker (died 1625), Member of Parliament for Arundel Tom Baker (Nebraska politician) (born 1948), member of Nebraska Legislature Thomas Guillaume St. 
Barbe Baker (1895–1966), Fascist activist and former British Army and RAF officer Colonel Thomas Baker (1810–1872), founder of Bakersfield, California Sports Thomas Baker (cricketer) (born 1981), English cricketer who played for Yorkshire County Cricket Club and Northamptonshire County Cricket Club Tom Baker (footballer, born 1934), Wales international football player, commonly called George Tom Baker (bowler) (born 1954), American bowler Tom Baker (1930s pitcher) (1913–1991), Major League Baseball pitcher for the Brooklyn Dodgers and New York Giants Tom Baker (1960s pitcher) (1934–1980), Major League Baseball pitcher for the Chicago Cubs Tom Baker (footballer, born 1905) (1905–1975), British footballer Thomas Southey Baker (1848–1902), English amateur rower and footballer Military Thomas Baker (Royal Navy officer) (1771–1845), Royal Navy admiral Thomas Durand Baker (1837–1893), Quartermaster-General to the Forces Thomas Baker (Medal of Honor recipient) (1916–1944), World War II Medal of Honor recipient Thomas Baker (aviator) (1897–1918), Australian soldier and aviator of the First World War Thomas Baker (general) (born 1935), United States Air Force general Religion Thomas Baker (missionary) (1832–1867), English Christian missionary cannibalised in Fiji Sir Thomas Baker (Unitarian) (1810–1886), English Unitarian minister and Mayor of Manchester Thomas Nelson Baker Sr. (1860–1941), African-American minister, author and philosopher Tom Baker (priest) (1920–2000), Anglican clergyman Actors Tom Baker (born 1934), played The Doctor on Doctor Who from 1974 to 1981 Tom Baker (American actor) (1940–1982) Education Tom Baker (professor) (born 1959), law professor at the University of Pennsylvania Law School Thomas E. Baker, professor of Constitutional law and former administrative assistant to William Rehnquist Thomas Baker (college president) (1871–1939), president of Carnegie Mellon University Thomas Baker (entomologist), American professor at Penn State University Others Thomas Baker (antiquarian) (1656–1740), English antiquarian Thomas Baker (artist) (1809–1864), English landscape painter and watercolourist Thomas Baker (Peasants' Revolt leader) (died 1381), English landowner Thomas Baker (musician), composer and producer of musical stage productions Thomas Baker (mathematician) (1625?–1689), English mathematician Thomas Baker (dramatist) (c. 1680–1749), English dramatist and lawyer Other uses Tom Baker Cancer Centre, a hospital in Canada Tom Baker (24 character) DC Tom Baker, a character on The Bill Tom Baker, protagonist in the 2003 film Cheaper by the Dozen and its sequel "Tom Baker", a song by Human League on some versions of Travelogue
question: Which film has the director who is older, Space Probe Taurus or Tom Mix In Arabia?
answer: Tom Mix In Arabia
length: 3,324
dataset: 2wikimqa
context_range: 4k
Passage 1: Peter Levin Peter Levin is an American director of film, television and theatre. Career Since 1967, Levin has amassed a large number of credits directing episodic television and television films. Some of his television series credits include Love Is a Many Splendored Thing, James at 15, The Paper Chase, Family, Starsky & Hutch, Lou Grant, Fame, Cagney & Lacey, Law & Order and Judging Amy.Some of his television film credits include Rape and Marriage: The Rideout Case (1980), A Reason to Live (1985), Popeye Doyle (1986), A Killer Among Us (1990), Queen Sized (2008) and among other films. He directed "Heart in Hiding", written by his wife Audrey Davis Levin, for which she received an Emmy for Best Day Time Special in the 1970s. Prior to becoming a director, Levin worked as an actor in several Broadway productions. He costarred with Susan Strasberg in "[The Diary of Ann Frank]" but had to leave the production when he was drafted into the Army. He trained at the Carnegie Mellon University. Eventually becoming a theatre director, he directed productions at the Long Wharf Theatre and the Pacific Resident Theatre Company. He also co-founded the off-off-Broadway Theatre [the Hardware Poets Playhouse] with his wife Audrey Davis Levin and was also an associate artist of The Interact Theatre Company. Passage 2: Dana Blankstein Dana Blankstein-Cohen (born March 3, 1981) is the executive director of the Sam Spiegel Film and Television School. She was appointed by the board of directors in November 2019. Previously she was the CEO of the Israeli Academy of Film and Television. She is a film director, and an Israeli culture entrepreneur. Biography Dana Blankstein was born in Switzerland in 1981 to theatre director Dedi Baron and Professor Alexander Blankstein. She moved to Israel in 1983 and grew up in Tel Aviv. Blankstein graduated from the Sam Spiegel Film and Television School, Jerusalem in 2008 with high honors. During her studies she worked as a personal assistant to directors Savi Gabizon on his film Nina's Tragedies and to Renen Schorr on his film The Loners. She also directed and shot 'the making of' film on Gavison's film Lost and Found. Her debut film Camping competed at the Berlin International Film Festival, 2007. Film and academic career After her studies, Dana founded and directed the film and television department at the Kfar Saba municipality. The department encouraged and promoted productions filmed in the city of Kfar Saba, as well as the established cultural projects, and educational community activities. Blankstein directed the mini-series "Tel Aviviot" (2012). From 2016-2019 was the director of the Israeli Academy of Film and Television. In November 2019 Dana Blankstein Cohen was appointed the new director of the Sam Spiegel Film and Television School where she also oversees the Sam Spiegel International Film Lab. In 2022, she spearheaded the launch of the new Series Lab and the film preparatory program for Arabic speakers in east Jerusalem. Filmography Tel Aviviot (mini-series; director, 2012) Growing Pains (graduation film, Sam Spiegel; director and screenwriter, 2008) Camping (debut film, Sam Spiegel; director and screenwriter, 2006) Passage 3: Oskar Roehler Oskar Roehler (born 21 January 1959) is a German film director, screenwriter and journalist. He was born in Starnberg, the son of writers Gisela Elsner and Klaus Roehler. Since the mid-1980s, he has been working as a screenwriter, for, among others, Niklaus Schilling, Christoph Schlingensief and Mark Schlichter. 
Since the early 1990s, he has also been working as a film director. For his film No Place to Go he won the Deutscher Filmpreis. His 2010 film Jew Suss: Rise and Fall was nominated for the Golden Bear at the 60th Berlin International Film Festival. Partial filmography Gentleman (1995) Silvester Countdown (1997) Gierig (1999) No Place to Go (2000) Suck My Dick (2001) Beloved Sister (2002) Angst (2003) Agnes and His Brothers (2004) The Elementary Particles (2006) Lulu and Jimi (2009) Jew Suss: Rise and Fall (2010) Sources of Life (2013) Punk Berlin 1982 (2015) Subs (2017) Enfant Terrible (2020) Passage 4: Atomised (film) Atomised (German: Elementarteilchen; also known as The Elementary Particles) is a 2006 German drama film written and directed by Oskar Roehler and produced by Oliver Berben and Bernd Eichinger. It is based on the novel Les Particules élémentaires by Michel Houellebecq. The film stars Moritz Bleibtreu as Bruno, Christian Ulmen as Michael, Martina Gedeck as Christiane, Franka Potente as Annabelle, and Nina Hoss as Jane. The film had its premiere at the Berlin Film Festival in Germany in February 2006. In contrast to the book setting in Paris, the film was shot entirely and is mainly situated in various places in Germany. Cities and states in Germany used for filming included Thuringia and Berlin. Contrary to the book, the film does not have cultural pessimism as a main theme, and it has an alternative ending. Plot The film focuses on Michael (Michael Djerzinski) and Bruno and their disturbed sexuality. They are half-brothers who are very different from each other. They both had an unusual childhood because their mother was a hippie, instead growing up with their grandmothers and in boarding schools. Michael grows up to become a molecular biologist and in doing so becomes more fascinated with genetics and separating reproduction and sexuality by cloning rather than having actual sexual relationships. He is frustrated by his current job in Berlin and decides to continue his research on cloning at an institution in Ireland. Bruno, a secondary school teacher and unsuccessful author, on the other hand, is obsessed with his own sexual desires and systematically drowns himself in failed attempts with women and nights with prostitutes. He voluntarily checks himself into a mental institution after having sexually harassed one of his students. Before his departure to Ireland, Michael visits the village of his childhood for the first time in years. To his surprise, he meets his childhood friend Annabelle there and finds that she is still single and they start a sexual relationship. Bruno leaves the mental institution and goes on holiday to a hippie camp after being faced with divorce by his wife. At the camp he meets Christiane, who is also sexually open. Although they have an open relationship, he falls in love with her. During a sex orgy at one of their visits to a swing club, Christiane collapses and Bruno is faced in hospital with the news that Christiane is paralysed forever because of a chronic illness. Nonetheless Bruno wants to live with her until the end. However Christiane insists that he should take some time for consideration. Michael moves to Ireland and learns that, despite his doubts, his old research on cloning was a revolutionary breakthrough. However he misses Annabelle but does not manage to get her on the phone. Annabelle is informed that she is pregnant but must have an abortion and her womb removed due to life-threatening abnormalities. 
Bruno calls Christiane but always replaces the receiver after just one ring. He finally drives to her apartment only to learn that she has committed suicide shortly before. Subsequently he re-enters mental institution totally devastated. Michael is told by Annabelle's mother that Annabelle had an abortion and a severe surgery. He immediately leaves Ireland for Annabelle and finally openly admits his deep love to her. In hospital Bruno has hallucinations of Christiane who explains to him that her suicide was not his fault. In his imagination he tells her that he ultimately has decided to stay with her forever. After Annabelle recovers and before their departure to Ireland, Michael and Annabelle visit Bruno in hospital and take him to the beach. Michael asks Bruno if he wants to come with Annabelle and him to Ireland but Bruno decides to live happily in hospital with Christiane in his mind forever. The film ends with title cards stating that Michael Djerzinski received the Nobel Prize. This too is fiction. Cast Moritz Bleibtreu Christian Ulmen Martina Gedeck Franka Potente Nina Hoss Reception The film has a 100 percent rating in the review aggregating website Rotten Tomatoes based on seven reviews. Passage 5: Ian Barry (director) Ian Barry is an Australian director of film and TV. Select credits Waiting for Lucas (1973) (short) Stone (1974) (editor only) The Chain Reaction (1980) Whose Baby? (1986) (mini-series) Minnamurra (1989) Bodysurfer (1989) (mini-series) Ring of Scorpio (1990) (mini-series) Crimebroker (1993) Inferno (1998) (TV movie) Miss Lettie and Me (2002) (TV movie) Not Quite Hollywood: The Wild, Untold Story of Ozploitation! (2008) (documentary) The Doctor Blake Mysteries (2013) Passage 6: Susan B. Nelson Susan B. Nelson (April 13, 1927 – May 4, 2003) was an American environmental activist who is best known as the mother of the Santa Monica Mountains National Recreation Area. Early life Sue Nelson was born Susan Louise Barr in Syracuse, New York, on April 13, 1927, the child of an accountant and a teacher. Her family moved to Los Angeles where she attended Alexander Hamilton High School and UCLA, graduating in 1948 with a degree in political science. She later earned a master's degree from UCLA in urban planning in 1969. Environmental activism Nelson started her conservationist career as a housewife in Mandeville Canyon. She later became an active member in the Sierra Club, the Peace and Freedom Party, and the Green Party. In 1964 she helped to found the Friends of the Santa Monica Mountains, Parks and Seashore, and also became this group's president. She is credited by congressman Anthony Beilenson as being the single greatest driver behind the establishment by Congress in 1978 of the Santa Monica Mountains National Recreation Area, the first truly urban national park. Along with Nelson, two other women (Jill Swift and Margot Feuer) were instrumental in bringing about federal, legal recognition of the SMMNRA. In the years following this federal legislation, Nelson lobbied Congress to provide more funding to expand and improve the parkland. Nelson also worked on a variety of other conservation projects throughout the Los Angeles region in the 1980s and 1990s, including areas such as Malibu Creek State Park, Point Mugu, Hollywood, Temescal Canyon, and Topanga Canyon. She also voiced her vocal opposition, through newspaper opinion pieces and town hall meetings, to development projects such as the Malibu Canyon Freeway, the Pacific Coast Freeway, and the Mulholland Highway. 
In addition, Nelson sounded a warning bell against the privatization of public parklands. Her persistence led some to call her ruthless, but also warmhearted and feisty. Personal life Nelson married Earl Nelson in 1948. Together they had four children, but the marriage ended in divorce. Nelson's son-in-law was the composer James Horner. She died on May 4, 2003, after she was hit by a car near her home in Echo Park, Los Angeles. Legacy Nelson's archives are held in Special Collections and Archives at the University Library of California State University, Northridge. Passage 7: Fatima bint Mubarak Al Ketbi Sheikha Fatima bint Mubarak Al Ketbi (Arabic: فاطمة بنت مبارك الكتبي) is the third wife of Sheikh Zayed bin Sultan Al Nahyan, the founder and inaugural president of United Arab Emirates. She is referred to as the mother of sheikhs, the mother of the UAE and as The mother of Nation. Early life Sheikha Fatima was born in Al-Hayer, Al Ain Region, as the only daughter to her parents. Her family is Bedouin and religious. Achievements Sheikha Fatima is a supporter of women's rights in the UAE. She is the supreme chairperson of the Family Development Foundation (FDF) and significantly contributed to the foundation of the first women's organization in 1976, the Abu Dhabi Society for the Awakening of Women. She was also instrumental in a nationwide campaign advocating for girls' education and heads the UAE's General Women Union (GWU), which she founded in 1975. She is also the President of the Motherhood and Childhood Supreme Council. At the end of the 1990s, she publicly announced that women should be members of the Federal National Council of the Emirates.Sheikha Fatima also supports efforts concerning adult literacy and provision of free public education for girls. An award named the Sheikha Fatima Award for Excellence has been presented in her honor since 2005 for the outstanding academic performance and commitment to the environment and world citizenship of the female recipients. The reward includes a full-tuition scholarship that extends to schools across the Middle East and in 2010 expanded to India. She has consistently supported women in sport and initiated an award called the Sheikha Fatima bint Mubarak Award for Woman Athletes. Sheikha Fatima bint Mubarak also created a women's sports academy called Fatima Bint Mubarak Ladies Academy in Abu Dhabi. The Sheikha Fatima Institute of Nursing and Health Sciences in Lahore, Pakistan, is named after her.On 30 March 2021, Sheikha Fatima launched a National Action Plan on women, peace and security - the first National Action Plan developed in a Gulf Cooperation Council (GCC) country. The plan aims to empower and support women globally by promoting the UN Security Council Resolution 1325. Awards In 1997, five different organizations of the United Nations had awarded Sheikha Fatima for her significant efforts for women's rights. The UNIFEM stated, "she is the champion of women's rights." She was also awarded the Grand Cordon of the Order of November 7th by the Tunisian president Zine El Abidine Ben Ali on 26 June 2009 for her contributions to raise the status of Arab women. She was also given the UNESCO Marie Curie Medal for her efforts in education, literacy and women's rights, being the third international and the first Arab recipient of the award.On March 16, 2005, she received the Athir Class of the National Order of Merit of Algeria. 
Marriage and children Fatima bint Mubarak Al Ketbi married Sheikh Zayed Al Nahyan when he was the ruler of the Eastern region in 1960. Sheikh Zayed met her in a mosque. They moved to Abu Dhabi when Sheikh Zayed became the ruler in August 1966. She was his most influential and favorite spouse because of her influential personality. She is the mother of Sheikh Mohamed, the current President of the United Arab Emirates and the ruler of Abu Dhabi; Sheikh Hamdan, Sheikh Hazza, Sheikh Tahnoun, Sheikh Mansour, Sheikh Abdullah, Sheikha Shamma and Sheikha Alyazia. They are the most powerful block in the ruling family of Abu Dhabi, the Al Nahyans. Passage 8: Kekuʻiapoiwa II Kekuʻiapoiwa II was a Hawaiian chiefess and the mother of the king Kamehameha I. Biography She was named after her aunt Kekuʻiapoiwa Nui (also known as Kekuʻiapoiwa I), the wife of King Kekaulike of Maui. Her father was High Chief Haʻae, the son of Chiefess Kalanikauleleiaiwi and High Chief Kauaua-a-Mahi of the Mahi family of the Kohala district of Hawaiʻi island, and brother of Alapainui. Her mother was Princess Kekelakekeokalani-a-Keawe (also known as Kekelaokalani), daughter of the same Kalanikauleleiaiwi and Keaweʻīkekahialiʻiokamoku, king of Hawaii. Her mother had been sought after by many who wished to marry into the Keawe line. She was the niece of Alapainui through both her father and mother. She married the High Chief Keōua to whom she had been betrothed since childhood. Through her double grandmother Kalanikauleleiaiwi, Keōua's own paternal grandmother, she was the double cousin of Keōua. When her uncle was staying at Kohala superintending the collection of his fleet and warriors from the different districts of the island preparatory to the invasion of Maui, in the month of Ikuwa (probably winter) Kamehameha was born probably in November 1758.: 135–136  He had his birth ceremony at the Moʻokini Heiau, an ancient temple which is preserved in Kohala Historical Sites State Monument.Many stories are told about the birth of Kamehameha. One says that when Kekuʻiapoiwa was pregnant with Kamehameha, she had a craving for the eyeball of a chief. She was given the eyeball of a man-eating shark and the priests prophesied that this meant the child would be a rebel and a killer of chiefs. Alapainui, the old ruler of the island of Hawaiʻi, secretly made plans to have the newborn infant killed.Kekuʻiapoiwa's time came on a stormy night in the Kohala district, when a strange star with a tail of white fire appeared in the western sky. This could have been Halley's Comet which appeared near the end of 1758. According to one legend, the baby was passed through a hole in the side of Kekuiapoiwa's thatched hut to a local Kohala chief named Naeʻole, who carried the child to safety at Awini on the island's north coast. By the time the infant in Naeʻole's care was five, Alapainui had accepted him back into his household.After Kamehameha, Kekuʻiapoiwa bore a second son, Keliimaikai. A few years later, Keōua died in Hilo, and the family moved with Alapainui to an area near Kawaihae, where she married a chief of the Kona district (and her uncle) Kamanawa. She had one daughter, Piʻipiʻi Kalanikaulihiwakama, from this second husband, who would later become an important military ally of Kamehameha, who was both step son and cousin through several relationships. 
Piʻipiʻi became first the wife of Keholoikalani, the father of her son Kanihonui, and later she married Kaikioewa, who she had a daughter Kuwahine with.: 18 Kamehameha dynasty Passage 9: Minamoto no Chikako Minamoto no Chikako (源 親子) was the daughter of Kitabatake Morochika, and Imperial consort to Emperor Go-Daigo. She had earlier been Imperial consort to Go-Daigo's father, Emperor Go-Uda. She was the mother of Prince Morinaga. Passage 10: Trinidad Tecson Trinidad Perez Tecson (November 18, 1848 – January 28, 1928), known as the "Mother of Biak-na-Bato" and "Mother of Mercy", fought to gain Philippines independence. She was given the title "Mother of Biak-na-Bato" by Gen. Emilio Aguinaldo. She was also cited as the "Mother of the Philippine National Red Cross" for her service to her fellow Katipuneros. Early life Tecson was born in San Miguel de Mayumo, Bulacan, one of sixteen children of Rafael Tecson and Monica Perez. She learned to read and write from schoolmaster Quinto. She practiced fencing with Juan Zeto and was feared throughout the province, called "Tangkad" (tall) by her peers. Orphaned at a very young age, she stopped school and went with her siblings to live with relatives. She married at 19 and had two children, Sinforoso and Desiderio, who both died. Tecson and her husband were engaged in the purchase and sale of cattle, fish, oysters, and lobsters to be sold in Manila. Revolutionary Philippine-American War She joined the revolutionary forces led by Gen. Gregorio del Pilar and participated in the assault on the province of Bulacan and Calumpit. She also served in the Malolos Republic and was designated as the Commissary of War. During the American drive northward, she was in Cabanatuan. Bringing with her sick and wounded revolutionaries, Tecson crossed the Zambales mountains to Santa Cruz then to Iba. Life after the war After the war, her second husband died and she continued in business in Nueva Ecija, concentrating on selling meat in the towns of San Antonio and Talavera. She married her third husband, Doroteo Santiago, and after his death, married Francisco Empainado. On January 28, 1928, she died in Philippine General Hospital at age 79. Her remains lie in the Plot of the Veterans of the Revolution in Cementerio del Norte.
question: Who is the mother of the director of film Atomised (Film)?
answer: Gisela Elsner
length: 3,211
dataset: 2wikimqa
context_range: 4k
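The rows above follow a fixed field layout (context, question, answer, length, dataset, context_range). As a minimal, illustrative sketch only — the QARecord class, its field types, and the truncated context string are assumptions for demonstration, not part of any published loader — one such record could be represented like this:

```python
from dataclasses import dataclass

# Hypothetical container mirroring the field names seen in the rows above.
@dataclass
class QARecord:
    context: str        # the concatenated source passages
    question: str       # the multi-hop question asked over the context
    answer: str         # the gold answer string
    length: int         # reported example length (units as given by the dump)
    dataset: str        # e.g. "2wikimqa"
    context_range: str  # e.g. "4k"

# Example built from the fields of the row directly above; the context is
# truncated here purely for illustration.
example = QARecord(
    context="Passage 1: Peter Levin ...",
    question="Who is the mother of the director of film Atomised (Film)?",
    answer="Gisela Elsner",
    length=3211,
    dataset="2wikimqa",
    context_range="4k",
)
print(example.dataset, example.length, example.answer)
```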
Passage 1: S. N. Mathur S.N. Mathur was the Director of the Indian Intelligence Bureau between September 1975 and February 1980. He was also the Director General of Police in Punjab. Passage 2: Sweepstakes (film) Sweepstakes is a 1931 American pre-Code comedy film directed by Albert S. Rogell from a screenplay written by Lew Lipton and Ralph Murphy. The film stars Eddie Quillan, James Gleason, Marian Nixon, Lew Cody, and Paul Hurst, which centers around the travails and romances of jockey Buddy Doyle, known as the "Whoop-te-doo Kid" for his trademark yell during races. Produced by the newly formed RKO Pathé Pictures, this was the first film Charles R. Rogers would produce for the studio, after he replaced William LeBaron as head of production. The film was released on July 10, 1931, through RKO Radio Pictures. Plot Bud Doyle is a jockey who has discovered the secret to get his favorite mount, Six-Shooter, to boost his performance. If he simply chants the phrase, "Whoop-te-doo", the horse responds with a burst of speed. There is a special bond between the jockey and his mount, but there is increasing tension between Doyle and the horse's owner, Pop Blake (who also raised Doyle), over Doyle's relationship with local singer Babe Ellis. Blake sees Ellis as a distraction prior to the upcoming big race, the Camden Stakes. The owner of the club where Babe sings, Wally Weber, has his eyes on his horse winning the Camden Stakes. When the issues between Pop and Doyle come to a head, Pop tells Doyle that he has to choose: either he stops seeing Babe, or he'll be replaced as Six-Shooter's jockey in the big race. Angry and frustrated, Doyle quits. Weber approaches him to become the jockey for Rose Dawn, Weber's horse, and Doyle agrees, with the precondition that he not ride Royal Dawn in the Camden Stakes, for he wants Six-Shooter to still win the race. Weber accedes to that one precondition, however, on the day of the race, he makes it clear that Doyle is under contract, and that he will ride Rose Dawn in the race. Upset, Doyle has no choice but to ride Rose Dawn. However, during the race, he manages to chant his signature "Whoop-te-doo" to Six-Shooter, causing his old mount to win the race. Furious that his horse lost, Weber goes to the judges, who rule that Doyle threw the race, pulling back on Rose Dawn, to allow Six-Shooter to win, and suspend Doyle from horse-racing. Devastated, Doyle wanders from town to town, riding in small local races, until his identity is uncovered, and he is forced to move on. Soon, he is out of racing all together, and forced to taking one odd-job after another. Eventually, he ends up south of the border, in Tijuana, Mexico, working as a waiter. Doyle's friend, Sleepy Jones, hears of Doyle's plight. Jones gets the racing commission to lift the ban, by proving Doyle's innocence. He then, accompanied by Babe, gets a group to buy Six-Shooter from Pop, and they take the horse down to Tijuana, where there is another big race in the near future, the Tijuana Handicap. Doyle is reluctant to ride at first, however, he is eventually cajoled into it by Sleepy and Babe, and of course, his bond with Six-Shooter is there. He rides the horse to victory, re-establishing his credentials as a rider. The film ends by jumping a few years into the future, which shows Doyle and Babe happily married, with a child of their own. 
Cast (Cast list as per AFI database) Eddie Quillan as Bud Doyle Lew Cody as Wally Weber James Gleason as Sleepy Jones Marian Nixon as Babe Ellis King Baggot as Mike Paul Hurst as Cantina Bartender Clarence Wilson as Mr. Emory Frederick Burton as Pop Blake Billy Sullivan as Speed Martin Lillian Leighton as Ma Clancy Mike Donlin as The Dude Production Critical response Mordaunt Hall of The New York Times gave a very non-committal review of this film, with neither much praise or criticism. While he gave no indication of what he thought about the quality of the film, he enjoyed the performances of James Gleason and Lew Cody, and he called Quillan's performance as Doyle "original". See also List of films about horse racing Passage 3: Albert S. Rogell Albert S. Rogell (August 21, 1901 Oklahoma City, Oklahoma - April 7, 1988 Los Angeles, California) was an American film director.Rogell directed more than a hundred movies between 1921 and 1958. He was the uncle of producer Sid Rogell. Filmography Passage 4: Ian Barry (director) Ian Barry is an Australian director of film and TV. Select credits Waiting for Lucas (1973) (short) Stone (1974) (editor only) The Chain Reaction (1980) Whose Baby? (1986) (mini-series) Minnamurra (1989) Bodysurfer (1989) (mini-series) Ring of Scorpio (1990) (mini-series) Crimebroker (1993) Inferno (1998) (TV movie) Miss Lettie and Me (2002) (TV movie) Not Quite Hollywood: The Wild, Untold Story of Ozploitation! (2008) (documentary) The Doctor Blake Mysteries (2013) Passage 5: Peter Levin Peter Levin is an American director of film, television and theatre. Career Since 1967, Levin has amassed a large number of credits directing episodic television and television films. Some of his television series credits include Love Is a Many Splendored Thing, James at 15, The Paper Chase, Family, Starsky & Hutch, Lou Grant, Fame, Cagney & Lacey, Law & Order and Judging Amy.Some of his television film credits include Rape and Marriage: The Rideout Case (1980), A Reason to Live (1985), Popeye Doyle (1986), A Killer Among Us (1990), Queen Sized (2008) and among other films. He directed "Heart in Hiding", written by his wife Audrey Davis Levin, for which she received an Emmy for Best Day Time Special in the 1970s. Prior to becoming a director, Levin worked as an actor in several Broadway productions. He costarred with Susan Strasberg in "[The Diary of Ann Frank]" but had to leave the production when he was drafted into the Army. He trained at the Carnegie Mellon University. Eventually becoming a theatre director, he directed productions at the Long Wharf Theatre and the Pacific Resident Theatre Company. He also co-founded the off-off-Broadway Theatre [the Hardware Poets Playhouse] with his wife Audrey Davis Levin and was also an associate artist of The Interact Theatre Company. Passage 6: Jason Moore (director) Jason Moore (born October 22, 1970) is an American director of film, theatre and television. Life and career Jason Moore was born in Fayetteville, Arkansas, and studied at Northwestern University. Moore's Broadway career began as a resident director of Les Misérables at the Imperial Theatre in during its original run. He is the son of Fayetteville District Judge Rudy Moore.In March 2003, Moore directed the musical Avenue Q, which opened Off-Broadway at the Vineyard Theatre and then moved to Broadway at the John Golden Theatre in July 2003. He was nominated for a 2004 Tony Award for his direction. 
Moore also directed productions of the musical in Las Vegas and London and the show's national tour. Moore directed the 2005 Broadway revival of Steel Magnolias and Shrek the Musical, starring Brian d'Arcy James and Sutton Foster which opened on Broadway in 2008. He directed the concert of Jerry Springer — The Opera at Carnegie Hall in January 2008.Moore, Jeff Whitty, Jake Shears, and John "JJ" Garden worked together on a new musical based on Armistead Maupin's Tales of the City. The musical premiered at the American Conservatory Theater, San Francisco, California in May 2011 and ran through July 2011.For television, Moore has directed episodes of Dawson's Creek, One Tree Hill, Everwood, and Brothers & Sisters. As a writer, Moore adapted the play The Floatplane Notebooks with Paul Fitzgerald from the novel by Clyde Edgerton. A staged reading of the play was presented at the New Play Festival at the Charlotte, North Carolina Repertory Theatre in 1996, with a fully staged production in 1998.In 2012, Moore made his film directorial debut with Pitch Perfect, starring Anna Kendrick and Brittany Snow. He also served as an executive producer on the sequel. He directed the film Sisters, starring Tina Fey and Amy Poehler, which was released on December 18, 2015. Moore's next project will be directing a live action Archie movie. Filmography Films Pitch Perfect (2012) Sisters (2015) Shotgun Wedding (2022)Television Soundtrack writer Pitch Perfect 2 (2015) (Also executive producer) The Voice (2015) (1 episode) Passage 7: Brian Kennedy (gallery director) Brian Patrick Kennedy (born 5 November 1961) is an Irish-born art museum director who has worked in Ireland and Australia, and now lives and works in the United States. He was the director of the Peabody Essex Museum in Salem for 17 months, resigning December 31, 2020. He was the director of the Toledo Museum of Art in Ohio from 2010 to 2019. He was the director of the Hood Museum of Art from 2005 to 2010, and the National Gallery of Australia (Canberra) from 1997 to 2004. Career Brian Kennedy currently lives and works in the United States after leaving Australia in 2005 to direct the Hood Museum of Art at Dartmouth College. In October 2010 he became the ninth Director of the Toledo Museum of Art. On 1 July 2019, he succeeded Dan Monroe as the executive director and CEO of the Peabody Essex Museum. Early life and career in Ireland Kennedy was born in Dublin and attended Clonkeen College. He received B.A. (1982), M.A. (1985) and PhD (1989) degrees from University College-Dublin, where he studied both art history and history. He worked in the Irish Department of Education (1982), the European Commission, Brussels (1983), and in Ireland at the Chester Beatty Library (1983–85), Government Publications Office (1985–86), and Department of Finance (1986–89). He married Mary Fiona Carlin in 1988.He was Assistant Director at the National Gallery of Ireland in Dublin from 1989 to 1997. He was Chair of the Irish Association of Art Historians from 1996 to 1997, and of the Council of Australian Art Museum Directors from 2001 to 2003. In September 1997 he became Director of the National Gallery of Australia. National Gallery of Australia (NGA) Kennedy expanded the traveling exhibitions and loans program throughout Australia, arranged for several major shows of Australian art abroad, increased the number of exhibitions at the museum itself and oversaw the development of an extensive multi-media site. 
Although he oversaw several years of the museum's highest ever annual visitation, he discontinued the emphasis of his predecessor, Betty Churcher, on showing "blockbuster" exhibitions. During his directorship, the NGA gained government support for improving the building and significant private donations and corporate sponsorship. However, the initial design for the building proved controversial generating a public dispute with the original architect on moral rights grounds. As a result, the project was not delivered during Dr Kennedy's tenure, with a significantly altered design completed some years later. Private funding supported two acquisitions of British art, including David Hockney's A Bigger Grand Canyon in 1999, and Lucian Freud's After Cézanne in 2001. Kennedy built on the established collections at the museum by acquiring the Holmgren-Spertus collection of Indonesian textiles; the Kenneth Tyler collection of editioned prints, screens, multiples and unique proofs; and the Australian Print Workshop Archive. He was also notable for campaigning for the construction of a new "front" entrance to the Gallery, facing King Edward Terrace, which was completed in 2010 (see reference to the building project above). Kennedy's cancellation of the "Sensation exhibition" (scheduled at the NGA from 2 June 2000 to 13 August 2000) was controversial, and seen by some as censorship. He claimed that the decision was due to the exhibition being "too close to the market" implying that a national cultural institution cannot exhibit the private collection of a speculative art investor. However, there were other exhibitions at the NGA during his tenure, which could have raised similar concerns. The exhibition featured the privately owned Young British Artists works belonging to Charles Saatchi and attracted large attendances in London and Brooklyn. Its most controversial work was Chris Ofili's The Holy Virgin Mary, a painting which used elephant dung and was accused of being blasphemous. The then-mayor of New York, Rudolph Giuliani, campaigned against the exhibition, claiming it was "Catholic-bashing" and an "aggressive, vicious, disgusting attack on religion." In November 1999, Kennedy cancelled the exhibition and stated that the events in New York had "obscured discussion of the artistic merit of the works of art". He has said that it "was the toughest decision of my professional life, so far."Kennedy was also repeatedly questioned on his management of a range of issues during the Australian Government's Senate Estimates process - particularly on the NGA's occupational health and safety record and concerns about the NGA's twenty-year-old air-conditioning system. The air-conditioning was finally renovated in 2003. Kennedy announced in 2002 that he would not seek extension of his contract beyond 2004, accepting a seven-year term as had his two predecessors.He became a joint Irish-Australian citizen in 2003. Toledo Museum of Art The Toledo Museum of Art is known for its exceptional collections of European and American paintings and sculpture, glass, antiquities, artist books, Japanese prints and netsuke. The museum offers free admission and is recognized for its historical leadership in the field of art education. During his tenure, Kennedy has focused the museum's art education efforts on visual literacy, which he defines as "learning to read, understand and write visual language." 
Initiatives have included baby and toddler tours, specialized training for all staff, docents, volunteers and the launch of a website, www.vislit.org. In November 2014, the museum hosted the International Visual Literacy Association (IVLA) conference, the first museum to do so. Kennedy has been a frequent speaker on the topic, including 2010 and 2013 TEDx talks on visual and sensory literacy. Kennedy has expressed an interest in expanding the museum's collection of contemporary art and art by indigenous peoples. Works by Frank Stella, Sean Scully, Jaume Plensa, Ravinder Reddy and Mary Sibande have been acquired. In addition, the museum has made major acquisitions of Old Master paintings by Frans Hals and Luca Giordano. During his tenure the Toledo Museum of Art has announced the return of several objects from its collection due to claims the objects were stolen and/or illegally exported prior to being sold to the museum. In 2011 a Meissen sweetmeat stand was returned to Germany, followed by an Etruscan Kalpis or water jug to Italy (2013), an Indian sculpture of Ganesha (2014) and an astrological compendium to Germany in 2015. Hood Museum of Art Kennedy became Director of the Hood Museum of Art in July 2005. During his tenure, he implemented a series of large and small-scale exhibitions and oversaw the production of more than 20 publications to bring greater public attention to the museum's remarkable collections of the arts of America, Europe, Africa, Papua New Guinea and the Polar regions. At 70,000 objects, the Hood has one of the largest collections on any American college or university campus. The exhibition, Black Womanhood: Images, Icons, and Ideologies of the African Body, toured several US venues. Kennedy increased campus curricular use of works of art, with thousands of objects pulled from storage for classes annually. Numerous acquisitions were made with the museum's generous endowments, and he curated several exhibitions, including Wenda Gu: Forest of Stone Steles: Retranslation and Rewriting Tang Dynasty Poetry, Sean Scully: The Art of the Stripe, and Frank Stella: Irregular Polygons. Publications Kennedy has written or edited a number of books on art, including: Alfred Chester Beatty and Ireland 1950-1968: A study in cultural politics, Glendale Press (1988), ISBN 978-0-907606-49-9 Dreams and responsibilities: The state and arts in independent Ireland, Arts Council of Ireland (1990), ISBN 978-0-906627-32-7 Jack B Yeats: Jack Butler Yeats, 1871-1957 (Lives of Irish Artists), Unipub (October 1991), ISBN 978-0-948524-24-0 The Anatomy Lesson: Art and Medicine (with Davis Coakley), National Gallery of Ireland (January 1992), ISBN 978-0-903162-65-4 Ireland: Art into History (with Raymond Gillespie), Roberts Rinehart Publishers (1994), ISBN 978-1-57098-005-3 Irish Painting, Roberts Rinehart Publishers (November 1997), ISBN 978-1-86059-059-7 Sean Scully: The Art of the Stripe, Hood Museum of Art (October 2008), ISBN 978-0-944722-34-3 Frank Stella: Irregular Polygons, 1965-1966, Hood Museum of Art (October 2010), ISBN 978-0-944722-39-8 Honors and achievements Kennedy was awarded the Australian Centenary Medal in 2001 for service to Australian society and its art. He is a trustee and treasurer of the Association of Art Museum Directors, a peer reviewer for the American Association of Museums and a member of the International Association of Art Critics. In 2013 he was appointed inaugural eminent professor at the University of Toledo and received an honorary doctorate from Lourdes University. 
Most recently, Kennedy received the 2014 Northwest Region, Ohio Art Education Association award for distinguished educator for art education. Passage 8: Jesse E. Hobson Jesse Edward Hobson (May 2, 1911 – November 5, 1970) was the director of SRI International from 1947 to 1955. Prior to SRI, he was the director of the Armour Research Foundation. Early life and education Hobson was born in Marshall, Indiana. He received bachelor's and master's degrees in electrical engineering from Purdue University and a PhD in electrical engineering from the California Institute of Technology. Hobson was also selected as a nationally outstanding engineer. Hobson married Jessie Eugertha Bell on March 26, 1939, and they had five children. Career Awards and memberships Hobson was named an IEEE Fellow in 1948. Passage 9: Dana Blankstein Dana Blankstein-Cohen (born March 3, 1981) is the executive director of the Sam Spiegel Film and Television School. She was appointed by the board of directors in November 2019. Previously she was the CEO of the Israeli Academy of Film and Television. She is a film director, and an Israeli culture entrepreneur. Biography Dana Blankstein was born in Switzerland in 1981 to theatre director Dedi Baron and Professor Alexander Blankstein. She moved to Israel in 1983 and grew up in Tel Aviv. Blankstein graduated from the Sam Spiegel Film and Television School, Jerusalem in 2008 with high honors. During her studies she worked as a personal assistant to directors Savi Gabizon on his film Nina's Tragedies and to Renen Schorr on his film The Loners. She also directed and shot 'the making of' film on Gabizon's film Lost and Found. Her debut film Camping competed at the Berlin International Film Festival, 2007. Film and academic career After her studies, Dana founded and directed the film and television department at the Kfar Saba municipality. The department encouraged and promoted productions filmed in the city of Kfar Saba, as well as established cultural projects and educational community activities. Blankstein directed the mini-series "Tel Aviviot" (2012). From 2016 to 2019 she was the director of the Israeli Academy of Film and Television. In November 2019 Dana Blankstein Cohen was appointed the new director of the Sam Spiegel Film and Television School where she also oversees the Sam Spiegel International Film Lab. In 2022, she spearheaded the launch of the new Series Lab and the film preparatory program for Arabic speakers in East Jerusalem. Filmography Tel Aviviot (mini-series; director, 2012) Growing Pains (graduation film, Sam Spiegel; director and screenwriter, 2008) Camping (debut film, Sam Spiegel; director and screenwriter, 2006) Passage 10: Olav Aaraas Olav Aaraas (born 10 July 1950) is a Norwegian historian and museum director. He was born in Fredrikstad. From 1982 to 1993 he was the director of Sogn Folk Museum, from 1993 to 2010 he was the director of Maihaugen and from 2001 he has been the director of the Norwegian Museum of Cultural History. In 2010 he was decorated with the Royal Norwegian Order of St. Olav.
What is the place of birth of the director of film Sweepstakes (Film)?
Oklahoma City, Oklahoma
3,277
2wikimqa
4k
Passage 1: Place of birth The place of birth (POB) or birthplace is the place where a person was born. This place is often used in legal documents, together with name and date of birth, to uniquely identify a person. Practice regarding whether this place should be a country, a territory or a city/town/locality differs in different countries, but often city or territory is used for native-born citizen passports and countries for foreign-born ones. As a general rule with respect to passports, if the place of birth is to be a country, it's determined to be the country that currently has sovereignty over the actual place of birth, regardless of when the birth actually occurred. The place of birth is not necessarily the place where the parents of the new baby live. If the baby is born in a hospital in another place, that place is the place of birth. In many countries, this also means that the government requires that the birth of the new baby is registered in the place of birth. Some countries place less or no importance on the place of birth, instead using alternative geographical characteristics for the purpose of identity documents. For example, Sweden has used the concept of födelsehemort ("domicile of birth") since 1947. This means that the domicile of the baby's mother is the registered place of birth. The location of the maternity ward or other physical birthplace is considered unimportant. Similarly, Switzerland uses the concept of place of origin. A child born to Swiss parents is automatically assigned the place of origin of the parent with the same last name, so the child either gets their mother's or father's place of origin. A child born to one Swiss parent and one foreign parent acquires the place of origin of their Swiss parent. In a Swiss passport and identity card, the holder's place of origin is stated, not their place of birth. In Japan, the registered domicile is a similar concept. In some countries (primarily in the Americas), the place of birth automatically determines the nationality of the baby, a practice often referred to by the Latin phrase jus soli. Almost all countries outside the Americas instead attribute nationality based on the nationality(-ies) of the baby's parents (referred to as jus sanguinis). There can be some confusion regarding the place of birth if the birth takes place in an unusual way: when babies are born on an airplane or at sea, difficulties can arise. The place of birth of such a person depends on the law of the countries involved, which include the nationality of the plane or ship, the nationality(-ies) of the parents and/or the location of the plane or ship (if the birth occurs in the territorial waters or airspace of a country). Some administrative forms may request the applicant's "country of birth". It is important to determine from the requester whether the information requested refers to the applicant's "place of birth" or "nationality at birth". For example, US citizens born abroad who acquire US citizenship at the time of birth, the nationality at birth will be USA (American), while the place of birth would be the country in which the actual birth takes place. Reference list 8 FAM 403.4 Place of Birth Passage 2: Motherland (disambiguation) Motherland is the place of one's birth, the place of one's ancestors, or the place of origin of an ethnic group. 
Motherland may also refer to: Music "Motherland" (anthem), the national anthem of Mauritius National Song (Montserrat), also called "Motherland" Motherland (Natalie Merchant album), 2001 Motherland (Arsonists Get All the Girls album), 2011 Motherland (Daedalus album), 2011 "Motherland" (Crystal Kay song), 2004 Film and television Motherland (1927 film), a 1927 British silent war film Motherland (2010 film), a 2010 documentary film Motherland (2015 film), a 2015 Turkish drama Motherland (2022 film), a 2022 documentary film about the Second Nagorno-Karabakh War Motherland (TV series), a 2016 British television series Motherland: Fort Salem, a 2020 American science fiction drama series Other uses Motherland Party (disambiguation), the name of several political groups Personifications of Russia, including a list of monuments called Motherland See also All pages with titles containing Motherland Mother Country (disambiguation) Passage 3: Dag Ole Teigen Dag Ole Teigen (born 10 August 1982 in Volda) is a Norwegian politician for the Labour Party (AP). He represented Hordaland in the Norwegian Parliament, where he met from 2005-2009 in the place of Anne-Grete Strøm-Erichsen, who was appointed to a government position. He was elected on his own right to serve a full term from 2009-2013. Teigen was a member of the Standing Committee on Health and Care Services from 2005-2009, and a member of the Standing Committee on Finance and Economic Affairs from 2009-2013. He holds a master's degree in public policy and management from the University of Agder (2014), and a Bachelor of Arts from the University of Bergen (2004). He participated at The Oxford Experience in 2013. He was elected to the municipality council of Fjell in 2003. He is a member of Mensa. Parliamentary Committee duties 2005 - 2009 member of the Standing Committee on Health and Care Services. 2009 - 2013 member of the Standing Committee on Finance and Economic Affairs. External links "Dag Ole Teigen" (in Norwegian). Storting. Passage 4: William Herbert, 1st Earl of Pembroke (died 1469) William Herbert, 1st Earl of Pembroke KG (c. 1423 – 27 July 1469), known as "Black William", was a Welsh nobleman, soldier, politician, and courtier. Life He was the son of William ap Thomas, founder of Raglan Castle, and Gwladys ferch Dafydd Gam, and grandson of Dafydd Gam, an adherent of King Henry V of England. His father had been an ally of Richard of York, and Herbert supported the Yorkist cause in the Wars of the Roses. In 1461 Herbert was rewarded by King Edward IV with the title Baron Herbert of Raglan (having assumed an English-style surname in place of the Welsh patronymic), and was invested as a Knight of the Garter. Soon after the decisive Yorkist victory at the Battle of Towton in 1461, Herbert replaced Jasper Tudor as Earl of Pembroke which gave him control of Pembroke Castle – and with it, he gained the wardship of young Henry Tudor. However, he fell out with Lord Warwick "the Kingmaker" in 1469, when Warwick turned against the King. Herbert was denounced by Warwick and the Duke of Clarence as one of the king's "evil advisers". William and his brother Richard were executed by Warwick in Northampton, after the Battle of Edgcote, which took place in South Northamptonshire, near Banbury.Herbert was succeeded by his son, William, but the earldom was surrendered in 1479. It was later revived for a grandson, another William Herbert, the son of Black William's illegitimate son, Sir Richard Herbert of Ewyas. 
Marriage and children He married Anne Devereux, daughter of Walter Devereux, Lord Chancellor of Ireland and Elizabeth Merbury. They had at least ten children: William Herbert, 2nd Earl of Pembroke (5 March 1451 – 16 July 1491). Sir Walter Herbert. (c. 1452 – d. 16 September 1507) Married Lady Anne Stafford, sister to the Duke of Buckingham. Sir George Herbert of St. Julians. Philip Herbert of Lanyhangel. Cecilie Herbert. Maud Herbert. Married Henry Percy, 4th Earl of Northumberland. Katherine Herbert. Married George Grey, 2nd Earl of Kent. Anne Herbert. Married John Grey, 1st Baron Grey of Powis, 9th Lord of Powys (died 1497). Isabel Herbert. Married Sir Thomas Cokesey. Margaret Herbert. Married first Thomas Talbot, 2nd Viscount Lisle and secondly Sir Henry Bodringham.William had three illegitimate sons but the identities of their mothers are unconfirmed: Sir Richard Herbert of Ewyas. Father of William Herbert, 1st Earl of Pembroke (10th Creation). Probably son of Maud, daughter of Adam ap Howell Graunt (Gwynn). Sir George Herbert. The son of Frond verch Hoesgyn. Married Sybil Croft. Sir William Herbert of Troye. Son of Frond verch Hoesgyn. Married, second, Blanche Whitney (née Milborne) see Blanche Milborne. They had two sons. See also The White Queen (miniseries) Passage 5: Angelitha Wass Angelitha Wass (Hungarian: [ˈɒŋɡɛlitɒ ˈvɒʃʃ]; 15th century – after 1521) was a Hungarian lady's maid of Anne of Foix-Candale, Queen consort of Bohemia and Hungary, and later a mistress of Anne's son, Louis II Jagiellon, King of Hungary. Life She became pregnant by King Louis and gave birth to an illegitimate son, János (John) Wass, self-titled "Prince John". John was never officially recognized as the son of the king. His and his mother's names appear in the sources of the Chamber in Pozsony (now Bratislava) as either János Wass or János Lanthos, which could refer to the fact that he used his mother's name first, then that of his occupation (lantos means 'lutanist, bard'). Angelitha Wass married a Hungarian nobleman but did not have any further issue. She died as a widow. Passage 6: Anne Devereux Anne Devereux, Countess of Pembroke (c. 1430 – after 25 June 1486), was an English noblewoman, who was Countess of Pembroke during the 15th century by virtue of marriage to William Herbert, 1st Earl of Pembroke. She was born in Bodenham, the daughter of Sir Walter Devereux, the Lord Chancellor of Ireland, and his wife Elizabeth Merbury. Anne's grandfather, Walter, was the son of Agnes Crophull. By Crophull's second marriage to Sir John Parr, Anne was a cousin to the Parr family which included Sir Thomas Parr; father of King Henry VIII's last queen consort, Catherine Parr. Marriage About 1445, Anne married William Herbert, 1st Earl of Pembroke, in Herefordshire, England. He was the second son of Sir William ap Thomas of Raglan, a member of the Welsh Gentry Family, and his second wife Gwladys ferch Dafydd Gam.William Herbert was a very ambitious man. During the War of the Roses, Wales heavily supported the Lancastrian cause. Jasper Tudor, 1st Earl of Pembroke and other Lancastrians remained in control of fortresses at Pembroke, Harlech, Carreg Cennen, and Denbigh. On 8 May 1461, as a loyal supporter of King Edward IV, Herbert was appointed Life Chamberlain of South Wales and steward of Carmarthenshire and Cardiganshire. King Edward's appointment signaled his intention to replace Jasper Tudor with Herbert, who thus would become the premier nobleman in Wales. Herbert was created Lord Herbert on 26 July 1461. 
Herbert was then ordered to seize the county and title of Earl of Pembroke from Jasper Tudor. By the end of August, Herbert had taken back control of Wales with the well-fortified Pembroke Castle capitulating on 30 September 1461. With this victory for the House of York came the inmate at Pembroke: the five-year-old nephew of Jasper Tudor, Henry, Earl of Richmond. Determined to enhance his power and arrange good marriages for his daughters, in March 1462 he paid 1,000 for the wardship of Henry Tudor. Herbert planned a marriage between Tudor and his eldest daughter, Maud. At the same time, Herbert secured the young Henry Percy who had just inherited the title of Earl of Northumberland. Herbert's court at Raglan Castle was where young Henry Tudor would spend his childhood, under the supervision of Herbert's wife, Anne Devereux, who ensured that young Henry was well cared for. Issue The Earl and Countess of Pembroke had three sons and seven daughters: Sir William Herbert, 2nd Earl of Pembroke, Earl of Huntingdon, married firstly to Mary Woodville, daughter of Richard Woodville, 1st Earl Rivers, and thus sister to King Edward IV's queen consort Elizabeth Woodville. He married secondly to Lady Katherine Plantagenet, the illegitimate daughter of King Richard III. Sir Walter Herbert, husband of Lady Anne Stafford Sir George Herbert Lady Maud Herbert, wife of Sir Henry Percy, 4th Earl of Northumberland, 7th Lord Percy. Lady Katherine Herbert, wife of Sir George Grey, 2nd Earl of Kent. Lady Anne Herbert, wife of Sir John Grey, 1st Baron Grey of Powis. Lady Margaret Herbert, wife of Sir Thomas Talbot, 2nd Viscount Lisle, and of Sir Walter Bodrugan. Lady Cecily Herbert, wife of John Greystoke. Lady Elizabeth Herbert, wife of Sir Thomas Cokesey. Lady Crisli Herbert, wife of Mr. Cornwall. The Earl of Pembroke also fathered several children by various mistresses. Passage 7: Anne Devereux-Mills Anne Devereux-Mills (born March 2, 1962) is an American businesswoman, author, public speaker and entrepreneur. Anne Devereux-Mills spent the first 25 years of her career building and leading advertising agencies in New York City. She is now co-host of the Bring a Friend podcast and the Chief Instigator (and Founder) of Parlay House, a 7000+ member organization in 12 cities worldwide that champions and inspires women to connect and make meaningful change for themselves and for others. Early life Anne Devereux-Mills was born in Seattle, Washington, the daughter of Gene Bruce Brandzel and Elizabeth Ettenheim Brandzel and sister to Rachel Brandzel Weil and Susan Brandzel. She attended John Muir Elementary School, Eckstein Middle School and the Lakeside School. Devereux-Mills left Seattle in 1980 to attend Wellesley College in Wellesley, Massachusetts where she became President of the Senior Class and an active member of College Government. Career Devereux-Mills began her career in the Political Risk Department of Marsh and McLennan in New York City, but after just a few years, realized that her strengths lay elsewhere. Parlaying her skills in communications and client management through a series of career experiments, she found herself in the field of advertising where she specialized in healthcare. Once landing in a field that combined her strengths and her passions, she quickly climbed the corporate ladder, helping found the first direct-to-consumer advertising agency for healthcare brands, called Consumer Healthworks, part of WPP. 
A few years later, she moved to Omnicom Group, building a direct-to-consumer practice for Harrison and Star where she went on to become president, then to Merkley and Partners where she was CEO of the Healthcare Division. From Merkley, she moved to BBDO where she was CEO of BBDO World Health as well as managing director and Chief Integration Officer. She then transitioned to TBWA\Chiat\Day as CEO of the global healthcare practice as well as chairman and CEO of LLNS. Devereux-Mills left the field of advertising in 2009. Hit with the triple threat of progressing cancer, opting to have cancer surgery, Devereux-Mills moved to San Francisco where she founded Parlay House, a salon-style gathering for women that now has a national presence and thousands of members who come together to connect about what they care about rather than what they "do". She is an active mentor of the SHE-CAN organization which takes high-performing women from post-genocide countries and helps them gain an American education so that they can then return to their countries and become the next generation of leaders. Devereux-Mills was one of the first supporting members of the iHUG Foundation which helps break the cycle of poverty for children in Kabalagala, Uganda by augmenting education with nutrition, healthcare and support services. Until her retirement in April 2019, Devereux-Mills was one of a handful of women to serve as chairman of the board of a public company in her role at Marchex in Seattle, Washington. Devereux-Mills first served as a director on the Marchex board beginning in 2006, and was appointed Chairman in October 2016. Marchex is a leader in mobile marketing and call analytics. She was also on the Board of Lantern, a company that brought Cognitive Behavioral Therapy (CBT) to people through mobile technology, thereby expanding access to clinical help and reducing the cost of care. Combining her career success, her interest in creating opportunities to connect and empower women as well as her natural leadership skills, Devereux-Mills is now a public speaker who is focusing on issues of female empowerment, reframing reciprocity, and creating a new version of feminism that can address the issues so prevalent in our society. Book: The Parlay Effect In The Parlay Effect: The Transformative Power of Female Connection, Anne Devereux-Mills uses her insights as Founder of Parlay House to show how small actions can result in a meaningful boost in self-awareness, confidence and vision. Through a combination of scientific research and personal stories, The Parlay Effect offers a blueprint for anyone going through a life transition who wants to find and create communities that have a positive and multiplying effect in their impact. Honours and awards Working Mother of the Year from She Runs It (formerly Advertising Women of New York) Leading Women in Technology from the All-Stars Foundation Activist of the Year from Project Kesher The Return, her documentary, received a 2017 Emmy nomination Recorded talks The Guild: Reframing Reciprocity, 2017 Watermark: Doing Well By Doing Good, 2016 The Battery: Small Actions Have Ripple Effects in Social Justice Reform, 2016 SHE-CAN: Pulling Women Forward SHE-CAN: Revolution 2.0, 2015 Passage 8: Where Was I "Where Was I?" may refer to: Books "Where Was I?", essay by David Hawley Sanford from The Mind's I Where Was I?, book by John Haycraft 2006 Where was I?!, book by Terry Wogan 2009 Film and TV Where Was I? (film), 1925 film directed by William A. Seiter. 
With Reginald Denny, Marian Nixon, Pauline Garon, Lee Moran. Where Was I? (2001 film), biography about songwriter Tim Rose Where Was I? (TV series) 1952–1953 Quiz show with the panelists attempting to guess a location by looking at photos "Where Was I?" episode of Shoestring (TV series) 1980 Music "Where was I", song by W. Franke Harling and Al Dubin performed by Ruby Newman and His Orchestra with vocal chorus by Larry Taylor and Peggy McCall 1939 "Where Was I", single from Charley Pride discography 1988 "Where Was I" (song), a 1994 song by Ricky Van Shelton "Where Was I (Donde Estuve Yo)", song by Joe Pass from Simplicity (Joe Pass album) "Where Was I?", song by Guttermouth from The Album Formerly Known as a Full Length LP (Guttermouth album) "Where Was I", song by Sawyer Brown (Billy Maddox, Paul Thorn, Anne Graham) from Can You Hear Me Now 2002 "Where Was I?", song by Kenny Wayne Shepherd from Live On 1999 "Where Was I", song by Melanie Laine (Victoria Banks, Steve Fox) from Time Flies (Melanie Laine album) "Where Was I", song by Rosie Thomas from With Love (Rosie Thomas album) Passage 10: Beaulieu-sur-Loire Beaulieu-sur-Loire (French pronunciation: [boljø syʁ lwaʁ], literally Beaulieu on Loire) is a commune in the Loiret department in north-central France. It is the place of death of Jacques MacDonald, a French general who served in the Napoleonic Wars. Population See also Communes of the Loiret department
Where was the place of death of Anne Devereux's husband?
Banbury
3,847
2wikimqa
4k
Passage 1: Eunoë (wife of Bogudes) Eunoë Maura was the wife of Bogudes, King of Western Mauretania. Her name has also been spelled Euries or Euryes or Eunoa. Biography Early life Eunoë Maura was thought to be descended from Berbers, but her name is Greek, so it appears she might have been from there or had Greek ancestry. She was likely of very high status, as she is mentioned by historian Suetonius in the same context as Cleopatra. Marriage At an unspecified early date in her marriage to her husband Bogud, he mounted an expedition along the Atlantic coast, seemingly venturing into the tropics. When he returned he presented his wife Eunoë with gigantic reeds and asparagus he had found on the journey. She is believed to have been a mistress of Julius Caesar. She may have replaced Cleopatra in Caesar's affections; when he arrived in North Africa prior to the Battle of Thapsus on 6 April 46 BC, the two were among several queens courted by Caesar. It is also possible that they first met in Spain if she accompanied her husband there on a campaign. Though only a brief romance for the Roman, both Eunoe and Bogudes profited through gifts bestowed on them by Caesar. Caesar departed from Africa in June 46 BC, five and a half months after he landed. Cultural depictions Eunoë and Caesar's affair is greatly exaggerated and expanded on in the medieval French prose work Faits des Romains. Jeanette Beer in her book A Medieval Caesar states that the Roman general is "transformed into Caesar, the medieval chevalier" in the text, and that the author is more interested in Caesar's sexual dominance over the queen than the political dominance he held over her husband Bogud. The text describes her: "Eunoe was the most beautiful woman in four kingdoms — nevertheless, she was Moorish", which Beer further analysed as being indicative of the fact that it was unimaginable to audiences of the time to believe that a lover of Caesar could be ugly, but that Moors still represented everything that was ugly to them. Eunoë has also been depicted in several novels about Caesar, as well as serialized stories in The Cornhill Magazine. In such fiction her character often serves as a foil for the relationship between Caesar and another woman, mostly Cleopatra, such as in The Memoirs of Cleopatra, The Bloodied Toga and When We Were Gods. In Song of the Nile she also plays a posthumous role as a person of interest for Cleopatra's daughter Selene II who became queen of Mauritania after her. Eunoe has also been depicted in a numismatic drawing by Italian artist and polymath Jacopo Strada, who lived in the 16th century. There is, however, no archaeological evidence of a coin that bears her name or picture. See also Women in ancient Rome Passage 2: Lou Breslow Lou Breslow (born Lewis Breslow; July 18, 1900 – November 10, 1987) was an American screenwriter and film director. He wrote for 70 films between 1928 and 1955. He also directed seven films between 1932 and 1951 and wrote scripts for both Laurel and Hardy in their first two films at 20th Century Fox, and Abbott and Costello. Breslow married film actress and comedian Marion Byron in 1932, and remained married until her death in 1985. 
Selected filmography The Human Tornado (1925) Sitting Pretty (1933) Punch Drunks (1934 - directed) Gift of Gab (1934) Music Is Magic (1935) The Man Who Wouldn't Talk (1940) Great Guns (1941) Blondie Goes to College (1942) A-Haunting We Will Go (1942) Follow the Boys (1944) Abbott and Costello in Hollywood (1945) You Never Can Tell (1951) Bedtime for Bonzo (1951) Passage 3: Artaynte Artaynte (f. 478 BC) was the wife of the Crown Prince Darius. Life Daughter of an unnamed woman and Prince Masistes, a marshal of the armies during the invasion of Greece in 480-479 BC, and the brother of King Xerxes I. During the Greek campaign Xerxes developed a passionate desire for the wife of Masistes, but she would constantly resist and would not bend to his will. Upon his return to Sardis, the king endeavoured to bring about the marriage of his son Darius to Artaynte, the daughter of this woman, the wife of Masistes, supposing that by doing so he could obtain her more easily. After moving to Susa he brought Artaynte to the royal house with him for his son Darius, but fell in love with her himself, and after obtaining her they became lovers. At the behest of Xerxes, Artaynte committed adultery with him (Xerxes). When queen Amestris found out, she did not seek revenge against Artaynte, but against her mother, Masistes' wife, as Amestris thought that it was her connivance. On Xerxes' birthday, Amestris sent for his guards and mutilated Masistes' wife by cutting off her breasts and throwing them to dogs, and cutting off her nose and ears and lips also, and cutting out her tongue as well. On seeing this, Masistes fled to Bactria to start a revolt, but was intercepted by Xerxes' army who killed him and his sons. Passage 4: Papianilla (wife of Tonantius Ferreolus) Papianilla (born 415) was a Roman noblewoman. She was the wife of Tonantius Ferreolus. Another Papianilla, the wife of the poet Sidonius Apollinaris, was a relative of hers. She had Tonantius Ferreolus and other sons. Notes Sources "Papianilla 1", Prosopography of the Later Roman Empire, Volume 2, p. 830. Passage 5: Catherine Exley Catherine Exley (1779–1857) was an English diarist. She was the wife of a soldier and accompanied her husband when he served in Portugal, Spain, and Ireland during the Napoleonic Wars. Exley is best known as the author of a diary that gives an account of military life in that era from the viewpoint of the wife of a common soldier. Background Catherine Whitaker was born at Leeds in 1779 and married Joshua Exley there in 1806. Between 1805 and 1815, Joshua served in the Second Battalion of the 34th Regiment of Foot, initially as a private and then for a little over two years, as a corporal. Exley accompanied her husband for a substantial portion of this time and in due course wrote an account that is probably unique in that it records and reflects on life in the British Army from the perspective of the wife of a soldier who did not reach the rank of an officer. The diary Catherine's diary was first published as a booklet issued shortly after her death. A single copy of the booklet is known to exist; it was also reprinted in The Dewsbury Reporter during August 1923. The text of the diary is included in full in a more recently issued book, edited by Professor Rebecca Probert, along with essays on its military and religious context, the treatment of prisoners of war and the role of women in the British, French and Spanish armed forces during the Peninsular War. 
The diary unfolds the hardships that both Catherine and her husband suffered during his military service, including one period when they both wrongly thought that the other had died. There are detailed accounts of the births and deaths of children, the cold, hunger and filthy conditions of military life and the horror of the aftermaths of battles. Details of the author's religious experiences, which led her to membership of the Methodist church, also appear. Exley wrote the diary during the last 20 years before her death, which took place in 1857 at Batley, Yorkshire. Passage 6: Waldrada of Lotharingia Waldrada was the mistress, and later the wife, of Lothair II of Lotharingia. Biography Waldrada's family origin is uncertain. The prolific 19th-century French writer Baron Ernouf suggested that Waldrada was of noble Gallo-Roman descent, sister of Thietgaud, the bishop of Trier, and niece of Gunther, archbishop of Cologne. However, these suggestions are not supported by any evidence, and more recent studies have instead suggested she was of relatively undistinguished social origins, though still from an aristocratic milieu. The Vita Sancti Deicoli states that Waldrada was related to Eberhard II, Count of Nordgau (which included Strasbourg) and the family of Etichonids, though this is a late 10th-century source and so may not be entirely reliable on this question. In 855 the Carolingian king Lothar II married Teutberga, a Carolingian aristocrat and the daughter of Bosonid Boso the Elder. The marriage was arranged by Lothar's father Lothar I for political reasons. It is very probable that Waldrada was already Lothar II's mistress at this time. Teutberga was allegedly not capable of bearing children and Lothar's reign was chiefly occupied by his efforts to obtain an annulment of their marriage, and his relations with his uncles Charles the Bald and Louis the German were influenced by his desire to obtain their support for this endeavour. Lothair, whose desire for annulment was arguably prompted by his affection for Waldrada, put away Teutberga. However, Hucbert took up arms on his sister's behalf, and after she had submitted successfully to the ordeal of water, Lothair was compelled to restore her in 858. Still pursuing his purpose, he won the support of his brother, Emperor Louis II, by a cession of lands and obtained the consent of the local clergy to the annulment and to his marriage with Waldrada, which took place in 862. However, Pope Nicholas I was suspicious of this and sent legates to investigate at the Council of Metz in 863. The Council found in favour of Lothair's divorce, which led to rumours that the papal legates may have been bribed, and led Nicholas to order Lothair to take Teutberga back or face excommunication. With the support of Charles the Bald and Louis the German, Teutberga appealed the annulment to Pope Nicholas. Nicholas refused to recognize the annulment and excommunicated Waldrada in 866, forcing Lothair to abandon Waldrada in favour of Teutberga. Lothair accepted this begrudgingly for a time, but shortly afterward at the end of 867 Pope Nicholas I died. Thus, Lothair began to seek the permission of the newly appointed Pope Adrian II to again put Teutberga aside and marry Waldrada, riding to Rome to speak with him on the matter in 869. However, on his way home, Lothair died. Children Waldrada and Lothair II had some sons and probably three daughters, all of whom were declared illegitimate: Hugh (c. 855–895), Duke of Alsace (867–885) Gisela (c. 
865–908), who in 883 married Godfrey, the Viking leader ruling in Frisia, who was murdered in 885 Bertha (c. 863–925), who married Theobald of Arles (c. 854–895), count of Arles, nephew of Teutberga. They had two sons, Hugh of Italy and Boso of Tuscany. After Theobald's death, between 895 and 898 she married Adalbert II of Tuscany (c. 875–915) They had at least three children: Guy, who succeeded his father as count and duke of Lucca and margrave of Tuscany, Lambert succeeded his brother in 929, but lost the titles in 931 to his half-brother Boso of Tuscany, and Ermengard. Ermengarde (d. 90?) Odo (d. c.879) Passage 7: Marion Byron Marion Byron (born Miriam Bilenkin; 1911 – 1985) was an American movie comedian. Early years Born in Dayton, Ohio, Byron was one of five daughters of Louis and Bertha Bilenkin. Career She made her first stage appearance at the age of 13 and followed it with a role in Hollywood Music Box Review opposite Fanny Brice. It was while appearing in this production that she was given the nickname 'Peanuts' on account of her short stature. While appearing in 'The Strawberry Blonde', she came to the attention of Buster Keaton who signed her as his leading lady in the film Steamboat Bill, Jr. in 1928 when she was just 16. From there she was hired by Hal Roach who teamed her with Anita Garvin in a bid to create a female version of Laurel & Hardy. The pairing was not a commercial success and they made just three short features between 1928-9 - Feed 'Em and Weep (1928), Going Ga-Ga (1928) and A Pair of Tights (1929). She left the Roach studio before it made talking comedies, then worked in musical features, like the Vitaphone film Broadway Babies (1929) with Alice White, and the early Technicolor feature Golden Dawn (1930). Her parts slowly got smaller until they were unbilled walk-ons in movies like Meet the Baron (1933), starring Jack Pearl and Hips Hips Hooray (1934) with Wheeler & Woolsey; she returned to the Hal Roach studio for a bit part in the Charley Chase short It Happened One Day (1934). Her final screen appearance was as a baby nurse to the Dionne Quintuplets in Five of a Kind (1938). Family Byron married screenwriter Lou Breslow in 1932 and they had two sons, Lawrence and Daniel. They remained together until her death in Santa Monica on July 5, 1985, following a long illness. Her ashes were later scattered in the sea. Selected filmography Five of a Kind (1938) Swellhead (1935) Gift of Gab (1934) It Happened One Day (1934) Hips, Hips, Hooray! (1933) Only Yesterday (1933) Meet the Baron (1933) Husbands’ Reunion (1933) College Humor (1933) Melody Cruise (1933) Breed of the Border (1933) The Crime of the Century (1933) The Curse of a Broken Heart (1933) Lucky Devils (1933) Trouble in Paradise (1932) They Call It Sin (1932) Love Me Tonight (1933) The Hollywood Handicap (1932) Week Ends Only (1932) The Tenderfoot (1932) The Heart of New York (1932) Running Hollywood (1932) Working Girls (1931) Children of Dreams (1931) Girls Demand Excitement (1931) The Bad Man (1930) The Matrimonial Bed (1930) Golden Dawn (1930) Song of the West (1930) Playing Around (1930) Show of Shows (1929) The Forward Pass (1929) - Mazie So Long Letty (1929) Social Sinners (1929) Broadway Babies (1929) The Unkissed Man (1929) His Captive Woman (1929) A Pair of Tights (1929) Going Ga–Ga (1929) Is Everybody Happy? (1929) Feed’em and Weep (1928) The Boy Friend (1928) Plastered in Paris (1928) Steamboat Bill, Jr. 
(1928) Passage 8: Agatha (wife of Samuel of Bulgaria) Agatha (Bulgarian: Агата, Greek: Άγάθη; fl. late 10th century) was the wife of Emperor Samuel of Bulgaria. Biography According to a later addition to the history of the late-11th-century Byzantine historian John Skylitzes, Agatha was a captive from Larissa, and the daughter of the magnate of Dyrrhachium, John Chryselios. Skylitzes explicitly refers to her as the mother of Samuel's heir Gavril Radomir, which means that she was probably Samuel's wife. On the other hand, Skylitzes later mentions that Gavril Radomir himself also took a beautiful captive, named Irene, from Larissa as his wife. According to the editors of the Prosopographie der mittelbyzantinischen Zeit, this may have been a source of confusion for a later copyist, and Agatha's real origin was not Larissa, but Dyrrhachium. According to the same work, it is likely that she had died by ca. 998, when her father surrendered Dyrrhachium to the Byzantine emperor Basil II.Only two of Samuel's and Agatha's children are definitely known by name: Gavril Radomir and Miroslava. Two further, unnamed, daughters are mentioned in 1018, while Samuel is also recorded as having had a bastard son.Agatha is one of the central characters in Dimitar Talev's novel Samuil. Passage 9: Empress Shōken Empress Dowager Shōken (昭憲皇太后, Shōken-kōtaigō, 9 May 1849 – 9 April 1914), born Masako Ichijō (一条勝子, Ichijō Masako), was the wife of Emperor Meiji of Japan. She is also known under the technically incorrect name Empress Shōken (昭憲皇后, Shōken-kōgō). She was one of the founders of the Japanese Red Cross Society, whose charity work was known throughout the First Sino-Japanese War. Early life Lady Masako Ichijō was born on 9 May 1849, in Heian-kyō, Japan. She was the third daughter of Tadayoshi Ichijō, former Minister of the Left and head of the Fujiwara clan's Ichijō branch. Her adoptive mother was one of Prince Fushimi Kuniie's daughters, but her biological mother was Tamiko Niihata, the daughter of a doctor from the Ichijō family. Unusually for the time, she had been vaccinated against smallpox. As a child, Masako was somewhat of a prodigy: she was able to read poetry from the Kokin Wakashū by the age of 4 and had composed some waka verses of her own by the age of 5. By age seven, she was able to read some texts in classical Chinese with some assistance and was studying Japanese calligraphy. By the age of 12, she had studied the koto and was fond of Noh drama. She excelled in the studies of finances, ikebana and Japanese tea ceremony.The major obstacle to Lady Masako's eligibility to become empress consort was the fact that she was 3 years older than Emperor Meiji, but this issue was resolved by changing her official birth date from 1849 to 1850. They became engaged on 2 September 1867, when she adopted the given name Haruko (美子), which was intended to reflect her serene beauty and diminutive size. The Tokugawa Bakufu promised 15,000 ryō in gold for the wedding and assigned her an annual income of 500 koku, but as the Meiji Restoration occurred before the wedding could be completed, the promised amounts were never delivered. The wedding was delayed partly due to periods of mourning for Emperor Kōmei, for her brother Saneyoshi, and the political disturbances around Kyoto between 1867 and 1868. Empress of Japan Lady Haruko and Emperor Meiji's wedding was finally officially celebrated on 11 January 1869. 
She was the first imperial consort to receive the titles of both nyōgō and kōgō (literally, the emperor's wife, translated as "empress consort") in several hundred years. However, it soon became clear that she was unable to bear children. Emperor Meiji already had 12 children by 5 concubines, though; as was custom in the Japanese monarchy, Empress Haruko adopted Yoshihito, her husband's eldest son by Lady Yanagihara Naruko, who became Crown Prince. On 8 November 1869, the Imperial House departed from Kyoto for the new capital of Tokyo. In a break from tradition, Emperor Meiji insisted that the Empress and the senior ladies-in-waiting should attend the educational lectures given to the Emperor on a regular basis about national conditions and developments in foreign nations. Influence On 30 July 1886, Empress Haruko attended the Peeresses School's graduation ceremony in Western clothing. On 10 August, the imperial couple received foreign guests in Western clothing for the first time when hosting a Western music concert. From this point onward, the Empress' entourage wore only Western-style clothes in public, to the point that in January 1887 Empress Haruko issued a memorandum on the subject: traditional Japanese dress was not only unsuited to modern life, but Western-style dress was closer than the kimono to clothes worn by Japanese women in ancient times. In the diplomatic field, Empress Haruko hosted the wife of former US President Ulysses S. Grant during his visit to Japan. She was also present for her husband's meetings with Hawaiian King Kalākaua in 1881. Later that same year, she helped host the visit of the sons of future British King Edward VII: Princes Albert Victor and George (future George V), who presented her with a pair of pet wallabies from Australia. On 26 November 1886, Empress Haruko accompanied her husband to Yokosuka, Kanagawa to observe the new Imperial Japanese Navy cruisers Naniwa and Takachiho firing torpedoes and performing other maneuvers. From 1887, the Empress was often at the Emperor's side in official visits to army maneuvers. When Emperor Meiji fell ill in 1888, Empress Haruko took his place in welcoming envoys from Siam, launching warships and visiting Tokyo Imperial University. In 1889, Empress Haruko accompanied Emperor Meiji on his official visit to Nagoya and Kyoto. While he continued on to visit naval bases at Kure and Sasebo, she went to Nara to worship at the principal Shinto shrines. Known throughout her tenure for her support of charity work and women's education during the First Sino-Japanese War (1894–95), Empress Haruko worked for the establishment of the Japanese Red Cross Society. She participated in the organization's administration, especially in their peacetime activities in which she created a money fund for the International Red Cross. Renamed "The Empress Shōken Fund", it is presently used for international welfare activities. After Emperor Meiji moved his military headquarters from Tokyo to Hiroshima to be closer to the lines of communications with his troops, Empress Haruko joined her husband in March 1895. While in Hiroshima, she insisted on visiting hospitals full of wounded soldiers every other day of her stay. Death After Emperor Meiji's death in 1912, Empress Haruko was granted the title Empress Dowager (皇太后, Kōtaigō) by her adoptive son, Emperor Taishō. She died in 1914 at the Imperial Villa in Numazu, Shizuoka and was buried in the East Mound of the Fushimi Momoyama Ryo in Fushimi, Kyoto, next to her husband. 
Her soul was enshrined in Meiji Shrine in Tokyo. On 9 May 1914, she received the posthumous name Shōken Kōtaigō (昭憲皇太后). Her railway-carriage can be seen today in the Meiji Mura Museum, in Inuyama, Aichi prefecture. Honours National Grand Cordon of the Order of the Precious Crown, 1 November 1888 Foreign She received the following orders and decorations: Russian Empire: Grand Cross of the Order of St. Catherine, 13 December 1887 Spain: Dame of the Order of Queen Maria Luisa, 29 November 1889 Siam: Dame of the Order of the Royal House of Chakri, 12 October 1899 German Empire: Dame of the Order of Louise, 1st Class, 19 May 1903 Kingdom of Bavaria: Dame of Honour of the Order of Theresa, 29 February 1904 Korean Empire: Grand Cordon of the Order of the Auspicious Phoenix, 27 July 1908 Ancestry See also Empress of Japan Ōmiya Palace Notes Passage 10: Hafsa Hatun Hafsa Hatun (Ottoman Turkish: حفصه خاتون, "young lioness") was a Turkish princess, and a consort of Bayezid I, Sultan of the Ottoman Empire. Life Hafsa Hatun was the daughter of Isa Bey, the ruler of the Aydinids. She was married to Bayezid in 1390, upon his conquest of the Aydinids. Her father had surrendered without a fight, and a marriage was arranged between her and Bayezid. Thereafter, Isa was sent into exile in Iznik, shorn of his power, where he subsequently died. Her marriage strengthened the bonds between the two families. Charities Hafsa Hatun's public works are located within her father's territory and may have been built before she married Bayezid. She commissioned a fountain in Tire city and a Hermitage in Bademiye, and a mosque known as "Hafsa Hatun Mosque" between 1390 and 1392 from the money she received in her dowry. See also Ottoman dynasty Ottoman Empire
Where was the wife of Lou Breslow born?
Dayton, Ohio
3,761
2wikimqa
4k