The Faces of PhilSoc: Melanie Green

[Image: Melanie Green]

Name: Melanie Green

Position: Reader in Linguistics and English Language

Institution: University of Sussex

Role in PhilSoc: Council Member

 


About You

How did you become a linguist – was there a decisive event, or was it a gradual development?

Somewhere between doing my A-levels (in English, French and Latin) and applying for university, when I found the SOAS prospectus in the school cupboard. At that point I realised that studying language didn’t have to mean studying literature, and I applied to study Hausa at SOAS. In my final year, I took a course that focused on the linguistic description of Hausa (taught by Professor Philip Jaggar), and it was this course that led me upstairs to the Linguistics Department, where I then took my MA and PhD.

What was the topic of your doctoral thesis? Do you still believe in your conclusions?

My doctoral thesis was on focus and copular constructions in Hausa, and offered a minimalist analysis. I still believe in the descriptive conclusions, which relate to the grammaticalisation of non-verbal copula into focus marker, but I’m less convinced these days by formal theory. I still enjoy teaching it though, because I think it makes students think carefully (and critically) about formal similarities and differences between languages.

On what project / topic are you currently working?

Together with Gabriel Ozon at Sheffield and Miriam Ayafor at Yaounde I, I’ve just completed a BA/Leverhulme-funded project to build a pilot spoken corpus of Cameroon Pidgin English. Based on this corpus, Miriam and I co-authored a descriptive grammar of the variety, which is in press.

What directions in the future do you see your research taking?

In my dreams, typologically-framed language documentation. In reality, probably more corpus linguistics, since this seems to be what attracts funding at the moment.

How did you get involved with the Philological Society?

The PhilSoc published my first book, Focus in Hausa.


‘Personal’ Questions

Do you have a favourite language – and if so, why?

No.

Minimalism or LFG?

Minimalism.

Teaching or Research?

Both.

Do you have a linguistic pet peeve?

No.

 


Looking to the Future

Is there something that you would like to change in academia / HE?

I would like there to be more funding for language documentation. Languages are dying faster than we can describe them.

(How) Do you manage to have a reasonable work-life balance?

I do, but that only became possible in mid-career. I achieve it with careful planning, so when I’m off work, I’m really off work.

What is your prime tip for younger colleagues?

Start publishing as early as possible. 

Natural Language Processing meets social media corpora

by Yin Yin Lu (University of Oxford)

From 17 to 19 May I attended the CLARIN workshop on the ‘Creation and Use of Social Media Resources’ in Kaunas, Lithuania. The thirty participants represented a broad range of backgrounds: computer science, corpus linguistics, political science, sociology, communication and media studies, sociolinguistics, psychology, and journalism. Our goal was to share best practices in the large-scale collection and analysis of social media data, particularly from a natural language processing (NLP) perspective.

As Michael Beißwenger noted during the first workshop session, there is a ‘social media gap’ in the corpus linguistics landscape: social media corpora are the ‘naughty stepchild’ of text and speech corpora. Traditional natural language processing tools (designed for, e.g., news articles, political documents, speeches, essays and books) are not always appropriate for social media texts, given the unique communicative characteristics of such texts. Part-of-speech tagging, tokenisation, dependency parsing, sentiment analysis, irony detection and topic modelling are all notoriously difficult. In addition, the personal nature of much social media creates legal and ethical challenges for the mining and dissemination of social media corpora: Twitter, for example, forbids researchers from publishing collections of tweets; only their IDs can be shared.
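To make the tokenisation problem concrete, here is a small illustration of my own (not from the workshop): a general-purpose word tokeniser tends to break mentions, hashtags and emoticons into fragments, whereas NLTK's tweet-aware tokeniser is built for exactly this register. The example tweet and handle are invented.

```python
# A minimal sketch of why standard tools struggle with tweets;
# the example tweet and handle are invented.
from nltk.tokenize import TreebankWordTokenizer, TweetTokenizer

tweet = "@PhilSoc1842 loooove this :-) #corpuslinguistics"

# A Treebank-style tokeniser (built for newswire) splits the mention,
# hashtag and emoticon into pieces.
print(TreebankWordTokenizer().tokenize(tweet))

# NLTK's tweet-aware tokeniser keeps them whole and normalises
# letter repetition ('loooove' -> 'looove').
print(TweetTokenizer(reduce_len=True).tokenize(tweet))
```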

I made invaluable connections with researchers at the intersection of NLP and social media data – and Twitter data in particular, which is the area of my own research. Dirk Hovy, an associate professor at the University of Copenhagen, spoke broadly about the challenges of NLP: engineers assume that all language is independently and identically distributed. This is clearly not true, as language use is driven by demographic differences. How can we add extra-linguistic information to NLP models? His proposed solution is word embeddings: transforming words into vectors, trained on large amounts of data from different demographic groups. These vectors should capture the linguistic peculiarities of each group.
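As a rough illustration of that idea (my own sketch, not Hovy's implementation), one could train a separate embedding space per demographic group and compare how the same word behaves in each. The tiny corpora below are invented stand-ins for tweets pre-sorted by group, and the parameter names assume gensim 4.x.

```python
# A minimal sketch: one Word2Vec model per (hypothetical) demographic group.
from gensim.models import Word2Vec

group_a_tweets = [['proper', 'mint', 'that', 'gig'],
                  ['howay', 'the', 'lads']]
group_b_tweets = [['that', 'gig', 'was', 'wicked'],
                  ['off', 'to', 'the', 'footy']]

# Each model captures the distributional patterns of its own group.
model_a = Word2Vec(group_a_tweets, vector_size=50, window=3, min_count=1)
model_b = Word2Vec(group_b_tweets, vector_size=50, window=3, min_count=1)

# Words shared by both groups can then be compared via their neighbours.
print(model_a.wv.most_similar('gig', topn=3))
print(model_b.wv.most_similar('gig', topn=3))
```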

A variant of word embedding is document embedding – and tweets can be treated as documents. Thus, it should be possible to transform tweets into vectors that capture the demographically driven linguistic differences they contain. I will be applying this approach to my own corpus of 12 million tweets related to the EU referendum.
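A minimal sketch of the tweet-as-document idea, using gensim's Doc2Vec; this is again my own illustration, with invented tweets and gensim 4.x parameter names assumed.

```python
# Treating tweets as tagged documents and learning one vector per tweet.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

tweets = [['vote', 'leave', 'take', 'back', 'control'],
          ['stronger', 'in', 'europe'],
          ['polling', 'day', 'tomorrow', 'dont', 'forget']]

corpus = [TaggedDocument(words=toks, tags=[i]) for i, toks in enumerate(tweets)]

model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=40)

# Each tweet now has a fixed-length vector that can feed clustering,
# classification, or comparison across demographic groups.
vec = model.infer_vector(['take', 'back', 'control'])
print(vec[:5])
```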

Andrea Cimino, a postdoc from the Italian NLP Lab, spoke about his work on adapting existing NLP tools, which are trained on traditional text, to social media text. The NLP Lab has developed the best-performing POS tagger for social media text, based on deep neural networks (long short-term memory networks), which can capture long-distance relationships between words in a sentence. The tagger achieves 93.2% accuracy, but currently works only on Italian text; similar taggers could be developed for English, given appropriate training data.
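To give a feel for the architecture, here is a generic sketch of an LSTM-based tagger in PyTorch; it is not the NLP Lab's system, and the vocabulary and tagset sizes are placeholders.

```python
# A minimal LSTM POS tagger: embed tokens, read the sequence in both
# directions, and score a tag for every token.
import torch
import torch.nn as nn

class LSTMTagger(nn.Module):
    def __init__(self, vocab_size, tagset_size, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # A bidirectional LSTM lets each word's tag depend on context
        # to its left and to its right.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, tagset_size)

    def forward(self, token_ids):            # (batch, seq_len)
        emb = self.embed(token_ids)          # (batch, seq_len, embed_dim)
        hidden, _ = self.lstm(emb)           # (batch, seq_len, 2*hidden_dim)
        return self.out(hidden)              # per-token tag scores

# Toy usage with made-up vocabulary and tagset sizes.
model = LSTMTagger(vocab_size=5000, tagset_size=17)
dummy_batch = torch.randint(0, 5000, (2, 12))   # two 12-token 'tweets'
scores = model(dummy_batch)
print(scores.shape)   # torch.Size([2, 12, 17])
```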

Rebekah Tromble, an assistant professor at Leiden University, presented on the limitations and biases of data collected from Twitter’s Application Programming Interface (API). There are two public APIs that can be used: the historic Search API and the real-time Streaming API. The former returns up to 18,000 tweets, reaching back roughly seven to ten days, whichever limit is hit first. The Streaming API allows up to 1% of all tweets to be collected in real time; as there are around 500 million tweets a day, that is approximately 5 million tweets a day.
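For concreteness, here is a hedged sketch of keyword-based collection from the Streaming API, assuming the tweepy 3.x interface (StreamListener) that was current around the time of the workshop; the credentials, output file and keyword list are placeholders, and the roughly 1% cap still applies regardless of filtering.

```python
# A sketch only: collect referendum-related tweet IDs from the
# Streaming API using tweepy 3.x (placeholder credentials and keywords).
import tweepy

class ReferendumListener(tweepy.StreamListener):
    def __init__(self, out_path='tweet_ids.txt'):
        super().__init__()
        self.out = open(out_path, 'a')

    def on_status(self, status):
        # Twitter's terms allow sharing only tweet IDs, not full tweets,
        # so store the ID; the text can be re-hydrated later.
        self.out.write(str(status.id) + '\n')

    def on_error(self, status_code):
        # A 420 response means we are being rate-limited; disconnect.
        if status_code == 420:
            return False

auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')
auth.set_access_token('ACCESS_TOKEN', 'ACCESS_SECRET')

stream = tweepy.Stream(auth=auth, listener=ReferendumListener())
# Keyword filtering; the stream is still capped at about 1% of all tweets.
stream.filter(track=['#EUref', 'EU referendum'], languages=['en'])
```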

Continue reading “Natural Language Processing meets social media corpora”

Big and small data in ancient languages

by Nicholas Zair (University of Cambridge)

Back in November I gave a talk at the Society’s round table on ‘Sources of evidence for linguistic analysis’ entitled ‘Big and small data in ancient languages’. Here I’m going to focus on one of the case studies I considered under the heading of ‘small data’, which is based on an article that Katherine McDonald and I have written (more details below) about a particular document from ancient Italy known as the Tabula Bantina.

[Image: the Tabula Bantina]

It comes from Bantia, modern-day Banzi in Basilicata, and is written in Oscan, a language which was spoken in Southern Italy in the second half of the first millennium BC, including in Pompeii prior to a switch to Latin towards the end of that period. Since Oscan did not survive as a spoken language, we know it almost entirely from inscriptions written on non-perishable materials such as stone, metal and clay. There aren’t very many of these inscriptions: perhaps a few hundred, depending on definitions (for instance, do you include control marks consisting of a single letter?). We are lucky that Oscan is an Indo-European language and, along with a number of other languages of ancient Italy, quite closely related to Latin, so we can make good headway with it. Nonetheless, our knowledge of Oscan and its speakers is fairly limited: it is certainly a language that comes under the heading of ‘small data’.

 

[Image: Iron Age Italy]

One of the ways scholars have addressed the problem of so-called corpus languages like Oscan, and even of better-attested but still limited ones like Latin, has been to combine as many relevant sources of information as possible, from ancient historians to the insights of modern sociolinguistic theory, squeezing as much as we can out of what we have – and trying to fill in the blanks where information is lacking. This has been a huge success, but the approach can also be dangerous, especially when it comes to studying language death. Given that we know a language will die out in the end, it is very tempting to see every piece of evidence as a staging post in that process, and to try to fit it into our narrative of language death. Often this produces very plausible histories, but we must remember that, while in hindsight history can look teleological, things are rarely so clear at the time.

The Tabula Bantina is a bronze tablet with a Latin law on one side and an Oscan law on the other. It is generally agreed that the Latin text was written before the Oscan one, but the Oscan is not a translation of the Latin: the writer of the Oscan text simply used the conveniently blank side of the tablet for the new material. The striking things about the Oscan text are that it is written in the Latin alphabet and that it contains a lot of mistakes. It also strongly resembles Latin legal language. The date of this side is probably between about 100 and 90 BC, just before Rome’s ‘allies’, which is to say conquered peoples and cities in Italy, rose up against it in the rebellion generally known as the Social War.

Continue reading “Big and small data in ancient languages”

‘Counting’: quality and quantity in literary language and tools for investigating it

by Jonathan Hope (Strathclyde University, Glasgow)

The transcription of a substantial proportion of Early Modern English books by the Text Creation Partnership has placed more than 60,000 digital texts in the hands of literary and linguistic researchers. Linguists are in many cases used to dealing with large electronic corpora, but for literary scholars this is a new experience. Used to arguing from the quality, rather than the quantity, of evidence, literary scholars have a new set of norms and procedures to learn, and are faced with the exciting, or perhaps depressing, prospect that their object of study has changed.

In this talk I’ll look at some specific case studies that illustrate the potential, and the problems, of quantity-based studies, and will highlight key areas where literary scholars need to reassess their expectations of ‘evidence’ and of the texts we use. A possible alternative title might be ‘Learning to live with error: gappy texts and crappy metadata’.

A screencast of the talk can be found below.

This paper was read at the Philological Society meeting at Wolfson College, Oxford, on Saturday, 11 March, at 4.15pm.